WorldWideScience

Sample records for fault management model

  1. Model-Based Fault Management Engineering Tool Suite Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's successful development of next generation space vehicles, habitats, and robotic systems will rely on effective Fault Management Engineering. Our proposed...

  2. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland); Seppaenen, M. [North-Carelian Power Co (Finland); Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium-voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate of the fault distance is obtained. This information is then combined with the data obtained from the fault indicators at the line branching points in order to find the actual fault point. As a third technique, in the absence of better fault location data, statistical information on line section fault frequencies can also be used. Fuzzy logic is used to combine the different fault location information; as a result, probability weights for the fault being located in the different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches, and supply is then restored to the remaining parts of the network. If needed, reserve connections from adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked, among them the load-carrying capacity of line sections, voltage drop, and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter describes the practical experiences during the test period, assesses the benefits of this kind of automation, and outlines future developments.
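    The fuzzy combination step described above can be pictured as a weighted fusion of per-section evidence scores. The sketch below is illustrative only, not the FI/FL-model's actual implementation; the section names, scores in [0, 1], and evidence weights are all hypothetical.

```python
def combine_fault_evidence(sections, distance_score, indicator_score, stat_score,
                           weights=(0.5, 0.3, 0.2)):
    """Fuse three fuzzy evidence scores per line section (fault-distance
    estimate, fault-indicator data, statistical fault frequency) into
    normalized probability weights; higher means more likely faulty."""
    w_d, w_i, w_s = weights
    raw = {s: w_d * distance_score[s] + w_i * indicator_score[s] + w_s * stat_score[s]
           for s in sections}
    total = sum(raw.values()) or 1.0  # avoid division by zero if all scores are 0
    return {s: v / total for s, v in raw.items()}
```

    When distance-based and indicator evidence both favor one section, that section ends up with the largest probability weight, mirroring the paper's idea of ranking line sections before isolating the most probable one.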

  3. Fault Management: Degradation Signature Detection, Modeling, and Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault to Failure Progression (FFP) signature modeling and processing is a new method for applying condition-based signal data to detect degradation, to identify...

  4. Managing Fault Management Development

    Science.gov (United States)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  5. Automated Generation of Fault Management Artifacts from a Simple System Model

    Science.gov (United States)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work attempts to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through queries against a representation of the system in a SysML model. It builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that traverses the elements and relationships in this model to automatically construct an FMEA spreadsheet. We also discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, in order to portray system behavior efficiently and to depend less on the intuition of fault management engineers for a complete examination of off-nominal behavior.
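    A minimal sketch of the artifact-generation idea: traverse a component model, follow fault-propagation relationships transitively, and emit one FMEA row per failure mode. The toy `Component` class and example data are assumptions for illustration; the actual work queries a SysML model rather than plain Python objects.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    failure_modes: list                       # e.g. ["cell short"]
    affects: list = field(default_factory=list)  # names of downstream components

def generate_fmea(components):
    """Traverse components and emit one FMEA row per failure mode,
    with system effects computed as the transitive downstream closure."""
    by_name = {c.name: c for c in components}
    rows = []
    for c in components:
        for mode in c.failure_modes:
            effects, stack = set(), list(c.affects)
            while stack:
                n = stack.pop()
                if n not in effects:
                    effects.add(n)
                    stack.extend(by_name[n].affects)
            rows.append({"item": c.name, "failure_mode": mode,
                         "system_effect": sorted(effects) or ["local only"]})
    return rows
```

    The rows could then be written to a spreadsheet; the point is that the effects column is derived mechanically from the model's relationships rather than from an engineer's recollection.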

  7. Fault Management Assistant (FMA) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — S&K Aerospace (SKA) proposes to develop the Fault Management Assistant (FMA) to aid project managers and fault management engineers in developing better and more...

  8. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated to support, early in system design, the development of software and procedures for the management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are specified by invocation statements and effect statements with time delays. System models are constructed graphically using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
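    The mode-and-delay style of modeling can be illustrated with a minimal discrete-event loop: a signal fires, and rules with time delays schedule its downstream effects. The signals, delays, and rules below are hypothetical and far simpler than CONFIG's component models.

```python
import heapq

def simulate(initial_events, rules, horizon=100):
    """initial_events: list of (time, signal) pairs.
    rules: signal -> (delay, effect_signal), i.e. an effect statement
    with a time delay. Returns the time-ordered log of all signals
    observed up to `horizon`."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, sig = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, sig))
        if sig in rules:                       # faulty mode propagates an effect
            delay, effect = rules[sig]
            heapq.heappush(queue, (t + delay, effect))
    return log
```

    Injecting a fault signal at time zero then yields the delayed qualitative effect chain, which is the kind of trace a diagnostic expert system would be designed against.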

  9. Fault Management Design Strategies

    Science.gov (United States)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM) and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for the determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  11. Fault Management Technologies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Given that SysML is becoming a standard for model-based systems engineering and Integration (SE&I), system health management (SHM)-related models will either be...

  12. Fault Management Guiding Principles

    Science.gov (United States)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of mission type, deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of the supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step toward controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, through the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  14. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    Science.gov (United States)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform them. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures and takes operational measures to prevent or respond to them. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM provides hints toward a modeling approach to

  15. Formal Validation of Fault Management Design Solutions

    Science.gov (United States)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
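    A toy version of the mode-manager/fault-protection-engine behavior described above, reduced to a plain transition table: an error-monitor trip drives the system to a safe mode, and unknown events are ignored. The states and events are assumptions for illustration; the real models are SysML State Machines verified via jpf-based model checking, not hand-written Python.

```python
class FaultProtectionEngine:
    """Minimal two-state fault-protection sketch: NOMINAL <-> SAFE."""

    def __init__(self):
        self.mode = "NOMINAL"

    def on_event(self, event):
        transitions = {
            ("NOMINAL", "error_monitor_trip"): "SAFE",   # fault detected
            ("SAFE", "ground_clear"): "NOMINAL",         # operators recover
        }
        # Events with no matching transition leave the mode unchanged,
        # which is the property a model checker would verify never loses
        # a required fault response.
        self.mode = transitions.get((self.mode, event), self.mode)
        return self.mode
```

    Fault injection in this setting amounts to feeding `error_monitor_trip` events into the machine and checking that every reachable path ends in an acceptable mode.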

  16. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    Science.gov (United States)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enables the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
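    One way to picture a GFT is as a tree whose nodes carry the state variables they control, so that completeness of a decomposition can be checked by comparing a goal's state variables against those covered in its subtree. The node structure and check below are a hypothetical sketch, not the paper's formal GFT notation.

```python
from dataclasses import dataclass, field

@dataclass
class GFTNode:
    name: str
    kind: str                                   # "goal" or "function"
    state_vars: set = field(default_factory=set)
    children: list = field(default_factory=list)

def covered_state_vars(node):
    """State variables reachable anywhere in the subtree. A simple
    completeness check: every state variable named by a goal should
    appear in some descendant function that controls it."""
    vars_ = set(node.state_vars)
    for child in node.children:
        vars_ |= covered_state_vars(child)
    return vars_
```

    Failure-detection coverage could be assessed the same way: walk the tree and flag any state variable no detection monitors.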

  17. Software Tools for Fault Management Technologies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — System autonomy is a key enabler for satisfying complex mission goals, enhancing mission success probabilities, as well as safety at a reduced cost. Fault Management...

  18. Software Tools for Fault Management Technologies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault Management (FM) is a key requirement for safety, efficient onboard and ground operations, maintenance, and repair. QSI's TEAMS Software suite is a leading...

  19. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner, so as to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. We first assume there is a time lag between fault detection and fault correction; thus, removal of a fault is performed after the fault is detected. In addition, the detection and correction processes are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed to help software managers optimize the allocation of limited resources under reliability criteria. Furthermore, a release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.
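    The flavor of the allocation problem can be shown with a coarse numeric sketch: split a fixed testing budget between detection effort and correction effort, and pick the split that minimizes faults left in the software. The exponential effectiveness curves and parameters `a`, `b_d`, `b_c` below are assumptions purely for illustration; the paper solves the problem properly with optimal control theory rather than a grid search.

```python
import math

def remaining_faults(w_d, w_c, a=100, b_d=0.05, b_c=0.04):
    """Hypothetical model: of `a` initial faults, detection effort w_d
    detects a fraction, and correction effort w_c corrects a fraction
    of those detected; return the faults still remaining."""
    detected = a * (1 - math.exp(-b_d * w_d))
    corrected = detected * (1 - math.exp(-b_c * w_c))
    return a - corrected

def best_split(total_budget, step=1.0):
    """Grid-search the budget split; returns (remaining_faults, w_d)."""
    candidates = [i * step for i in range(int(total_budget / step) + 1)]
    return min((remaining_faults(w, total_budget - w), w) for w in candidates)
```

    The interior optimum, rather than spending everything on either detection or correction alone, mirrors the article's point that the two processes have to be budgeted jointly.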

  20. Fault Management Techniques in Human Spaceflight Operations

    Science.gov (United States)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in the current US human spaceflight programs, Space Shuttle and International Space Station, are described, with emphasis on how system design impacts operational techniques and constraints. Preflight and inflight processes, along with products used to anticipate, mitigate and respond to failures, are introduced, and examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, and the use of workarounds. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, and the number and severity of software defects. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, the skills needed to understand and operate it, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  1. NASA Spacecraft Fault Management Workshop Results

    Science.gov (United States)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes of the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants from government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing those issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and

  2. Structural Health and Prognostics Management for Offshore Wind Turbines: Sensitivity Analysis of Rotor Fault and Blade Damage with O&M Cost Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Myrent, Noah J. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Barrett, Natalie C. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Adams, Douglas E. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Griffith, Daniel Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technology Dept.

    2014-07-01

    Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostics management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. In the present report, this methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system. The integration of the health monitoring information with O&M cost versus damage/fault severity information provides the initial steps toward identifying processes that reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability.

  3. An Overview of Optical Network Bandwidth and Fault Management

    Directory of Open Access Journals (Sweden)

    J.A. Zubairi

    2010-09-01

    This paper discusses optical network management issues and identifies potential areas for focused research. A general outline of the main components in optical network management is given, and specific problems in the GMPLS-based model are explained. Protection and restoration issues are then discussed in the broader context of fault management, and the tools developed for fault detection are listed. Optical networks need efficient and reliable protection schemes that restore communications quickly when faults occur, without causing failure of real-time applications using the network. A holistic approach is required that provides mechanisms for fault detection, rapid restoration, and reversion once a fault is resolved. Since the role of SDH/SONET is diminishing, modern optical networks are moving toward an IP-centric model in which high-performance IP-MPLS routers manage a core intelligent network of IP over WDM. Fault management schemes are developed for both the IP layer and the WDM layer. Faults can be detected and repaired locally and also through a centralized network controller. A hybrid approach works best for detecting faults, with the domain controller verifying the established LSPs in addition to the link tests at the node level. On detecting a fault, rapid restoration can perform localized routing of traffic away from the affected port and link. The traffic may be directed to pre-assigned backup paths that are established as shared or dedicated resources. We examine the protection issues in detail, including the choice of layer for protection, implementing protection or restoration, backup path routing, backup resource efficiency, subpath protection, QoS traffic survival, and multilayer protection triggers and alarm propagation. The complete protection cycle is described, and mechanisms incorporated into RSVP-TE and other protocols for detecting and recording path errors are outlined. In addition, MPLS testbed

  4. Active probing based Internet service fault management in uncertain and noisy environment

    Institute of Scientific and Technical Information of China (English)

    CHU LingWei; ZOU ShiHong; CHENG ShiDuan; WANG WenDong

    2008-01-01

    In Internet service fault management based on active probing, uncertainty and noise affect service fault management. In order to reduce their impact, the challenges of Internet service fault management are analyzed in this paper. A bipartite Bayesian network is chosen to model the dependency relationship between faults and probes, a binary symmetric channel is chosen to model noise, and a service fault management approach using active probing is proposed for such an environment. The approach is composed of two phases: fault detection and fault diagnosis. In the first phase, we propose a greedy approximation probe selection algorithm (GAPSA), which selects a minimal set of probes while retaining a high probability of fault detection. In the second phase, we propose a fault diagnosis probe selection algorithm (FDPSA), which selects probes to obtain more system information based on the symptoms observed in the previous phase. To deal with the dynamic fault set caused by the fault recovery mechanism, we propose a hypothesis inference algorithm based on fault persistence time statistics (FPTS). Simulation results prove the validity and efficiency of our approach.
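    GAPSA's probe-selection step resembles greedy set cover: repeatedly pick the probe that detects the most still-uncovered faults until every modeled fault is detectable. The sketch below illustrates only that greedy idea and omits the Bayesian-network dependency model and noise handling; the probe names and coverage sets are hypothetical.

```python
def select_probes(probe_coverage, faults):
    """probe_coverage: probe name -> set of faults that probe can detect.
    Returns a small probe set covering all faults (greedy set cover)."""
    uncovered = set(faults)
    chosen = []
    while uncovered:
        # Pick the probe detecting the most still-uncovered faults.
        probe = max(probe_coverage,
                    key=lambda p: len(probe_coverage[p] & uncovered))
        gain = probe_coverage[probe] & uncovered
        if not gain:
            raise ValueError("some faults are undetectable by any probe")
        chosen.append(probe)
        uncovered -= gain
    return chosen
```

    The greedy choice does not guarantee the true minimum probe set, but as with classic set cover it gives a logarithmic approximation, which is the usual justification for this style of algorithm.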

  5. Mathematical modelling on instability of shear fault

    Institute of Scientific and Technical Information of China (English)

    范天佑

    1996-01-01

    A study on the mathematical modelling of fault instability is reported. Fracture mechanics and fracture dynamics form the basis of the discussion, and the method of complex variable functions (including conformal mapping and approximate conformal mapping) is employed; some analytic solutions of the problem are found in closed form. The fault body concept is emphasized and the characteristic size of the fault body is introduced. The effects of the finite size of the fault body and of the fault propagation speed (especially high speeds) on fault instability are discussed. These results further explain the low stress-drop phenomena observed at earthquake sources.

  6. Adaptive Modeling for Security Infrastructure Fault Response

    Institute of Scientific and Technical Information of China (English)

    CUI Zhong-jie; YAO Shu-ping; HU Chang-zhen

    2008-01-01

    Based on an analysis of the inherent limitations of existing security response decision-making systems, a dynamic adaptive model of fault response is presented. Several security fault levels are established, comprising the basic level, equipment level and mechanism level. Fault damage cost is calculated using the analytic hierarchy process. Meanwhile, the model evaluates the impact of different responses upon fault repair and normal operation. Response operation cost and response negative cost are introduced through quantitative calculation. The model reaches a comprehensive response decision for a security fault under three principles: the maximum and minimum principle, the timeliness principle, and the acquiescence principle, which ensure that the optimal response countermeasure is selected for each situation. Experimental results show that the proposed model has good self-adaptation ability, timeliness and cost-sensitivity.
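    The analytic hierarchy process used above to weight fault damage cost can be sketched as follows. The 3x3 pairwise-comparison matrix (basic vs. equipment vs. mechanism level) and its values are invented for illustration, and the priority vector is approximated by averaging the normalized columns rather than by the principal eigenvector.

    ```python
    def ahp_weights(matrix):
        """Approximate AHP priority vector by averaging normalized columns."""
        n = len(matrix)
        col_sums = [sum(row[j] for row in matrix) for j in range(n)]
        # Normalize each column, then average across each row to get priorities.
        return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
                for i in range(n)]

    # a_ij = importance of level i relative to level j (Saaty's 1-9 scale).
    comparison = [
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 3.0],
        [1 / 5, 1 / 3, 1.0],
    ]
    weights = ahp_weights(comparison)
    print([round(w, 3) for w in weights])  # priorities sum to 1
    ```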

  7. Fault Management Practice: A Roadmap for Improvement

    Science.gov (United States)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  8. Workflow Fault Tree Generation Through Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2014-01-01

    We present a framework for the automated generation of fault trees from models of real-world process workflows, expressed in a formalised subset of the popular Business Process Modelling and Notation (BPMN) language. To capture uncertainty and unreliability in workflows, we extend this formalism ... of the system being modelled. From these calculations, a comprehensive fault tree is generated. Further, we show that annotating the model with rewards (data) allows the expected mean values of reward structures to be calculated at points of failure.

  9. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    Science.gov (United States)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  11. Modeling fault among motorcyclists involved in crashes.

    Science.gov (United States)

    Haque, Md Mazharul; Chin, Hoong Chor; Huang, Helai

    2009-03-01

    Singapore crash statistics from 2001 to 2006 show that motorcyclist fatality and injury rates per registered vehicle are higher than those of other motor vehicles by 13 and 7 times, respectively. The crash involvement rate of motorcyclists as victims of other road users is also about 43%. The objective of this study is to identify the factors that contribute to the fault of motorcyclists involved in crashes. This is done by using a binary logit model to differentiate between at-fault and not-at-fault cases, and the analysis is further categorized by the location of the crashes, i.e., at intersections, on expressways and at non-intersections. A number of explanatory variables representing roadway characteristics, environmental factors, motorcycle descriptions and rider demographics have been evaluated. A time-trend effect shows that not-at-fault crash involvement of motorcyclists has increased with time. The likelihood of night-time crashes has also increased for not-at-fault crashes at intersections and on expressways. The presence of surveillance cameras is effective in reducing not-at-fault crashes at intersections. Wet road surfaces increase at-fault crash involvement at non-intersections. At intersections, not-at-fault crash involvement is more likely on single-lane roads or in the median lane of multi-lane roads, while on expressways at-fault crash involvement is more likely in the median lane. Roads with higher speed limits have higher at-fault crash involvement, and this is also true on expressways. Motorcycles with pillion passengers or with higher engine capacity have a higher likelihood of being at fault in crashes on expressways. Motorcyclists are more likely to be at fault in collisions involving pedestrians, and this effect is stronger at night. In multi-vehicle crashes, motorcyclists are more likely to be victims than at fault. Young and older riders are more likely to be at fault in crashes than the middle-aged group of riders. The findings of this study will help
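    A binary logit model of this kind is fitted by maximizing the log-likelihood of the at-fault/not-at-fault labels. The toy sketch below uses invented binary features (high speed limit, wet road) and per-observation gradient ascent in place of the maximum-likelihood solvers used in practice.

    ```python
    import math

    def fit_logit(X, y, lr=0.1, epochs=2000):
        """Fit P(at_fault) = sigmoid(b0 + b.x) by stochastic gradient ascent."""
        n_features = len(X[0])
        beta = [0.0] * (n_features + 1)          # intercept + coefficients
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
                p = 1.0 / (1.0 + math.exp(-z))
                # Gradient of the log-likelihood for one observation.
                beta[0] += lr * (yi - p)
                for j, xj in enumerate(xi):
                    beta[j + 1] += lr * (yi - p) * xj
        return beta

    def predict(beta, xi):
        z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
        return 1.0 / (1.0 + math.exp(-z))

    # Invented rows: (speed_limit_high, wet_road) -> at_fault label.
    X = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 0)]
    y = [1, 1, 1, 0, 1, 0]
    beta = fit_logit(X, y)
    print(round(predict(beta, (1, 1)), 2), round(predict(beta, (0, 0)), 2))
    ```

    The fitted coefficients are read as log-odds contributions, matching the interpretation of the explanatory variables in the abstract.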

  12. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang;

    2009-01-01

    Fault-propagation folding has already been the topic of a large number of empirical studies as well as physical and computational model experiments. However, with the newly developed Stress-based Discrete Element Method (SDEM), we have, for the first time, explored computationally the link between self-emerging fault patterns and variations in Mohr-Coulomb parameters, including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset on the master fault. The SDEM modelling enables us to evaluate quantitatively the rate of strain. A high strain rate and a steep gradient indicate the presence of an active fault, whereas a low strain rate and a low gradient indicate little or no deformation. The strain-rate evolution thus gives...

  13. Mechanical Models of Fault-Related Folding

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding the damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were both to provide a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs with graphical user interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  14. Fault Management for Efficient Data Gathering in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    M.Y. Mohamed Yacoab

    2012-12-01

    Full Text Available Wireless Sensor Networks (WSNs) are naturally fault-prone owing to the shared wireless communication medium, harsh deployment environments and limited resources. In data gathering, node and network failures are common in WSNs. It is therefore essential that a WSN be able to detect faults early and initiate recovery actions in order to avoid degradation of service. In this study we propose a fault management scheme that can efficiently gather data in wireless sensor networks. The scheme is capable of detecting network faults and node faults, and of recovering from them. Initially, we assign some nodes as reliable nodes (R nodes) in the data aggregation tree to perform accurate fault discovery and recovery. These R nodes collect the residual battery power and signal strength of all intermediate nodes. Node faults are detected by comparing the data values of each node with those of its neighbors, and link failures are detected by estimating the Signal to Noise Ratio (SNR) and Link Quality Indicator (LQI). In case of a link failure in the network, the succeeding R node sends a failure warning message to the preceding R node and then tries to forward the packet to the next R node via an alternate path. Simulation results show that the proposed technique achieves a good packet delivery ratio with reduced energy consumption and delay.
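    The two checks described (neighbor comparison for node faults, SNR/LQI thresholds for link faults) can be sketched as below; all readings, units and thresholds are invented for illustration, not taken from the study.

    ```python
    from statistics import median

    def detect_node_fault(reading, neighbour_readings, max_deviation=5.0):
        """Node fault: the reading deviates far from the neighbourhood median."""
        return abs(reading - median(neighbour_readings)) > max_deviation

    def detect_link_fault(snr_db, lqi, min_snr_db=10.0, min_lqi=80):
        """Link fault: weak signal-to-noise ratio or low link quality indicator."""
        return snr_db < min_snr_db or lqi < min_lqi

    print(detect_node_fault(31.0, [22.5, 23.0, 23.5]))   # deviant sensor reading
    print(detect_link_fault(snr_db=6.0, lqi=95))         # low-SNR link
    ```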

  15. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    Science.gov (United States)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  16. Realization of User Level Fault Tolerant Policy Management through a Holistic Approach for Fault Correlation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byung H [ORNL; Naughton, III, Thomas J [ORNL; Agarwal, Pratul K [ORNL; Bernholdt, David E [ORNL; Geist, Al [ORNL; Tippens, Jennifer L [ORNL

    2011-01-01

    Many modern scientific applications, which are designed to utilize high performance parallel computers, occupy hundreds of thousands of computational cores running for days or even weeks. Since many scientists compete for resources, most supercomputing centers practice strict scheduling policies and perform meticulous accounting of their usage. Thus the computing resources and time assigned to a user are considered invaluable. However, most applications are not well prepared for unforeseeable faults, still relying on primitive fault tolerance techniques. Considering that an ever-plunging mean time to interrupt (MTTI) is making scientific applications more vulnerable to faults, it is increasingly important to provide users not only an improved fault tolerant environment, but also a framework to support their own fault tolerance policies so that their allocation times can be best utilized. This paper addresses user-level fault tolerance policy management based on a holistic approach to digest and correlate fault-related information. It introduces simple semantics with which users express their policies on faults, and illustrates how event correlation techniques can be applied to manage and determine the most preferable user policies. The paper also discusses an implementation of the framework using open source software, and demonstrates, as an example, how a molecular dynamics simulation application running on the institutional cluster at Oak Ridge National Laboratory benefits from it.

  17. Fault Detection under Fuzzy Model Uncertainty

    Institute of Scientific and Technical Information of China (English)

    Marek Kowal; Józef Korbicz

    2007-01-01

    The paper tackles the problem of robust fault detection using Takagi-Sugeno fuzzy models. A model-based strategy is employed to generate residuals in order to make a decision about the state of the process. Unfortunately, such a method is corrupted by model uncertainty due to the fact that in real applications there exists a model-reality mismatch. In order to ensure reliable fault detection the adaptive threshold technique is used to deal with the mentioned problem. The paper focuses also on fuzzy model design procedure. The bounded-error approach is applied to generating the rules for the model using available measurements. The proposed approach is applied to fault detection in the DC laboratory engine.

  18. Decentralized Fault Management for Service Dependability in Ubiquitous Networks

    DEFF Research Database (Denmark)

    Grønbæk, Lars Jesper

    2010-01-01

    Obtaining reliable operation of end-user services in future ubiquitous networking environments is challenging. Faults occur and heterogeneous networks make it difficult to deploy network wide fault management mechanisms. This PhD lecture presents a study on the options an end-node has to mitigate...

  19. Simple model of stacking-fault energies

    DEFF Research Database (Denmark)

    Stokbro, Kurt; Jacobsen, Lærke Wedel

    1993-01-01

    A simple model for the energetics of stacking faults in fcc metals is constructed. The model contains third-nearest-neighbor pairwise interactions and a term involving the fourth moment of the electronic density of states. The model is in excellent agreement with recently published local-density calculations of stacking-fault energies, and gives a simple way of understanding the calculated energy contributions from the different atomic layers in the stacking-fault region. The two parameters in the model describe the relative energy contributions of the s and d electrons in the noble and transition metals, and thereby explain the pronounced differences in energetics in these two classes of metals. The model is discussed in the framework of the effective-medium theory, where it is possible to find a functional form for the pair potential and relate the contribution associated with the fourth moment...

  20. Modeling Fluid Flow in Faulted Basins

    Directory of Open Access Journals (Sweden)

    Faille I.

    2014-07-01

    Full Text Available This paper presents a basin simulator designed to better take faults into account, either as conduits or as barriers to fluid flow. It computes hydrocarbon generation, fluid flow and heat transfer on the 4D (space and time) geometry obtained by 3D volume restoration. Contrary to classical basin simulators, this calculator does not require a structured mesh based on vertical pillars nor a multi-block structure associated with the fault network. The mesh follows the sediments during the evolution of the basin. It deforms continuously with respect to time to account for sedimentation, erosion, compaction and kinematic displacements. The simulation domain is structured in layers, in order to handle the corresponding heterogeneities properly and to follow the sedimentation processes (thickening of the layers). In each layer, the mesh is unstructured: it may include several types of cells such as tetrahedra, hexahedra, pyramids and prisms. However, a mesh composed mainly of hexahedra is preferred, as they are well suited to the layered structure of the basin. Faults are handled as internal boundaries across which the mesh is non-matching. Different models are proposed for fault behavior, such as impervious faults, flow across faults or conductive faults. The calculator is based on a cell-centered Finite Volume discretisation, which ensures conservation of physical quantities (mass of fluid, heat) at a discrete level and which accounts properly for heterogeneities. The numerical scheme handles the non-matching meshes and guarantees appropriate connection of cells across faults. Results on a synthetic basin demonstrate the capabilities of this new simulator.

  1. FSN-based fault modelling for fault detection and troubleshooting in CANDU stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, E., E-mail: elnara.nasimi@brucepower.com [Bruce Power LLP., Tiverton, Ontario(Canada); Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)

    2013-07-01

    An accurate fault modeling and troubleshooting methodology is required to aid in making risk-informed decisions related to the design and operational activities of current and future generations of CANDU designs. This paper presents a fault modeling approach using the Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

  2. Stator Fault Modelling of Induction Motors

    DEFF Research Database (Denmark)

    Thomsen, Jesper Sandberg; Kallesøe, Carsten

    2006-01-01

    In this paper a model of an induction motor affected by stator faults is presented. Two different types of faults are considered; these are disconnection of a supply phase, and inter-turn and turn-turn short circuits inside the stator. The output of the derived model is compared to real measurements from a specially designed induction motor. With this motor it is possible to simulate both terminal disconnections, inter-turn and turn-turn short circuits. The results show good agreement between the measurements and the simulated signals obtained from the model. In the tests, focus is on the phase currents and the star point voltage, as these signals are often used for fault detection.

  3. A Flexible Fault Management Architecture for Cluster Flight Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Emergent Space Technologies proposes to develop a flexible, service-oriented Fault Management (FM) architecture for cluster fight missions. This FM architecture will...

  4. Architecture Framework for Fault Management Assessment And Design (AFFMAD) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Architecture Framework for Fault Management Assessment And Design (AFFMAD) is a constraint-checking system for FM trade space exploration that provides rigorous...

  5. Analytical Approaches to Guide SLS Fault Management (FM) Development

    Science.gov (United States)

    Patterson, Jonathan D.

    2012-01-01

    Extensive analysis is needed to determine the right set of FM capabilities to provide the most coverage without significantly increasing the cost, reliability (FP/FN), and complexity of the overall vehicle systems. Strong collaboration with the stakeholders is required to support the determination of the best triggers and response options. The SLS Fault Management process has been documented in the Space Launch System Program (SLSP) Fault Management Plan (SLS-PLAN-085).

  6. Fault-diagnosis applications. Model-based condition monitoring. Actuators, drives, machinery, plants, sensors, and fault-tolerant systems

    Energy Technology Data Exchange (ETDEWEB)

    Isermann, Rolf [Technische Univ. Darmstadt (DE). Inst. fuer Automatisierungstechnik (IAT)

    2011-07-01

    Supervision, condition monitoring, fault detection, fault diagnosis and fault management play an increasing role for technical processes and vehicles in order to improve reliability, availability, maintenance and lifetime. For safety-related processes, fault-tolerant systems with redundancy are required in order to reach comprehensive system integrity. This book is a sequel to ''Fault-Diagnosis Systems'', published in 2006, where the basic methods were described. After a short introduction to fault-detection and fault-diagnosis methods, the book shows how these methods can be applied to a selection of 20 real technical components and processes, such as: electrical drives (DC, AC); electrical actuators; fluidic actuators (hydraulic, pneumatic); centrifugal and reciprocating pumps; pipelines (leak detection); industrial robots; machine tools (main and feed drives, drilling, milling, grinding); and heat exchangers. Realized fault-tolerant systems for electrical drives, actuators and sensors are also presented. The book describes why and how the various signal-model-based and process-model-based methods were applied, and which experimental results could be achieved. In several cases a combination of different methods was most successful. The book is intended for graduate students of electrical, mechanical and chemical engineering and computer science, and for engineers. (orig.)

  7. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — Sensor faults continue to be a major hurdle for sys- tems health management to reach its full potential. At the same time, few recorded instances of sensor faults...

  8. A System for Fault Management for NASA's Deep Space Habitat

    Science.gov (United States)

    Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  9. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost-effectively.

  10. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at Capability Category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within Capability Category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is a method for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability. This should include contributions due to mechanical failures of components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising fault trees. Accordingly, these guidelines can guide FTA to the level of Capability Category II of the ASME PRA standard.
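    The AND/OR gate logic described above can be illustrated with a small fault-tree evaluator. The tree, event names and failure probabilities below are invented, and independence of basic events is assumed, so that P(AND) is the product of the inputs and P(OR) is one minus the product of the complements.

    ```python
    def top_event_probability(node, basic_events):
        """Evaluate a fault tree of nested (gate, children) tuples bottom-up."""
        if isinstance(node, str):                      # basic event leaf
            return basic_events[node]
        gate, children = node
        probs = [top_event_probability(c, basic_events) for c in children]
        if gate == "AND":
            result = 1.0
            for p in probs:
                result *= p                            # all inputs must fail
            return result
        if gate == "OR":
            result = 1.0
            for p in probs:
                result *= (1.0 - p)                    # survive only if none fail
            return 1.0 - result
        raise ValueError(f"unknown gate {gate!r}")

    basic = {"pump_fails": 0.01, "valve_sticks": 0.02, "operator_error": 0.001}
    tree = ("OR", [("AND", ["pump_fails", "valve_sticks"]), "operator_error"])
    print(top_event_probability(tree, basic))
    ```

    Real PRA tools additionally handle NOT gates, common cause failures and minimal cut set generation, which this sketch omits.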

  11. GEMS: A Fault Tolerant Grid Job Management System

    OpenAIRE

    Tadepalli, Sriram Satish

    2003-01-01

    Grid environments are inherently unstable. Resources join and leave the environment without any prior notification. Application fault detection, checkpointing and restart are of foremost importance in Grid environments. The need for fault tolerance is especially acute for large parallel applications, since the failure rate grows with the number of processors and the duration of the computation. A Grid job management system hides the heterogeneity of the Grid and the complexity of the ...

  12. IP, ethernet and MPLS networks resource and fault management

    CERN Document Server

    Perez, André

    2013-01-01

    This book summarizes the key Quality of Service technologies deployed in telecommunications networks: Ethernet, IP, and MPLS. The QoS of the network is made up of two parts: fault and resource management. Network operation quality is among the functions to be fulfilled in order to offer QoS to the end user. It is characterized by four parameters: packet loss, delay, jitter or the variation of delay over time, and availability. Resource management employs mechanisms that enable the first three parameters to be guaranteed or optimized. Fault management aims to ensure continuity of service.
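    The four parameters named above (packet loss, delay, jitter, availability) can be computed from a packet trace as sketched below; the counts and timestamps are invented, and jitter is taken here as the mean absolute variation of successive delays, a simplification of the smoothed estimator defined in RFC 3550.

    ```python
    def qos_metrics(sent, received, one_way_delays_ms, uptime_s, total_s):
        """Compute loss ratio, mean delay (ms), jitter (ms) and availability."""
        loss = 1.0 - received / sent
        mean_delay = sum(one_way_delays_ms) / len(one_way_delays_ms)
        # Jitter: average absolute change between consecutive packet delays.
        variations = [abs(b - a)
                      for a, b in zip(one_way_delays_ms, one_way_delays_ms[1:])]
        jitter = sum(variations) / len(variations)
        availability = uptime_s / total_s
        return loss, mean_delay, jitter, availability

    loss, delay, jitter, avail = qos_metrics(
        sent=1000, received=990,
        one_way_delays_ms=[20.0, 22.0, 21.0, 25.0],
        uptime_s=86300, total_s=86400)
    print(loss, delay, jitter, avail)
    ```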

  13. The Design of Management Software for Network Device Faults

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Effective network management software ensures that networks run reliably. In this paper we discuss the design and implementation of network device fault management based on pure Java. It includes the design of general functions, server functions, client functions and a database table. The software makes it convenient to monitor network devices and improves network efficiency.

  14. Physiochemical Evidence of Faulting Processes and Modeling of Fluid in Evolving Fault Systems in Southern California

    Energy Technology Data Exchange (ETDEWEB)

    Boles, James [Professor

    2013-05-24

    Our study targets recent (Plio-Pleistocene) faults and young (Tertiary) petroleum fields in southern California. Faults include the Refugio Fault in the Transverse Ranges, the Ellwood Fault in the Santa Barbara Channel, and most recently the Newport-Inglewood in the Los Angeles Basin. Subsurface core and tubing-scale samples, outcrop samples, well logs, reservoir properties, pore pressures, fluid compositions, and published structural-seismic sections have been used to characterize the tectonic/diagenetic history of the faults. As part of the effort to understand the diagenetic processes within these fault zones, we have studied analogous processes of rapid carbonate precipitation (scaling) in petroleum reservoir tubing and manmade tunnels. From this, we have identified geochemical signatures in carbonate that characterize rapid CO2 degassing. These data provide constraints for finite element models that predict fluid pressures, multiphase flow patterns, rates and patterns of deformation, subsurface temperatures and heat flow, and geochemistry associated with large fault systems.

  15. Completing fault models for abductive diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E. [Los Alamos National Lab., NM (United States)]; Cox, P.T.; Pietrzykowski, T. [Technical Univ., NS (Canada)]

    1992-11-05

    In logic-based diagnosis, the consistency-based method is used to determine the possible sets of faulty devices. If the fault models of the devices are incomplete or nondeterministic, then this method does not necessarily yield abductive explanations of system behavior. Such explanations give additional information about faulty behavior and can be used for prediction. Unfortunately, system descriptions for the consistency-based method are often not suitable for abductive diagnosis. Methods for completing the fault models for abductive diagnosis have been suggested informally by Poole and by Cox et al. Here we formalize these methods by introducing a standard form for system descriptions. The properties of these methods are determined in relation to consistency-based diagnosis and compared to other ideas for integrating consistency-based and abductive diagnosis.
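The distinction the abstract draws can be made concrete with a toy example (the component names I1, I2 and the two-inverter circuit are invented for illustration, not taken from the paper). Under a weak fault model, a consistency-based diagnoser returns the minimal sets of components whose abnormality is merely consistent with an observation:

```python
from itertools import combinations

def inverter(x):
    return 1 - x

def consistent(faulty, inp, obs_out):
    # Weak fault model: a faulty inverter's output is unconstrained (0 or 1),
    # so we check whether ANY assignment of faulty outputs matches the observation.
    mids = [0, 1] if "I1" in faulty else [inverter(inp)]
    for m in mids:
        outs = [0, 1] if "I2" in faulty else [inverter(m)]
        if obs_out in outs:
            return True
    return False

def minimal_diagnoses(inp, obs_out, comps=("I1", "I2")):
    # Enumerate candidate fault sets by size; keep only subset-minimal ones.
    diags = []
    for r in range(len(comps) + 1):
        for cand in combinations(comps, r):
            if consistent(set(cand), inp, obs_out) and \
               not any(set(d) <= set(cand) for d in diags):
                diags.append(cand)
    return diags
```

For input 0 and observed output 1, either inverter alone is a minimal diagnosis, but because a faulty inverter's output is unconstrained, neither diagnosis abductively explains why the output is 1; that gap is what the completion methods formalized in the paper address.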

  16. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    Science.gov (United States)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
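The idea can be sketched with a scalar toy problem (all numbers invented; the paper treats general multivariable systems with Kalman-Bucy filters). A bank of discrete Kalman filters, each tuned to a different hypothesized actuator-bias value, scores the data by innovation log-likelihood, and the best-scoring hypothesis identifies the fault:

```python
import numpy as np

def run_filter(y, u, bias, a=0.9, q=1e-4, r=1e-2):
    # Scalar Kalman filter tuned to an assumed constant actuator bias.
    # Returns an innovation-based score (lower is better, up to a constant).
    x, p, score = 0.0, 1.0, 0.0
    for yk, uk in zip(y, u):
        x, p = a * x + uk + bias, a * a * p + q      # predict under the hypothesis
        s = p + r
        innov = yk - x
        score += innov * innov / s + np.log(s)       # negative log-likelihood terms
        k = p / s
        x, p = x + k * innov, (1 - k) * p            # measurement update
    return score

# Simulate a plant whose actuator carries a true bias of 0.5.
rng = np.random.default_rng(0)
a, true_bias = 0.9, 0.5
u = np.full(200, 0.1)
x, y = 0.0, []
for uk in u:
    x = a * x + uk + true_bias
    y.append(x + 0.1 * rng.standard_normal())

# Bank of two filters: nominal hypothesis (bias 0) and fault hypothesis (bias 0.5).
scores = {b: run_filter(y, u, b) for b in (0.0, 0.5)}
best = min(scores, key=scores.get)
```

The filter matched to the true bias keeps its innovations small, so its score wins; identifiability in the paper's sense is precisely the question of when such hypotheses can be told apart.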

  17. Faults

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  18. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.; Haves, Philip; Sohn, Michael D.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults and optimizing building control systems. However, in spite of good progress in developing tools for HVAC diagnostics, methods to detect faults in HVAC systems are still generally underdeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain errors. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both first-principles modeling and empirical results to analyze system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that practical constraints impose on air handling unit diagnostics can generally be addressed effectively through the proposed approach.
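A stripped-down sketch of the Bayesian-updating flavor (the hypothesis set, model predictions, and measurements below are all invented; the paper's method handles far richer model and data error structure): each fault hypothesis predicts the supply-air temperature, and noisy readings update the posterior over hypotheses:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical model predictions of supply-air temperature under each hypothesis.
models = {"normal": 13.0, "fouled_coil": 16.0}      # deg C, assumed values
sigma_model = 1.5                                   # combined model + sensor spread
prior = {"normal": 0.9, "fouled_coil": 0.1}

observations = [15.8, 16.2, 15.5]   # hypothetical sensor readings, deg C

posterior = dict(prior)
for obs in observations:
    # Multiply in the likelihood of each observation, then renormalize.
    for h in posterior:
        posterior[h] *= gaussian_pdf(obs, models[h], sigma_model)
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}
```

Despite the strong prior on normal operation, three readings near the faulty prediction drive most of the posterior mass onto the fault hypothesis.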

  19. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
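A sensor-bias fault of the kind listed above can be sketched as a wrapper around a reading function (a minimal illustration of the fault-injection idea, not the report's OpenStudio implementation):

```python
def with_bias(read_sensor, bias):
    # Sensor-bias fault model: every reading is offset by a constant error.
    return lambda: read_sensor() + bias

def healthy_rh():
    return 45.0                              # hypothetical true relative humidity, %

faulty_rh = with_bias(healthy_rh, 10.0)      # economizer RH sensor with a +10% bias
```

Running a simulation against `faulty_rh` instead of `healthy_rh` is how such fault models let an FDD algorithm's response to the bias be evaluated.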

  1. A Fault-Cored Anticline Boundary Element Model Incorporating the Combined Fault Slip and Buckling Mechanisms

    Directory of Open Access Journals (Sweden)

    Wen-Jeng Huang

    2016-02-01

    We develop a folding boundary element model in a medium containing a fault and elastic layers to show that anticlines growing over slipping reverse faults can be significantly amplified by buckling of the mechanical layering under horizontal shortening. Previous studies suggested that folds over blind reverse faults grow primarily during deformation increments associated with slip on the fault during and immediately after earthquakes. Under this assumption, the potential for earthquakes on blind faults can be determined directly from fold geometry, because the amount of slip on the fault can be estimated from the fold geometry using the solution for a dislocation in an elastic half-space. Studies that assume folds grow solely by slip on a fault may therefore significantly overestimate fault slip. Our boundary element technique demonstrates that the fold amplitude produced in a medium containing a fault and freely slipping elastic layers subjected to layer-parallel shortening can grow to more than twice the fold amplitude produced in a homogeneous medium without mechanical layering under the same amount of shortening. In addition, the fold wavelengths produced by the combined fault slip and buckling mechanisms may be narrower, by a factor of two, than folds produced by fault slip in an elastic half-space. We also show that the subsurface fold geometry of the Kettleman Hills Anticline in Central California inferred from a seismic reflection image is consistent with a model that incorporates layer buckling over a dipping, blind reverse fault, and that the coseismic uplift pattern produced during a 1985 earthquake centered over the anticline forelimb is predicted by the model.

  2. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    Science.gov (United States)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

    Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because the M&FM algorithms integrate and operate LV subsystems that themselves consist of diverse forms of hardware and software, with equally diverse integration across the engineering disciplines of the LV subsystems. M&FM operation of SLS requires a changing mix of LV automation: during pre-launch the LV is operated primarily by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization, with some LV automation of time-critical functions, while ascent requires much more autonomous LV operation, with crucial interactions with the Orion crew capsule, its astronauts, and mission controllers at the Johnson Space Center. The M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal, and must also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform, which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA), and major subsystems and vehicle elements

  3. A Real-Time Fault Management Software System for Distributed Environments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — DyMA-FM (Dynamic Multivariate Assessment for Fault Management) is a software architecture for real-time fault management. Designed to run in a distributed...

  4. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. The algorithm is capable of detecting four different faults in the mechanical and hydraulic parts of the pump.

  5. Bond graph model-based fault diagnosis of hybrid systems

    CERN Document Server

    Borutzky, Wolfgang

    2015-01-01

    This book presents a bond graph model-based approach to fault diagnosis in mechatronic systems appropriately represented by a hybrid model. The book begins with a survey of the fundamentals of fault diagnosis and failure prognosis, recalls state-of-the-art developments with reference to the latest publications, and goes on to discuss various bond graph representations of hybrid system models, equation formulation for switched systems, and simulation of their dynamic behavior. The structured text: • focuses on bond graph model-based fault detection and isolation in hybrid systems; • addresses isolation of multiple parametric faults in hybrid systems; • considers system mode identification; • provides a number of elaborated case studies that consider fault scenarios for switched power electronic systems commonly used in a variety of applications; and • indicates that bond graph modelling can also be used for failure prognosis. In order to facilitate the understanding of fault diagnosis and the presented...

  6. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way of performing both upstream root cause analysis (RCA) and downstream effect or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
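The causal-directed-graph mechanics can be sketched as follows (the graph and fault names are hypothetical, not from the RETS deployment): impact analysis follows edges downstream from a fault, while root cause analysis traverses the inverted graph upstream from a symptom:

```python
from collections import deque

# Hypothetical causal graph: an edge u -> v means a fault at u can produce effect v.
EFFECTS = {
    "valve_stuck":   ["low_flow"],
    "pump_degraded": ["low_flow", "high_vibration"],
    "low_flow":      ["high_outlet_temp"],
}

def reachable(graph, start):
    # Breadth-first traversal collecting every node reachable from `start`.
    seen, queue = set(), deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def impact(fault):
    # Downstream impact analysis: everything the fault can cause.
    return reachable(EFFECTS, fault)

def root_causes(symptom):
    # Upstream RCA: invert the edges, then traverse from the observed symptom.
    inverted = {}
    for u, vs in EFFECTS.items():
        for v in vs:
            inverted.setdefault(v, []).append(u)
    return reachable(inverted, symptom)
```

The same graph thus answers both questions the abstract names: `impact` predicts downstream effects, and `root_causes` enumerates candidate upstream causes of an observed event.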

  7. Simple model for fault-charged hydrothermal systems

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Miller, C.W.; Benson, S.M.

    1981-06-01

    A two-dimensional transient model of fault-charged hydrothermal systems has been developed. The model can be used to analyze temperature data from fault-charged hydrothermal systems, estimate the recharge rate from the fault, and determine how long the system has been under natural development. It can also be used for theoretical studies of the development of fault-controlled hydrothermal systems. The model has been tentatively applied to the low-temperature hydrothermal system at Susanville, California. A reasonable match was obtained with the observed temperature data, and a hot-water recharge rate of 9 x 10^-6 m^3/s per meter of fault length was calculated.

  8. Fault Diagnosis of Nonlinear Systems Using Structured Augmented State Models

    Institute of Scientific and Technical Information of China (English)

    Jochen Aßfalg; Frank Allgöwer

    2007-01-01

    This paper presents an internal model approach for modeling and diagnostic functionality design for nonlinear systems operating subject to single- and multiple-faults. We therefore provide the framework of structured augmented state models. Fault characteristics are considered to be generated by dynamical exosystems that are switched via equality constraints to overcome the augmented state observability limiting the number of diagnosable faults. Based on the proposed model, the fault diagnosis problem is specified as an optimal hybrid augmented state estimation problem. Sub-optimal solutions are motivated and exemplified for the fault diagnosis of the well-known three-tank benchmark. As the considered class of fault diagnosis problems is large, the suggested approach is not only of theoretical interest but also of high practical relevance.

  9. Fault Tolerance Design and Redundancy Management Techniques.

    Science.gov (United States)

    1980-09-01

    relations between the angles of the aerodynamic axis system and the Earth-fixed axis system ... the F-8 digital fly-by-wire aircraft and the space shuttle orbiter. Management of a multicomputer/sensor/actuator/power system is markedly different ... for a flight in which the F-8 DFBW aircraft was simulating final approaches to Edwards Dry Lake by the shuttle orbiter. These approaches were

  10. Fault Modeling of ECL for High Fault Coverage of Physical Defects

    Directory of Open Access Journals (Sweden)

    Sankaran M. Menon

    1996-01-01

    Bipolar Emitter Coupled Logic (ECL) devices can now be fabricated at higher densities and consume much lower power. The behaviour of simple and complex ECL gates is examined in the presence of physical faults, and the effectiveness of the classical stuck-at model in representing physical failures in ECL gates is evaluated. It is shown that the conventional stuck-at fault model cannot represent a majority of circuit-level faults. A new augmented stuck-at fault model is presented which provides significantly higher coverage of physical failures. The model may be applicable to other logic families that use logic gates with both true and complementary outputs. A design-for-testability approach is suggested for on-line detection of certain error conditions occurring in gates with true and complementary outputs, which is the normal implementation for ECL devices.
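The limitation of single-line stuck-at faults for paired-output gates can be sketched as follows (the fault labels and the checker are illustrative, not the authors' exact model): an ECL OR/NOR gate can suffer defects that drive both outputs to the same value, which no single-output stuck-at fault represents, but which an on-line complementarity checker catches:

```python
def or_nor(a, b, fault=None):
    # ECL-style gate producing both the true (OR) and complementary (NOR) outputs.
    # Hypothetical fault labels: single-output stuck-ats and defects hitting both outputs.
    t = a | b
    c = 1 - t
    if fault == "out_sa0":
        t = 0
    elif fault == "out_sa1":
        t = 1
    elif fault == "both_sa0":   # physical defect driving both outputs low
        t = c = 0
    elif fault == "both_sa1":   # physical defect driving both outputs high
        t = c = 1
    return t, c

def complementarity_check(t, c):
    # On-line checker: the true and complementary outputs must always differ.
    return t != c
```

A classical stuck-at fault on one output line can model `out_sa0`/`out_sa1`, but not the `both_*` defects; the checker exploits the paired outputs to flag any state in which they agree.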

  11. Sensor Fault Tolerant Generic Model Control for Nonlinear Systems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A modified Strong Tracking Filter (STF) is used to develop a new approach to sensor fault tolerant control. Generic Model Control (GMC) is used to control the nonlinear process during normal operation because of its robust control performance. If a fault occurs in a sensor, a sensor bias vector is introduced into the output equation of the process model. The sensor bias vector is estimated on-line during every control period using the STF, and the estimate is used in a fault detection mechanism that supervises the sensors. When a sensor fault occurs, the conventional GMC is switched to a fault tolerant control scheme, which is, in essence, a state estimation and output prediction based GMC. Laboratory experimental results on a three-tank system demonstrate the effectiveness of the proposed Sensor Fault Tolerant Generic Model Control (SFTGMC) approach.

  12. Diagnosing process faults using neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.
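The fixed-point idea can be loosely illustrated with a soft nearest-prototype memory (this toy is not the authors' network; the stored patterns and the threshold are invented). States near a stored normal operating point are reconstructed almost unchanged, while abnormal states land far from their reconstruction, flagging a fault:

```python
import numpy as np

normal_modes = np.array([[1.0, 0.0], [0.0, 1.0]])   # stored normal operating points

def recall(x, steps=5, beta=4.0):
    # Softmax-weighted recall; the stored patterns are (approximate) fixed points.
    for _ in range(steps):
        w = np.exp(beta * normal_modes @ x)
        x = (w / w.sum()) @ normal_modes
    return x

def is_faulty(state, tol=0.3):
    x = np.array(state, dtype=float)
    # Fault indicator: distance between a state and its recalled fixed point.
    return float(np.linalg.norm(recall(x) - x)) > tol
```

As in the abstract's scheme, no data on previous faults is needed: only normal behavior is stored, and a fault is anything the memory cannot reproduce.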

  13. Model-based fault diagnosis in PEM fuel cell systems

    Energy Technology Data Exchange (ETDEWEB)

    Escobet, T.; de Lira, S.; Puig, V.; Quevedo, J. [Automatic Control Department (ESAII), Universitat Politecnica de Catalunya (UPC), Rambla Sant Nebridi 10, 08222 Terrassa (Spain); Feroldi, D.; Riera, J.; Serra, M. [Institut de Robotica i Informatica Industrial (IRI), Consejo Superior de Investigaciones Cientificas (CSIC), Universitat Politecnica de Catalunya (UPC) Parc Tecnologic de Barcelona, Edifici U, Carrer Llorens i Artigas, 4-6, Planta 2, 08028 Barcelona (Spain)

    2009-07-01

    In this work, a model-based fault diagnosis methodology for PEM fuel cell systems is presented. The methodology is based on computing residuals: indicators obtained by comparing measured inputs and outputs with the analytical relationships given by system modelling. The innovation of this methodology is the characterization of the relative residual fault sensitivity. To illustrate the results, a non-linear fuel cell simulator proposed in the literature is used, with modifications, to include a set of fault scenarios proposed in this work. Finally, the diagnosis results corresponding to these fault scenarios are presented. Remarkably, with this methodology it is possible to diagnose and isolate all the faults in the proposed set, in contrast with other well-known methodologies that use only the binary signature matrix of analytical residuals and faults. (author)
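For contrast, the binary-signature approach the authors improve upon can be sketched in a few lines (the residual threshold, fault names, and signature matrix are invented): each fault is assumed to activate a fixed subset of residuals, and isolation is a table lookup. Faults sharing a row are indistinguishable, which is exactly where relative residual sensitivity adds discriminating power:

```python
# Hypothetical binary fault signature matrix: one row of residual activations per fault.
SIGNATURES = {
    "compressor_fault": (1, 1, 0),
    "stack_flooding":   (1, 0, 1),
    "valve_fault":      (0, 1, 1),
}

def isolate(residuals, threshold=0.1):
    # Threshold each residual to a binary symptom, then match rows of the matrix.
    observed = tuple(int(abs(r) > threshold) for r in residuals)
    return [fault for fault, sig in SIGNATURES.items() if sig == observed]
```

An all-zero symptom vector matches no row (no fault), and two faults with identical rows would always be returned together, motivating the sensitivity-based refinement.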

  14. Fuzzy delay model based fault simulator for crosstalk delay fault test generation in asynchronous sequential circuits

    Indian Academy of Sciences (India)

    S Jayanthy; M C Bhuvaneswari

    2015-02-01

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, more parameters affect the delay of a component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps. The fault simulator based on fuzzy delays detects unstable states, oscillations, and non-confluence of settling states in asynchronous sequential circuits. The fuzzy delay model based fault simulator is used to validate the test patterns produced by an Elitist Non-dominated sorting Genetic Algorithm (ENGA) based test generator for detecting crosstalk delay faults in asynchronous sequential circuits. The multi-objective genetic algorithm ENGA targets two objectives: maximizing fault coverage and minimizing the number of transitions. Experimental results are tabulated for SIS benchmark circuits for three gate delay models, namely the unit delay model, the rise/fall delay model, and the fuzzy delay model. The results indicate that test validation using the fuzzy delay model is more accurate than using the unit delay or rise/fall delay models.
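The role of a fuzzy delay model can be sketched with triangular fuzzy numbers (a generic textbook representation, not necessarily the authors' exact formulation): each gate delay is a (low, mode, high) triple, path delays add component-wise, and a timing violation is possible whenever the support of the path delay exceeds the clock period:

```python
def fuzzy_path_delay(gate_delays):
    # Sum triangular fuzzy delays (low, mode, high) along a path, component-wise.
    total = (0.0, 0.0, 0.0)
    for d in gate_delays:
        total = tuple(t + x for t, x in zip(total, d))
    return total

def may_violate(total, clock):
    # A delay fault is possible if any part of the fuzzy support exceeds the clock.
    low, mode, high = total
    return high > clock
```

A crisp unit or rise/fall model would report a single path delay and miss the "possible but not certain" violations that the fuzzy support exposes.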

  15. Fault Management Architectures and the Challenges of Providing Software Assurance

    Science.gov (United States)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Fault Management (FM) for satellite systems is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to the need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop, titled "V&V of Fault Management: Challenges and Successes," exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V is funded by NASA's Software Assurance Research Program (SARP), in partnership with NASA's Jet Propulsion Laboratory (JPL), to extend the work performed in the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the

  16. Model Based Incipient Fault Detection for Gear Drives

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a method of model based incipient fault detection for gear drives based on the parity space method. The method generates a robust residual that is maximally sensitive to faults caused by parameter changes. A simulation example shows the application of the method: the residual waveforms have different characteristics for different parameter changes, so one can detect and isolate a fault from these characteristics.
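A minimal parity-space residual for a scalar toy system (all numbers invented; gear-drive models are of course of higher order): the parity vector w is chosen orthogonal to the stacked observability matrix [1, a]^T, so the residual is zero whenever the model holds and reacts when a fault perturbs the outputs:

```python
import numpy as np

a = 0.8                      # toy system: x_{k+1} = a*x_k,  y_k = x_k (+ sensor fault)
w = np.array([-a, 1.0])      # parity vector: orthogonal to the stacked [1, a]^T

def residual(y_window):
    # Parity relation over a window of two outputs: zero when the model holds.
    return float(w @ y_window)

# Simulate the system and inject an additive sensor fault at k = 5.
x, y = 1.0, []
for k in range(10):
    fault = 0.5 if k >= 5 else 0.0
    y.append(x + fault)
    x = a * x

r = [residual(np.array(y[k:k + 2])) for k in range(9)]
```

The residual is identically zero before the fault, jumps when the fault enters the window, and settles at a nonzero value afterwards, giving both detection and a signature for isolation.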

  17. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems are described, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like the observed system? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  18. A self-managing fault management mechanism for wireless sensor networks

    CERN Document Server

    Asim, Muhammad; Merabti, Madjid

    2010-01-01

    A sensor network can be described as a collection of sensor nodes that coordinate with each other to perform some specific function. These nodes are typically deployed densely and in large numbers, either inside the phenomenon of interest or very close to it, and can be used in many application areas (e.g., health, military, home). Failures are inevitable in wireless sensor networks due to the inhospitable environment and unattended deployment. It is therefore necessary that network failures be detected early and appropriate measures taken to sustain network operation. We previously proposed a cellular approach for fault detection and recovery. In this paper we extend the cellular approach and propose a new fault management mechanism to deal with fault detection and recovery. We propose a hierarchical structure that properly distributes fault management tasks among sensor nodes by introducing more 'self-managing' functions. The proposed failure detection and recovery algorithm has been compared with some...
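One elementary building block of such self-managing detection is a heartbeat timeout (a generic sketch; the node names and timeout are invented, and the paper's cellular scheme involves considerably more structure):

```python
def suspected_failures(last_heartbeat, now, timeout=3.0):
    # A cell manager suspects any member whose last heartbeat is older than `timeout`.
    return sorted(node for node, t in last_heartbeat.items() if now - t > timeout)
```

In a hierarchical scheme, each cell manager runs such a check over its own members only, keeping detection local and the radio overhead bounded.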

  19. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  20. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    Science.gov (United States)

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, which is ∼1 mm/yr above the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast-straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19
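The structure of such an inversion can be sketched in one dimension (a deliberately tiny stand-in for the UCERF3-geometry inversion: one fault, one slip-rate parameter, an assumed arctangent interseismic velocity profile with a fixed locking depth, and a geologic rate appended as a weighted extra observation):

```python
import numpy as np

# Assumed forward model: interseismic velocity across a locked strike-slip fault,
# v(x) = (s / pi) * arctan(x / D), with slip rate s and locking depth D (fixed here).
D = 15.0                                    # km, assumed locking depth
x = np.linspace(-100.0, 100.0, 21)          # station distances from the fault, km
G = (np.arctan(x / D) / np.pi)[:, None]     # design matrix for the single parameter s

true_s = 22.0                               # mm/yr, synthetic "truth"
rng = np.random.default_rng(1)
v = G @ [true_s] + 0.5 * rng.standard_normal(len(x))   # noisy synthetic GPS velocities

# Geologic slip-rate constraint appended as an extra row, weighted by its uncertainty.
geo_rate, geo_sigma = 20.0, 2.0
G_aug = np.vstack([G, [[1.0 / geo_sigma]]])
d_aug = np.append(v, geo_rate / geo_sigma)

s_hat, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
```

The estimate lands between the geodetic signal and the geologic prior, illustrating how the weighting of geologic rows controls the trade-off the abstract describes between modeled and preferred geologic rates.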

  1. Fault evolution-test dependency modeling for mechanical systems

    Institute of Scientific and Technical Information of China (English)

    Xiao-dong TAN; Jian-lu LUO; Qing LI; Bing LU; Jing QIU

    2015-01-01

    Tracking the process of fault growth in mechanical systems using a range of tests is important to avoid catastrophic failures, so it is necessary to study design for testability (DFT). In this paper, to improve the testability performance of mechanical systems for tracking fault growth, a fault evolution-test dependency model (FETDM) is proposed to implement DFT. A testability analysis method that considers fault trackability and predictability is developed to quantify the testability performance of mechanical systems. Results from experiments on a centrifugal pump show that the proposed FETDM and testability analysis method can provide guidance to engineers to improve the testability level of mechanical systems.

  2. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management

    Science.gov (United States)

    Halicioglu, Kerem; Ozener, Haluk

    2008-01-01

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey, is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion of the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey, with a population of about 2.5 million, that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which trends NE–SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large-scale investigation focusing on the Tuzla Fault and its vicinity for a better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, and the theory and practice of processing. The study is also open to extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters – the standard strike-slip model of dislocation theory in an elastic half-space – is formulated in order to determine which sites are suitable for campaign-based geodetic GPS measurements. Geodetic results can be used as background data for disaster management systems. PMID:27873783
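The "one-dimensional fault model with two parameters" mentioned here is the standard elastic half-space screw-dislocation model for an interseismic strike-slip fault: a deep slip rate V below a locking depth D produces a fault-parallel surface velocity profile v(x) = (V/π) arctan(x/D), where x is the perpendicular distance from the fault trace. A minimal sketch:

```python
import numpy as np

# Interseismic strike-slip profile in an elastic half-space:
#   v(x) = (V / pi) * arctan(x / D)
# V: deep slip rate (mm/yr), D: locking depth (km), x: distance (km).
def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    x = np.asarray(x_km, dtype=float)
    return (slip_rate_mm_yr / np.pi) * np.arctan(x / locking_depth_km)

# Far from the fault the profile approaches +/- V/2; it is zero on the trace.
profile = interseismic_velocity([-100.0, 0.0, 100.0],
                                slip_rate_mm_yr=10.0, locking_depth_km=10.0)
```

Profiles like this show where the velocity gradient is steepest, which is one way to decide which candidate GPS sites are most sensitive to the two model parameters.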

  5. Development of a fault test experimental facility model using Matlab

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez; Moraes, Davi Almeida, E-mail: martinez@ipen.br, E-mail: dmoraes@dk8.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The facility can be operated to generate both normal and fault data; faults can be introduced with an initially small magnitude that is increased gradually. This work presents a model of the facility developed with the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, a set of functions designed to create interfaces quickly and easily. The system model is based on the mass and energy inventory balance equations, and both physical and operational aspects are taken into consideration. The interface layout resembles a process flowchart, and the user can set the input variables. Besides normal operating conditions, the user can choose a faulty variable from a list; the program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both normal and faulty, with different noise levels added to the input variables. Data generated by the model will be compared with data from the Fault Test Experimental Facility, and the theoretical model results will be used to develop a Monitoring and Fault Detection System. (author)

  6. Modeling and Fault Simulation of Propellant Filling System

    Science.gov (United States)

    Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo

    2012-05-01

    The propellant filling system is one of the key ground facilities at a launch site for rockets that use liquid propellant, so there is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meeting it. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling, this paper studies the working process of the propellant filling system under fault conditions through simulation based on AMESim. First, after analyzing its structure and function, the filling system was decomposed into modules, mathematical models of each module were given, and the whole filling system was modeled in AMESim. Second, a general method of injecting faults into a dynamic system was proposed; as an example, two typical faults, leakage and blockage, were injected into the model of the filling system, yielding two fault models in AMESim. Fault simulations were then performed, and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults and can be used to guide maintenance and improvement of the filling system.

  7. Insights into the damage zones in fault-bend folds from geomechanical models and field data

    Science.gov (United States)

    Ju, Wei; Hou, Guiting; Zhang, Bo

    2014-01-01

    Understanding rock mass deformation and stress states, and the development and distribution of fractures, is critical to a range of endeavors including oil and gas exploration and development, and geothermal reservoir characterization and management. Geomechanical modeling can be used to simulate the formation of faults and folds, and to predict the onset of failure, the type and abundance of deformation features, and the orientations and magnitudes of stresses. This approach enables forward models that incorporate realistic mechanical stratigraphy (e.g., bed thickness, bedding planes and competence contrasts), include faults and bedding-slip surfaces as frictional sliding interfaces, reproduce the geometry of the fold structures, and allow tracking strain and stress through the whole deformation process. In the present study, we combine field observations and finite element models to calibrate the development and distribution of fractures in fault-bend folds, and discuss the mechanical controls (e.g., slip displacement, ramp cutoff angle, and the frictional coefficients of interlayers and faults) that influence fracture development and distribution during fault-bend folding. Based on the geomechanical modeling results, linear relationships were established between the fracture damage zone and, respectively, the slip displacement, the ramp cutoff angle, and the frictional coefficient of interlayers and faults. Together, these mechanical controls influence the development and distribution of fractures in fault-bend folds.

  8. Modelling earthquake ruptures with dynamic off-fault damage

    Science.gov (United States)

    Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban

    2017-04-01

    Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium and the friction law operating on the fault. It is necessary to consider all of these complexities of a fault system to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry, such as fault branches and kinks, and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric, corresponding to the off-fault damage and the complex fault geometry, respectively. We used the FDEM-based software tool HOSSedu (Hybrid Optimization Software Suite - Educational Version), developed by Los Alamos National Laboratory, for the earthquake rupture modelling. We first conducted a cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM) and the boundary integral equation method (BIEM), to evaluate its accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for

  9. Research and application of hierarchical model for multiple fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    An Ruoming; Jiang Xingwei; Song Zhengji

    2005-01-01

    The computational complexity of multiple-fault diagnosis in complex systems remains a long-standing challenge. Based on the well-known Mozetic's approach, a novel hierarchical model-based diagnosis methodology is put forward to improve the efficiency of multi-fault recognition and localization. Structural abstraction and weighted fault propagation graphs are combined to build the diagnosis model; the graphs have weighted arcs carrying fault propagation probabilities and propagation strengths. To solve the problem of coupled faults, two diagnosis strategies are used: one is the Lagrangian relaxation and primal heuristic algorithms; the other is the method of propagation strength. Finally, an applied example shows the applicability of the approach, and experimental results demonstrate the superiority of the presented technique.

  10. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    Science.gov (United States)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from

  11. Stochastic finite-fault modelling of strong earthquakes in Narmada South Fault, Indian Shield

    Indian Academy of Sciences (India)

    P Sengupta

    2012-06-01

    The Narmada South Fault in the Indian peninsular shield region is associated with moderate-to-strong earthquakes. The prevailing hazard, evidenced by earthquake-related fatalities in the region, lends significance to investigations of the seismogenic environment. In the present study, the prevailing seismotectonic conditions, specified by parameters associated with source, path and site conditions, are appraised. Stochastic finite-fault models are formulated for each scenario earthquake. The simulated peak ground acceleration at rock sites reaches 0.24 g for the possible mean maximum earthquake of magnitude 6.8, while a fault rupture of magnitude 7.1 exhibits a maximum peak ground acceleration of 0.36 g. The results suggest that the present hazard specification of the Bureau of Indian Standards is inadequate. The present study is expected to facilitate the development of ground motion models for deterministic and probabilistic seismic hazard analysis of the region.
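Stochastic finite-fault simulation builds on the Brune omega-squared point-source spectrum for each subfault. As a sketch of that single ingredient (the stress drop, shear-wave speed and magnitude below are illustrative, not the study's values):

```python
import numpy as np

# Brune omega-squared acceleration source spectrum:
#   A(f) ∝ (2*pi*f)^2 * M0 / (1 + (f/fc)^2)
# with corner frequency fc = 4.9e6 * beta * (dsigma / M0)^(1/3)
# (beta in km/s, stress drop dsigma in bars, seismic moment M0 in dyne-cm).
def brune_acceleration_spectrum(f, m0_dyne_cm, dsigma_bars=100.0, beta_km_s=3.5):
    f = np.asarray(f, dtype=float)
    fc = 4.9e6 * beta_km_s * (dsigma_bars / m0_dyne_cm) ** (1.0 / 3.0)
    return (2.0 * np.pi * f) ** 2 * m0_dyne_cm / (1.0 + (f / fc) ** 2), fc

# Moment magnitude 6.8 -> M0 via log10(M0) = 1.5*Mw + 16.05 (dyne-cm).
m0 = 10 ** (1.5 * 6.8 + 16.05)
spec, fc = brune_acceleration_spectrum(np.array([0.1, 1.0, 10.0]), m0)
```

A full stochastic finite-fault code then adds path attenuation, site terms and subfault summation on top of this spectrum before generating time histories.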

  12. Autoregressive modelling for rolling element bearing fault diagnosis

    Science.gov (United States)

    Al-Bugharbee, H.; Trendafilova, I.

    2015-07-01

    In this study, time series analysis and pattern recognition analysis are used effectively for the purposes of rolling bearing fault diagnosis. The main part of the suggested methodology is the autoregressive (AR) modelling of the measured vibration signals. This study suggests the use of a linear AR model applied to the signals after they are stationarized. The obtained coefficients of the AR model are further used to form pattern vectors which are in turn subjected to pattern recognition for differentiating among different faults and different fault sizes. This study explores the behavior of the AR coefficients and their changes with the introduction and the growth of different faults. The idea is to gain more understanding about the process of AR modelling for roller element bearing signatures and the relation of the coefficients to the vibratory behavior of the bearings and their condition.
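The core feature-extraction step described above can be sketched as follows (the demo signal is a synthetic AR(2) process standing in for a stationarized vibration record): fit the AR coefficients by least squares on a lagged regression and use the coefficient vector as the pattern vector:

```python
import numpy as np

def ar_coefficients(signal, order):
    """Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p]."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # crude stationarization: remove the mean
    y = x[order:]
    X = np.column_stack([x[order - k : len(x) - k] for k in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Synthetic AR(2) "healthy" signal; the fitted coefficients recover the
# generating dynamics and would serve as the pattern vector for a classifier.
rng = np.random.default_rng(1)
x = np.zeros(4000)
noise = rng.normal(size=4000)
for t in range(2, 4000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + noise[t]
coeffs = ar_coefficients(x, order=2)
```

As a fault grows, the fitted coefficients drift away from the healthy baseline, which is what the pattern-recognition stage exploits.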

  13. Fault Model for Testable Reversible Toffoli Gates

    Directory of Open Access Journals (Sweden)

    Yu Pang

    2012-09-01

    Techniques of reversible circuits can be used in low-power microchips and quantum communications. Most current work focuses on the synthesis of reversible circuits; fault testing, which is certain to be an important step in any robust implementation, is seldom addressed. In this study, we propose a Universal Toffoli Gate (UTG) with four inputs that can realize all basic Boolean functions. All single stuck-at faults are analyzed, and a test set with a minimum number of test vectors is given. Using the proposed UTG, it is easy to implement a complex reversible circuit and to test all stuck-at faults of the circuit. The experiments show that reversible circuits constructed from UTGs have lower quantum cost and fewer test vectors compared to other works.

  14. Salt movements and faulting of the overburden - can numerical modeling predict the fault patterns above salt structures?

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Wesenberg, Rasmus

    among other things the productivity due to the segmentation of the reservoir (Stewart 2006). 3D seismic data above salt structures can map such fault patterns in great detail and studies have shown that a variety of fault patterns exists. Yet, most patterns fall between two end members: concentric...... and radiating fault patterns. Here we use a modified version of the numerical spring-slider model introduced by Malthe-Sørenssen et al.(1998a) for simulating the emergence of small scale faults and fractures above a rising salt structure. The three-dimensional spring-slider model enables us to control....... The modeling shows that purely vertical movement of the salt introduces a mesh of concentric normal faults in the overburden, and that the frequency of radiating faults increases with the amount of lateral movements across the salt-overburden interface. The two end-member fault patterns (concentric vs...

  15. Hidden Markov Model Based Automated Fault Localization for Integration Testing

    OpenAIRE

    Ge, Ning; NAKAJIMA, SHIN; Pantel, Marc

    2013-01-01

    Integration testing is an expensive activity in software testing, especially for fault localization in complex systems. Model-based diagnosis (MBD) provides various benefits in terms of scalability and robustness. In this work, we propose a novel MBD approach for automated fault localization in integration testing. Our method is based on a Hidden Markov Model (HMM), an abstraction of a system's components used to simulate component behaviour. The core of this metho...

  16. Assurance of Fault Management: Risk-Significant Adverse Condition Awareness

    Science.gov (United States)

    Fitz, Rhonda

    2016-01-01

    Fault Management (FM) systems are ranked high in risk-based assessments of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures seen embedded within safety- and mission-critical software systems analyzed by the NASA Independent Verification and Validation (IV&V) Program has been enhanced with an FM Technical Reference (TR) suite. Benefits are aimed beyond the IV&V community to those who seek ways to efficiently and effectively provide software assurance to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM has with regard to overall asset protection of flight software systems is being addressed with the development of an adverse condition (AC) database encompassing flight software vulnerabilities. Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs, and allowing queries based on project, mission

  17. Nonlinear sensor fault diagnosis using mixture of probabilistic PCA models

    Science.gov (United States)

    Sharifi, Reza; Langari, Reza

    2017-02-01

    This paper presents a methodology for sensor fault diagnosis in nonlinear systems using a Mixture of Probabilistic Principal Component Analysis (MPPCA) models. This methodology separates the measurement space into several locally linear regions, each of which is associated with a Probabilistic PCA (PPCA) model. Using the transformation associated with each PPCA model, a parity relation scheme is used to construct a residual vector. Bayesian analysis of the residuals forms the basis for detection and isolation of sensor faults across the entire range of operation of the system. The resulting method is demonstrated in its application to sensor fault diagnosis of a fully instrumented HVAC system. The results show accurate detection of sensor faults under the assumption that a single sensor is faulty.
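A single-region sketch of the residual idea behind the scheme above (one PPCA region approximated by plain PCA, with synthetic data standing in for HVAC measurements): fit a model on healthy data, project new samples onto the residual subspace, and flag a sensor fault when the squared prediction error (SPE) exceeds a threshold calibrated on healthy data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Healthy data: 5 correlated "sensors" driven by 2 latent variables.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
healthy = latent @ mixing + 0.05 * rng.normal(size=(500, 5))

mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
P = Vt[:2].T                                 # principal subspace (2 components)

def spe(sample):
    """Squared prediction error: residual after PCA reconstruction."""
    r = sample - mean
    r = r - P @ (P.T @ r)
    return float(r @ r)

threshold = np.quantile([spe(s) for s in healthy], 0.99)

faulty = healthy[0].copy()
faulty[3] += 5.0                             # bias fault on sensor 3
```

The mixture version partitions the operating range into several such local models and combines their residuals in a Bayesian decision, rather than using one global subspace.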

  18. Fault Tolerant Control Using Gaussian Processes and Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Yang Xiaoke

    2015-03-01

    Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.

  19. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

    As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign agents) can be allocated to a network in order to improve performance and availability. Previous fault-tolerant schemes (denoted PRT schemes) that mask failures of the mobility agents use passive replication techniques. However, they incur high failure-free latency during the registration process as the number of mobility agents in the same network increases, and they force each mobility agent to manage the bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent, once repaired, to recover the bindings of the mobile nodes registered with it even if all the other mobility agents in the same network fail concurrently.
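The checkpointing-plus-message-logging idea can be illustrated with a toy mobility agent (class and field names are invented for illustration): the agent periodically checkpoints its binding table and logs registration messages, and after a crash it restores the last checkpoint and replays the logged messages to recover all bindings.

```python
import copy

class MobilityAgent:
    def __init__(self):
        self.bindings = {}                  # mobile node -> care-of address
        self.checkpoint = {}
        self.log = []

    def register(self, node, care_of):
        self.bindings[node] = care_of
        self.log.append((node, care_of))    # message logging

    def take_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.bindings)
        self.log.clear()                    # log only covers post-checkpoint messages

    def crash_and_recover(self):
        self.bindings = copy.deepcopy(self.checkpoint)
        for node, care_of in self.log:      # replay logged registrations
            self.bindings[node] = care_of

agent = MobilityAgent()
agent.register("mn1", "foreign-a")
agent.take_checkpoint()
agent.register("mn2", "foreign-b")          # logged after the checkpoint
agent.crash_and_recover()
```

Because recovery needs only the agent's own checkpoint and log, it does not depend on the other mobility agents in the network being alive, which is the property the abstract highlights.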

  20. Stator Fault Detection in Induction Motors by Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    Francisco M. Garcia-Guevara

    2016-01-01

    This study introduces a novel methodology for early detection of stator short-circuit faults in induction motors using an autoregressive (AR) model. The proposed algorithm is based on the instantaneous space phasor (ISP) module of the stator currents, which are mapped to the α-β stator-fixed reference frame; the module is then obtained, and the coefficients of an AR model for the module are estimated and evaluated by an order selection criterion, which is used as the fault signature. For comparative purposes, a spectral analysis of the ISP module by the Discrete Fourier Transform (DFT) is performed and the two methodologies are compared. To demonstrate the suitability of the proposed methodology for detecting and quantifying incipient stator short-circuit faults, an induction motor was altered to induce fault scenarios of different degrees during experimentation.
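The signal path described above starts from the amplitude-invariant Clarke transform; as a sketch (balanced 50 Hz currents used as demo input), the three-phase currents are mapped to the α-β frame and the ISP module is their magnitude, which is constant for a healthy balanced machine and develops ripple under asymmetry:

```python
import numpy as np

def isp_module(ia, ib, ic):
    """Instantaneous space-phasor module via the amplitude-invariant Clarke transform."""
    ia, ib, ic = (np.asarray(v, dtype=float) for v in (ia, ib, ic))
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (ib - ic)
    return np.hypot(i_alpha, i_beta)

# Balanced sinusoidal set of amplitude 1: the module is constant (= 1).
t = np.linspace(0.0, 0.1, 1000)
ia = np.cos(2 * np.pi * 50 * t)
ib = np.cos(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = np.cos(2 * np.pi * 50 * t + 2 * np.pi / 3)
module = isp_module(ia, ib, ic)
```

The AR model of the abstract is then fitted to this module signal, and its coefficients serve as the fault signature.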

  2. Phase response curves for models of earthquake fault dynamics

    Science.gov (United States)

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-06-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  3. Modeling Technology in Traveling-Wave Fault Location

    Directory of Open Access Journals (Sweden)

    Tang Jinrui

    2013-06-01

    Theoretical research and equipment development in traveling-wave fault location depend heavily on digital simulation. Moreover, the fault-generated transient traveling wave must pass through the transmission line, instrument transformers and the secondary circuit before it can be used. This paper therefore analyzes and summarizes the modeling technology for transmission lines and instrument transformers on the basis of published research. First, several transmission line models (the multiple-Π or T line model, the Bergeron line model and the frequency-dependent line model) are compared, with analysis of the wave-front characteristics and characteristic frequency of the traveling wave. Then, modeling methods for current transformers, potential transformers, capacitive voltage transformers, special traveling-wave sensors and secondary cables are given. Finally, based on the difficulties and latest research achievements, future trends in modeling technology for traveling-wave fault location are discussed.

  4. Overview of the Southern San Andreas Fault Model

    Science.gov (United States)

    Weldon, Ray J.; Biasi, Glenn P.; Wills, Chris J.; Dawson, Timothy E.

    2008-01-01

    This appendix summarizes the data and methodology used to generate the source model for the southern San Andreas fault. It is organized into three sections: 1) a section-by-section review of the geological data in the format of past Working Groups, 2) an overview of the rupture model, and 3) a manuscript by Biasi and Weldon (in review, Bulletin of the Seismological Society of America) that describes the correlation methodology that was used to help develop the 'geologic insight' model. The goal of the Biasi and Weldon methodology is to quantify the insight that went into developing all A faults; as such it is in concept consistent with all other A faults but applied in a more quantitative way. The most rapidly slipping fault and the only known source of M~8 earthquakes in southern California is the San Andreas fault. As such it plays a special role in the seismic hazard of California, and has received special attention in the current Working Group. The underlying philosophy of the current Working Group is to model the recurrence behavior of large, rapidly slipping faults like the San Andreas from observed data on the size, distribution and timing of past earthquakes, with as few assumptions about underlying recurrence behavior as possible. In addition, we wish to carry the uncertainties in the data and the range of reasonable extrapolations from the data through to the final model. To accomplish this for the southern San Andreas fault we have developed an objective method to combine all of the observations of size, timing, and distribution of past earthquakes into a comprehensive set of earthquake scenarios that each represent a possible history of earthquakes for the past ~1400 years. The scenarios are then ranked according to their overall consistency with the data, and then the frequencies of all of the ruptures permitted by the current Working Group's segmentation model are calculated. We also present 30-yr conditional probabilities by segment and compare to previous

  5. Orion GN&C Fault Management System Verification: Scope And Methodology

    Science.gov (United States)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
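
    The safety-verification side of this approach can be illustrated with a plain Monte Carlo estimate of a rare loss event. The sketch below is illustrative only: the fault and missed-detection probabilities are made-up numbers, and the Orion FDIR team's actual swarm search and rare-event sequential Monte Carlo machinery is far more sophisticated.

```python
import random

def run_trial(rng):
    """One toy mission trial: True means a catastrophic outcome, i.e. a
    fault occurred AND the FM system failed to detect it (hypothetical rates)."""
    fault = rng.random() < 0.01     # assumed per-mission fault probability
    missed = rng.random() < 0.05    # assumed missed-detection probability
    return fault and missed

def estimate_loss_probability(n_trials, seed=0):
    rng = random.Random(seed)
    return sum(run_trial(rng) for _ in range(n_trials)) / n_trials

p_hat = estimate_loss_probability(200_000)   # true value is 0.01 * 0.05 = 5e-4
```

    Plain sampling like this needs very many trials for rare events, which is exactly why importance-sampling variants such as sequential Monte Carlo are attractive for requirements at very low failure probabilities.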

  6. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

    To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify single open-switch fault is proposed in this paper. The introduced fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC...... topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to the conventional modulation strategies such as carrier...
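
    The FCS-MPC loop described above (predict each candidate switching state one step ahead, score it with a cost function, apply the best) can be sketched for a generic converter. The candidate voltage levels, RL-load model, and parameter values below are illustrative stand-ins, not the matrix-converter model of the paper.

```python
R, L, Ts = 2.0, 0.01, 1e-4                    # assumed load resistance, inductance, sample time
LEVELS = [-300.0, -150.0, 0.0, 150.0, 300.0]  # candidate output voltages (stand-ins
                                              # for the converter switching states)

def predict(i_now, v):
    """One-step forward-Euler prediction of L*di/dt = v - R*i."""
    return i_now + (Ts / L) * (v - R * i_now)

def best_state(i_now, i_ref):
    """Finite-set optimization: pick the level minimizing the tracking cost."""
    return min(LEVELS, key=lambda v: abs(i_ref - predict(i_now, v)))

i, i_ref = 0.0, 10.0
for _ in range(200):
    i = predict(i, best_state(i, i_ref))      # apply the chosen state each period
```

    The diagnosis idea of the paper builds on this loop: because the controller knows which switching state it applied, a measured load current that disagrees with the prediction points to the faulty switch.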

  7. Modelling fault surface roughness and fault rocks thickness evolution with slip: calibration based on field and laboratory data

    Science.gov (United States)

    Bistacchi, A.; Tisato, N.; Spagnuolo, E.; Nielsen, S. B.; Di Toro, G.

    2012-12-01

    The architecture and physical properties of fault zones evolve with slip and time. Such evolution, which progressively modifies the type and thickness of fault rocks, the fault surface roughness, etc., controls the rheology of fault zones (seismic vs. aseismic) and earthquakes (main shock magnitude, coseismic slip distribution, stress drop, foreshock and aftershock sequence evolution, etc.). Seismogenic faults exhumed from 2-10 km depth and hosted in different rocks (carbonates, granitoids, etc.) show a (1) self-affine (Hurst exponent H ... definition of "wear" (including every process that destroys geometrical asperities and produces fault rocks). The output roughness and fault rock thickness depend on two parameters: (1) wear rate and (2) wear products (fault rocks) accumulation rate. To test the model we used surface roughness, fault rock thickness, and slip data collected in the field (Gole Larghe Fault Zone, Italian Southern Alps) and in the lab (rotary shear experiments on different rocks). The model was successful in predicting the first-order evolution of roughness and of fault rock thickness with slip in both natural and experimental datasets. Differences in best-fit model parameters (wear rate and wear products accumulation rate) were satisfactorily explained in terms of different deformation processes (e.g. frictional melting vs. cataclasis) and experimental conditions (unconfined vs. confined). Since the model is based on geometrical and volume-conservation considerations (and not on a particular deformation mechanism), we conclude that the surface roughness and fault-rock thickness after some slip is mostly determined by the initial roughness (measured over several orders of magnitude in wavelength), rather than the particular deformation process (cataclasis, melting, etc.) activated during faulting. Conveniently, since the model can be applied (under certain conditions) to surfaces which depart from self-affine roughness, the model parameters can be
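
    The abstract does not give the model equations, so the following is only a hypothetical two-parameter caricature of the idea: asperity roughness decays with slip at an assumed wear rate, and a fraction of the worn-off material accumulates as fault-rock thickness.

```python
def evolve(slip, r0, k_wear, k_acc, n=10000):
    """Integrate an ASSUMED wear law over total slip: roughness r decays
    at rate k_wear per unit slip, and a fraction k_acc of the worn-off
    asperity volume accumulates as fault-rock thickness t. Illustrative
    only; not the calibrated model of the abstract."""
    ds = slip / n
    r, t = r0, 0.0
    for _ in range(n):
        worn = k_wear * r * ds     # material removed this slip increment
        r -= worn
        t += k_acc * worn
    return r, t

r_final, t_final = evolve(slip=10.0, r0=1.0, k_wear=0.3, k_acc=0.8)
# roughness decays roughly as r0*exp(-k_wear*slip); thickness saturates
```

    Under this toy law the two free parameters play exactly the roles named in the abstract: the wear rate sets how fast roughness is destroyed, and the accumulation rate sets how much fault rock that wear leaves behind.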

  8. New Frontiers in Fault Model Visualization and Interaction

    Science.gov (United States)

    van Aalsburg, J.; Yikilmaz, M. B.; Kreylos, O.; Kellogg, L. H.; Rundle, J. B.

    2009-12-01

    Previously we introduced an interactive, 3D fault editor for virtual reality (VR) environments. This application is designed to provide an intuitive environment for visualizing and editing fault model data. It is being developed at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://www.keckcaves.org). By utilizing high-resolution Digital Elevation Models (DEM), georeferenced active tectonic fault maps, and earthquake hypocenters, users can accurately position fault segments, including the dip angle. Once a model has been created or modified it can be written to an XML file; from there the data may be easily converted into the various formats required by analysis software or simulations. To demonstrate this we have written a simple program which generates a KML file from the program output for visualization of the model in Google Earth. Our current research has focused on the addition of new tools which enable the user to associate metadata with individual fault segments or groups of segments (e.g. slip rate). We have also added enhanced mapping abilities, such as creating closed polygons for defining geologic formations. The program is designed to take full advantage of immersive environments such as a CAVE (walk-in VR environment), but works in a wide range of other environments including desktop systems and GeoWalls. This software is open source and can be freely downloaded (Debian packages are also available).
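
    The XML-to-KML conversion mentioned above can be sketched with a minimal writer that emits each fault segment as a KML LineString. The segment names and coordinates below are hypothetical, not output of the actual fault editor.

```python
from xml.sax.saxutils import escape

def fault_to_kml(name, segments):
    """segments: list of (segment_name, [(lon, lat), ...]) tuples."""
    parts = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<kml xmlns="http://www.opengis.net/kml/2.2">',
             f'<Document><name>{escape(name)}</name>']
    for seg_name, coords in segments:
        coord_str = ' '.join(f'{lon},{lat},0' for lon, lat in coords)
        parts.append(f'<Placemark><name>{escape(seg_name)}</name>'
                     f'<LineString><coordinates>{coord_str}'
                     '</coordinates></LineString></Placemark>')
    parts.append('</Document></kml>')
    return '\n'.join(parts)

kml = fault_to_kml('Example fault',
                   [('segment A', [(-120.0, 36.0), (-119.5, 35.8)])])
```

    Per-segment metadata such as slip rate could be attached in the same way via KML ExtendedData elements.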

  9. Inter-organizational fault management: Functional and organizational core aspects of management architectures

    CERN Document Server

    Marcu, Patricia; 10.5121/ijcnc.2011.3107

    2011-01-01

    Outsourcing -- successful, and sometimes painful -- has become one of the hottest topics in IT service management discussions over the past decade. IT services are outsourced to external service provider in order to reduce the effort required for and overhead of delivering these services within the own organization. More recently also IT services providers themselves started to either outsource service parts or to deliver those services in a non-hierarchical cooperation with other providers. Splitting a service into several service parts is a non-trivial task as they have to be implemented, operated, and maintained by different providers. One key aspect of such inter-organizational cooperation is fault management, because it is crucial to locate and solve problems, which reduce the quality of service, quickly and reliably. In this article we present the results of a thorough use case based requirements analysis for an architecture for inter-organizational fault management (ioFMA). Furthermore, a concept of th...

  10. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford; Richard Rusaw

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on the development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features, based on technical examinations, that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
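
    The basic structure of an asset fault signature described above (asset type, fault type, and a set of symptom features) lends itself to a simple matching sketch. The signatures and symptoms below are hypothetical illustrations, not entries from the AFS Database.

```python
# Each signature pairs an asset type and fault type with symptom features;
# diagnosis ranks candidate signatures by the fraction of their features
# observed. Hypothetical example data.
SIGNATURES = [
    {"asset": "emergency diesel generator", "fault": "injector fouling",
     "features": {"high exhaust temperature", "low cylinder pressure"}},
    {"asset": "emergency diesel generator", "fault": "bearing wear",
     "features": {"high vibration", "high bearing temperature"}},
]

def diagnose(asset, observed):
    """Return (match fraction, fault type) of the best-matching signature."""
    candidates = [s for s in SIGNATURES if s["asset"] == asset]
    scored = [(len(s["features"] & observed) / len(s["features"]), s["fault"])
              for s in candidates]
    return max(scored)

score, fault = diagnose("emergency diesel generator",
                        {"high vibration", "high bearing temperature"})
```
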

  11. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory.

    Science.gov (United States)

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-18

    Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis because it is efficient at combining evidence from different sensors. However, in situations where the evidence is highly conflicting, it may produce counterintuitive results. To address this issue, a new method is proposed in this paper. Both the static sensor reliability and the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
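
    The evidence-combination core of this approach is Dempster's rule, which also exposes the conflict mass that motivates the paper's reliability weighting. A minimal sketch follows (classic rule only; the paper's weighted-averaging modification is not reproduced, and the fault hypotheses are hypothetical).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are
    frozensets; returns the combined masses and the conflict mass K."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q                 # mass on empty intersections
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}, conflict

F1, F2 = frozenset({'F1'}), frozenset({'F2'})
m1 = {F1: 0.9, F2: 0.1}      # sensor 1: strongly supports fault F1
m2 = {F1: 0.6, F2: 0.4}      # sensor 2: moderately supports F1
fused, conflict = dempster_combine(m1, m2)
```

    A large conflict mass (here 0.42) is exactly the regime where unmodified combination can become counterintuitive, motivating reliability-weighted preprocessing of the evidence.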

  12. Constitutive models of faults in the viscoelastic lithosphere

    Science.gov (United States)

    Moresi, Louis; Muhlhaus, Hans; Mansour, John; Miller, Meghan

    2013-04-01

    Moresi and Muhlhaus (2006) presented an algorithm for describing shear band formation and evolution as a coalescence of small, planar, friction-failure surfaces. This algorithm assumed that sliding initially occurs at the angle to the maximum compressive stress dictated by Anderson faulting theory, and demonstrated that shear bands form at the same angle as the microscopic angle of initial failure. Here we utilize the same microscopic model to generate frictional slip on prescribed surfaces which represent faults of arbitrary geometry in the viscoelastic lithosphere. The faults are actually represented by anisotropic weak zones of finite width, but they are instantiated from a 2D manifold represented by a cloud of points with associated normals and mechanical/history properties. Within the hybrid particle / finite-element code, Underworld, this approach gives a very flexible mechanism for describing complex 3D geometrical patterns of faults with no need to mirror this complexity in the thermal/mechanical solver. We explore a number of examples to demonstrate the strengths and weaknesses of this particular approach, including a 3D model of the deformation of Southern California which accounts for the major fault systems. L. Moresi and H.-B. Mühlhaus, Anisotropic viscous models of large-deformation Mohr-Coulomb failure. Philosophical Magazine, 86:3287-3305, 2006.

  13. An Approach to Computer Modeling of Geological Faults in 3D and an Application

    Institute of Scientific and Technical Information of China (English)

    ZHU Liang-feng; HE Zheng; PAN Xin; WU Xin-cai

    2006-01-01

    3D geological modeling, one of the most important applications of 3D GIS in the geosciences, forms the basis and is a prerequisite for the visualized representation and analysis of 3D geological data. Computer modeling of geological faults in 3D is currently a topical research area. Structural modeling techniques for complex geological entities containing reverse faults are discussed and a series of approaches are proposed. The geological concepts involved in computer modeling and visualization of geological faults in 3D are explained, the types of geological fault data obtained from geological exploration are analyzed, and a normative database format for geological faults is designed. Two kinds of modeling approaches for faults are compared: a technique based on stratum recovery and a technique based on interpolation in subareas. A novel approach, called the Unified Modeling Technique for stratum and fault, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults, and faults terminated within geological models. A case study of a fault model of the bedrock in the Beijing Olympic Green District is presented to show the practical result of this method. The principle and process of computer modeling of geological faults in 3D are discussed and a series of applied technical proposals established. This work strengthens our comprehension of geological phenomena and the modeling approach, and establishes basic techniques of 3D geological modeling for practical applications in the geosciences.

  14. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

    Among the vast variety of fuzzy model-based observers reported in the literature, which is the proper one to use for fault detection in a class of chemical reactors? In this study, four fuzzy model-based observers for sensor fault detection in a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions.
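
    The principle shared by all four designs is that an observer tracks the plant and a persistent output residual flags a sensor fault. A minimal scalar Luenberger-style sketch follows; the gains, dynamics, and fault size are illustrative, not the CSTR design of the study.

```python
A, C, Lg = 0.9, 1.0, 0.5              # plant pole, output map, observer gain

def run(n_steps, fault_at=None, bias=0.5):
    """Simulate plant + observer; return |residual| per step. A sensor
    fault is modeled as a constant measurement bias from step fault_at."""
    x, x_hat, residuals = 1.0, 0.0, []
    for k in range(n_steps):
        y = C * x + (bias if fault_at is not None and k >= fault_at else 0.0)
        r = y - C * x_hat              # innovation: the fault indicator
        residuals.append(abs(r))
        x_hat = A * x_hat + Lg * r     # Luenberger correction
        x = A * x                      # true plant (unforced, decaying state)
    return residuals

healthy = run(60)
faulty = run(60, fault_at=30)
```

    In the healthy run the residual converges to zero with the observer error; after the fault it jumps and settles at a nonzero level, which a simple threshold can detect.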

  15. Geomechanical modeling of stress and strain evolution during contractional fault-related folding

    Science.gov (United States)

    Smart, Kevin J.; Ferrill, David A.; Morris, Alan P.; McGinnis, Ronald N.

    2012-11-01

    Understanding stress states and rock mass deformation deep underground is critical to a range of endeavors including oil and gas exploration and production, geothermal reservoir characterization and management, and subsurface disposal of CO2. Geomechanical modeling can predict the onset of failure and the type and abundance of deformation features along with the orientations and magnitudes of stresses. This approach enables development of forward models that incorporate realistic mechanical stratigraphy (e.g., including competence contrasts, bed thicknesses, and bedding planes), include faults and bedding-slip surfaces as frictional sliding interfaces, reproduce the overall geometry of the fold structures of interest, and allow tracking of stress and strain through the deformation history. Use of inelastic constitutive relationships (e.g., elastic-plastic behavior) allows permanent strains to develop in response to the applied loads. This ability to capture permanent deformation is superior to linear elastic models, which are often used for numerical convenience, but are incapable of modeling permanent deformation or predicting permanent deformation processes such as faulting, fracturing, and pore collapse. Finite element modeling results compared with field examples of a natural contractional fault-related fold show that well-designed geomechanical modeling can match overall fold geometries and be applied to stress, fracture, and subseismic fault prediction in geologic structures. Geomechanical modeling of this type allows stress and strain histories to be obtained throughout the model domain.

  16. Numerical modelling of fault reactivation in carbonate rocks under fluid depletion conditions - 2D generic models with a small isolated fault

    Science.gov (United States)

    Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel

    2016-12-01

    This generic 2D elastic-plastic modelling investigated the reactivation of a small isolated and critically-stressed fault in carbonate rocks at a reservoir depth level for fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stresses with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.
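
    The reactivation mechanism described above (depletion reduces the horizontal total stress, expanding the Mohr circle until Coulomb failure) can be sketched with a simple effective-stress calculation. The stress values, friction coefficient, and stress-path coefficient below are illustrative, not the reservoir data of the study.

```python
import math

def coulomb_stress(S_v, S_h, P, mu=0.6, dip=60.0):
    """tau - mu*sigma_n' on a plane whose normal lies `dip` degrees from
    the vertical sigma_1 (normal-faulting regime, cohesion neglected)."""
    s1, s3 = S_v - P, S_h - P                  # effective principal stresses
    mean, half = (s1 + s3) / 2.0, (s1 - s3) / 2.0
    two_theta = math.radians(2.0 * dip)
    tau = half * math.sin(two_theta)           # shear stress on the plane
    sigma_n = mean + half * math.cos(two_theta)
    return tau - mu * sigma_n

S_v, S_h0, P0 = 60.0, 38.0, 25.0     # total stresses and pore pressure (MPa)
gamma, dP = 0.8, -25.0               # assumed stress-path coefficient, depletion
before = coulomb_stress(S_v, S_h0, P0)
after = coulomb_stress(S_v, S_h0 + gamma * dP, P0 + dP)
```

    Because the horizontal total stress drops with depletion while the vertical stays fixed, the differential stress grows and the Coulomb stress on a critically oriented plane crosses zero, the poroelastic prediction the models reproduce.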

  17. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Fault diagnosis in rotating machinery is significant for avoiding serious accidents; thus, an accurate and timely diagnosis method is necessary. With breakthroughs in deep learning algorithms, intelligent methods such as the deep belief network (DBN) and the deep convolutional neural network (DCNN) have been developed and perform machinery fault diagnosis satisfactorily. However, only a few of these methods properly deal with the noise present in practical situations, and the denoising methods they rely on require extensive professional experience. Accordingly, rethinking fault diagnosis methods based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. An integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE), trained in a greedy layer-wise fashion, is applied both to denoise random noise in the raw signals and to represent fault features in fault pattern diagnosis for rolling bearing and gearbox faults. Finally, the experimental validation demonstrates that the proposed method has better diagnosis accuracy than DBN, particularly in the presence of noise, with an advantage of approximately 7% in fault diagnosis accuracy.
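
    The denoising-autoencoder building block of the SDAE can be sketched in a few lines of NumPy: a network is trained to reconstruct clean signals from noise-corrupted inputs. The toy sinusoid data and single sigmoid layer below are far smaller than the stacked model of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H_DIM, N = 8, 16, 200
phase = rng.uniform(0.0, 2 * np.pi, size=(N, 1))
X = 0.5 + 0.4 * np.sin(np.linspace(0.0, 2 * np.pi, D) + phase)  # clean signals
Xn = X + rng.normal(0.0, 0.1, size=X.shape)                     # corrupted input

W1 = rng.normal(0.0, 0.1, size=(D, H_DIM)); b1 = np.zeros(H_DIM)
W2 = rng.normal(0.0, 0.1, size=(H_DIM, D)); b2 = np.zeros(D)

lr, losses = 0.1, []
for _ in range(400):
    h = 1.0 / (1.0 + np.exp(-(Xn @ W1 + b1)))   # sigmoid encoder
    y = h @ W2 + b2                             # linear decoder
    err = y - X                                 # target is the CLEAN signal
    losses.append(float(np.mean(err ** 2)))
    g_y = 2.0 * err / N                         # gradient w.r.t. the output
    g_h = (g_y @ W2.T) * h * (1.0 - h)          # backprop through the sigmoid
    W2 -= lr * (h.T @ g_y); b2 -= lr * g_y.sum(axis=0)
    W1 -= lr * (Xn.T @ g_h); b1 -= lr * g_h.sum(axis=0)
```

    Stacking several such layers, each trained on the previous layer's code, gives the greedy layer-wise scheme mentioned in the abstract.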

  18. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed from those requirements takes as input abnormal sensor readings identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by it. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to
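
    The propagation reasoning described for Draphys can be caricatured as reachability over a component dependency graph: starting from the faulted source, every component fed by it is marked as affected. The hydraulic topology below is hypothetical, not Draphys's actual subsystem models.

```python
FEEDS = {                              # component -> components it supplies
    "engine pump": ["hydraulic line A"],
    "hydraulic line A": ["rudder actuator", "brake accumulator"],
    "rudder actuator": [],
    "brake accumulator": ["wheel brakes"],
    "wheel brakes": [],
}

def affected_components(source):
    """All components reachable from the faulted source (depth-first)."""
    seen, stack = set(), [source]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(FEEDS.get(c, []))
    return seen

impacted = affected_components("engine pump")
```

    Draphys goes well beyond this sketch by combining functional and physical models and by re-running the reasoning as new abnormal readings arrive over time.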

  19. Model-based fault detection and diagnosis in ALMA subsystems

    Science.gov (United States)

    Ortiz, José; Carrasco, Rodrigo A.

    2016-07-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) observatory, with its 66 individual telescopes and other central equipment, generates a massive set of monitoring data every day, collecting information on the performance of a variety of critical and complex electrical, electronic and mechanical components. This data is crucial for most troubleshooting efforts performed by engineering teams. More than 5 years of accumulated data and expertise allow for a more systematic approach to fault detection and diagnosis. This paper presents model-based fault detection and diagnosis techniques to support corrective and predictive maintenance in a 24/7 minimum-downtime observatory.

  20. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement a more effective fault diagnosis and condi

  1. Stresses, deformation, and seismic events on scaled experimental faults with heterogeneous fault segments and comparison to numerical modeling

    Science.gov (United States)

    Buijze, Loes; Guo, Yanhuang; Niemeijer, André R.; Ma, Shengli; Spiers, Christopher J.

    2017-04-01

    Faults in the upper crust cross-cut many different lithologies, which causes the composition of the fault rocks to vary. Each fault-rock segment may have specific mechanical properties: there may be stronger and weaker segments, and segments prone to unstable slip or to creeping. This leads to heterogeneous deformation and stresses along such faults, and to a heterogeneous distribution of seismic events. We address the influence of fault variability on stress, deformation, and seismicity using a combination of scaled laboratory faults and numerical modeling. A vertical fault was created along the diagonal of a 30 x 20 x 5 cm block of PMMA, along which a 2 mm thick gouge layer was deposited. Gouge materials with different characteristics were used to create various segments along the fault: quartz (average strength, stable sliding), kaolinite (weak, stable sliding), and gypsum (average strength, unstable sliding). The sample assembly was placed in a horizontal biaxial deformation apparatus, and shear displacement was enforced along the vertical fault. Multiple observations were made: 1) acoustic emissions were continuously recorded at 3 MHz to observe the occurrence of stick-slips (micro-seismicity); 2) photo-elastic effects (indicative of the differential stress) were recorded in the transparent set of PMMA wall-rocks using a high-speed camera; and 3) particle tracking was conducted on a speckle-painted set of PMMA wall-rocks to study the deformation in the wall-rocks flanking the fault. All three observation methods show how the heterogeneous fault gouge exerts a strong control on the fault behavior. For example, for a strong, unstable segment of gypsum flanked by two weaker kaolinite segments, strong stress concentrations develop near the edges of the strong segment, while at the same time most of the acoustic emissions are located at the edge of this strong segment.
The measurements of differential stress, strain and acoustic emissions provide a strong means

  2. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    Science.gov (United States)

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPPs have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate changes continually; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might remain, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance.
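
    As background, a classic NHPP mean value function is the Goel-Okumoto form m(t) = a(1 - e^(-bt)), and imperfect removal can be caricatured by discounting detections by an efficiency factor p. This sketch shows only the simplest NHPP ingredients the proposed model builds on, not the testing-coverage model itself; the parameter values are arbitrary.

```python
import math

def mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a*(1 - exp(-b*t)):
    expected cumulative fault detections by testing time t."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t, a, b, p):
    """Expected faults left in the code when only a fraction p of the
    detected faults is actually removed (imperfect removal efficiency)."""
    return a - p * mean_failures(t, a, b)

a, b = 120.0, 0.02                   # arbitrary illustrative parameters
detected = mean_failures(100.0, a, b)
left_perfect = remaining_faults(100.0, a, b, p=1.0)
left_imperfect = remaining_faults(100.0, a, b, p=0.9)
```

    Even this toy version shows the paper's point: with p < 1 the code retains more faults at any testing time than a perfect-debugging model would predict.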

  3. Modeling and characterization of partially inserted electrical connector faults

    Science.gov (United States)

    Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.

    2016-03-01

    Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.
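
    The resonance-shift idea can be illustrated with the textbook transmission-line relation f_n = n*c/(2L): a change in effective electrical length L shifts every resonance, so L can be recovered from measured resonances by least squares. The data below are synthetic and the single-length model is a simplification; the paper's two-level nonlinear model is far richer.

```python
C0 = 3.0e8                            # assumed propagation speed (m/s)

def fit_length(freqs):
    """Least-squares fit of L in f_n = n*C0/(2L) for modes n = 1..N:
    minimizing sum (f_n - n*k)^2 over k = C0/(2L) gives
    k = sum(n*f_n) / sum(n^2)."""
    pairs = list(enumerate(freqs, start=1))
    k = sum(n * f for n, f in pairs) / sum(n * n for n, _ in pairs)
    return C0 / (2.0 * k)

L_full = 0.050                                     # hypothetical fully inserted length
f_full = [n * C0 / (2.0 * L_full) for n in (1, 2, 3)]
L_partial_true = 0.052                             # slightly longer effective stub
f_partial = [n * C0 / (2.0 * L_partial_true) for n in (1, 2, 3)]

L_fit_full = fit_length(f_full)
L_fit_partial = fit_length(f_partial)
```

    Fitting length parameters against shifted resonances is, in miniature, the mechanism by which the paper's model characterizes partial insertion.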

  4. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...
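
    A heavily simplified stick-slip caricature of the spring-block model shows where the periodic earthquake episodes come from: the driving plate loads the spring at a constant rate, and when the spring force exceeds static friction the block slips until the force relaxes to the dynamic level. Parameters are illustrative; the paper's rate-and-state dynamics and singular-perturbation analysis are not reproduced.

```python
K, V_PLATE, DT = 10.0, 0.1, 0.01     # spring stiffness, plate velocity, time step
F_STATIC, F_DYNAMIC = 5.0, 3.0       # static and dynamic friction levels

def simulate(n_steps):
    """Quasi-static stick-slip: spring force builds while the block sticks;
    when it reaches F_STATIC the block slips instantly until the force
    relaxes to F_DYNAMIC, producing one 'earthquake' event."""
    x_plate = x_block = 0.0
    events = []                       # (time, slip) of each event
    for k in range(n_steps):
        x_plate += V_PLATE * DT
        force = K * (x_plate - x_block)
        if force >= F_STATIC:
            slip = (force - F_DYNAMIC) / K
            x_block += slip
            events.append((k * DT, slip))
    return events

events = simulate(20000)
```

    The sawtooth loading produces strictly periodic events in this caricature; the full model's friction law is what makes the periodicity a nontrivial result.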

  5. A Real-Time Fault Management Software System for Distributed Environments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault Management (FM) is critical to mission operations and particularly so for complex instruments – such as those used for aircraft and spacecraft. FM...

  6. A FINITE ELEMENT MODEL FOR SEISMICITY INDUCED BY FAULT INTERACTION

    Institute of Scientific and Technical Information of China (English)

    Chen Huaran; Li Yiqun; He Qiaoyun; Zhang Jieqing; Ma Hongsheng; Li Li

    2003-01-01

    On the basis of interaction between faults, a finite element model for Southwest China is constructed, and the stress adjustment due to strong earthquake occurrence in this region was studied. The preliminary results show that many strong earthquakes occurred in the areas of increased stress in the model. Though the results are preliminary, the quasi-3D finite element model is meaningful for strong earthquake prediction.

  8. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    Science.gov (United States)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens, Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the

  9. Modeling of Stress Triggered Faulting at Agenor Linea, Europa

    Science.gov (United States)

    Nahm, A. L.; Cameron, M. E.; Smith-Konter, B. R.; Pappalardo, R. T.

    2012-04-01

    To better understand the role of tidal stress sources and implications for faulting on Europa, we investigate the relationship between shear and normal stresses at Agenor Linea (AL), a ~1500 km long, E-W trending, 20-30 km wide zone of geologically young deformation located in the southern hemisphere of Europa which forks into two branches at its eastern end. The orientation of AL is consistent with tensile stresses resulting from long-term decoupled ice shell rotation (non-synchronous rotation [NSR]) as well as dextral shear stresses due to diurnal flexure of the ice shell. Its brightness and lack of cross-cutting features make AL a candidate for recent or current activity. Several observations indicate that right-lateral strike-slip faulting has occurred, such as left-stepping en echelon fractures in the northern portion of AL and the presence of an imbricate fan or horsetail complex at AL's western end. To calculate tidal stresses on Europa, we utilize SatStress, a numerical code that calculates tidal stresses at any point on the surface of a satellite for both diurnal and NSR stresses. We adopt SatStress model parameters appropriate to a spherically symmetric ice shell of thickness 20 km, underlain by a global subsurface ocean: shear modulus G = 3.5 GPa, Poisson ratio ν = 0.33, gravity g = 1.32 m/s², ice density ρ = 920 kg/m³, satellite radius R = 1.56 × 10³ km, satellite mass M = 4.8 × 10²² kg, semimajor axis a = 6.71 × 10⁵ km, and eccentricity e = 0.0094. In this study we assume a coefficient of friction μ = 0.6 and consider a range of vertical fault depths z to 6 km. To assess shear failure at AL, we adopt a model based on the Coulomb failure criterion. This model balances stresses that promote and resist the motion of a fault, simultaneously accounting for both normal and shear tidal and NSR stresses, the coefficient of friction of ice, and additional stress at depth due to the overburden pressure. In this model, tidal shear stresses drive strike-slip motions
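    The Coulomb criterion described in this abstract is straightforward to sketch numerically. Below is a minimal pure-Python illustration (not the authors' SatStress-based code): it balances shear stress against friction acting on the fault-normal stress plus the ice overburden, using the μ, ρ, and g values quoted in the abstract; the example stress magnitudes themselves are assumed values for illustration only.

```python
MU = 0.6          # coefficient of friction of ice (from the abstract)
RHO_ICE = 920.0   # ice density, kg/m^3 (from the abstract)
G = 1.32          # Europa surface gravity, m/s^2 (from the abstract)

def coulomb_failure(shear_stress_pa, normal_stress_pa, depth_m):
    """Coulomb failure function: positive values promote shear failure.

    Fault-normal compression is taken positive; the overburden pressure
    rho*g*z adds to the compression clamping the fault at depth.
    """
    overburden = RHO_ICE * G * depth_m
    return abs(shear_stress_pa) - MU * (normal_stress_pa + overburden)

# Assumed example stresses: 40 kPa tidal shear, 10 kPa normal compression,
# evaluated at 2 km depth (within the 0-6 km range considered in the study).
print(coulomb_failure(40e3, 10e3, 2000.0))  # negative: fault stays locked
```

A positive result indicates that shear failure is promoted; a negative result, as in the example above, means friction plus overburden keep the fault locked.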

  10. Experimental Modeling of Dynamic Shallow Dip-Slip Faulting

    Science.gov (United States)

    Uenishi, K.

    2010-12-01

    In our earlier study (AGU 2005, SSJ 2005, JPGU 2006), using a finite difference technique, we have conducted some numerical simulations related to the source dynamics of shallow dip-slip earthquakes, and suggested the possibility of the existence of corner waves, i.e., shear waves that carry concentrated kinematic energy and generate extremely strong particle motions on the hanging wall of a nonvertical fault. In the numerical models, a dip-slip fault is located in a two-dimensional, monolithic linear elastic half space, and the fault plane dips either vertically or 45 degrees. We have investigated the seismic wave field radiated by crack-like rupture of this straight fault. If the fault rupture, initiated at depth, arrests just below or reaches the free surface, four Rayleigh-type pulses are generated: two propagating along the free surface into the opposite directions to the far field, the other two moving back along the ruptured fault surface (interface) downwards into depth. These downward interface pulses may largely control the stopping phase of the dynamic rupture, and in the case the fault plane is inclined, on the hanging wall the interface pulse and the outward-moving Rayleigh surface pulse interact with each other and the corner wave is induced. On the footwall, the ground motion is dominated simply by the weaker Rayleigh pulse propagating along the free surface because of much smaller interaction between this Rayleigh and the interface pulse. The generation of the downward interface pulses and corner wave may play a crucial role in understanding the effects of the geometrical asymmetry on the strong motion induced by shallow dip-slip faulting, but it has not been well recognized so far, partly because those waves are not expected for a fault that is located and ruptures only at depth. However, the seismological recordings of the 1999 Chi-Chi, Taiwan, the 2004 Niigata-ken Chuetsu, Japan, earthquakes as well as a more recent one in Iwate-Miyagi Inland

  11. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    Science.gov (United States)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  12. Model-Based Methods for Fault Diagnosis: Some Guide-Lines

    DEFF Research Database (Denmark)

    Patton, R.J.; Chen, J.; Nielsen, S.B.

    1995-01-01

    This paper provides a review of model-based fault diagnosis techniques. Starting from basic principles, the properties...

  13. Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems

    Science.gov (United States)

    Fitz, Rhonda

    2017-01-01

    As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessments of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing

  15. Active Fault Diagnosis and Assessment for Aircraft Health Management Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To address the NASA LaRC need for innovative methods and tools for the diagnosis of aircraft faults and failures, Physical Optics Corporation (POC) proposes to...

  16. Fault diagnosis model based on multi-manifold learning and PSO-SVM for machinery

    Institute of Scientific and Technical Information of China (English)

    Wang Hongjun; Xu Xiaoli; Rosen B G

    2014-01-01

    Fault diagnosis technology plays an important role in industry, because an unexpected fault in a machine can cause heavy losses for people and companies. A fault diagnosis model based on multi-manifold learning and particle swarm optimization support vector machine (PSO-SVM) is studied. The fault diagnosis model is applied to a rolling bearing experiment with three kinds of faults. The results verify that the model based on multi-manifold learning and PSO-SVM acquires fault-sensitive features effectively and with good accuracy.
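    In models of this kind, PSO typically tunes SVM hyperparameters by minimizing a validation-error objective. The pure-Python sketch below shows only the PSO mechanics, on a toy quadratic objective standing in for cross-validation error over two hyperparameters (e.g. log C and log γ); the swarm size, inertia weight w, and acceleration constants c1, c2 are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

# Toy objective standing in for SVM cross-validation error over two
# hyperparameters; its minimum is at (1.0, -2.0).
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pso(obj, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(objective)
print(best, best_val)  # converges toward (1.0, -2.0)
```

In a full PSO-SVM pipeline, `objective` would train an SVM with the candidate hyperparameters and return its cross-validation error.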

  17. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.

  18. Multiple Local Reconstruction Model-based Fault Diagnosis for Continuous Processes

    Institute of Scientific and Technical Information of China (English)

    ZHAO Chun-Hui; LI Wen-Qing; SUN You-Xian; GAO Fu-Rong

    2013-01-01

    In the present work, the multiplicity of fault characteristics is proposed and analyzed to improve fault diagnosis performance. It is based on the recognition that the underlying fault characteristics in general do not stay constant but change along the time direction. That is, the fault process reveals different variable correlations across different time periods. To analyze the multiplicity of fault characteristics, a fault division algorithm is developed to divide the fault process into multiple local time periods within which the fault characteristics are deemed similar. Then a representative fault decomposition model is built in each local time period to reveal the relationships between the fault and normal operation status. In this way, the different fault characteristics can be modeled respectively. The proposed method gives an interesting insight into fault evolvement behavior, and a more accurate from-fault-to-normal reconstruction result can be expected for fault diagnosis. The feasibility and performance of the proposed fault diagnosis method are illustrated with the Tennessee Eastman process.

  19. Transposing an active fault database into a seismic hazard fault model for nuclear facilities - Part 1: Building a database of potentially active faults (BDFA) for metropolitan France

    Science.gov (United States)

    Jomard, Hervé; Cushing, Edward Marc; Palumbo, Luigi; Baize, Stéphane; David, Claire; Chartier, Thomas

    2017-09-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. BDFA reports to date 136 faults and represents a first step toward the implementation of seismic source models that would be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15 % of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  20. Modeling and simulation of longwall scraper conveyor considering operational faults

    Science.gov (United States)

    Cenacewicz, Krzysztof; Katunin, Andrzej

    2016-06-01

    The paper provides a description of an analytical model of a longwall scraper conveyor, including its electrical, mechanical, measurement and control actuating systems, together with its implementation as a computer simulator in the Matlab®/Simulink® environment. Using this simulator, eight scenarios typical of the operational conditions of an underground scraper conveyor can be generated. Moreover, the simulator allows various operational faults to be modeled and takes into consideration measurement noise generated by transducers. An analysis of various combinations of operation and fault scenarios is presented with description. The simulator may find application in benchmarking diagnostic systems, testing operational-control algorithms, or supporting the modeling of real processes occurring in similar systems.

  1. Adaptive partitioning PCA model for improving fault detection and isolation

    Institute of Scientific and Technical Information of China (English)

    Kangling Liu; Xin Jin; Zhengshun Fei; Jun Liang

    2015-01-01

    In chemical processes, a large number of measured and manipulated variables are highly correlated. Principal component analysis (PCA) is widely applied as a dimension reduction technique for capturing the strong correlation underlying the process measurements. However, it is difficult for PCA-based fault detection results to be interpreted physically and to provide support for isolation. Some approaches incorporating process knowledge have been developed, but such information is often scarce and deficient in practice. Therefore, this work proposes an adaptive partitioning PCA algorithm based entirely on operation data. The process feature space is partitioned into several sub-feature spaces. The constructed sub-block models not only reflect the local behavior of process changes, namely grasping the intrinsic local information underlying the process changes, but also improve fault detection and isolation through the combination of local fault detection results and the reduction of the smearing effect. The method is demonstrated on the TE process, and the results show that the new method is much better at fault detection and isolation than the conventional PCA method.
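    The core of PCA-based fault detection can be sketched briefly. The following pure-Python example is a simplified stand-in for the sub-block models described above (assuming a single retained principal component and synthetic data, not the paper's algorithm): it fits a PCA model of normal operation and flags a fault via the squared prediction error (SPE), which grows when a sample violates the learned variable correlations.

```python
import math
import random

random.seed(1)

# Synthetic "normal operation": 3 process variables driven by one factor.
def sample():
    t = random.gauss(0.0, 1.0)
    return [t + random.gauss(0, 0.1),
            2.0 * t + random.gauss(0, 0.1),
            -t + random.gauss(0, 0.1)]

X = [sample() for _ in range(200)]
d = 3
mean = [sum(col) / len(X) for col in zip(*X)]
Xc = [[x - m for x, m in zip(row, mean)] for row in X]

# Sample covariance matrix of the centered training data
n = len(Xc)
cov = [[sum(Xc[k][i] * Xc[k][j] for k in range(n)) / (n - 1)
        for j in range(d)] for i in range(d)]

# Leading principal direction via power iteration
v = [1.0, 0.0, 0.0]
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
    nrm = math.sqrt(sum(x * x for x in w))
    v = [x / nrm for x in w]

def spe(x):
    """Squared prediction error: squared distance from the 1-PC subspace."""
    xc = [xi - mi for xi, mi in zip(x, mean)]
    score = sum(xi * vi for xi, vi in zip(xc, v))
    resid = [xi - score * vi for xi, vi in zip(xc, v)]
    return sum(r * r for r in resid)

spe_normal = spe(sample())
spe_fault = spe([5.0, -5.0, 5.0])   # violates the learned correlation
print(spe_normal, spe_fault)        # fault SPE is orders of magnitude larger
```

The adaptive partitioning idea in the record amounts to fitting such a model per sub-feature space and combining the local detection results.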

  2. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin

    2016-04-06

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  3. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. The paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
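    The Markov chain approach mentioned above propagates a pavement-condition distribution through an annual transition matrix. A minimal sketch follows, with a hypothetical three-state matrix; the transition probabilities below are illustrative assumptions, not values from the paper.

```python
# Condition states: 0 = good, 1 = fair, 2 = poor (faulting severity).
# Hypothetical annual transition matrix, as might be estimated from
# repeated visual condition surveys.
P = [[0.85, 0.13, 0.02],
     [0.00, 0.80, 0.20],
     [0.00, 0.00, 1.00]]   # "poor" is absorbing until rehabilitation

def step(dist, P):
    """Propagate a state distribution one year forward."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]     # new pavement starts in "good" condition
for year in range(10):
    dist = step(dist, P)
print([round(p, 3) for p in dist])  # after 10 years: ~[0.197, 0.233, 0.57]
```

This illustrates the record's point that the MC model works from visual condition states rather than quantitative physical parameters.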

  4. Seismicity and fluid injections: numerical modelling of fault activation

    Science.gov (United States)

    Murphy, S.; O'Brien, G.; Bean, C.; McCloskey, J.; Nalbant, S.

    2012-04-01

    Injection of fluid into the subsurface is a common technique, used to optimise returns from hydrocarbon plays (e.g. enhanced oil recovery, hydrofracturing of shales) and geothermal sites, as well as for sequestering carbon dioxide. While it is well understood that stress perturbations caused by fluid injections can induce or trigger earthquakes, the modelling of this hazard is still in its infancy. By combining fluid flow and seismicity simulations we have created a numerical model for investigating induced seismicity over long time periods, so that we can examine the role of operational and geological factors in seismogenesis around a sub-surface fluid injection. In our model, fluid injection is simulated as pore fluid movement through a permeable layer from a high-pressure point source, using a lattice Boltzmann scheme. We can accommodate complicated geological structures in our simulations. Seismicity is modelled using a quasi-dynamic relationship between stress and slip coupled with a rate-and-state friction law. By spatially varying the frictional parameters, the model can reproduce both seismic and aseismic slip. Static stress perturbations (due either to slipping fault cells or to fluid injection) are calculated using analytical solutions for slip dislocations/pressure changes in an elastic half space. An adaptive time step is used to increase computational efficiency and thus allow us to model hundreds of years of seismicity. As a case study, we investigate the role that the fault's location relative to the injection plays in seismic activity. To do this we created three synthetic catalogues, with only the location of the fault relative to the point of injection varying between the models. In our control model there is no injection, so it contains only tectonically triggered events. In the other two catalogues, the injection site is placed below and adjacent to the fault, respectively. The injection itself is into a permeable thin planar layer
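    The stress-triggering mechanism underlying such models can be illustrated with the Coulomb failure stress change under effective stress. The sketch below is a generic illustration (not the authors' code): a pore-pressure rise reduces the effective normal stress clamping the fault, moving it toward failure even with no change in shear stress. The friction coefficient and stress values are assumed for illustration.

```python
MU = 0.6  # illustrative friction coefficient

def delta_cfs(d_shear, d_normal_compression, d_pore_pressure):
    """Change in Coulomb failure stress (Pa); positive promotes failure.

    Compression is taken positive. Increased pore pressure counteracts
    the normal compression clamping the fault (effective-stress law).
    """
    return d_shear - MU * (d_normal_compression - d_pore_pressure)

# A 1 MPa pore-pressure rise alone moves the fault 0.6 MPa toward failure:
print(delta_cfs(0.0, 0.0, 1.0e6))  # 600000.0 Pa
```

In the record's model, such static stress changes (from slip dislocations and pressure changes) feed the rate-and-state friction law that decides when each fault cell slips.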

  5. A Test Model of Water Pressures within a Fault in Rock Slope

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces model test results for water pressures within a fault located in a rock slope, under 16 different conditions. The results show that the water pressures in the fault can be expressed by a linear function, similar to the theoretical model suggested by Hoek. The factors affecting the water pressures are, in order: the water level in the tension crack, the dip angle of the fault, the height of the filling materials, and the thickness of the fault zone.

  6. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design allows formal relationships to be established, with great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  7. Sandbox Modeling of the Fault-increment Pattern in Extensional Basins

    Institute of Scientific and Technical Information of China (English)

    Geng Changbo; Tong Hengmao; He Yudan; Wei Chunguang

    2007-01-01

    Three series of sandbox modeling experiments were performed to study the fault-increment pattern in extensional basins. Experimental results showed that the tectonic action mode of the boundaries and the shape of the major boundary faults control the formation and evolution of faults in extensional basins. In the process of extensional deformation, the increase in the number and length of faults was episodic, and every 'episode' comprised three periods: a strain-accumulation period, a quick fault-increment period, and a strain-adjustment period. The more complex the shape of the boundary fault, the higher the strain increment each 'episode' experienced. Different extensional modes resulted in different fault-increment patterns: the horizontal detachment extensional mode has a 'linear' fault-increment pattern, the extensional mode controlled by a listric fault has a 'stepwise' pattern, and the extensional mode controlled by a ramp-flat boundary fault has a 'stepwise-linear' pattern. These fault-increment patterns could provide a theoretical method for fault interpretation and fracture prediction in extensional basins.

  8. Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).

    Science.gov (United States)

    Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.

    2017-04-01

    The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired in Majella Mountain, in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive presence of fluid migration in the form of tar. Faults are mechanical features that cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is treated as a porous one. Our model utilizes the dfnWorks code, a parallelized computational suite, developed at Los Alamos National Laboratory (LANL), that generates three-dimensional Discrete Fracture Networks (DFN) of the damage zones of the fault and characterizes their hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum one of the core. The field investigations and the basic computational workflow will be described, along with preliminary results of fluid flow simulation at the scale of the fault.
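    A discrete fracture network of the kind generated by dfnWorks can be caricatured in a few lines. This sketch is not dfnWorks itself: it samples fracture positions, orientations clustered around the fault strike, and power-law fracture lengths, which are common statistical assumptions for damage-zone fracture populations; every parameter value here is illustrative.

```python
import random

random.seed(2)

# Sample fracture lengths from a truncated power law, a common assumption
# for damage-zone fracture populations (all parameters illustrative).
def powerlaw_length(lmin=0.1, lmax=10.0, alpha=2.5):
    u = random.random()
    a = 1.0 - alpha   # exponent of the cumulative distribution
    return ((lmax ** a - lmin ** a) * u + lmin ** a) ** (1.0 / a)

# A toy 2-D network for a 100 m fault segment with a 20 m damage zone.
fractures = [
    {"x": random.uniform(0.0, 100.0),      # position along the fault (m)
     "y": random.uniform(0.0, 20.0),       # distance into damage zone (m)
     "strike_deg": random.gauss(0.0, 15.0),  # clustered around fault strike
     "length_m": powerlaw_length()}
    for _ in range(500)
]

lengths = sorted(f["length_m"] for f in fractures)
print(round(lengths[0], 3), round(lengths[-1], 3))  # stay within [0.1, 10]
```

A real DFN workflow would additionally mesh these fractures, assign apertures and transmissivities, and solve for flow, which is what dfnWorks automates.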

  9. A new conceptual model for damage zone evolution with fault growth

    Science.gov (United States)

    de Joussineau, G.; Aydin, A.

    2006-12-01

    Faults may either impede or enhance fluid flow in the subsurface, which is relevant to a number of economic issues (hydrocarbon migration and entrapment, formation and distribution of mineral deposits) and environmental problems (movement of contaminants). Fault zones typically comprise a low-permeability core made up of intensely deformed fault rock and a high-permeability damage zone defined by fault-related fractures. The geometry, petrophysical properties and continuity of both the fault core and the damage zone have an important influence on the mechanical properties of the fault systems and on subsurface fluid flow. Information about fault components from remote seismic methods is limited and is available only for large faults (slip larger than 20-100m). It is therefore essential to characterize faults and associated damage zones in field analogues, and to develop conceptual models of how faults and related structures form and evolve. Here we present such an attempt to better understand the evolution of fault damage zones in the Jurassic Aztec Sandstone of the Valley of Fire State Park (SE Nevada). We document the formation and evolution of the damage zone associated with strike-slip faults through detailed field studies of faults of increasing slip magnitudes. The faults initiate as sheared joints with discontinuous pockets of damage zone located at fault tips and fault surface irregularities. With increasing slip (slip >5m), the damage zone becomes longer and wider by progressive fracture infilling, and is organized into two distinct components with different geometrical and statistical characteristics. The first component of the damage zone is the inner damage zone, directly flanking the fault core, with a relatively high fracture frequency and a thickness that scales with the amount of fault slip. Parts of this inner zone are integrated into the fault core by the development of the fault rock, contributing to the core's progressive widening. The second

  10. Network- and network-element-level parameters for configuration, fault, and performance management of optical networks

    Science.gov (United States)

    Drion, Christophe; Berthelon, Luc; Chambon, Olivier; Eilenberger, Gert; Peden, Francoise R.; Jourdan, Amaury

    1998-10-01

With the high interest of network operators and manufacturers in wavelength division multiplexing (WDM) networking technology, the need for management systems adapted to this new technology keeps increasing. We investigated this topic and produced outputs through the specification of the functional architecture and network layered model, and through the development of new, TMN-based information models for the management of optical networks and network elements. Based on these first outputs, defects in each layer, together with parameters for performance management/monitoring, have been identified for each type of optical network element and each atomic function describing the element, including functions for the transport of both payload signals and overhead information. The list of probable causes has been established for the identified defects. A second aspect consists of the definition of network-level parameters, if such photonic technology-related parameters are to be considered at this level. It is our conviction that some parameters can be taken into account at the network level for performance management, based on physical measurements within the network. Some parameters could possibly be used as criteria for configuration management in the route calculation processes, including protection. The outputs of these specification activities are taken into account in the development of a manageable WDM network prototype which will be used as a test platform to demonstrate configuration, fault, protection and performance management in a real network, in the scope of the ACTS-MEPHISTO project. This network prototype will also be used in a larger-size experiment in the context of the ACTS-PELICAN field trial (Pan-European Lightwave Core and Access Network).

  11. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well-preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs, the DOM can have a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be quite easily separated from the host rock (tonalite) using spectral analysis. In particular, band ratio and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. 
After
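The band-ratio segmentation and size/circularity filtering described in this record can be sketched in a few lines. This is an illustrative reconstruction, not the authors' ImageJ-Fiji workflow: the function names, the band indices, and the crude pixel-count perimeter estimate are our own assumptions.

```python
import numpy as np

def band_ratio_mask(img, num_band, den_band, threshold):
    """Classify pixels whose band ratio exceeds a threshold.

    img: H x W x B float array (B spectral bands of the outcrop photo).
    A small epsilon guards against division by zero.
    """
    ratio = img[..., num_band] / (img[..., den_band] + 1e-9)
    return ratio > threshold

def circularity(mask):
    """Crude circularity 4*pi*A/P**2 of one connected blob.

    Perimeter is approximated by counting foreground pixels with at least
    one background 4-neighbour, so values are only roughly comparable to
    ImageJ's circularity measure.
    """
    area = mask.sum()
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return 4.0 * np.pi * area / max(perimeter, 1) ** 2
```

A vein candidate would be kept only if its blob passes both a minimum-size and a circularity test, mimicking the combined thresholds the abstract mentions.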

  12. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. 
This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  13. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs for calculating the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the finding that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
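As a rough illustration of how areal fault density can feed an encounter-probability estimate, the sketch below treats fault-trace midpoints as a 2-D Poisson process. This is a toy model of our own, not the statistics developed in the paper, and all parameter names and the buffering geometry are assumptions.

```python
import math

def encounter_probability(fault_density, plume_radius, mean_trace_length):
    """Chance that a roughly circular plume footprint intersects >= 1 fault.

    fault_density:     fault traces per km^2 (midpoint intensity).
    plume_radius:      plume footprint radius, km.
    mean_trace_length: typical fault-trace length, km.

    A trace can intersect the plume if its midpoint falls within about
    (plume_radius + L/2) of the plume centre, so the expected number of
    candidate traces is density times that buffered area; the probability
    of at least one hit follows from the Poisson distribution.
    """
    effective_area = math.pi * (plume_radius + mean_trace_length / 2.0) ** 2
    expected_hits = fault_density * effective_area
    return 1.0 - math.exp(-expected_hits)
```

With zero density the probability is zero, and it saturates toward 1 as density or plume size grows, which matches the qualitative behaviour one expects from the paper's screening-stage estimate.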

  14. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    Science.gov (United States)

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a double-redundancy technique, which cannot by itself resolve a disagreement between the two channels. The simplified on-board model provides an analytical third channel against which the dual-channel measurements are compared, whereas additional hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
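The triplex-channel comparison described above can be sketched as a simple majority vote between the two hardware channels and the analytical model channel. This is a hedged illustration of the idea only, not the authors' FDD implementation; the function name and tolerance handling are assumptions.

```python
def classify_sensor_fault(ch_a, ch_b, model_est, tol):
    """Vote among two hardware channels and one model-based channel.

    A source is declared faulty when it disagrees with both of the other
    two by more than tol while those two agree within tol; the agreeing
    pair then provides the recovered measurement (analytical redundancy).
    Returns 'none', 'channel_a', 'channel_b', 'model', or 'undetermined'.
    """
    ab = abs(ch_a - ch_b) <= tol
    am = abs(ch_a - model_est) <= tol
    bm = abs(ch_b - model_est) <= tol
    if ab and am and bm:
        return "none"
    if bm and not ab and not am:
        return "channel_a"
    if am and not ab and not bm:
        return "channel_b"
    if ab and not am and not bm:
        return "model"
    return "undetermined"
```

A step fault shows up as an immediate large disagreement, while a drift fault crosses the tolerance gradually; both end in the same single-outlier pattern that this vote detects.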

  16. Faults simulations for three-dimensional reservoir-geomechanical models with the extended finite element method

    Science.gov (United States)

    Prévost, Jean H.; Sukumar, N.

    2016-01-01

    Faults are geological entities with thicknesses several orders of magnitude smaller than the grid blocks typically used to discretize reservoir and/or over-under-burden geological formations. Introducing faults in a complex reservoir and/or geomechanical mesh therefore poses significant meshing difficulties. In this paper, we consider the strong-coupling of solid displacement and fluid pressure in a three-dimensional poro-mechanical (reservoir-geomechanical) model. We introduce faults in the mesh without meshing them explicitly, by using the extended finite element method (X-FEM) in which the nodes whose basis function support intersects the fault are enriched within the framework of partition of unity. For the geomechanics, the fault is treated as an internal displacement discontinuity that allows slipping to occur using a Mohr-Coulomb type criterion. For the reservoir, the fault is either an internal fluid flow conduit that allows fluid flow in the fault as well as to enter/leave the fault or is a barrier to flow (sealing fault). For internal fluid flow conduits, the continuous fluid pressure approximation admits a discontinuity in its normal derivative across the fault, whereas for an impermeable fault, the pressure approximation is discontinuous across the fault. Equal-order displacement and pressure approximations are used. Two- and three-dimensional benchmark computations are presented to verify the accuracy of the approach, and simulations are presented that reveal the influence of the rate of loading on the activation of faults.
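To illustrate the Heaviside enrichment that lets a fault cut through elements without conforming meshing, here is a minimal 1-D sketch of the partition-of-unity idea. The real formulation is three-dimensional and coupled to fluid pressure with slip criteria; this schematic, with our own function names, only shows how enriched nodes carry a displacement jump.

```python
def xfem_displacement(x, x_fault, nodes, u_std, a_enr):
    """1-D X-FEM interpolation with a Heaviside jump at x_fault.

    u(x) = sum_i N_i(x) * u_i  +  sum_i N_i(x) * H(x) * a_i,
    with H = +/-1 on either side of the fault, so the mesh need not
    conform to the discontinuity. Linear hat functions N_i on a sorted
    1-D node array; u_std are standard DOFs, a_enr enriched DOFs.
    """
    def hats(xq):
        N = [0.0] * len(nodes)
        for i in range(len(nodes) - 1):
            if nodes[i] <= xq <= nodes[i + 1]:
                h = nodes[i + 1] - nodes[i]
                N[i] = (nodes[i + 1] - xq) / h
                N[i + 1] = (xq - nodes[i]) / h
                break
        return N

    H = 1.0 if x > x_fault else -1.0
    N = hats(x)
    return sum(Ni * (ui + H * ai) for Ni, ui, ai in zip(N, u_std, a_enr))
```

With zero standard DOFs and equal enriched DOFs of 0.5, the interpolant jumps by 1 across the fault even though both nodes lie outside it, which is the essential X-FEM property the abstract relies on.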

  17. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    Science.gov (United States)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  18. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    Science.gov (United States)

    Hanson, Matt

    1990-01-01

The purpose of this project is to address the global fault detection, isolation and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  19. A Fault Evolution Model Including the Rupture Dynamic Simulation

    Science.gov (United States)

    Wu, Y.; Chen, X.

    2011-12-01

    tip keeps the rupture continuing easily. Therefore, comparing with the current simulation, we expect a different stress evolution after a large earthquake in a short time scale, which is very essential for the short-term prediction. Once the model is successfully constructed, we intend to apply it to the San Andreas Fault at Parkfield segment. We try to simulate the seismicity evolution and the distribution of coseismic and postseismic slip and interseismic creep in the past decades. We expect to reproduce some specific events and slip distributions.

  20. Mechanical Modeling of Near-Fault Deformation Within the Dragon's Back Pressure Ridge, San Andreas Fault, Carrizo Plain, California

    Science.gov (United States)

    Hilley, G. E.; Arrowsmith, R.

    2011-12-01

    This contribution uses field observations and numerical modeling to understand how slip along the variably oriented fault surfaces in the upper few km of the San Andreas Fault (SAF) zone produces near-fault deformation observed within a 4.5-km-long Dragon's Back Pressure Ridge (DBPR) in the Carrizo Plain, central California. Geologic and geomorphic mapping of this feature indicates that the amplitude of monoclinal warping of Quaternary sediments increases from southeast to northwest along the southwestern third of the DBPR, and remains approximately constant throughout the remaining two thirds of the landform. When viewed with other structural observations and limited near-surface magnetotelluric imaging, these geologic observations are most compatible with a scenario in which shallow offset of the SAF to the northeast creates a structural knuckle that is anchored to the North American plate. Thus, deformation accrues as right-lateral strike-slip motion along the SAF moves this obstruction along the fault plane through the DBPR block. We have used the Gale numerical model to simulate deformation expected for geometries similar to those inferred within the vicinity of the DBPR. This is accomplished by relating stresses and strains in the upper crust according to a Drucker-Prager (plastic yielding) constitutive rule. Deformation in the model is driven by applying 35 mm/yr of right-lateral strike-slip motion to the model boundary; this displacement rate is likewise applied to the base of the model. The model geometry of the SAF at the beginning of the loading was fashioned to produce the discontinuity in the geometry of the fault plane that is inferred from field observations. The friction and cohesion of crust on each side of the fault were changed between models to determine the parameter values that preserve the structural discontinuity along the SAF as finite deformation accrued. 
The structural discontinuity over the ~4.5 km of model displacement is maintained in

  1. Probabilistic fault localization with sliding windows

    Institute of Scientific and Technical Information of China (English)

    ZHANG Cheng; LIAO JianXin; LI TongHong; ZHU XiaoMin

    2012-01-01

Fault localization is a central element in network fault management. This paper takes a weighted bipartite graph as the fault propagation model and presents a heuristic fault localization algorithm based on the idea of incremental coverage, which is resilient to an inaccurate fault propagation model and to noisy environments. Furthermore, a sliding window mechanism is proposed to tackle the inaccuracy of this algorithm in the presence of improper time windows. As shown in the simulation study, our scheme achieves a higher detection rate and a lower false positive rate than current fault localization algorithms in noisy environments, as well as in the presence of inaccurate windows.
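The incremental-coverage idea can be sketched as a greedy selection over the weighted bipartite graph. This is our own minimal reconstruction (without the sliding-window mechanism), and the data layout is an assumption.

```python
def localize(causality, observed, max_faults=3):
    """Greedy incremental-coverage heuristic over a weighted bipartite graph.

    causality: {fault: {symptom: prob}} -- edge weights of the propagation
               model (probability that the fault causes the symptom).
    observed:  set of observed symptoms.
    Each round selects the fault hypothesis with the largest probability-
    weighted coverage of still-unexplained symptoms, which tolerates some
    noise (spurious or missing symptoms) in the observation set.
    """
    hypothesis, unexplained = [], set(observed)

    def gain(fault):
        return sum(p for s, p in causality[fault].items() if s in unexplained)

    while unexplained and len(hypothesis) < max_faults:
        best = max(causality, key=gain)
        if gain(best) <= 0:
            break  # remaining symptoms cannot be explained by the model
        hypothesis.append(best)
        unexplained -= causality[best].keys()
    return hypothesis
```

In a full implementation, symptoms would only count toward a window of recent observations, which is where the paper's sliding-window mechanism enters.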

  2. Redundancy management for efficient fault recovery in NASA's distributed computing system

    Science.gov (United States)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding computational graphs of tasks in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  3. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    Science.gov (United States)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the

  4. Fault detection and identification based on combining logic and model in a wall-climbing robot

    Institute of Scientific and Technical Information of China (English)

    Yong JIANG; Hongguang WANG; Lijin FANG; Mingyang ZHAO

    2009-01-01

    A combined logic- and model-based approach to fault detection and identification (FDI) in a suction foot control system of a wall-climbing robot is presented in this paper. For the control system, some fault models are derived by kinematics analysis. Moreover, the logic relations of the system states are known in advance. First, a fault tree is used to analyze the system by evaluating the basic events (elementary causes), which can lead to a root event (a particular fault). Then, a multiple-model adaptive estimation algorithm is used to detect and identify the model-known faults. Finally, based on the system states of the robot and the results of the estimation, the model-unknown faults are also identified using logical reasoning. Experiments show that the proposed approach based on the combination of logical reasoning and model estimating is efficient in the FDI of the robot.
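The multiple-model adaptive estimation step used above to detect the model-known faults can be illustrated by Bayesian reweighting of candidate fault models according to their residual likelihoods. This is a generic MMAE sketch under our own simplifications (scalar Gaussian residuals), not the robot's actual estimator.

```python
import math

def mmae_update(weights, residuals, variances):
    """One multiple-model adaptive estimation reweighting step.

    Each candidate fault model i produced a scalar innovation residual
    residuals[i] with assumed variance variances[i]. Its prior weight is
    scaled by the Gaussian likelihood of that residual and the weights
    are renormalised; the highest-weight model flags the current fault.
    """
    likes = [math.exp(-r * r / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
             for r, v in zip(residuals, variances)]
    posterior = [w * l for w, l in zip(weights, likes)]
    total = sum(posterior) or 1e-300  # guard against total underflow
    return [p / total for p in posterior]
```

Iterating this update over successive measurements concentrates the weight on the model whose fault hypothesis best explains the data; the remaining model-unknown faults are then resolved by the logical reasoning layer the abstract describes.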

  5. Analysis and implementation of power management and control strategy for six-phase multilevel ac drive system in fault condition

    Directory of Open Access Journals (Sweden)

    Sanjeevikumar Padmanaban

    2016-03-01

This research article exploits the power management algorithm in post-fault conditions for a six-phase (quad) multilevel inverter. The drive circuit consists of four 2-level, three-phase voltage source inverters (VSIs) supplying a six-phase open-end winding motor or impedance load, and the circumstantial failure of one VSI is investigated. A simplified level-shifted pulse-width modulation (PWM) algorithm is developed to modulate each couple of three-phase VSIs as 3-level output voltage generators in normal operation. The total power of the whole ac drive is shared equally among the four isolated DC sources. The developed post-fault algorithm is applied when one VSI fails and the load is fed from the remaining three healthy VSIs. In faulty conditions the multilevel outputs are reduced from 3-level to 2-level, but the system still operates at degraded power. Numerical simulation modelling and experimental tests have been carried out with the proposed post-fault control algorithm on a three-phase open-end (asymmetrical) induction motor/R-L impedance load. A complete set of simulation and experimental results provided in this paper shows close agreement with the developed theoretical background.
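The level-shifted PWM that makes a pair of 2-level VSIs behave as one 3-level generator can be sketched with two stacked triangular carriers. This is a textbook-style illustration under our own normalisations (reference and carriers in [-1, 1]), not the authors' modulator.

```python
def three_level_pwm(ref, t, f_carrier):
    """Level-shifted PWM for one 3-level leg built from two 2-level cells.

    ref:       normalised reference in [-1, 1].
    t:         time, s; f_carrier: carrier frequency, Hz.
    Two triangular carriers occupy [0, 1] (top cell) and [-1, 0] (bottom
    cell); comparing the reference with each carrier yields the leg
    output level in {-1, 0, +1}.
    """
    tri = abs(2.0 * ((t * f_carrier) % 1.0) - 1.0)  # triangle in [0, 1]
    upper = tri          # carrier for the top cell
    lower = tri - 1.0    # same carrier shifted down for the bottom cell
    return (1 if ref > upper else 0) + (0 if ref > lower else -1)
```

In the post-fault mode described above, one cell of a couple is lost, so only a single carrier comparison remains and the leg degrades from 3-level to 2-level output, exactly the behaviour the abstract reports.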

  6. Out-of-Bounds Array Access Fault Model and Automatic Testing Method Study

    Institute of Scientific and Technical Information of China (English)

    GAO Chuanping; DUAN Miyi; TAN Liqun; GONG Yunzhan

    2007-01-01

Out-of-bounds array access (OOB) is one of the fault models commonly employed for object-oriented programming languages. At present, the technique of code insertion and optimization is widely used to detect and fix this kind of fault. Although this method can catch some of the faults in OOB programs, it can neither test programs thoroughly nor locate the faults reliably. The code insertion also makes the test procedures inefficient, so that testing becomes costly and time-consuming. This paper uses a special static analysis technique to detect faults in OOB programs. We first establish the fault models for OOB programs, and then develop an automatic test tool to detect the faults. Experiments have been carried out, and the results show that the method proposed in the paper is efficient and feasible in practical applications.
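A static OOB check of the kind described, requiring no code insertion, can be illustrated for the simplest possible case: a constant index into a constant-length list literal, analysed with Python's `ast` module (Python 3.9+ node layout). The paper targets a different language and much richer fault models, so this is only a toy analogue.

```python
import ast

def find_constant_oob(source):
    """Flag subscripts of a constant-length list literal with a constant
    integer index that is statically out of bounds.

    Returns a list of (variable, index, line) tuples. This is a toy: it
    ignores reassignment, aliasing, slices, and non-literal containers.
    """
    tree = ast.parse(source)
    lengths = {}   # variable name -> known list-literal length
    faults = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.List):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    lengths[target.id] = len(node.value.elts)
        if isinstance(node, ast.Subscript) and isinstance(node.value, ast.Name):
            idx = node.slice
            if isinstance(idx, ast.Constant) and isinstance(idx.value, int):
                n = lengths.get(node.value.id)
                if n is not None and not -n <= idx.value < n:
                    faults.append((node.value.id, idx.value, node.lineno))
    return faults
```

The check reports faults purely from the source text, which is the efficiency argument the abstract makes against instrumenting every array access at run time.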

  7. Modelling in forest management

    Science.gov (United States)

    Mark J. Twery

    2004-01-01

    Forest management has traditionally been considered management of trees for timber. It really includes vegetation management and land management and people management as multiple objectives. As such, forest management is intimately linked with other topics in this volume, most especially those chapters on ecological modelling and human dimensions. The key to...

  8. Fault Estimation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2002-01-01

This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  9. Testing fault growth models with low-temperature thermochronology in the northwest Basin and Range, USA

    Science.gov (United States)

    Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.

    2016-10-01

    Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2-0.4 km/Myr, ultimately exhuming approximately 1.5-5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3-4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
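The exhumation rates quoted above come from age-elevation transects. As a minimal worked example, the apparent exhumation rate is the ordinary-least-squares slope of sample elevation against cooling age; this sketch is ours, not the authors' code, and it ignores age uncertainties and isotherm advection.

```python
def exhumation_rate(elevations_km, ages_ma):
    """Apparent exhumation rate (km/Myr) from an age-elevation transect.

    In an exhuming footwall, higher samples cross the AHe closure
    isotherm earlier and so yield older ages; the OLS slope of elevation
    on age approximates the exhumation rate over the sampled interval.
    """
    n = len(ages_ma)
    mean_age = sum(ages_ma) / n
    mean_elev = sum(elevations_km) / n
    cov = sum((a - mean_age) * (e - mean_elev)
              for a, e in zip(ages_ma, elevations_km))
    var = sum((a - mean_age) ** 2 for a in ages_ma)
    return cov / var
```

Comparing such slopes, and the break-in-slope ages, between along-strike transects is what allows the paper to argue that rapid faulting began essentially simultaneously along the range front.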

  10. Integrating fault and seismological data into a probabilistic seismic hazard model for Italy.

    Science.gov (United States)

    Valentini, Alessandro; Visini, Francesco; Pace, Bruno

    2017-04-01

We present the results of a new probabilistic seismic hazard analysis (PSHA) for Italy based on active fault and seismological data. Combining seismic hazard from active faults with distributed seismic sources (where there are no data on active faults) is the backbone of this work. No single best procedure has been identified; currently adopted approaches combine active faults and background sources by applying a threshold magnitude, generally between 5.5 and 7, above which seismicity is modelled by faults and below which it is modelled by distributed or area sources. In our PSHA we (i) apply a new method for the treatment of geologic data on major active faults and (ii) propose a new approach to combining these data with historical seismicity to evaluate PSHA for Italy. Assuming that deformation is concentrated along faults, we combine the earthquake occurrences derived from the geometry and slip rates of the active faults with the earthquakes from the spatially smoothed earthquake sources. In the vicinity of an active fault, the smoothed seismic activity is gradually reduced by a fault-size driven factor. Even though the range and gross spatial distribution of expected accelerations obtained in our work are comparable to those obtained through methods applying seismic catalogues and classical zonation models, the main difference is in the detailed spatial pattern of our PSHA model: it is characterized by spots of more hazardous area in correspondence with mapped active faults, while the previous models give expected accelerations almost uniformly distributed over large regions. Finally, we investigate the impact of the earthquake rates derived from two magnitude-frequency distribution (MFD) models for faults on the hazard result, and the contribution of faults versus distributed seismic activity.
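The fault-size driven reduction of smoothed seismicity near mapped faults might be sketched as a simple distance taper. The linear ramp and the 10%-of-fault-length radius below are purely illustrative assumptions of ours, not the weighting used in the paper.

```python
def tapered_rate(background_rate, dist_to_fault_km, fault_length_km):
    """Reduce a smoothed-seismicity rate near a mapped active fault.

    Within a fault-size driven radius (assumed here to scale with fault
    length), the background rate ramps linearly from 0 at the fault
    trace back to its full value at the radius; beyond it, unchanged.
    The suppressed moment is instead budgeted to the fault source.
    """
    radius = 0.1 * fault_length_km  # assumed fault-size driven scaling
    if dist_to_fault_km >= radius:
        return background_rate
    return background_rate * dist_to_fault_km / radius
```

This kind of taper produces exactly the spatial pattern the abstract describes: hazard "spots" along mapped faults, with the surrounding distributed-source hazard locally reduced to avoid double counting.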

  11. Soil Moisture Active Passive Mission: Fault Management Design Analyses

    Science.gov (United States)

    Meakin, Peter; Weitl, Raquel

    2013-01-01

    As a general trend, the complexities of modern spacecraft are increasing to include more ambitious mission goals with tighter timing requirements and on-board autonomy. As a byproduct, the protective features that monitor the performance of these systems have also increased in scope and complexity. Given cost and schedule pressures, there is an increasing emphasis on understanding the behavior of the system at design time. Formal test-driven verification and validation (V&V) is rarely able to test the significant combinatorics of states, and often finds problems late in the development cycle forcing design changes that can be costly. This paper describes the approach the SMAP Fault Protection team has taken to address some of the above-mentioned issues.

  12. Markov Modeling of Component Fault Growth Over A Derived Domain of Feasible Output Control Effort Modifications

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of...

  13. Implementation of a Wind Farm Turbine Control System with Short-Term Grid Faults Management

    DEFF Research Database (Denmark)

    Marra, Francesco; Rasmussen, Tonny Wederberg; Garcia-Valle, Rodrigo

    2010-01-01

    The increased penetration of wind power in the grid has led to important technical barriers that limit development, where system stability plays a key role. Grid operators in different countries are issuing new grid requirements, the so-called grid codes, that impose more...... restrictions on wind turbine behavior, especially under grid faults. Wind turbines are requested to stay connected even during faults. These new requirements challenge the control of wind turbines, and new control strategies are required to meet the target. This paper dealt...... with the implementation of a control strategy to stay connected under grid faults. The method aimed to ensure that a wind farm turbine remains connected while no electric power is delivered to the grid during the fault period. The overall system was modelled and simulated using Matlab/Simulink....

  14. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  15. GPS DATA INVERSION OF KINEMATIC MODEL OF MAIN FAULTS IN YUNNAN

    Institute of Scientific and Technical Information of China (English)

    ShenChongyang; WuYun; WangQi; YouXinzhao; QiaoXuejun

    2003-01-01

    On the basis of GPS observations in Yunnan from 1999 to 2001, we adopt robust Bayesian least-squares estimation and a multi-fault dislocation model to derive quantitative kinematic models of the main faults in Yunnan. The geodetic inversion suggests that: (1) the horizontal crustal movement in Yunnan is distinctly affected by fault activity, with characteristics consistent with geological results; (2) activity on the Red River fault zone is greatest in the north segment, moderate in the middle segment, and least in the south segment; (3) among the other faults, the Xiaojiang fault zone is the most active, followed by the Lancang fault zone and the north segment of the Nujiang fault zone, while the Qujiang fault zone behaves as a hinge fault; (4) each fault could produce roughly one earthquake of about Ms=6 per year; (5) the largest values of maximum shear strain are mostly located along the main active fault zones and their intersections; earthquakes did not occur at the points of maximum shear strain, but mostly at the higher-gradient zones, especially at their corners.

  17. An approach to 3D NURBS modeling of complex fault network considering its historic tectonics

    Institute of Scientific and Technical Information of China (English)

    ZHONG Denghua; LIU Jie; LI Mingchao

    2006-01-01

    Fault disposal is an area that presents difficulties in 3D geological modeling and visualization. In this paper, we propose an integrated approach to reconstructing a complex fault network (CFN). Based on non-uniform rational B-spline (NURBS) techniques, each fault surface is constructed to reflect its spatial tendency, and correlative surfaces are enclosed to form a fault body model. Building on these models and considering their tectonic history, a method is put forward to solve the 3D modeling problem when the intersection of two faults in the CFN changes their relative positions. First, according to the intersection relationships obtained from geological interpretation, a topological sort determines the order of fault body construction, and the fault bodies are rebuilt in that order; then, using the disposal method for two intersecting faults in 3D modeling and applying Boolean operations, we investigate the characteristics of the faults at the intersecting part. An application to a hydropower engineering project is presented. The results show that this modeling approach increases computing efficiency while requiring less computer memory, and that it can faithfully and objectively reproduce the CFN in the engineering region, establishing a theoretical basis for 3D modeling and analysis of complex engineering geology.

  18. Surveillance system and method having an operating mode partitioned fault classification model

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  19. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction to and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is on the analytical model-based fault-detection and fault...... diagnosis methods, often viewed as the classical or deterministic ones. Emphasis is placed on algorithms suitable for ship automation, unmanned underwater vehicles, and other systems of automatic control....

  20. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    A quantitative analysis of fault-rock distribution in outcrops of exhumed fault zones is of fundamental importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation. We present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM), developed on the Gole Larghe Fault Zone (GLFZ), a well exposed strike-slip fault in the Adamello batholith (Italian Southern Alps). The GLFZ has been exhumed from ca. 8-10 km depth and consists of hundreds of individual seismogenic slip surfaces lined by green cataclasites (crushed wall rocks cemented by hydrothermal epidote and K-feldspar) and black pseudotachylytes (solidified frictional melts, considered a marker for seismic slip). A digital model of selected outcrop exposures was reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. The resulting DOM has a resolution up to 0.2 mm/pixel. Most of the outcrop was imaged with photographs each covering a 1 x 1 m2 area, while selected structural features, such as sidewall ripouts or stepovers, were covered with higher-resolution images of 30 x 40 cm2 areas. Image processing algorithms were preliminarily tested using the ImageJ-Fiji package, then a workflow in Matlab was developed to process a large collection of images sequentially. Particularly in the detailed 30 x 40 cm2 images, cataclasites and hydrothermal veins were successfully identified using spectral analysis in the RGB and HSV color spaces. This allows mapping the network of cataclasites and veins which provided the pathway for hydrothermal fluid circulation, and also the volume of mineralization, since we are able to measure the thickness of cataclasites and veins on the outcrop surface. The spectral signature of pseudotachylyte veins is indistinguishable from that of biotite grains in the wall rock (tonalite), so we tested morphological analysis tools to discriminate
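
    The color-space classification step can be illustrated with a minimal sketch. The hue and saturation thresholds and the tiny test image below are invented for illustration; the actual workflow used calibrated values in ImageJ-Fiji and Matlab:

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB->HSV for a float image in [0, 1], shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)
    c = v - img.min(axis=-1)                      # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    safe_c = np.maximum(c, 1e-12)
    # Hue, piecewise by which channel attains the maximum.
    h = ((g - b) / safe_c) % 6
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c == 0, 0.0, h) / 6.0            # hue in [0, 1)
    return np.stack([h, s, v], axis=-1)

def green_mask(img, h_range=(0.20, 0.45), s_min=0.25):
    """Binary mask of 'epidote-green' pixels (illustrative thresholds)."""
    hsv = rgb_to_hsv(img)
    h, s = hsv[..., 0], hsv[..., 1]
    return (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min)

# Synthetic 2x2 test image: one green pixel, the rest grey wall rock.
img = np.array([[[0.1, 0.8, 0.1], [0.5, 0.5, 0.5]],
                [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]])
mask = green_mask(img)
print(mask.sum())   # prints 1: only the green pixel is classified
```

    Counting mask pixels along profiles perpendicular to a vein then gives a thickness estimate, which is how per-pixel classification translates into the mineralization volumes discussed above.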

  1. Modeling of flow in faulted and fractured media

    Energy Technology Data Exchange (ETDEWEB)

    Oeian, Erlend

    2004-03-01

    The work in this thesis has been done as part of a collaborative and interdisciplinary effort to improve the understanding of oil recovery mechanisms in fractured reservoirs. The project was organized as a Strategic University Program (SUP) at the University of Bergen, Norway. The complex geometries of fractured reservoirs, combined with the flow of several fluid phases, lead to difficult mathematical and numerical problems. New techniques are required to narrow the gap between the geological description and numerical modeling capabilities. Thus, the main objective has been to improve the ATHENA flow simulator and utilize it within a fault modeling context. Specifically, an implicit treatment of the advection-dominated mass transport equations has been implemented within a domain-decomposition-based local grid refinement framework. Since large computational tasks may arise, the implicit formulation has also been included in a parallel version of the code. Within the current limits of the simulator, appropriate upscaling techniques have also been considered. Part I of this thesis includes background material covering the basic geology of fractured porous media, the mathematical model behind the in-house flow simulator ATHENA, and the additions implemented to approach simulation of flow through fractured and faulted porous media. In Part II, a set of research papers stemming from Part I is presented. A brief outline of the thesis follows. In Chapt. 1, important aspects of the geological description and physical parameters of fractured and faulted porous media are presented; based on this, the scope of the thesis is specified with numerical issues and consequences in mind. Then, in Chapt. 2, the mathematical model and discretizations in the flow simulator are given, followed by the derivation of the implicit mass transport formulation. In order to be fairly self-contained, most of the papers in Part II also include the mathematical model

  2. Nearly frictionless faulting by unclamping in long-term interaction models

    Science.gov (United States)

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  3. 3D Spontaneous Rupture Models of Large Earthquakes on the Hayward Fault, California

    Science.gov (United States)

    Barall, M.; Harris, R. A.; Simpson, R. W.

    2008-12-01

    We are constructing 3D spontaneous rupture computer simulations of large earthquakes on the Hayward and central Calaveras faults. The Hayward fault has a geologic history of producing many large earthquakes (Lienkaemper and Williams, 2007), with its most recent large event a M6.8 earthquake in 1868. Future large earthquakes on the Hayward fault are not only possible, but probable (WGCEP, 2008). Our numerical simulation efforts use information about the complex 3D fault geometry of the Hayward and Calaveras faults and information about the geology and physical properties of the rocks that surround the Hayward and Calaveras faults (Graymer et al., 2005). Initial stresses on the fault surface are inferred from geodetic observations (Schmidt et al., 2005), seismological studies (Hardebeck and Aron, 2008), and from rate-and- state simulations of the interseismic interval (Stuart et al., 2008). In addition, friction properties on the fault surface are inferred from laboratory measurements of adjacent rock types (Morrow et al., 2008). We incorporate these details into forward 3D computer simulations of dynamic rupture propagation, using the FaultMod finite-element code (Barall, 2008). The 3D fault geometry is constructed using a mesh-morphing technique, which starts with a vertical planar fault and then distorts the entire mesh to produce the desired fault geometry. We also employ a grid-doubling technique to create a variable-resolution mesh, with the smallest elements located in a thin layer surrounding the fault surface, which provides the higher resolution needed to model the frictional behavior of the fault. Our goals are to constrain estimates of the lateral and depth extent of future large Hayward earthquakes, and to explore how the behavior of large earthquakes may be affected by interseismic stress accumulation and aseismic slip.

  4. A fault tolerant model for multi-sensor measurement

    Directory of Open Access Journals (Sweden)

    Li Liang

    2015-06-01

    Full Text Available Multi-sensor systems are very powerful in complex environments. Cointegration theory and the vector error correction model, statistical methods widely applied in economic analysis, are utilized to create a fitting model for the measurements of homogeneous sensors. An algorithm implements the model for error correction, in which the signal of any sensor can be estimated from those of the others. The model divides a signal series into two parts, a training part and an estimated part. By comparing the estimated part with the actual one, the proposed method can identify a sensor with possible faults and repair its signal. With a small amount of training data, the algorithm can find the right model parameters in real time. When applied to data analysis for aero-engine testing, the model works well. It is therefore not only an effective method for detecting sensor failure or abnormality, but also a useful approach to correcting possible errors.
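
    The estimate-and-repair idea can be sketched in a deliberately simplified form: instead of a full vector error correction model, a plain linear regression learned on the training part estimates one sensor from the others. The sensor data, the injected bias fault, and the 5-sigma threshold below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three homogeneous sensors tracking the same slowly varying quantity.
n = 200
truth = np.sin(np.linspace(0, 10, n))
sensors = truth[:, None] + 0.01 * rng.standard_normal((n, 3))

train, test = slice(0, 120), slice(120, n)
sensors[test, 2] += 0.5              # inject a bias fault in sensor 2

# Training part: learn a linear map from sensors 0-1 to sensor 2
# (a stand-in for the cointegrating relation estimated by the VECM).
X = np.column_stack([sensors[train, :2], np.ones(120)])
coef, *_ = np.linalg.lstsq(X, sensors[train, 2], rcond=None)
sigma = np.std(sensors[train, 2] - X @ coef)

# Estimated part: predict sensor 2 from the others and compare.
Xt = np.column_stack([sensors[test, :2], np.ones(n - 120)])
estimate = Xt @ coef
faulty = np.abs(sensors[test, 2] - estimate) > 5 * sigma

# Repair: replace flagged samples with the cross-sensor estimate.
repaired = np.where(faulty, estimate, sensors[test, 2])
print(faulty.mean())                 # fraction of test samples flagged
```

    The repaired series tracks the true signal again even though sensor 2 itself is biased, which is the core of the fault-tolerance claim in the abstract.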

  5. Th gme 05: Modeling of fault reactivation and fault slip in producing gas fields

    NARCIS (Netherlands)

    Wassing, B.B.T.

    2015-01-01

    Current methods which are used for seismic hazard analyses of production induced seismicity in The Netherlands are generally based on either empirical relations which link compaction strain and seismic release or simple relations between available fault area and seismic moment release. Physics based

  6. Certain Type Turbofan Engine Whole Vibration Model with Support Looseness Fault and Casing Response Characteristics

    Directory of Open Access Journals (Sweden)

    H. F. Wang

    2014-01-01

    Full Text Available Support looseness is a common fault in aero-engines. Serious looseness emerges under larger unbalanced forces, causing excessive vibration and possibly leading to rubbing faults, so it is important to analyze and recognize looseness faults effectively. In this paper, based on the structural features of a certain type of turbofan engine, a whole rotor-support-casing model is established. The rotor and casing systems are modeled by the finite element beam method; the support systems are modeled by a lumped-mass model; a support looseness fault model is also introduced. The coupled system response is obtained by numerical integration. Based on the casing acceleration signals, the impact characteristics of the symmetric-stiffness and asymmetric-stiffness models are analyzed, showing that the looseness fault leads to longitudinal asymmetry of the acceleration time-domain waveform and to multiple-frequency characteristics, consistent with vibration signals from actual trial runs. The asymmetric-stiffness looseness model is thereby verified as suitable for modeling aero-engine looseness faults.

  7. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual f...

  8. Waste Management facilities fault tree databank 1995 status report

    Energy Technology Data Exchange (ETDEWEB)

    Minnick, W.V.; Wellmaker, K.A.

    1995-08-16

    The Safety Information Management and Analysis Group (SIMA) of the Safety Engineering Department (SED) maintains compilations of incidents that have occurred in the Separations and Process Control, Waste Management, Fuel Fabrication, Tritium and SRTC facilities. This report records the status of the Waste Management (WM) Databank at the end of CY-1994. The WM Databank contains more than 35,000 entries ranging from minor equipment malfunctions to incidents with significant potential for injury or contamination of personnel. This report documents the status of the WM Databank including the availability, training, sources of data, search options, Quality Assurance, and usage to which these data have been applied. Periodic updates to this memorandum are planned as additional data or applications are acquired.

  9. Fault detection and diagnosis in a food pasteurization process with Hidden Markov Models

    OpenAIRE

    Tokatlı, Figen; Cinar, Ali

    2004-01-01

    Hidden Markov Models (HMM) are used to detect abnormal operation of dynamic processes and diagnose sensor and actuator faults. The method is illustrated by monitoring the operation of a pasteurization plant and diagnosing causes of abnormal operation. Process data collected under the influence of faults of different magnitude and duration in sensors and actuators are used to illustrate the use of HMM in the detection and diagnosis of process faults. Case studies with experimental data from a ...
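
    The mechanics behind HMM-based fault detection can be sketched as follows: a window of observations is scored against an HMM trained on normal operation, and an abnormally low likelihood flags a fault. The transition, emission, and observation values below are invented for illustration, not fitted to pasteurization data:

```python
import numpy as np

# Two-state HMM for "normal" operation with discrete observations
# (0 = temperature in band, 1 = slightly low, 2 = far off).
A  = np.array([[0.95, 0.05],
               [0.10, 0.90]])           # state transition matrix
B  = np.array([[0.90, 0.09, 0.01],     # emissions, state 0 (steady)
               [0.30, 0.60, 0.10]])    # emissions, state 1 (transient)
pi = np.array([0.8, 0.2])              # initial state distribution

def log_likelihood(obs):
    """Scaled forward algorithm; returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

normal = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
faulty = [2, 2, 1, 2, 2, 2, 1, 2, 2, 2]   # e.g. a drifting sensor
print(log_likelihood(normal) > log_likelihood(faulty))  # prints True
```

    In monitoring, windows whose log-likelihood under the normal-operation model drops below a threshold set from fault-free data are declared abnormal; separate HMMs per fault type then allow diagnosis by comparing likelihoods.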

  10. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

    Full Text Available Subject of Research. Software reliability and test planning models are studied, taking into account the probabilistic nature of fault detection and resolution. Modeling of software testing enables planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested, using an error detection probability for each software module. The Erlang distribution is used to approximate an arbitrary distribution of fault resolution duration, and the exponential distribution to approximate fault detection. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models, and the strategies were compared by their quality indexes. The debugging time required to achieve specified quality goals was calculated, and the results are used for time and resource planning of new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of an arbitrary time distribution for fault resolution duration; it improves the accuracy of software test process modeling and helps to take into account the power of the tests. With these models one can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
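
    The Erlang approximation mentioned above amounts to moment matching: an Erlang(k, lam) has mean k/lam and variance k/lam**2, so k ~ (mean/std)**2 and lam = k/mean reproduce the first two moments of an observed resolution-time sample. A sketch with invented fault-resolution times (not data from the paper):

```python
import numpy as np

def erlang_fit(mean, std):
    """Moment-match an Erlang(k, lam): mean = k/lam, var = k/lam**2,
    hence k ~ (mean/std)**2 (nearest integer >= 1) and lam = k/mean."""
    k = max(1, round((mean / std) ** 2))
    return k, k / mean

# Hypothetical fault-resolution times (hours) from a test log.
times = np.array([1.8, 2.3, 2.9, 2.1, 2.6, 3.2, 2.4, 2.2])
k, lam = erlang_fit(times.mean(), times.std())
print(k, lam)

# Sanity check: the fitted Erlang (a gamma with integer shape)
# reproduces the sample mean, since k/lam equals the mean by design.
samples = np.random.default_rng(1).gamma(shape=k, scale=1 / lam,
                                         size=50_000)
print(abs(samples.mean() - times.mean()) < 0.05)
```

    Because an Erlang variable is a sum of k exponential phases, this fit slots directly into the labeled-graph/differential-equation machinery, which otherwise handles only exponential durations.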

  11. A distributed model predictive control (MPC) fault reconfiguration strategy for formation flying satellites

    Science.gov (United States)

    Esfahani, N. R.; Khorasani, K.

    2016-05-01

    In this paper, an active distributed (also referred to as semi-decentralised) fault recovery control scheme is proposed that incorporates inaccurate and unreliable fault information into a model-predictive-control-based design. The objective is to compensate for identified actuator faults that are subject to uncertainties and detection time delays in the attitude control subsystems of formation flying satellites. The proposed distributed fault recovery scheme is developed through a two-level hierarchical framework. In the first level, or the agent level, the fault is recovered locally to maintain as much as possible the design specifications, feasibility, and tracking performance of all the agents. In the second level, or the formation level, the recovery is carried out by enhancing the performance of the entire team. The fault recovery performance of our proposed distributed (semi-decentralised) scheme is compared with two alternative schemes, namely the centralised and the decentralised fault recovery schemes. It is shown that the distributed (semi-decentralised) fault recovery scheme satisfies the recovery design specifications and also imposes lower fault compensation control effort cost and communication bandwidth requirements as compared to the centralised scheme. Our proposed distributed (semi-decentralised) scheme also outperforms the achievable performance capabilities of the decentralised scheme. Simulation results corresponding to a network of four precision formation flight satellites are also provided to demonstrate and illustrate the advantages of our proposed distributed (semi-decentralised) fault recovery strategy.

  12. Rheology and friction along the Vema transform fault (Central Atlantic) inferred by thermal modeling

    Science.gov (United States)

    Cuffaro, Marco; Ligi, Marco

    2016-04-01

    We investigate with 3-D finite element simulations the temperature distribution beneath the Vema transform, which offsets the Mid-Atlantic Ridge by ~300 km in the Central Atlantic. The thermal model includes the effects of mantle flow beneath a ridge-transform-ridge geometry, of lateral heat conduction across the transform fault, and of the shear heating generated along the fault. Numerical solutions are presented for a 3-D domain, discretized with a non-uniform tetrahedral mesh, where relative plate kinematics is used as the boundary condition, providing passive mantle upwelling. The mantle is modelled as a temperature-dependent viscous fluid, and its dynamics are described by the Stokes and advection-conduction heat equations. The results show that shear heating significantly raises the temperature along the transform fault. To test the model, we calculated the thermal structure simulating mantle dynamics beneath an accretionary plate boundary geometry that duplicates the Vema transform fault, assuming the present-day spreading rate and direction of the Mid-Atlantic Ridge at 11 °N. The modelled surface heat flow was then compared with 23 heat flow measurements carried out along the Vema transform valley. Laboratory studies on the frictional stability of olivine aggregates show that the depth extent of oceanic faulting is thermally controlled and limited by the 600 °C isotherm, so the depths of the model isotherms were compared to the depths of earthquakes along transform faults. Slip on oceanic transform faults is primarily aseismic: only 15% of the tectonic offset is accommodated by earthquakes, and despite extensive fault areas, few large earthquakes occur on the fault and few aftershocks follow large events. The rheology constrained by the thermal model, combined with the geology and seismicity of the Vema transform fault, allows a better understanding of friction and the spatial distribution of strength along the fault and provides

  13. A way to synchronize models with seismic faults for earthquake forecasting: Insights from a simple stochastic model

    CERN Document Server

    González, Álvaro; Vázquez-Prada, Miguel; Gómez, Javier B.; Pacheco, Amalio F.

    2005-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual fault or fault network it simulates (just as, for example, meteorologists synchronize their models with the atmosphere by incorporating current atmospheric data in them). However, lithospheric dynamics is largely unobservable: important parameters cannot (or can rarely) be measured in Nature. Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the accurate synchronization of the models. The rupture area is one of the measurable parameters of actual earthquakes. Here we explore how this can be used to at least synchronize fault models between themselves and forecast synthetic earthquakes. Our purpose here is to forec...

  14. Diesel Engine Actuator Fault Isolation using Multiple Models Hypothesis Tests

    DEFF Research Database (Denmark)

    Bøgh, S.A.

    1994-01-01

    Detection of current faults in a D.C. motor with unknown load torques is not feasible with linear methods and threshold logic...

  15. UML Statechart Fault Tree Generation By Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Herbert-Hansen, Zaza Nadja Lee

    Creating fault-tolerant and efficient process work-flows poses a significant challenge. Individual faults, defined as abnormal conditions or defects in a component, equipment, or sub-process, must be handled so that the system may continue to operate, and are typically addressed by implementin...

  16. A Complete Analytic Model for Fault Diagnosis of Power Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Daobing; GU Xueping; LI Haipeng

    2011-01-01

    Interconnections of the modern bulk electric power systems, while contributing to operating economy and reliability by means of mutual assistance between the subsystems, result in an increased complexity of fault diagnosis and more serious consequences of misdiagnosis. Online fault diagnosis has become a more challenging problem for dispatchers operating a power system securely,

  17. Fault detection in processes represented by PLS models using an EWMA control scheme

    KAUST Repository

    Harrou, Fouzi

    2016-10-20

    Fault detection is important for effective and safe process operation. Partial least squares (PLS) has been used successfully for fault detection in multivariate processes with highly correlated variables. However, the conventional PLS-based detection metrics, such as Hotelling's T2 and the Q statistics, are not well suited to detect small faults because they only use information about the process in the most recent observation. The exponentially weighted moving average (EWMA), however, has been shown to be more sensitive to small shifts in the mean of process variables. In this paper, a PLS-based EWMA fault detection method is proposed for monitoring processes represented by PLS models. The performance of the proposed method is compared with that of the traditional PLS-based fault detection method through a simulated example involving various fault scenarios that could be encountered in real processes. The simulation results clearly show the effectiveness of the proposed method over the conventional PLS method.
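
    The EWMA idea can be sketched independently of the PLS step: whatever detection statistic the PLS model produces (replaced below by synthetic data with a small injected mean shift), the recursion z_i = lam*x_i + (1-lam)*z_{i-1} accumulates small shifts that a point-wise statistic would miss. The control-limit formula is the standard textbook one; the data and parameters are illustrative:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA control chart: returns the EWMA sequence and its
    time-varying upper/lower control limits."""
    mu, sigma = x[:50].mean(), x[:50].std()   # in-control estimates
    z = np.empty(len(x))
    z[0] = mu
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    n = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam)
                                * (1 - (1 - lam) ** (2 * n)))
    return z, mu - width, mu + width

rng = np.random.default_rng(2)
stat = rng.standard_normal(200)   # stand-in for a PLS residual statistic
stat[100:] += 1.0                 # small mean shift: a subtle fault
z, lo, hi = ewma_chart(stat)
alarms = (z < lo) | (z > hi)
print(bool(alarms[100:].any()))   # prints True: the shift is detected
```

    A Shewhart-style point-wise 3-sigma test would miss most of these samples, since a 1-sigma shift rarely pushes an individual observation past its limit; the EWMA's memory is what makes it sensitive to small faults.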

  18. Application of H-Infinity Fault Detection to Model-Scale Autonomous Aircraft

    Science.gov (United States)

    Vasconcelos, J. F.; Rosa, P.; Kerr, Murray; Latorre Sierra, Antonio; Recupero, Cristina; Hernandez, Lucia

    2015-09-01

    This paper describes the development of a fault detection system for a model scale autonomous aircraft. The considered fault scenario is defined by malfunctions in the elevator, namely bias and stuck-in-place of the surface. The H∞ design methodology is adopted, with an LFT description of the aircraft longitudinal dynamics, that allows for fault detection explicitly synthesized for a wide range of operating airspeeds. The obtained filter is validated in two stages: in a Functional Engineering Simulator (FES), providing preliminary results of the filter performance; and with experimental data, collected in field tests with actual injection of faults in the elevator surface.

  19. Modeling and Fault Monitoring of Bioprocess Using Generalized Additive Models (GAMs) and Bootstrap

    Institute of Scientific and Technical Information of China (English)

    郑蓉建; 周林成; 潘丰

    2012-01-01

    Fault monitoring of a bioprocess is important to ensure reactor safety and maintain high product quality. It is difficult to build an accurate mechanistic model for a bioprocess, so fault monitoring based on rich historical or online databases is an effective alternative. Stochastically resampling groups of data with the bootstrap method improves the generalization capability of the model. In this paper, online fault monitoring with generalized additive models (GAMs) combined with bootstrap is proposed for the glutamate fermentation process. GAMs and bootstrap are first used to determine confidence intervals based on online and off-line normal sampled data from glutamate fermentation experiments. GAMs are then used for online fault monitoring of time, dissolved oxygen, oxygen uptake rate, and carbon dioxide evolution rate. The method provides accurate fault alarms online and is helpful in providing useful information for removing faults and abnormal phenomena in the fermentation.
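
    A minimal sketch of the bootstrap confidence-band idea, using a polynomial smoother as a stand-in for a fitted GAM component; the fermentation trajectory, band coverage, and fault magnitude are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented fermentation variable (e.g. dissolved oxygen) vs. time (h).
t = np.linspace(0, 30, 120)
y = 6 - 0.12 * t + 0.3 * np.sin(t / 3) + 0.1 * rng.standard_normal(120)

def smooth(t, y, degree=4):
    """Polynomial smoother standing in for one fitted GAM component."""
    return np.polyval(np.polyfit(t, y, degree), t)

# Residual bootstrap: refit the smoother on resampled residuals to get
# a pointwise confidence band for the normal-operation trajectory.
fit = smooth(t, y)
res = y - fit
boots = np.array([smooth(t, fit + rng.choice(res, size=res.size))
                  for _ in range(300)])
lo, hi = np.percentile(boots, [0.5, 99.5], axis=0)

# Online monitoring: an observation outside the band raises an alarm.
new_obs = fit[60] + 1.0               # injected sensor fault
print(bool(new_obs > hi[60]))         # prints True: fault flagged
```

    The band is learned entirely from normal-operation data, which is the point of the data-driven approach: no mechanistic fermentation model is needed to decide whether a new observation is abnormal.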

  20. Combining field observations and numerical modeling to better understand fault opening and hydromechanics at depth

    Science.gov (United States)

    Ritz, E.; Pollard, D. D.

    2012-12-01

    This study adds field observations and numerical modeling results to the mounting evidence that fault surface irregularities cause local variations in slip, opening, and stress distributions along faults. A two-dimensional displacement discontinuity boundary element model (DDM) in conjunction with a complementarity algorithm is used to model both idealized and natural fault geometries in order to predict the locations and magnitudes of fault opening, and the style and spatial distribution of off-fault damage, both of which influence local fluid flow. Field observations of exhumed small faults in granodiorite from the central Sierra Nevada, California, help to constrain the numerical models. The Sierran faults exhibit sections of opening that became conduits for fluid flow and void spaces for precipitation of hydrothermal minerals; these sections are often surrounded by fractured and altered wall rock, presumably due to local stress concentrations and the influx of chemically reactive fluids. We are further developing the DDM with complementarity to add internal fluid pressure or normal cohesion along the fault surfaces, which are assigned independently of other contact properties, such as the frictional strength and coefficient of friction. While variable frictional strength or internal normal stress along a planar fault may produce opening or perturb the local stress state, these boundary conditions do not accurately mimic the mechanical behavior of faults with non-planar geometries. We advocate using the nomenclature 'lee' and 'stoss' to describe curved faults rather than 'releasing' and 'restraining bends', because the implied mechanical conditions are not necessarily met. Numerical experiments for idealized curved model faults demonstrate that fault opening can occur along lee sides of the curves, with the enhancing effects of fluid pressure and despite the countervailing effects of increased confining pressure with depth. Ambient effective compressive stresses

  1. Degradation Assessment and Fault Diagnosis for Roller Bearing Based on AR Model and Fuzzy Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2011-01-01

    This paper proposes a new approach combining an autoregressive (AR) model and fuzzy cluster analysis for bearing fault diagnosis and degradation assessment. The AR model is an effective approach to extract fault features, and is generally applied to stationary signals. However, the fault vibration signals of a roller bearing are non-stationary and non-Gaussian. To address this problem, the parameters of the AR model are estimated based on higher-order cumulants. The AR parameters are then taken as the feature vectors, and fuzzy cluster analysis is applied to perform classification and pattern recognition. Experimental analysis results show that the proposed method can identify various types and severities of bearing faults. This study is significant for non-stationary and non-Gaussian signal analysis, fault diagnosis, and degradation assessment.
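
    As a simplified stand-in for the feature-extraction step (the paper estimates the AR parameters from higher-order cumulants and classifies them with fuzzy clustering), the sketch below fits an AR(2) model by ordinary least squares and takes the coefficients as the feature vector; the simulated signal and its parameters are assumptions.

```python
import random

def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + e[t]."""
    # Accumulate the 2x2 normal equations of the lagged regression.
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        x1, x2 = x[t - 1], x[t - 2]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * x[t]; b2 += x2 * x[t]
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

# Simulate a stable AR(2) signal with known parameters (1.0, -0.5)
rng = random.Random(1)
x = [0.0, 0.0]
for _ in range(2000):
    x.append(1.0 * x[-1] - 0.5 * x[-2] + rng.gauss(0.0, 0.1))

a1, a2 = fit_ar2(x)  # (a1, a2) is the feature vector for this signal
```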

  2. Real-Time Fault Contingency Management for Integrated Vehicle Health Management Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Impact Technologies, with support from the Georgia Institute of Technology and Honeywell, propose to develop and demonstrate a suite of real-time Fault Contingency...

  3. Analytical Model and Algorithm of Fuzzy Fault Tree

    Institute of Scientific and Technical Information of China (English)

    杨艺; 何学秋; 王恩元; 刘贞堂

    2002-01-01

    In the past, the probabilities of basic events were described as triangular or trapezoidal fuzzy numbers, which cannot characterize the common distributions of primary events in engineering, and fault trees analyzed by fuzzy set theory did not include repeated basic events. This paper presents a new method to analyze the fault tree, using normal fuzzy numbers to describe the fuzzy probability of each basic event, which is more suitable for reliability analysis in safety systems. The formulae for computing the fuzzy probability of the top event of a fault tree that includes repeated events are then derived. Finally, an example is given.
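
    The gate arithmetic can be sketched with interval arithmetic at a single α-cut of the fuzzy probabilities. This simplification assumes independent, non-repeated basic events (the paper's formulae additionally handle repeated events and normal fuzzy numbers); because the gate formulas are monotone in each input, evaluating the interval endpoints is valid.

```python
def and_gate(intervals):
    """AND gate: product of the input probabilities (independent events)."""
    lo = hi = 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return (lo, hi)

def or_gate(intervals):
    """OR gate: 1 minus the product of complements (independent events)."""
    lo = hi = 1.0
    for a, b in intervals:
        lo *= 1.0 - a
        hi *= 1.0 - b
    return (1.0 - lo, 1.0 - hi)

# Top = A OR (B AND C), each basic-event probability given as an alpha-cut interval
A, B, C = (0.01, 0.02), (0.10, 0.20), (0.10, 0.20)
top = or_gate([A, and_gate([B, C])])  # interval for the top-event probability
```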

  4. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    Efficient fault detection in generators often require prior knowledge of fault behavior, which can be obtained from theoretical analysis, often carried out by using discrete models of a given generator. Mathematical models are commonly represented in the DQ0 reference frame, which is convenient...... in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab...... as undesired spectral components, which can be detected by applying frequency spectrum analysis....

  5. Fault creep and strain partitioning in Trinidad-Tobago: Geodetic measurements, models, and origin of creep

    Science.gov (United States)

    Geirsson, Halldór; Weber, John; La Femina, Peter; Latchman, Joan L.; Robertson, Richard; Higgins, Machel; Miller, Keith; Churches, Chris; Shaw, Kenton

    2017-04-01

    We studied active faults in Trinidad and Tobago in the Caribbean-South American (CA-SA) transform plate boundary zone using episodic GPS (eGPS) data from 19 sites and continuous GPS (cGPS) data from 8 sites, then modeling these data using a series of simple screw dislocation models. Our best-fit model for interseismic fault slip requires: 12-15 mm/yr of right-lateral movement and very shallow locking (0.2 ± 0.2 km; essentially creep) across the Central Range Fault (CRF); 3.4 +0.3/-0.2 mm/yr across the Soldado Fault in south Trinidad, and 3.5 +0.3/-0.2 mm/yr of dextral shear on fault(s) between Trinidad and Tobago. The upper-crustal faults in Trinidad show very little seismicity (1954-current from local network) and do not appear to have generated significant historic earthquakes. However, paleoseismic studies indicate that the CRF ruptured between 2710 and 500 yr. B.P. and thus it was recently capable of storing elastic strain. Together, these data suggest spatial and/or temporal fault segmentation on the CRF. The CRF marks a physical boundary between rocks associated with thermogenically generated petroleum and overpressured fluids in south and central Trinidad, from rocks containing only biogenic gas to the north, and a long string of active mud volcanoes align with the trace of the Soldado Fault along Trinidad's south coast. Fluid (oil and gas) overpressure may thus cause the CRF fault creep that we observe and the lack of seismicity, as an alternative or addition to weak mineral phases on the fault.
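
    A minimal version of the screw dislocation models mentioned here is the classic arctangent profile for interseismic surface velocity across a strike-slip fault; the sketch below uses illustrative values loosely based on the Central Range Fault (~14 mm/yr, ~0.2 km locking depth). With such shallow locking the velocity step across the fault approaches the full slip rate, i.e. creep.

```python
import math

def interseismic_velocity(x_km, slip_rate, locking_depth_km):
    """Fault-parallel surface velocity at distance x from a strike-slip fault,
    for the elastic screw-dislocation (arctangent) model."""
    # atan2 handles the zero-locking-depth (fully creeping) limit gracefully.
    return (slip_rate / math.pi) * math.atan2(x_km, locking_depth_km)

# Illustrative numbers: ~14 mm/yr right-lateral motion, ~0.2 km locking depth
v_near = interseismic_velocity(1.0, 14.0, 0.2)
v_far = interseismic_velocity(100.0, 14.0, 0.2)
```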

  6. Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants

    Science.gov (United States)

    Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

    1998-01-01

    Control of air contaminants is a crucial factor in the safety considerations of crewed space flight. Indoor air quality needs to be closely monitored during long-range missions, such as a Mars mission, and also on large complex space structures such as the International Space Station. This work mainly pertains to the detection and simulation of air contaminants in the space station, though much of it extends easily to buildings and ventilation systems. Here we propose a method with which to track the presence of contaminants using an accurate physical model, and also develop a robust procedure that raises alarms when certain tolerance levels are exceeded. Part of this research concerns the modeling of air flow inside a spacecraft and the consequent dispersal pattern of contaminants. Since our objective is also to monitor the contaminants on-line, we develop a state estimation procedure that uses the measurements from a sensor system to determine an optimal estimate of the contamination in the system as a function of time and space. The real-time optimal estimates in turn are used to detect faults in the system and also to offer diagnoses as to their sources. This work is concerned with the monitoring of air contaminants aboard future-generation spacecraft and seeks to satisfy NASA's requirements as outlined in its Strategic Plan document (Technology Development Requirements, 1996).
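
    A scalar sketch of the state-estimation-plus-alarm idea: a one-dimensional Kalman filter tracks a contaminant concentration modeled as a random walk, and an alarm is raised when the innovation (measurement residual) is implausibly large. All model and noise parameters here are assumptions, not the paper's values.

```python
import math

def kalman_step(x_est, p_est, z, q=1e-4, r=0.01):
    """One predict/update step of a scalar Kalman filter for a contaminant
    concentration modeled as a random walk (process noise q, sensor noise r)."""
    p_pred = p_est + q                    # predict: uncertainty grows
    residual = z - x_est                  # innovation
    k = p_pred / (p_pred + r)             # Kalman gain
    return x_est + k * residual, (1 - k) * p_pred, residual

def monitor(readings, x0=0.0, p0=1.0, n_sigma=4.0, r=0.01):
    """Return indices where the innovation exceeds n_sigma sensor standard
    deviations (the first sample is skipped as a startup transient)."""
    alarms, x, p = [], x0, p0
    for i, z in enumerate(readings):
        x, p, res = kalman_step(x, p, z, r=r)
        if i > 0 and abs(res) > n_sigma * math.sqrt(r):
            alarms.append(i)
    return alarms
```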

  7. Modeling and Fault Diagnosis of Interturn Short Circuit for Five-Phase Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jian-wei Yang

    2015-01-01

    Taking advantage of their high reliability, multiphase permanent magnet synchronous motors (PMSMs), such as the five-phase PMSM and six-phase PMSM, are widely used in fault-tolerant control applications, and one of the important fault-tolerant control problems is fault diagnosis. Most existing literature focuses on fault diagnosis for the three-phase PMSM. In this paper, in contrast to most existing fault diagnosis approaches, a fault diagnosis method for the interturn short circuit (ITSC) fault of a five-phase PMSM based on the trust region algorithm is presented. This paper makes two contributions. (1) By analyzing the physical parameters of the motor, such as resistances and inductances, a novel mathematical model for the ITSC fault of a five-phase PMSM is established. (2) By introducing an objective function related to the interturn short circuit ratio, the fault parameter identification problem is reformulated as an extremum-seeking problem. A trust region algorithm based parameter estimation method is proposed for tracking the actual interturn short circuit ratio. The simulation and experimental results have validated the effectiveness of the proposed parameter estimation method.
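
    The trust region idea can be sketched in one dimension: build a local quadratic model from finite differences, take a step clipped to the trust radius, and grow or shrink the radius according to how well the model predicted the actual decrease. The objective below is a toy quadratic standing in for the paper's short-circuit-ratio objective; all values are illustrative.

```python
def trust_region_minimize(f, x0, delta=0.5, tol=1e-8, max_iter=100):
    """Minimal 1-D trust-region iteration with a finite-difference quadratic model."""
    x, h = x0, 1e-5
    for _ in range(max_iter):
        fx = f(x)
        g = (f(x + h) - f(x - h)) / (2 * h)              # gradient estimate
        hess = (f(x + h) - 2 * fx + f(x - h)) / (h * h)  # curvature estimate
        if abs(g) < tol:
            break
        step = -g / hess if hess > 0 else (-delta if g > 0 else delta)
        step = max(-delta, min(delta, step))              # clip to trust radius
        pred = -(g * step + 0.5 * hess * step * step)     # model-predicted decrease
        actual = fx - f(x + step)
        rho = actual / pred if pred > 0 else -1.0         # reduction ratio
        if rho > 0.1:
            x = x + step                                  # accept the step
        if rho > 0.75:
            delta *= 2.0                                  # model is good: expand
        elif rho < 0.25:
            delta *= 0.5                                  # model is poor: shrink
    return x

# Toy objective: squared error against a "true" short-circuit ratio of 0.12
est = trust_region_minimize(lambda r: (r - 0.12) ** 2, 0.0)
```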

  8. Real-Time Risk and Fault Management in the Mission Evaluation Room of the International Space Station

    Energy Technology Data Exchange (ETDEWEB)

    William R. Nelson; Steven D. Novack

    2003-05-01

    Effective anomaly resolution in the Mission Evaluation Room (MER) of the International Space Station (ISS) requires consideration of risk in the process of identifying faults and developing corrective actions. Risk models such as fault trees from the ISS Probabilistic Risk Assessment (PRA) can be used to support anomaly resolution, but the functionality required goes significantly beyond what the PRA could provide. Methods and tools are needed that can systematically guide the identification of root causes for on-orbit anomalies, and to develop effective corrective actions that address the event and its consequences without undue risk to the crew or the mission. In addition, an overall information management framework is needed so that risk can be systematically incorporated in the process, and effectively communicated across all the disciplines and levels of management within the space station program. The commercial nuclear power industry developed such a decision making framework, known as the critical safety function approach, to guide emergency response following the accident at Three Mile Island in 1979. This report identifies new methods, tools, and decision processes that can be used to enhance anomaly resolution in the ISS Mission Evaluation Room. Current anomaly resolution processes were reviewed to identify requirements for effective real-time risk and fault management. Experience gained in other domains, especially the commercial nuclear power industry, was reviewed to identify applicable methods and tools. Recommendations were developed for next-generation tools to support MER anomaly resolution, and a plan for implementing the recommendations was formulated. The foundation of the proposed toolset will be a "Mission Success Framework" designed to integrate and guide the anomaly resolution process, and to facilitate consistent communication across disciplines while focusing on the overriding importance of mission success.

  10. Physically-based modeling of speed sensors for fault diagnosis and fault tolerant control in wind turbines

    Science.gov (United States)

    Weber, Wolfgang; Jungjohann, Jonas; Schulte, Horst

    2014-12-01

    In this paper, a generic physically-based modeling framework for encoder-type speed sensors is derived. The analysis covers the nominal fault-free case and the two most relevant fault cases. The advantage of this approach is a reconstruction of the output waveforms in dependence on internal physical parameter changes, which enables more accurate diagnosis and identification of faulty incremental encoders, e.g. in wind turbines. The objectives are to describe the effect of tilt and eccentricity of the encoder disk on the digital output signals and the influence on the accuracy of the speed measurement in wind turbines. Simulation results show the applicability and effectiveness of the proposed approach.
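
    A minimal version of such a sensor model: pulse emission times for an incremental encoder at constant shaft speed, with eccentricity modeled as a once-per-revolution sinusoidal shift of the apparent line angles. The functional form and magnitudes are assumptions for illustration, not the paper's derived waveform model.

```python
import math

def pulse_times(n_lines, omega, eccentricity=0.0):
    """Pulse emission times over one revolution for an encoder with `n_lines`
    lines turning at constant angular speed omega (rad/s). A nonzero
    eccentricity shifts the apparent line angles sinusoidally (fault case)."""
    times = []
    for k in range(n_lines):
        theta = 2.0 * math.pi * k / n_lines
        times.append((theta + eccentricity * math.sin(theta)) / omega)
    return times

def interval_spread(times):
    """Peak-to-peak variation of the pulse intervals (zero for a healthy disk)."""
    dt = [b - a for a, b in zip(times, times[1:])]
    return max(dt) - min(dt)
```

An eccentric disk makes the pulse intervals non-uniform, which appears as a once-per-revolution ripple in the measured speed.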

  11. Robust recurrent neural network modeling for software fault detection and correction prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Q.P. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: g0305835@nus.edu.sg; Xie, M. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Ng, S.H. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: isensh@nus.edu.sg; Levitin, G. [Israel Electric Corporation, Reliability and Equipment Department, R and D Division, Haifa 31000 (Israel)]. E-mail: levitin@iec.co.il

    2007-03-15

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed using a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are carried out on a real data set.

  12. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    Science.gov (United States)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of Abort Triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of Abort Triggers.

  13. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik;

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used [...] it to an industrial benchmark. The benchmark tests have shown that the algorithm is capable of detection and isolation of five different faults in the mechanical and hydraulic parts of the pump.

  15. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier for radionuclides or fluids (water, hydrocarbons, CO2) migration. However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow us to explore the conditions for stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built in GoCad software from a detailed structural analysis of 6 fully cored and logged 30-to-50m long and 3-to-15m spaced boreholes crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay including S-C bands and microfolds occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  16. The influence of fault geometry and frictional contact properties on slip surface behavior and off-fault damage: insights from quasi-static modeling of small strike-slip faults from the Sierra Nevada, CA

    Science.gov (United States)

    Ritz, E.; Pollard, D. D.

    2011-12-01

    Geological and geophysical investigations demonstrate that faults are geometrically complex structures, and that the nature and intensity of off-fault damage is spatially correlated with geometric irregularities of the slip surfaces. Geologic observations of exhumed meter-scale strike-slip faults in the Bear Creek drainage, central Sierra Nevada, CA, provide insight into the relationship between non-planar fault geometry and frictional slip at depth. We investigate natural fault geometries in an otherwise homogeneous and isotropic elastic material with a two-dimensional displacement discontinuity method (DDM). Although the DDM is a powerful tool, frictional contact problems are beyond the scope of the elementary implementation because it allows interpenetration of the crack surfaces. By incorporating a complementarity algorithm, we are able to enforce appropriate contact boundary conditions along the model faults and include variable friction and frictional strength. This tool allows us to model quasi-static slip on non-planar faults and the resulting deformation of the surrounding rock. Both field observations and numerical investigations indicate that sliding along geometrically discontinuous or irregular faults may lead to opening of the fault and the formation of new fractures, affecting permeability in the nearby rock mass and consequently impacting pore fluid pressure. Numerical simulations of natural fault geometries provide local stress fields that are correlated to the style and spatial distribution of off-fault damage. We also show how varying the friction and frictional strength along the model faults affects slip surface behavior and consequently influences the stress distributions in the adjacent material.

  17. Observer-based and Regression Model-based Detection of Emerging Faults in Coal Mills

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Lin, Bao; Jørgensen, Sten Bay

    2006-01-01

    In order to improve the reliability of power plants it is important to detect faults as fast as possible. In doing so, it is interesting to find the most efficient method. Since modeling of large-scale systems is time consuming, it is interesting to compare a model-based method with data-driven ones. In this paper three different fault detection approaches are compared using the example of a coal mill, where a fault emerges. The compared methods are based on: an optimal unknown input observer, and static and dynamic regression model-based detections. The conclusion of the comparison is that the observer-based scheme...

  18. An approach to secure weather and climate models against hardware faults

    Science.gov (United States)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
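
    The backup-grid idea can be sketched in one dimension: keep a coarse copy of a prognostic field, periodically compare the field against it, and restore values that differ implausibly. A real implementation would interpolate between grids and derive the tolerance from physical bounds; the stride, tolerance, and field below are illustrative.

```python
def make_backup(field, stride=4):
    """Coarse backup: store every stride-th value of a prognostic field."""
    return list(field[::stride])

def check_and_restore(field, backup, stride=4, tol=1.0):
    """Repair grid points whose value drifted implausibly far from the coarse
    backup (treated as a hardware fault); returns the number of repairs."""
    repaired = 0
    for i, ref in enumerate(backup):
        j = i * stride
        if abs(field[j] - ref) > tol:
            field[j] = ref          # restore from the backup grid
            repaired += 1
    return repaired

# A smooth prognostic field, its coarse backup, then a simulated bit flip
field = [0.1 * j for j in range(16)]
backup = make_backup(field)
field[8] = 1.0e30                   # hardware fault corrupts one value
n = check_and_restore(field, backup)
```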

  19. A Study on Landslide Risk Management by Applying Fault Tree Logics

    Directory of Open Access Journals (Sweden)

    Kazmi Danish

    2017-01-01

    Slope stability is one of the focal areas of interest to geotechnical designers and lends itself naturally to probabilistic approaches, since the analysis leads to a "probability of failure". Assessment of existing slopes in terms of risk is especially meaningful where landslides are concerned, and probabilistic slope stability analysis (PSSA) is the best option for covering landslide events. The intent here is to offer a probabilistic framework for quantified risk analysis that includes human uncertainties. To this end, Fault Tree Analysis is utilized, and the consequences of failures of the reference landslides are taken for prediction of risk levels. It is concluded that fault tree logic is well suited to capture additional categories of uncertainty, such as human, organizational, and knowledge-related uncertainty. In practice, the approach has been used to bring together engineering and management performance and personnel, to produce reliability in slope engineering practices.

  20. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring of the major gas path components of gas turbines has mostly used model-based methods like Gas Path Analysis (GPA). This method finds quantity changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean engine performance parameters, free of any engine faults, calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they have the drawbacks of low accuracy and long learning times for building the learning database when there is a large amount of learning data. In addition, they have a very complex structure for effectively finding single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database obtained from the developed base performance model. In training the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  1. Fault Tolerant Controller Design for a Faulty UAV Using Fuzzy Modeling Approach

    Directory of Open Access Journals (Sweden)

    Moshu Qian

    2016-01-01

    We address a fault tolerant control (FTC) issue for an unmanned aerial vehicle (UAV) under possible simultaneous actuator saturation and fault occurrence. Firstly, Takagi-Sugeno fuzzy models representing the nonlinear flight control system (FCS) of a UAV with unknown disturbances and actuator saturation are established. Then, a nominal H-infinity tracking controller is presented using an online estimator, which is introduced to weaken the saturation effect. Based on the nominal tracking controller, we propose an adaptive fault tolerant tracking controller (FTTC) to solve the actuator loss-of-effectiveness (LOE) fault problem. Compared with previous work, the approach developed in our research does not rely on any fault diagnosis unit and is easily applied in engineering. Finally, simulation results indicate the efficiency of the presented FTC scheme.

  2. GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis

    Science.gov (United States)

    Ma, Jiaxin; Xu, Feiyun; Huang, Kai; Huang, Ren

    2017-09-01

    Given their modeling simplicity and sensitivity to condition variations, time series models are widely used in feature extraction for fault classification and diagnosis. However, the nonlinear and nonstationary characteristics common in fault signals of rolling bearings pose challenges for diagnosis. In this paper, a hybrid model combining a general expression for linear and nonlinear autoregressive (GNAR) model with a generalized autoregressive conditional heteroscedasticity (GARCH) model (i.e., GNAR-GARCH) is proposed and applied to rolling bearing fault diagnosis. An exact expression of the GNAR-GARCH model is given. The maximum likelihood method is used for parameter estimation, and a modified Akaike Information Criterion is adopted for structure identification of the GNAR-GARCH model. The main advantage of this novel model over others is that the combination makes it suitable for nonlinear and nonstationary signals, which is verified with statistical tests comparing the different time series models. Finally, the GNAR-GARCH model is applied to fault diagnosis by modeling mechanical vibration signals, including both simulated and real data. With the estimated parameters taken as feature vectors, the k-nearest neighbor algorithm is used to classify the fault status. The results show that the GNAR-GARCH model exhibits higher accuracy and better performance than the other models.
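    The GARCH half of the proposed GNAR-GARCH model is what captures time-varying variance; a minimal GARCH(1,1) recursion with invented parameters shows why such features are sensitive to fault-induced shocks:

```python
# GARCH(1,1) conditional variance recursion (parameters are illustrative):
#   sigma2[t] = omega + alpha * e[t-1]**2 + beta * sigma2[t-1]
def garch_variance(residuals, omega=0.1, alpha=0.3, beta=0.6):
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at unconditional variance
    for e in residuals[:-1]:
        sigma2.append(omega + alpha * e * e + beta * sigma2[-1])
    return sigma2

# A large residual (e.g. a fault-induced impulse in a bearing signal)
# inflates the conditional variance for several subsequent steps, which is
# what makes GARCH-style features responsive to condition changes.
quiet = garch_variance([0.1] * 5)
shocked = garch_variance([0.1, 5.0, 0.1, 0.1, 0.1])
```
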

  3. Faulting and block rotation in the Afar triangle, East Africa: The Danakil "crank-arm" model

    Science.gov (United States)

    Souriot, T.; Brun, J.-P.

    1992-10-01

    Several domains of contrasting extensional deformation have been identified in the southern Afar triangle (East Africa) from fault patterns analyzed with panchromatic stereoscopic SPOT (Système Probatoire d'Observation de la Terre) images. Stretching directions and statistical variations in fault orientation and offset fit the Danakil "crank-arm" model of Sichler: a 10° sinistral rotation of the Danakil block explains the fault geometry and dextral block rotation in the southern part of the Afar triangle, as well as the oblique extension in the Tadjoura Gulf. Analogue modeling supports this interpretation.

  4. Spectral element modelling of fault-plane reflections arising from fluid pressure distributions

    Science.gov (United States)

    Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.

    2007-01-01

    The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM, followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model.

  5. Modeling of fault reactivation and induced seismicity during hydraulic fracturing of shale-gas reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric; Moridis, George J.

    2013-07-01

    We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned towards conditions usually encountered in the Marcellus shale play in the northeastern US at an approximate depth of 1500 m (~4,500 feet). Our simulations indicate that when faults are present, micro-seismic events are possible, with magnitudes somewhat larger than those of micro-seismic events originating from regular hydraulic fracturing because of the larger surface area available for rupture. The simulations indicated fault rupture lengths of about 10 to 20 m, which in rare cases can extend to over 100 m, depending on the fault permeability, the in situ stress field, and the fault strength properties. In addition to a single-event rupture length of 10 to 20 m, repeated events and aseismic slip amounted to a total rupture length of 50 m, along with a shear offset displacement of less than 0.01 m. This indicates that the possibility of hydraulically induced fractures at great depth (thousands of meters) activating faults and creating a new flow path that can reach shallow groundwater resources (or even the surface) is remote. The expected low permeability of faults in producible shale is clearly a limiting factor for the possible rupture length and seismic magnitude. In fact, for a fault that is initially nearly impermeable, the only possibility of a larger fault slip event would be opening by hydraulic fracturing; this would allow pressure to penetrate the matrix along the fault and reduce the frictional strength over a sufficiently large fault surface patch. However, our simulation results show that if the fault is initially impermeable, hydraulic fracturing along the fault results in numerous small micro-seismic events along with the propagation, effectively

  6. Fault-related fold styles and progressions in fold-thrust belts: Insights from sandbox modeling

    Science.gov (United States)

    Yan, Dan-Ping; Xu, Yan-Bo; Dong, Zhou-Bin; Qiu, Liang; Zhang, Sen; Wells, Michael

    2016-03-01

    Fault-related folds of variable structural styles and assemblages commonly coexist in orogenic belts with competent-incompetent interlayered sequences. Despite their commonality, the kinematic evolution of these structural styles and assemblages is often loosely constrained because multiple solutions exist for their structural progression during tectonic restoration. We use a sandbox modeling instrument with a particle image velocimetry monitor to test four sandbox models with multilayer competent-incompetent materials. Test results reveal that decollement folds initiate along selected incompetent layers with decreasing velocity difference and constant vorticity difference between the hanging wall and footwall of the initial fault tips. The decollement folds are progressively converted to fault-propagation folds and fault-bend folds through the development of fault ramps breaking across competent layers, followed by propagation into fault flats within an upper incompetent layer. A thick-skinned thrust is produced by initiating a decollement fault within the metamorphic basement. Progressive thrusting and uplifting of the thick-skinned thrust trigger initiation of the uppermost incompetent decollement, with formation of a decollement fold and subsequent conversion to fault-propagation and fault-bend folds, which combine to form an imbricate thrust. Breakouts at the base of the early-formed fault ramps along the lowest incompetent layers, which may correspond to basement-cover contacts, dome the uppermost decollement and imbricate thrusts to form passive roof duplexes and constitute the thin-skinned thrust belt. Structural styles and assemblages in each tectonic stage are similar to those in representative orogenic belts, such as the South China, Southern Appalachian, and Alpine orogenic belts.

  7. Experimental determination of the long-term strength and stability of laterally bounding fault zones in CO2 storage reservoirs based on kinetic modeling of fault zone evolution

    Science.gov (United States)

    Samuelson, J. E.; Koenen, M.; Tambach, T.

    2011-12-01

    Long-term sequestration of CO2, harvested from point sources such as coal-burning power plants and cement plants, in depleted oil and gas reservoirs is considered one of the most attractive options for short- to medium-term mitigation of anthropogenic forcing of climate change. Many such reservoirs are laterally bounded by low-permeability fault zones, which could potentially be reactivated by changes in stress state during and after the injection process, and also by alterations in the frictional strength of fault gouge material. Of additional concern is how the stability of the fault zones will change under the influence of supercritical CO2, specifically whether the rate-and-state frictional constitutive parameters (a, b, DC) of the fault zone will change in such a way as to enhance the likelihood of seismic activity on the fault zone. The short-term influence of CO2 on the frictional strength and stability of simulated fault gouges prepared from mixtures of cap rock and reservoir rock has been analyzed recently [Samuelson et al., In Prep.], concluding that CO2 has little influence on frictional constitutive behavior on the timescale of a typical experiment. For the much longer timescales over which CO2 is intended to be sequestered, we have chosen to model the long-term mineralogical alteration of a fault zone with a simple starting mineralogy of 33% quartz, 33% illite, and 33% dolomite by weight, using the geochemical modeling program PHREEQC and the THERMODDEM database, assuming instantaneous mixing of the CO2 with the fault gouge. The geochemical modeling predicts that equilibrium will be reached between fault gouge, reservoir brine, and CO2 in approximately 440 years assuming an average grain size (davg) of 20 μm, and in ~90 years assuming davg = 4 μm, a reasonable range of grain sizes for natural fault gouges. The main change to gouge mineralogy comes from the complete dissolution of illite and the precipitation of muscovite. The final equilibrium mineralogy of the fault

  8. Numerical model of formation of a 3-D strike-slip fault system

    Science.gov (United States)

    Chemenda, Alexandre I.; Cavalié, Olivier; Vergnolle, Mathilde; Bouissou, Stéphane; Delouis, Bertrand

    2016-01-01

    The initiation and initial evolution of a strike-slip fault are modeled within an elastoplastic constitutive framework, taking into account the evolution of the hardening modulus with inelastic straining. The initial and boundary conditions are similar to those of the Riedel shear experiment. The models first deform purely elastically. Then damage (inelastic deformation) starts at the model surface. The damage zone propagates both normal to the forming fault zone and downwards. Finally, it affects the whole layer thickness, forming a flower-like structure in cross-section. At a certain stage, a dense set of parallel Riedel shears forms at shallow depth. A few of these propagate both laterally and vertically, while others die. The faults first propagate in-plane, but then rapidly change direction to make a larger angle with the shear axis. New fault segments form as well, resulting in a complex 3-D fault zone architecture. Different fault segments accommodate strike-slip and normal displacements, which results in the formation of valleys and rotations along the fault system.

  9. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    Science.gov (United States)

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  10. Analysis and implementation of power management and control strategy for six-phase multilevel ac drive system in fault condition

    DEFF Research Database (Denmark)

    Sanjeevikumar, P.; Grandi, Gabriele; Blaabjerg, Frede

    2016-01-01

    This research article exploits the power management algorithm in post-fault conditions for a six-phase (quad) multilevel inverter. The drive circuit consists of four 2-level, three-phase voltage source inverters (VSIs) supplying a six-phase open-end windings motor or impedance load, with circumstan...... sources. The developed post-fault algorithm is applied when one VSI is faulted and the load is fed from the remaining three healthy VSIs. In faulty conditions the multilevel outputs are reduced from 3-level to 2-level, but the system still operates at degraded power. Numerical simulation

  11. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

    Full Text Available An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system, so the fault features of a gear vibration signal can be extracted from it without constructing a mathematical model or studying the fault mechanism of the gear vibration system, as time-frequency analysis methods require. However, AR models can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose a vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into the feature extraction of gear vibration signals as a preprocessor before AR models are generated. On the other hand, given the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced for gear fault pattern recognition. In the method proposed in this paper, vibration signals are first decomposed into a finite number of intrinsic mode functions; then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the residual are taken as fault characteristic vectors and used as input to an SVM classifier to classify the working condition of gears. The experimental results show that the proposed approach, combining the IMF AR model and SVM, can identify the working condition of gears with a success rate of 100%, even with a smaller number of samples.
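    The AR-parameter feature idea can be sketched in miniature: fit an AR(2) model to a signal segment by least squares and use the coefficients plus residual variance as the feature vector (the paper does this per IMF component before the SVM stage; the model order and test signal here are illustrative):

```python
# Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2].
# The coefficient pair plus residual variance forms the feature vector.
def ar2_features(x):
    s11 = sum(v * v for v in x[1:-1])
    s22 = sum(v * v for v in x[:-2])
    s12 = sum(a * b for a, b in zip(x[1:-1], x[:-2]))
    r1 = sum(a * b for a, b in zip(x[2:], x[1:-1]))
    r2 = sum(a * b for a, b in zip(x[2:], x[:-2]))
    det = s11 * s22 - s12 * s12          # 2x2 normal equations
    a1 = (r1 * s22 - r2 * s12) / det
    a2 = (s11 * r2 - s12 * r1) / det
    resid = [x[t] - a1 * x[t - 1] - a2 * x[t - 2] for t in range(2, len(x))]
    var = sum(e * e for e in resid) / len(resid)
    return a1, a2, var

# Synthetic AR(2) signal with known coefficients (1.5, -0.7): the fit
# should recover them, and the residual variance should be near zero.
x = [1.0, 0.5]
for _ in range(60):
    x.append(1.5 * x[-1] - 0.7 * x[-2])
a1, a2, var = ar2_features(x)
```

    In the paper's pipeline these per-IMF feature vectors feed an SVM; any classifier that separates the vectors would illustrate the same idea.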

  12. Explaining the current geodetic field with geological models: A case study of the Haiyuan fault system

    Science.gov (United States)

    Daout, S.; Jolivet, R.; Lasserre, C.; Doin, M. P.; Barbot, S.; Peltzer, G.; Tapponnier, P.

    2015-12-01

    Oblique convergence across Tibet leads to slip partitioning, with the co-existence of strike-slip, normal and thrust motion in major fault systems. While such complexity has been shown at the surface, the question is to understand how faults interact and accumulate strain at depth. Here, we process InSAR data across the central Haiyuan restraining bend, at the north-eastern boundary of the Tibetan plateau, and show that the surface complexity can be explained by partitioning of a uniform deep-seated convergence rate. We construct a time series of ground deformation from Envisat radar data spanning the 2001-2011 period, across an area made challenging by the high jump in topography between the desert environment and the plateau. To improve the signal-to-noise ratio, we apply the latest synthetic aperture radar interferometry methodology, including Global Atmospheric Model (ERA-Interim) and Digital Elevation Model error corrections, before unwrapping. We then developed a new Bayesian approach, jointly inverting our InSAR time series together with published GPS displacements. We explore the fault system geometry at depth and the associated slip rates, and determine a uniform N86°E±7° convergence rate of 8.45±1.4 mm/yr across the whole fault system, with variable partitioning west and east of a major extensional fault-jog. Our 2D model gives a quantitative understanding of how crustal deformation is accumulated by the various branches of this thrust/strike-slip fault system and demonstrates the importance of the geometry of the Haiyuan Fault in controlling the partitioning or the extrusion of block motion. The approach we have developed would allow constraining the low strain accumulation along deep faults, such as blind thrust faults or a possible detachment in the San Andreas "big bend", which are often associated with poorly understood seismic hazard.

  13. Establishment of a Fault Prognosis Model Using Wavelet Neural Networks and Its Engineering Application

    Institute of Scientific and Technical Information of China (English)

    LIU Qi-peng; FENG Quan-ke; XIONG Wei

    2004-01-01

    Fault diagnosis is confronted with two problems: how to "measure" the growth of a fault, and how to predict the remaining useful lifetime of the failing component or machine. This paper attempts to solve these two problems by proposing a fault prognosis model based on a wavelet basis neural network. Gaussian radial basis functions and Mexican hat wavelet frames are used as scaling functions and wavelets, respectively. The centers of the basis functions are calculated using a dyadic expansion scheme and a k-means clustering algorithm.
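    The k-means step used to place the basis-function centers can be sketched in one dimension; the data and the number of centers are illustrative assumptions:

```python
# 1-D k-means sketch (k = 2): cluster input samples and use the cluster
# means as radial-basis-function centers. Data and k are invented.
def kmeans_1d(data, centers, iters=20):
    for _ in range(iters):
        groups = {c: [] for c in range(len(centers))}
        for x in data:
            # assign each sample to its nearest current center
            i = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            groups[i].append(x)
        # move each center to the mean of its assigned samples
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in groups.items()]
    return sorted(centers)

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]       # two obvious clusters
centers = kmeans_1d(data, [0.0, 2.0])        # converges to ~[1.0, 5.0]
```

    Each resulting center would seed one Gaussian radial basis function in the network.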

  14. Model-Based Fault Tolerant Control for Hybrid Dynamic Systems with Sensor Faults

    Institute of Scientific and Technical Information of China (English)

    杨浩; 冒泽慧; 姜斌

    2006-01-01

    A model-based fault tolerant control approach for hybrid linear dynamic systems is proposed in this paper. The proposed method, taking advantage of reliable control, can maintain the performance of the faulty system during the time delay of fault detection and diagnosis (FDD) and fault accommodation (FA), and can be regarded as the first line of defence against sensor faults. Simulation results for a three-tank system with a sensor fault are given to show the efficiency of the method.

  15. FAULT IDENTIFICATION IN HETEROGENEOUS NETWORKS USING TIME SERIES ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    孙钦东; 张德运; 孙朝晖

    2004-01-01

    Fault management is crucial to providing quality of service guarantees for future networks, and fault identification is an essential part of it. A novel fault identification algorithm is proposed in this paper, which focuses on anomaly detection in network traffic. Since fault identification is achieved using statistical information from the management information base, the algorithm is compatible with the existing Simple Network Management Protocol framework. The network traffic time series is verified to be non-stationary. By fitting an adaptive autoregressive model, the series is transformed into a multidimensional vector. Training samples and identifiers are acquired from network simulation. A k-nearest neighbor classifier identifies the system faults after being trained. The experimental results are consistent with the given fault scenarios, which demonstrates the accuracy of the algorithm. The identification errors are discussed to illustrate that the algorithm adapts to fault scenarios in which the network traffic changes.
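    The final classification stage described above is a plain k-nearest-neighbor vote over feature vectors; a bare sketch with invented 2-D "adaptive AR coefficient" vectors and labels:

```python
# k-NN vote over feature vectors. In the record, each vector would hold the
# adaptive AR coefficients of one traffic window; the vectors and labels
# below are invented for illustration.
def knn_classify(sample, train, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(sample, x)), label)
        for x, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)   # majority vote

train = [((0.90, 0.10), "normal"), ((0.80, 0.20), "normal"),
         ((0.85, 0.15), "normal"),
         ((0.20, 0.90), "fault"), ((0.30, 0.80), "fault"),
         ((0.25, 0.85), "fault")]

label = knn_classify((0.28, 0.82), train)     # lands in the fault cluster
```
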

  16. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities to analyze and identify signals. It would therefore be significant to build an auditory model based on the mechanism of human auditory systems, which may improve the effectiveness of mechanical signal analysis and enrich the methods of extracting mechanical fault features. However, existing methods are all based on explicit mathematical or physical formulations and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced for the first time. This auditory model transforms a time-domain signal into an auditory spectrum via bandpass filtering, nonlinear compression, and lateral inhibition, simulating the principles of the human auditory system. The EA model is developed with a Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner-hair-cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults: misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.
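    The Gammatone filterbank that serves as the basilar-membrane stage is built from filters whose impulse response has the standard form g(t) = t^(n-1) e^(-2πbt) cos(2πft); a discrete sketch with illustrative center frequency, bandwidth, and sample rate:

```python
# Discrete gammatone impulse response (standard textbook form; the
# parameter values below are illustrative, not the paper's settings).
import math

def gammatone_ir(fc, bw, fs, n=4, dur=0.05):
    """g(t) = t^(n-1) * exp(-2*pi*bw*t) * cos(2*pi*fc*t), sampled at fs."""
    out = []
    for k in range(int(dur * fs)):
        t = k / fs
        out.append(t ** (n - 1) * math.exp(-2 * math.pi * bw * t)
                   * math.cos(2 * math.pi * fc * t))
    return out

ir = gammatone_ir(fc=1000.0, bw=125.0, fs=16000.0)
peak = max(abs(v) for v in ir)   # envelope rises from zero, then decays
```

    A filterbank is obtained by repeating this at many center frequencies; convolving the vibration signal with each filter gives the bandpass stage of the EA model.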

  17. FUNCTIONAL MODELLING FOR FAULT DIAGNOSIS AND ITS APPLICATION FOR NPP

    Directory of Open Access Journals (Sweden)

    MORTEN LIND

    2014-12-01

    Full Text Available The paper presents functional modelling and its application to diagnosis in nuclear power plants. Functional modelling is defined, and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed, and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions, which is represented in functional modelling. Multilevel flow modelling (MFM), a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described with two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies, including decomposition and reuse.

  18. An Improved NHPP Model with Time-Varying Fault Removal Delay

    Institute of Scientific and Technical Information of China (English)

    Xue Yang; Nan Sang; Hang Lei

    2008-01-01

    In this paper, an improved NHPP model is proposed by replacing the constant fault removal time in the NHPP model proposed by Daniel R. Jeske with a time-varying fault removal delay. In our model, a time-dependent delay function is established to fit the fault removal process. Using two sets of practical data, the descriptive and predictive abilities of the improved NHPP model are compared with those of the NHPP model, the G-O model, and the delayed S-shaped model. The results show that the improved model fits and predicts the data better.
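    For orientation, the G-O model referenced above has mean value function m(t) = a(1 - e^(-bt)); the sketch below shows how a time-varying removal delay shifts the removal curve behind the detection curve (the parameter values and the delay function are invented for illustration):

```python
# Goel-Okumoto NHPP mean value function plus a time-varying removal delay.
# a = total expected faults, b = detection rate; d(t) = removal lag at t.
# All numbers are illustrative, not fitted to the paper's data sets.
import math

def go_mean(t, a=100.0, b=0.05):
    """Expected cumulative faults DETECTED by time t (G-O NHPP)."""
    return a * (1.0 - math.exp(-b * t))

def removed_by(t, delay=lambda t: 2.0 + 0.1 * t):
    """Expected faults REMOVED by t when removal lags detection by delay(t)."""
    lag = delay(t)
    return go_mean(t - lag) if t > lag else 0.0

detected = go_mean(30.0)     # detection curve at t = 30
removed = removed_by(30.0)   # removal curve lags behind it
```
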

  19. Near-Surface Fault Structures of the Seulimuem Segment Based on Electrical Resistivity Model

    Science.gov (United States)

    Ismail, Nazli; Yanis, Muhammad; Idris, Syafrizal; Abdullah, Faisal; Hanafiah, Bukhari

    2017-05-01

    The Great Sumatran Fault (GSF) system is an arc-parallel strike-slip fault system along the volcanic front, related to the oblique subduction of the oceanic Indo-Australian plate. Large earthquakes along the southern GSF since 1892 have been reported, but the Seulimuem segment in northernmost Sumatra has not produced large earthquakes in the past 100 years. The 200-km-long segment is considered to be a seismic gap. Detailed geological study of the fault (its surface trace locations, late Quaternary slip rate, and rupture history) is urgently needed for earthquake disaster mitigation in the future. However, finding a suitable area for paleoseismic trenching is an obstacle when the fault traces are not clearly shown on the surface. We have conducted geoelectrical measurements in the Lamtamot area of Aceh Besar District in order to locate the fault line for paleoseismic excavation. Apparent resistivity data were collected along a 40 m profile parallel to the planned trenching site. The 2D electrical resistivity model provided evidence of resistivity anomalies with high lateral contrast. This anomaly almost coincides with the topographic scarp, which is modified by agriculture at the surface in the northern part of Lamtamot. The steeply dipping electrical contrast may correspond to a fault. However, the model does not well resolve evidence of minor faults that may be related to the presence of surface ruptures. A near-fault paleoseismic investigation requires trenching across the fault in order to detect and analyze the geological record of past large earthquakes along the Seulimuem segment.

  20. The disruption management model.

    Science.gov (United States)

    McAlister, James

    2011-10-01

    Within all organisations, business continuity disruptions present a set of dilemmas that managers may not have dealt with before in their normal daily duties. The disruption management model provides a simple but effective management tool to enable crisis management teams to stay focused on recovery in the midst of a business continuity incident. The model has four chronological primary headlines, which steer the team through a quick-time crisis decision-making process. The procedure facilitates timely, systematic, rationalised and justified decisions, which can withstand post-event scrutiny. The disruption management model has been thoroughly tested within an emergency services environment and is proven to significantly support clear and concise decision making in a business continuity context.

  1. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    Science.gov (United States)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models, with seismic catalogues used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and geometric and kinematic parameters, together with estimates of slip rate. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what degree. The results of such comparison evidence the deformation model and
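    The quoted hazard level (10% probability of exceedance in 50 years) maps, under the usual Poisson assumption, to a return period of about 475 years via P = 1 - e^(-rate·T); a quick check of that standard conversion:

```python
# Poisson probability of exceedance: P = 1 - exp(-rate * T).
# Inverting for the 10%-in-50-years level gives the familiar ~475-year
# return period used in seismic hazard mapping.
import math

def prob_exceedance(rate, years):
    return 1.0 - math.exp(-rate * years)

return_period = -50.0 / math.log(1.0 - 0.10)      # ~= 475 years
p = prob_exceedance(1.0 / return_period, 50.0)    # round-trips to 0.10
```
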

  2. Fault Tolerance Assistant (FTA): An Exception Handling Programming Model for MPI Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Aiman [Univ. of Chicago, IL (United States). Dept. of Computer Science; Laguna, Ignacio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sato, Kento [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Islam, Tanzima [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-23

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
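    The try/catch programming model described above can be mimicked in plain Python without MPI: a simulated process failure raises an exception, and the catch block stands in for FTA-MPI's repair-and-retry path. This is an analogy for the control flow only, not FTA-MPI's actual API:

```python
# Sketch of the try/catch recovery pattern: the try block runs the normal
# computation; the except block localizes the failure, "repairs" state, and
# lets the loop retry the failed step. The failure itself is simulated.
class ProcessFailure(Exception):
    """Stands in for a process-failure notification."""

_failed_once = False

def run_step(step):
    """Pretend compute step that 'loses a process' once, at step 2."""
    global _failed_once
    if step == 2 and not _failed_once:
        _failed_once = True
        raise ProcessFailure("simulated rank failure at step 2")

log = []
step = 0
while step < 4:
    try:
        run_step(step)                    # normal computation path
        log.append(("ok", step))
        step += 1
    except ProcessFailure:
        # recovery path: in FTA-MPI terms, rebuild the communicator and
        # restore application state, then retry the same step
        log.append(("recovered", step))
```
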

  3. Bayesian Network Based Fault Prognosis via Bond Graph Modeling of High-Speed Railway Traction Device

    Directory of Open Access Journals (Sweden)

    Yunkai Wu

    2015-01-01

    To predict component-level faults accurately for a high-speed railway traction system, a fault prognosis approach based on Bayesian network and bond graph modeling techniques is proposed. The inherent structure of a railway traction system is represented by a bond graph model, based on which a multilayer Bayesian network is developed for fault propagation analysis and fault prediction. For complete and incomplete data sets, two parameter learning algorithms, Bayesian estimation and the expectation maximization (EM) algorithm, are adopted to determine the conditional probability table of the Bayesian network. The proposed prognosis approach, using Pearl's polytree propagation algorithm for joint probability reasoning, can predict the failure probabilities of leaf nodes based on the current status of the root nodes. Verification results in a high-speed railway traction simulation system demonstrate the effectiveness of the proposed approach.
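    The leaf-node prediction described above reduces, in the smallest case, to marginalizing a conditional probability table over the root's state; the CPT values below are invented for illustration:

```python
# Two-node Bayesian network sketch: root (component status) -> leaf
# (system-level failure). P(leaf) = sum over root states of
# P(leaf | root) * P(root). All probabilities are illustrative.
p_root_faulty = 0.2                      # current belief about the root node
cpt_leaf = {True: 0.9, False: 0.05}      # P(leaf fails | root faulty/healthy)

p_leaf = (p_root_faulty * cpt_leaf[True]
          + (1.0 - p_root_faulty) * cpt_leaf[False])   # = 0.22
```

    Pearl's polytree algorithm performs the same marginalization efficiently across a whole network of such tables.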

  4. Takagi Sugeno fuzzy expert model based soft fault diagnosis for two tank interacting system

    Directory of Open Access Journals (Sweden)

    Manikandan Pandiyan

    2014-09-01

    Full Text Available The inherent characteristics of fuzzy logic theory make it suitable for fault detection and diagnosis (FDI). Fault detection can benefit from nonlinear fuzzy modelling, and fault diagnosis can profit from a transparent reasoning system that can embed operator experience but also learn from experimental and/or simulation data. Fuzzy logic-based diagnosis is thus advantageous, since it allows the incorporation of a-priori knowledge and lets the user understand the inference of the system. In this paper, the successful use of a fuzzy FDI system based on dynamic fuzzy models for fault detection and diagnosis of an industrial two-tank system is presented. The plant data are used for the design and validation of the fuzzy FDI system. The validation results show the effectiveness of this approach.
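The Takagi-Sugeno inference named in the record title can be sketched in a few lines: fuzzy sets over a residual (measured minus model-predicted tank level) fire rules whose crisp consequents are combined by a weighted average. The membership functions and rule values below are invented for illustration, not taken from the paper:

```python
# Illustrative zero-order Takagi-Sugeno inference for fault diagnosis:
# antecedents are triangular fuzzy sets over a residual, consequents are
# crisp fault-severity values; the output is the weighted average.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ts_severity(residual, rules):
    """Weighted average of rule consequents (zero-order TS model)."""
    weights = [mu(residual) for mu, _ in rules]
    if sum(weights) == 0.0:
        return 0.0
    return sum(w * y for w, (_, y) in zip(weights, rules)) / sum(weights)

rules = [
    (lambda r: tri(r, -0.1, 0.0, 0.1), 0.0),   # residual near 0 -> no fault
    (lambda r: tri(r, 0.0, 0.2, 0.4), 0.5),    # small positive -> minor leak
    (lambda r: tri(r, 0.2, 0.6, 1.0), 1.0),    # large positive -> severe leak
]
severity = ts_severity(0.3, rules)
```

The transparency the record mentions comes from the rules themselves: each one reads as an operator statement about what a given residual level means.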

  5. Power transformer fault diagnosis model based on rough set theory with fuzzy representation

    Institute of Scientific and Technical Information of China (English)

    Li Minghua; Dong Ming; Yan Zhang

    2007-01-01

    Objective Due to the incompleteness and complexity of fault diagnosis for power transformers, a comprehensive rough-fuzzy scheme for solving fault diagnosis problems is presented. Fuzzy set theory is used both for representing the indications of incipient faults and for producing a fuzzy granulation of the feature space. Rough set theory is used to obtain dependency rules that model indicative regions in the granulated feature space. The fuzzy membership functions corresponding to the indicative regions, modelled by rules, are stored as cases. Results Diagnostic conclusions are made using a similarity measure based on these membership functions. Each case involves only a reduced number of relevant features, which makes this scheme suitable for fault diagnosis. Conclusion The superiority of this method in terms of classification accuracy and case generation is demonstrated.

  6. Fault identification using piezoelectric impedance measurement and model-based intelligent inference with pre-screening

    Science.gov (United States)

    Shuai, Q.; Zhou, K.; Zhou, Shiyu; Tang, J.

    2017-04-01

    While piezoelectric impedance/admittance measurements have been used for fault detection and identification, the actual identification of fault location and severity remains a challenging topic. On the one hand, the approach offers high detection sensitivity owing to its high-frequency actuation/sensing nature. On the other hand, high-frequency analysis requires high dimensionality in the model, and the subsequent inverse analysis contains a very large number of unknowns, which often renders the identification problem under-determined. A new fault identification algorithm is developed in this research for piezoelectric impedance/admittance based measurement. Taking advantage of the algebraic relation between the sensitivity matrix and the admittance change measurement, we devise a pre-screening scheme that can rank the likelihoods of fault locations with estimated fault severity levels, which drastically reduces the fault parameter space. A Bayesian inference approach is then incorporated to pinpoint the fault location and severity with high computational efficiency. The proposed approach is examined and validated through case studies.
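The pre-screening idea can be sketched as follows: if the measured admittance change is approximately one column of the sensitivity matrix scaled by the fault severity, fitting each column to the measurement by least squares ranks candidate locations and estimates severities. The matrix and measurement below are invented toy data, not the paper's:

```python
# Sketch of sensitivity-matrix pre-screening: rank candidate fault locations
# by how well each sensitivity column (scaled by a least-squares severity)
# explains the measured admittance change y.

def prescreen(S_cols, y):
    """Return (location, estimated severity, residual norm) sorted by misfit."""
    out = []
    for j, c in enumerate(S_cols):
        cc = sum(v * v for v in c)
        s_hat = sum(u * v for u, v in zip(c, y)) / cc   # least-squares severity
        resid = sum((u - s_hat * v) ** 2 for u, v in zip(y, c)) ** 0.5
        out.append((j, s_hat, resid))
    return sorted(out, key=lambda t: t[2])

# Three candidate locations; the measurement is exactly column 1 scaled by 0.5.
S_cols = [[1.0, 0.0, 2.0], [0.0, 2.0, 1.0], [1.0, 1.0, 0.0]]
y = [0.0, 1.0, 0.5]
ranking = prescreen(S_cols, y)
best_location, best_severity = ranking[0][0], ranking[0][1]
```

Only the top-ranked locations would then be passed to the (more expensive) Bayesian inference stage.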

  7. FE modeling of present day tectonic stress along the San Andreas Fault zone

    OpenAIRE

    Koirala, Matrika Prasad; Hayashi, Daigoro; 林, 大五郎

    2009-01-01

    FE modeling under the plane stress condition is used to analyze the state of stress in and around the San Andreas Fault (SAF) system, covering the whole area of California. In this study we mainly focus on the state of stress at the general seismogenic depth of 12 km, imposing elastic rheology. The purpose of the present study is to simulate the regional stress field, displacement vectors and failures. Stress perturbations due to the major fault, its geometry and its major branches are analyzed. Depthwise varia...

  8. MAIN REGULARITIES OF FAULTING IN LITHOSPHERE AND THEIR APPLICATION (BASED ON PHYSICAL MODELLING RESULTS)

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2015-09-01

    Full Text Available Results of long-term experimental studies and modelling of faulting are briefly reviewed, and research methods and state-of-the-art issues are described. The article presents the main results of faulting modelling with the use of non-transparent elasto-viscous plastic and optically active models. An area of active dynamic influence of a fault (AADIF) is the term introduced to characterise a fault as a 3D geological body. It is shown that AADIF's width (M) is determined by the thickness of the layer wherein the fault occurs (H), its viscosity (η) and the strain rate (V). Multiple correlation equations are proposed to show relationships between AADIF's width (M), H, η and V for faults of various morphological and genetic types. The irregularity of AADIF in time and space is characterised in view of the staged formation of the internal fault structure of such areas and the geometric and dynamic parameters of AADIF, which are changeable along the fault strike. The authors pioneered the application of the open system conception to explain regularities of structure formation in AADIFs. It is shown that faulting is a synergistic process of continuous changes of structural levels of strain, which differ in the manifestation of specific self-similar fractures of various scales. Such levels are changeable due to self-organization processes of fracture systems. Fracture dissipative structures (FDS) is the term introduced to describe systems of fractures that are subject to self-organization. It is proposed to consider informational entropy and fractal dimensions in order to reveal FDS in AADIF. Studied are relationships between structure formation in AADIF and accompanying processes, such as acoustic emission and terrain development above zones wherein faulting takes place. Optically active elastic models were designed to simulate the stress-and-strain state of AADIF of the main standard types of fault jointing zones and their analogues in nature, and modelling results are

  9. Application of black-box models to HVAC systems for fault detection

    NARCIS (Netherlands)

    Peitsman, H.C.; Bakker, V.E.

    1996-01-01

    This paper describes the application of black-box models for fault detection and diagnosis (FDD) in heating, ventilating, and air-conditioning (HVAC) systems. In this study, multiple-input/single-output (MISO) ARX models and artificial neural network (ANN) models are used. The ARX models are exami

  11. Investigating the possible effects of salt in the fault zones on rates of seismicity - insights from analogue and numerical modeling

    Science.gov (United States)

    Urai, Janos; Kettermann, Michael; Abe, Steffen

    2017-04-01

    The presence of salt in dilatant normal faults may have a strong influence on fault mechanics and related seismicity. However, we lack a detailed understanding of these processes. This study is based on the geological setting of the Groningen area. During tectonic faulting in the Groningen area, rock salt may have flowed downwards into dilatant faults, which thus may contain lenses of rock salt at present. Because of salt's viscous properties, the presence of salt lenses in a fault may introduce a strain-rate dependency to the faulting and affect the distribution of magnitudes of seismic events. We present a "proof of concept" showing that the above processes can be investigated using a combination of analogue and numerical modeling. Full scaling and discussion of the importance of these processes to induced seismicity in Groningen require further, more detailed study. The analogue experiments are based on a simplified stratigraphy of the Groningen area, where it is generally thought that most of the Rotliegend faulting took place in the Jurassic, after deposition of the Zechstein. This is interpreted to mean that at the time of faulting the sulphates were brittle anhydrite. If these layers were sufficiently brittle to fault in a dilatant fashion, rock salt could flow downwards into the dilatant fractures. To test this hypothesis, we use sandbox experiments in which we combine cohesive powder as an analog for brittle anhydrites and carbonates with viscous salt analogs to explore the developing fault geometry and the resulting distribution of salt in the faults. In the numerical models we investigate the stick-slip behavior of fault zones containing ductile material using the Discrete Element Method (DEM). Results show that the DEM approach is in principle suitable for modeling the seismicity of faults containing salt: the stick-slip motion of the fault becomes dependent on shear loading rate, with a modification of the frequency-magnitude distribution of the

  12. Modelling roughness evolution and debris production in faults using discrete particles

    Science.gov (United States)

    Mair, Karen; Abe, Steffen

    2017-04-01

    The frictional strength and stability (hence seismic potential) of faults in the brittle part of the crust is closely linked to fault roughness evolution and debris production during accumulated slip. The relevant processes may also control the dynamics of rock slides, avalanches and subglacial slip and are thus of general interest in several fields. The quantitative characterisation of fault surfaces in the field (e.g. Candela et al. JGR, 2012) has helped build a picture of fault roughness across many orders of magnitude; however, since fault zones are generally not exposed during slip and gouge zones are rarely preserved, the mechanical implications of evolving roughness and the important role of debris or gouge in fault zone evolution remain elusive. Here we investigate the interplay between fault roughness evolution and gouge production using 3D Discrete Element Method (DEM) boundary erosion models. Our fault walls are composed of many particles or clusters stuck together with breakable bonds. When bond strength is exceeded, the walls fracture to produce erodible boundaries and a debris-filled fault zone that evolves with accumulated slip. We slide two initially bare surfaces past each other under a range of normal stresses, tracking the evolving topography of the eroded fault walls, the granular debris generated and the associated mechanical behaviour. The development of slip-parallel striations, reminiscent of those found in natural faults, is commonly observed, though often as transient rather than persistent features. At the higher normal stresses studied, we observe a two-stage, wear-like gouge production in which an initial 'running-in' high production rate saturates as debris accumulates and separates the walls. As shear, and hence granular debris, accumulates, we see evidence of grain-size-based sorting in the granular layers. Wall roughness and friction mimic this stabilisation, highlighting a direct link between gouge processes, wall roughness evolution and

  13. A Desk-top tutorial Demonstration of Model-based Fault Detection and Diagnosis

    OpenAIRE

    Shi, John Z.; Elshanti, Ali; Gu, Fengshou; Ball, Andrew

    2007-01-01

    In this paper, a demonstration of the model-based approach for fault detection is presented. The aim of this demo is to provide students with a desk-top tool to start learning the model-based approach. The demo works on a traditional three-tank system. After a short review of the model-based approach, this paper emphasizes two questions often asked by students when they start learning the model-based approach: how to develop a system model and how to generate residuals for fault detection. The ...

  14. Design for interaction between humans and intelligent systems during real-time fault management

    Science.gov (United States)

    Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.

    1992-01-01

    Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real-time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.

  15. Bond graphs for modelling, control and fault diagnosis of engineering systems

    CERN Document Server

    2017-01-01

    This book presents theory and latest application work in Bond Graph methodology with a focus on: • hybrid dynamical system models, • model-based fault diagnosis, model-based fault tolerant control, and fault prognosis. It also addresses: • open thermodynamic systems with compressible fluid flow, • distributed parameter models of mechanical subsystems. In addition, the book covers various applications of current interest, ranging from motorised wheelchairs, in-vivo surgery robots and walking machines to wind turbines. The up-to-date presentation has been made possible by experts who are active members of the worldwide bond graph modelling community. This book is the completely revised 2nd edition of the 2011 Springer compilation text titled Bond Graph Modelling of Engineering Systems – Theory, Applications and Software Support. It extends the presentation of theory and applications of graph methodology with new developments and latest research results. Like the first edition, this book addresses readers in a...

  16. Analogue modelling of the effect of topographic steps in the development of strike-slip faults

    Science.gov (United States)

    Tomás, Ricardo; Duarte, João C.; Rosas, Filipe M.; Schellart, Wouter; Strak, Vincent

    2016-04-01

    Strike-slip faults often cut across regions of overthickened crust, such as oceanic plateaus or islands. These morphological steps likely cause a local variation in the stress field that controls the geometry of these systems. Such variation in the stress field will likely play a role in strain localization and associated seismicity. This is of particular importance since wrench systems can produce very high-magnitude earthquakes. However, such systems have been generally overlooked and are still poorly understood. In this work we present a set of analogue models designed with the objective of understanding how a step in the morphology affects the development of a strike-slip fault system. The models consist of a sand-cake with two areas of different thicknesses connected by a gentle ramp perpendicular to a dextral strike-slip basal fault. The sand-cake lies above two basal plates to which the dextral relative motion was imposed using a stepping motor. Our results show that a Riedel fault system develops across the two flat areas. However, a very asymmetric fault pattern develops across the morphological step. A deltoid constrictional bulge develops in the thinner part of the model, which progressively acquires a sigmoidal shape with increasing offset. In the thicker part of the domain, the deformation is mostly accommodated by Riedel faults, and the one closer to the step acquires a relatively lower angle. Associated with this Riedel fault, a collapse area develops and amplifies with increasing offset. For high topographic steps, the propagation of the main fault across the step area only occurs in the final stages of the experiments, contrary to what happens when the step is small or nonexistent. These results strongly suggest a major impact of topographic variation on the development of strike-slip fault systems. The step in the morphology causes variations in the potential energy that change the local stress field (mainly the vertical

  17. Exploring tectonomagmatic controls on mid-ocean ridge faulting and morphology with 3-D numerical models

    Science.gov (United States)

    Howell, S. M.; Ito, G.; Behn, M. D.; Olive, J. A. L.; Kaus, B.; Popov, A.; Mittelstaedt, E. L.; Morrow, T. A.

    2016-12-01

    Previous two-dimensional (2-D) modeling studies of abyssal-hill scale fault generation and evolution at mid-ocean ridges have predicted that M, the ratio of magmatic to total extension, strongly influences the total slip, spacing, and rotation of large faults, as well as the morphology of the ridge axis. Scaling relations derived from these 2-D models broadly explain the globally observed decrease in abyssal hill spacing with increasing ridge spreading rate, as well as the formation of large-offset faults close to the ends of slow-spreading ridge segments. However, these scaling relations do not explain some higher resolution observations of segment-scale variability in fault spacing along the Chile Ridge and the Mid-Atlantic Ridge, where fault spacing shows no obvious correlation with M. This discrepancy between observations and 2-D model predictions illuminates the need for three-dimensional (3-D) numerical models that incorporate the effects of along-axis variations in lithospheric structure and magmatic accretion. To this end, we use the geodynamic modeling software LaMEM to simulate 3-D tectono-magmatic interactions in a visco-elasto-plastic lithosphere under extension. We model a single ridge segment subjected to an along-axis gradient in the rate of magma injection, which is simulated by imposing a mass source in a plane of model finite volumes beneath the ridge axis. Outputs of interest include characteristic fault offset, spacing, and along-axis gradients in seafloor morphology. We also examine the effects of along-axis variations in lithospheric thickness and off-axis thickening rate. The main objectives of this study are to quantify the relative importance of the amount of magmatic extension and the local lithospheric structure at a given along-axis location, versus the importance of along-axis communication of lithospheric stresses on the 3-D fault evolution and morphology of intermediate-spreading-rate ridges.

  18. Analysis of Dynamics in Multiphysics Modelling of Active Faults

    Directory of Open Access Journals (Sweden)

    Sotiris Alevizos

    2016-09-01

    Full Text Available Instabilities in geomechanics appear on multiple scales involving multiple physical processes. They often appear as planar features of localised deformation (faults), which can creep relatively stably or display rich dynamics, sometimes culminating in earthquakes. To study those features, we propose a fundamental physics-based approach that overcomes the current limitations of statistical rule-based methods and allows a physical understanding of the nucleation and temporal evolution of such faults. In particular, we formulate the coupling between temperature and pressure evolution in the faults through their multiphysics energetic process(es). We analyse their multiple steady states using numerical continuation methods and characterise their transient dynamics by studying the time-dependent problem near the critical Hopf points. We find that the global system can be characterised by a homoclinic bifurcation that depends on the two main dimensionless groups of the underlying physical system. The Gruntfest number determines the onset of the localisation phenomenon, while the dynamics are mainly controlled by the Lewis number, which is the ratio of energy diffusion over mass diffusion. Here, we show that the Lewis number is the critical parameter for the dynamics of the system, as it controls the time evolution of the system for a given energy supply (Gruntfest number).

  19. Correlation between Cu mineralization and major faults using multifractal modelling in the Tarom area (NW Iran)

    Science.gov (United States)

    Nouri, Reza; Jafari, Mohammad Reza; Arian, Mehran; Feizi, Faranak; Afzal, Peyman

    2013-10-01

    The Tarom 1:100,000 sheet is located within the Cenozoic Tarom-Hashtjin volcano-plutonic belt, NW Iran. Reconstruction of the tectonic and structural setting of hydrothermal deposits is fundamental to predictive models of different ore deposits. Since fractal/multifractal modelling is an effective instrument for separating geological and mineralized zones from background, the Concentration-Distance to Major Fault (C-DMF) fractal model and the distribution of Cu anomalies were used to classify Cu mineralizations according to their distance to major faults. Application of the C-DMF model for the classification of Cu mineralization in the Tarom 1:100,000 sheet reveals that the main copper mineralizations have a strong correlation with their distance to major faults in the area. The distances of known copper mineralizations with Cu values higher than 2.2% to major faults are less than 10 km, showing a positive correlation between Cu mineralization and tectonic events. Moreover, extreme and high Cu anomalies based on stream sediment and lithogeochemical data were identified by the Number-Size (N-S) fractal model. These anomalies have distances to major faults of less than 10 km and validate the results derived via the C-DMF fractal model. C-DMF fractal modelling can be utilized for the reconnaissance and prospecting of magmatic and hydrothermal deposits.
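The distance-to-major-fault step underlying this kind of analysis can be sketched with a point-to-segment distance: each occurrence is assigned its distance to the nearest fault trace and filtered against the 10 km threshold. The coordinates below are invented, not data from the Tarom sheet:

```python
# Illustrative helper for the distance-to-fault step: fault traces are line
# segments (coordinates in km); occurrences within a threshold distance of
# the nearest trace are kept.

def point_segment_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                 # clamp projection to the segment
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def within_threshold(points, faults, threshold_km=10.0):
    near = []
    for p in points:
        d = min(point_segment_dist(p, a, b) for a, b in faults)
        if d <= threshold_km:
            near.append((p, d))
    return near

faults = [((0.0, 0.0), (100.0, 0.0))]         # one E-W fault trace
occurrences = [(20.0, 5.0), (50.0, 30.0)]     # hypothetical Cu occurrences
near = within_threshold(occurrences, faults)
```

The C-DMF model itself goes further, fitting power-law (fractal) relations between concentration and these distances, but the distance computation is the common first step.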

  20. Model-based fault detection of blade pitch system in floating wind turbines

    Science.gov (United States)

    Cho, S.; Gao, Z.; Moan, T.

    2016-09-01

    This paper presents a model-based scheme for fault detection of a blade pitch system in floating wind turbines. The blade pitch system is one of the most critical components due to its effect on operational safety and on the dynamics of wind turbines. Faults in this system should be detected at an early stage to prevent failures. To detect faults of blade pitch actuators and sensors, an appropriate observer should be designed to estimate the states of the system. Residuals are generated by a Kalman filter, and a threshold based on H∞ optimization and linear matrix inequality (LMI) is used for residual evaluation. The proposed method is demonstrated in a case study with bias and fixed-output faults in pitch sensors and stuck pitch actuators. The simulation results show that the proposed method detects different realistic fault scenarios of wind turbines under stochastic external winds.
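The residual-generation-plus-threshold idea can be illustrated with a scalar toy: an estimator tracks the pitch measurement, and the innovation (residual) is compared against a fixed threshold so that a sensor bias raises an alarm. The gain, threshold and fault values are illustrative, not the paper's design:

```python
# Toy residual-based fault detector: a steady-state scalar Kalman-style
# estimator tracks the measurement; the innovation is the residual, and a
# fixed threshold flags faults. Gain and threshold are illustrative.

def detect_fault(measurements, gain=0.5, threshold=0.4):
    x = measurements[0]                 # state estimate (e.g. pitch angle)
    alarms = []
    for z in measurements:
        residual = z - x                # innovation = measurement - prediction
        x = x + gain * residual         # measurement update
        alarms.append(abs(residual) > threshold)
    return alarms

# Constant 2.0-deg pitch with a +1.0-deg sensor bias appearing at step 5.
z = [2.0] * 5 + [3.0] * 5
alarms = detect_fault(z)
first_alarm = alarms.index(True)
```

In the paper's scheme the gain comes from a Kalman filter design and the threshold from H∞/LMI optimization rather than being fixed by hand.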

  1. Robust unknown input observer design for state estimation and fault detection using linear parameter varying model

    Science.gov (United States)

    Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai

    2017-01-01

    This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since the disturbance and the actuator fault are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state becomes associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed by Lyapunov theory. Finally, a wind turbine system with disturbance and actuator fault is tested with the proposed method. The simulations demonstrate the effectiveness and performance of the proposed method.

  2. Thrust-wrench fault interference in a brittle medium: new insights from analogue modelling experiments

    Science.gov (United States)

    Rosas, Filipe; Duarte, Joao; Schellart, Wouter; Tomas, Ricardo; Grigorova, Vili; Terrinha, Pedro

    2015-04-01

    We present analogue modelling experimental results concerning thrust-wrench fault interference in a brittle medium, to evaluate the influence exerted by different prescribed interference angles on the formation of morpho-structural interference fault patterns. All the experiments were conceived to simulate simultaneous reactivation of confining strike-slip and thrust faults defining a (corner) zone of interference, contrasting with previously reported discrete (in time and space) superposition of alternating thrust and strike-slip events. Different interference angles of 60°, 90° and 120° were experimentally investigated by comparing the specific structural configurations obtained in each case. Results show that a deltoid-shaped morpho-structural pattern is consistently formed in the fault interference (corner) zone, exhibiting a specific geometry that is fundamentally determined by the prescribed fault interference angle. This angle determines the orientation of the displacement vector's shear component along the main frontal thrust direction, determining different fault confinement conditions in each case and imposing a complying geometry and kinematics on the interference deltoid structure. Model comparison with natural examples worldwide shows good geometric and kinematic similarity, pointing to the existence of a matching underlying dynamic process. Acknowledgments This work was sponsored by the Fundação para a Ciência e a Tecnologia (FCT) through project MODELINK EXPL/GEO-GEO/0714/2013.

  3. HTS-FCL EMTDC model considering nonlinear characteristics on fault current and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jae-Young; Lee, Seung-Ryul [Korea Electrotechnology Research Institute (Korea, Republic of)

    2010-06-01

    One of the most serious problems of the KEPCO system is a fault current higher than the short-circuit capacity (SCC) of the circuit breakers (CBs). There are many alternatives for reducing the high fault current, such as isolation of bus ties, enhancement of the CBs' SCC, and the application of HVDC-BTB (back-to-back) or FCLs (fault current limiters). However, these alternatives have drawbacks from the viewpoint of system stability and cost. As superconductivity technology has developed, the resistive-type (R-type) HTS-FCL (high-temperature superconductor fault current limiter) offers one of the important alternatives, in terms of power loss and cost reduction, for solving the fault current problem. To evaluate the accurate transient performance of the R-type HTS-FCL, the dynamic simulation model must consider transient characteristics during the quenching and recovery states. Against this background, this paper presents a new HTS-FCL EMTDC (Electro-Magnetic Transients including Direct Current) model considering the nonlinear characteristics of fault current and temperature.

  5. Forward and backward models for fault diagnosis based on parallel genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Yi LIU; Ying LI; Yi-jia CAO; Chuang-xin GUO

    2008-01-01

    In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms by the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications for large-scale power systems.
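The master-slave (global single-population) structure can be sketched without a full GA: the master holds one population of fault-section hypotheses and farms out fitness evaluations to workers, here a thread pool standing in for MPI slave processes. The 4-section encoding, the toy protection model and the alarm pattern are all invented for illustration:

```python
# Sketch of master-slave fitness evaluation (the GPGA parallelization idea):
# the master distributes hypothesis evaluations to workers and gathers the
# scores. A ThreadPoolExecutor stands in for MPI slave processes.

from concurrent.futures import ThreadPoolExecutor

OBSERVED_ALARMS = (0, 1, 1, 0)   # observed relay/breaker alarms (illustrative)

def expected_alarms(hypothesis):
    """Toy protection model: a fault in section i trips relays i and i+1."""
    alarms = [0, 0, 0, 0]
    for i, faulted in enumerate(hypothesis):
        if faulted:
            alarms[i] = 1
            if i + 1 < 4:
                alarms[i + 1] = 1
    return tuple(alarms)

def fitness(hypothesis):
    """Number of alarm positions explained by the hypothesis."""
    return sum(a == b for a, b in zip(expected_alarms(hypothesis), OBSERVED_ALARMS))

def evaluate_population(population, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))   # master gathers results

population = [(0, 0, 0, 0), (0, 1, 0, 0), (1, 0, 0, 0), (0, 0, 1, 0)]
scores = evaluate_population(population)
best = population[scores.index(max(scores))]
```

In the full GA, selection, crossover and mutation run on the master while only the expensive fitness evaluations are distributed, which is what makes the single-population scheme scale.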

  6. Modeling of coupled deformation and permeability evolution during fault reactivation induced by deep underground injection of CO2

    Energy Technology Data Exchange (ETDEWEB)

    Cappa, F.; Rutqvist, J.

    2010-06-01

    The interaction between mechanical deformation and fluid flow in fault zones gives rise to a host of coupled hydromechanical processes fundamental to fault instability, induced seismicity, and associated fluid migration. In this paper, we discuss these coupled processes in general and describe three modeling approaches that have been considered to analyze fluid flow and stress coupling in fault-instability processes. First, fault hydromechanical models were tested to investigate fault behavior using different mechanical modeling approaches, including slip interface and finite-thickness elements with isotropic or anisotropic elasto-plastic constitutive models. The results of this investigation showed that fault hydromechanical behavior can be appropriately represented with the least complex alternative, using a finite-thickness element and isotropic plasticity. We utilized this pragmatic approach coupled with a strain-permeability model to study hydromechanical effects on fault instability during deep underground injection of CO2. We demonstrated how such a modeling approach can be applied to determine the likelihood of fault reactivation and to estimate the associated loss of CO2 from the injection zone. It is shown that shear-enhanced permeability initiated where the fault intersects the injection zone plays an important role in propagating fault instability and permeability enhancement through the overlying caprock.

  7. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

    This paper deals with fault modeling for analog circuits. A two-dimensional (2D) fault model is first proposed based on collaborative analysis of supply current and output voltage. This model is a family of circle loci on the complex plane, and it greatly simplifies the algorithms for test point selection and potential fault simulation, which are the primary difficulties in fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D) complex space is proposed, which achieves a far better fault detection ratio (FDR) against measurement error and parametric tolerance. To address the problem of fault masking in both the 2D and 3D fault models, this paper proposes an effective design-for-testability (DFT) method. By adding redundant bypassing components to the circuit under test (CUT), this method achieves an excellent fault isolation ratio (FIR) in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through the experimental results provided in this paper.

  8. A Model of Intelligent Fault Diagnosis of Power Equipment Based on CBR

    Directory of Open Access Journals (Sweden)

    Gang Ma

    2015-01-01

    Nowadays, the demand for power supply reliability has increased strongly as the power industry develops rapidly, and such demand requires a substantial power grid to sustain it. Power equipment's running and testing data, which contain vast amounts of information, underpin online monitoring and fault diagnosis and ultimately enable state-based maintenance. In this paper, an intelligent fault diagnosis model for power equipment based on case-based reasoning (IFDCBR) is proposed. The model intends to discover the potential rules of equipment faults by data mining. The intelligent model constructs a condition case base of equipment by analyzing four categories of data: online recording data, history data, basic test data, and environmental data. SVM regression analysis is then applied in mining the case base to establish the equipment condition fingerprint. The running data of equipment can be diagnosed against this condition fingerprint to detect whether there is a fault. Finally, this paper verifies the intelligent model and the three-ratio method on a set of practical data. The results demonstrate that the intelligent model is more effective and accurate in fault diagnosis.

  9. Earthquake nucleation in a stochastic fault model of globally coupled units with interaction delays

    Science.gov (United States)

    Vasović, Nebojša; Kostić, Srđan; Franović, Igor; Todorović, Kristina

    2016-09-01

    In the present paper we analyze the dynamics of fault motion by considering the delayed interaction of 100 all-to-all coupled blocks with a rate-dependent friction law in the presence of random seismic noise. Such a model describes real fault motion reasonably well; its prevailing stochastic nature is implied by surrogate data analysis of available GPS measurements of active fault movement. The interaction of blocks in the analyzed model is studied as a function of time delay, as observed both in the dynamics of individual faults and in phenomenological models. The model is examined as a system of all-to-all coupled blocks, following the typical assumption that compound faults are complexes of globally coupled segments. We apply numerical methods to show that there are local bifurcations from the equilibrium state to periodic oscillations, with irregular aperiodic behavior occurring when initial conditions are set away from the equilibrium point. Such behavior indicates the possible existence of a bi-stable dynamical regime, due to the effect of the introduced seismic noise or the existence of a global attractor. The latter assumption is further confirmed by analyzing the corresponding mean-field approximated model. In this bi-stable regime, the distribution of event magnitudes follows the Gutenberg-Richter power law with satisfactory statistical accuracy, including a b-value within the observed real-world range.
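
The Gutenberg-Richter fit mentioned at the end of the abstract is commonly checked with Aki's maximum-likelihood b-value estimator, b = log10(e) / (mean(M) − Mc). A minimal sketch, applied to a synthetic catalog drawn from the G-R distribution itself (an assumption made purely to exercise the estimator; no data from the paper are used):

```python
# Aki (1965) maximum-likelihood b-value estimate applied to a
# synthetic Gutenberg-Richter catalog. Above the completeness
# magnitude Mc, G-R magnitudes are exponential with rate b*ln(10).
import math
import random

random.seed(42)

def synthetic_catalog(b_true, m_c, n):
    beta = b_true * math.log(10)
    return [m_c + random.expovariate(beta) for _ in range(n)]

def b_value(mags, m_c):
    # b = log10(e) / (mean(M) - Mc)
    mean_excess = sum(mags) / len(mags) - m_c
    return math.log10(math.e) / mean_excess

catalog = synthetic_catalog(b_true=1.0, m_c=2.0, n=20000)
b_hat = b_value(catalog, m_c=2.0)
```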

  10. A-Priori Rupture Models for Northern California Type-A Faults

    Science.gov (United States)

    Wills, Chris J.; Weldon, Ray J.; Field, Edward H.

    2008-01-01

    This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, "a-priori" models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least-squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley) be modeled as Type B faults, to be consistent with similarly poorly-known faults statewide.

  11. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    Science.gov (United States)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. 
That is ...

  12. A coherency function model of ground motion at base rock corresponding to strike-slip fault

    Institute of Scientific and Technical Information of China (English)

    丁海平; 刘启方; 金星; 袁一凡

    2004-01-01

    At present, the spatial variation of ground motions is studied mainly by statistical analysis of dense-array records, such as those of the SMART-1 array. Because records of base-rock ground motion are scarce, no coherency function model exists for base rock or for sites of different types. In this paper, the spatial variation of ground motions in elastic media is analyzed by a deterministic method. Taking an elastic half-space model with a fault dislocation source, near-field ground motions are simulated. The model takes the strike-slip fault and the earth medium into account, and a coherency function is proposed for base-rock sites.

  13. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.
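
The detection half of such an FDD scheme reduces, at its core, to thresholding the residual between a model prediction and the metered value. A minimal sketch under assumed data (the readings and the 3-sigma threshold are illustrative, not from the study):

```python
# Minimal residual-based fault detection: learn the residual bias and
# spread on a fault-free training period, then flag hours whose
# residual exceeds k standard deviations. Data and k=3 are assumptions.
import statistics

def fit_threshold(train_pred, train_meas, k=3.0):
    residuals = [m - p for p, m in zip(train_pred, train_meas)]
    return statistics.mean(residuals), k * statistics.stdev(residuals)

def detect(pred, meas, bias, limit):
    # True = fault flag for that sample
    return [abs((m - p) - bias) > limit for p, m in zip(pred, meas)]

# Fault-free training period: the model tracks the meter closely (kWh).
train_pred = [10.0, 12.0, 11.0, 13.0, 12.5, 11.5]
train_meas = [10.2, 11.9, 11.1, 13.2, 12.4, 11.6]
bias, limit = fit_threshold(train_pred, train_meas)

# Test period: the last hour simulates a fault wasting energy.
flags = detect([11.0, 12.0, 11.5], [11.1, 12.1, 16.5], bias, limit)
```

Diagnosis, the harder half, then maps the pattern of flagged residuals across sensors to a root cause; in the hybrid approach above, that mapping is learned statistically rather than hand-written as rules.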

  14. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-26

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  15. Detection and diagnosis of bearing faults using shift-invariant dictionary learning and hidden Markov model

    Science.gov (United States)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Ran

    2016-05-01

    Many existing signal processing methods select a predefined basis function in advance. This selection relies on a priori knowledge about the target signal, which is often infeasible in engineering applications. Dictionary learning provides an ambitious alternative: it learns basis atoms from the data itself, with the objective of finding the underlying structure embedded in the signal. As a special case of dictionary learning, shift-invariant dictionary learning (SIDL) reconstructs an input signal using basis atoms in all possible time shifts. The property of shift-invariance is well suited to extracting periodic impulses, which are a typical symptom of mechanical fault signals. After the basis atoms are learned, a signal can be decomposed into a collection of latent components, each reconstructed by one basis atom and its corresponding time shifts. In this paper, SIDL is introduced as an adaptive feature extraction technique, and an effective approach based on SIDL and a hidden Markov model (HMM) is presented for machinery fault diagnosis. SIDL-based feature extraction is first applied to both simulated and experimental signals with a specific notch size; this experiment shows that SIDL can successfully extract the double impulses in a bearing signal. A second experiment considers artificially seeded faults of different bearing fault types: feature extraction based on SIDL is performed on each signal, and an HMM is then used to identify the fault type. The results show that the proposed SIDL-HMM approach performs well in bearing fault diagnosis.

  16. Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2014-01-01

    This paper focuses on routing in two-dimensional mesh networks. We propose a novel faulty-block model, the cracky rectangular block, for fault-tolerant adaptive routing. All faulty nodes and faulty links are surrounded by this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to keep in touch with the nodes inside the block. The block construction procedure is dynamic and fully distributed, and the construction algorithm is simple and easy to implement. The block is fully adaptive: it dynamically adjusts its scale according to the state of the network, whether faults emerge or recover, without shutting down the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm, and we give a formal proof that messages always reach their destinations if and only if the destination nodes remain connected to the mesh network. The new model and routing algorithm thus maximize the availability of the nodes in the network, a noticeable overall improvement in the fault tolerability of the system.
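
The delivery guarantee claimed above (messages arrive if and only if the destination remains connected) can be illustrated with a centralized stand-in: breadth-first search over the mesh that treats faulty nodes as blocked. The paper's distributed block construction is not reproduced here; this is only a sketch of the connectivity property.

```python
# Centralized illustration of fault-tolerant 2D-mesh routing: BFS that
# avoids faulty nodes, returning a path iff the destination is still
# connected to the source through healthy nodes.
from collections import deque

def route(width, height, faulty, src, dst):
    if src in faulty or dst in faulty:
        return None
    prev, frontier = {src: None}, deque([src])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == dst:
            path, node = [], dst              # walk predecessors back
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = n
            if (0 <= nx < width and 0 <= ny < height
                    and n not in faulty and n not in prev):
                prev[n] = (x, y)
                frontier.append(n)
    return None  # destination disconnected by faults

faults = {(1, 1), (1, 2), (2, 1)}             # a small faulty block
path = route(4, 4, faults, (0, 0), (3, 3))    # routes around the block
```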

  17. Formal and Informal Modeling of Fault Tolerant Noc Architectures

    Directory of Open Access Journals (Sweden)

    Mostefa BELARBI

    2015-11-01

    The new approach suggested here, based on the Event-B formal technique, addresses aspects and constraints related to the reliability of networks-on-chip (NoC) and the overhead of fault-tolerance solutions: a fault-tolerant NoC design for systems-on-chip (SoC) containing configurable FPGA (Field Programmable Gate Array) technology, obtained by extracting the properties of the NoC architecture. We illustrate the methodology by developing several refinements that produce a QNoC (quality-of-service NoC) switch architecture from specification to test. We show how the Event-B formalism can follow the life cycle of NoC design and test: for example, VHDL (VHSIC Hardware Description Language) simulation of a given architecture can help to optimize it and produce a new architecture, and the properties of the new QNoC architecture can then be injected back into the formal Event-B specification. Event-B is supported by the Rodin tool environment. As a case study, the last refinement stage used a wireless network in order to generate a complete test environment for the studied application.

  18. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
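
The reliability criterion above is easy to state in code: keep a ΔCFS estimate only where the ensemble mean is at least twice the ensemble standard deviation (CV ≤ 0.5). The small ensembles below are illustrative, not the paper's 2500-member posterior set.

```python
# Reliability masking for ensemble Coulomb stress changes: at each
# grid cell, keep the mean ΔCFS only where CV = std/|mean| <= 0.5,
# i.e. |ΔCFS| is at least twice its standard deviation.
import statistics

def reliable_dcfs(samples, cv_max=0.5):
    mean = statistics.mean(samples)
    std = statistics.pstdev(samples)
    if mean == 0 or std / abs(mean) > cv_max:
        return None          # unreliable: sign or size not resolved
    return mean

far_lobe = [0.12, 0.10, 0.11, 0.13, 0.12]      # MPa; stable sign, small spread
near_fault = [0.30, -0.25, 0.05, -0.40, 0.35]  # MPa; sign flips across models
```

With these toy ensembles, the far-lobe cell passes the criterion while the near-fault cell is masked, mirroring the spatial pattern the authors report.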

  19. Modeling of Morelia Fault Earthquake (Mw=5.4) source fault parameters using the coseismic ground deformation and groundwater level changes data

    Science.gov (United States)

    Sarychikhina, O.; Glowacka, E.; Mellors, R. J.; Vázquez, R.

    2009-12-01

    On 24 May 2006 at 04:20 (UTC), a moderate-size (Mw=5.4) earthquake struck the Mexicali Valley, Baja California, México, roughly 30 km southeast of the city of Mexicali, in the vicinity of the Cerro Prieto Geothermal Field (CPGF). The earthquake occurred on the Morelia fault, one of the east-dipping normal faults in the Mexicali Valley. Locally, the earthquake was strongly felt and caused minor damage. The event created 5 km of surface rupture, and down-dip displacements of up to 25-30 cm were measured at some places along this rupture. The associated deformation was measured by a vertical crackmeter, a leveling profile, and Differential Synthetic Aperture Radar Interferometry (D-InSAR), and a coseismic step-like groundwater level change was detected at 7 wells. The Mw=5.4 Morelia fault earthquake is of significant scientific interest, first because of its surprisingly strong effects for an earthquake of this size, and second because the variety of coseismic observations from different ground-based and space-based techniques allows better constraint of the source fault parameters. Source parameters for the earthquake were estimated by forward modeling of both the surface deformation data and the static volume strain change (inferred from the coseismic changes in groundwater level). All ground deformation data were corrected for the anthropogenic component caused by geothermal fluid exploitation in the CPGF. The modeling was based on a finite rectangular fault embedded in an elastic medium. The preferred fault model has a strike, rake, and dip of (48°, -89°, 45°), a length of 5.2 km, a width of 6.7 km, and 34 cm of uniform slip. The geodetic moment, based on the modeled fault parameters, is 1.18E+17 Nm. The model matches the observed surface deformation, the expected groundwater level changes, and the teleseismic moment reasonably well, and explains in part why the earthquake was so strongly felt in the area.
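
As a consistency check on the quoted numbers: the geodetic moment follows from M0 = μ·L·W·slip. The shear modulus is not stated in the abstract; μ ≈ 10 GPa (plausible for soft valley sediments) is assumed here because it reproduces the quoted 1.18E+17 Nm.

```python
# Geodetic moment from the modeled fault: M0 = mu * L * W * slip, and
# moment magnitude Mw = (2/3) * (log10(M0) - 9.1).
# mu = 10 GPa is an assumption (not stated in the abstract).
import math

MU = 1.0e10        # Pa, assumed shear modulus
L = 5200.0         # m, fault length
W = 6700.0         # m, fault width
SLIP = 0.34        # m, uniform slip

m0 = MU * L * W * SLIP                     # ~1.18e17 N*m, as quoted
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)  # ~5.3
```

The implied geodetic Mw ≈ 5.3 is close to the seismologically reported Mw 5.4; the difference is within the uncertainty of the assumed shear modulus.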

  20. Modeling and Simulation of Transient Fault Response at Lillgrund Wind Farm when Subjected to Faults in the Connecting 130 kV Grid

    Energy Technology Data Exchange (ETDEWEB)

    Eliasson, Anders; Isabegovic, Emir

    2009-07-01

    The purpose of this thesis was to investigate which types of faults in the connecting grid should be dimensioning for future wind farms. The main goal was an investigation of over- and undervoltages at the main transformer and at the turbines inside the Lillgrund wind farm. The results will be used in the planning stage of future wind farms when performing insulation coordination and determining protection settings. A model of the Lillgrund wind farm and part of the connecting 130 kV grid was built in PSCAD/EMTDC. The farm consists of 48 Siemens SWT-2.3-93 2.3 MW wind turbines with full power converters. The turbines were modeled as controllable current sources providing a constant active power output up to the current limit of 1.4 pu. The transmission lines and cables were modeled as frequency-dependent (phase) models. The load flows and bus voltages were verified against a PSS/E model, and the transient response was verified against measurement data from two faults: a line-to-line fault in the vicinity of Barsebaeck (BBK) and a single line-to-ground fault close to the Bunkeflo (BFO) substation. For the simulations, three-phase-to-ground, single line-to-ground, and line-to-line faults were applied at different locations in the connecting grid, and the phase-to-ground voltages at different buses in the connecting grid and at the turbines were studied. These faults were applied for different configurations of the farm. For single line-to-ground faults, the highest overvoltage on a turbine was 1.22 pu (32.87 kV), due to the clearing of a fault at BFO (the PCC). For line-to-line faults, the highest overvoltage on a turbine was 1.59 pu (42.83 kV), at the beginning of a fault at KGE, one bus away from BFO. Both of these cases occurred with all radials connected and the turbines running at full power. The highest overvoltage observed at Lillgrund was 1.65 pu (44.45 kV). This overvoltage was caused by a three-phase-to-ground fault applied at KGE and occurred at the beginning of the fault and when ...

  1. Investigating the Effects of Stress Interaction Using a Cellular-automaton Based Model in Fault Networks of Varying Complexity.

    Science.gov (United States)

    Hetherington, A. P.; Steacy, S.; McCloskey, J.

    2007-12-01

    The spatial and temporal patterns of seismicity are strongly influenced by stress interaction between faults, yet the effects of such interaction on earthquake statistics are not well understood. Computer models provide accurate, large, and complete datasets with which to investigate this issue, and also allow direct comparison of seismicity behavior in time and space in networks with and without fault interaction. We investigate the effect of such interaction on modeled real-world fault networks of varying complexity using a cellular-automaton-based model. Each 3-D fault within the network is modeled by a discrete cellular automaton. The cell size is 1 km square, which allows for a minimum earthquake size of approximately Mw=4. The cell strength is distributed fractally across each fault, and all cells are loaded at a remote tectonic stressing rate. When the stress on a cell exceeds its strength, the cell fails and its stress is transferred to its nearest neighbors, which may in turn cause them to break, allowing the earthquake to grow. These stress transfer rules allow realistic stress concentrations to develop at the boundary of the rupture. If the extent of the rupture exceeds a user-defined minimum length, and if interaction between faults is allowed, a boundary element method is used to calculate stress transfer to neighboring faults. Here we present results from four simulated fault networks based on active faults in the San Francisco Bay Area, California; the North Anatolian Fault, Turkey; Southern California; and the Marlborough Fault System, South Island, New Zealand. These were chosen for their varying levels of fault complexity, and we examine both interacting and non-interacting models in terms of their b-values and recurrence intervals for each region. Results will be compared and discussed.
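
The cell-failure cascade described above can be caricatured in a few lines: load all cells uniformly, and when one exceeds its strength (fractally distributed in the paper, simply random here), redistribute its stress to nearest neighbors until the rupture stops. Grid size, strengths, and the transfer rule below are illustrative assumptions, not the authors' parameters.

```python
# Toy single-fault version of the cellular-automaton rupture rule:
# uniform tectonic loading until one cell fails, then a nearest-
# neighbor stress-transfer cascade. Returns the rupture size in cells.
import random

random.seed(1)
N = 20
strength = [[1.0 + random.random() for _ in range(N)] for _ in range(N)]
stress = [[0.0] * N for _ in range(N)]

def load_until_event(dtau=0.01):
    while True:                                # tectonic loading steps
        for i in range(N):
            for j in range(N):
                stress[i][j] += dtau
        failing = [(i, j) for i in range(N) for j in range(N)
                   if stress[i][j] >= strength[i][j]]
        if failing:
            break
    ruptured = set()
    while failing:                             # cascade
        i, j = failing.pop()
        if (i, j) in ruptured:
            continue
        ruptured.add((i, j))
        share = stress[i][j] / 4.0             # split among 4 neighbors
        stress[i][j] = 0.0                     # failed cell drops its stress
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < N and 0 <= nj < N:
                stress[ni][nj] += share
                if stress[ni][nj] >= strength[ni][nj]:
                    failing.append((ni, nj))
    return len(ruptured)

size = load_until_event()
```

Repeating `load_until_event` yields a catalog of event sizes; the fault-to-fault (boundary element) transfer step of the full model is omitted here.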

  2. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization problem. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  3. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing high system performance over a wide operating range is an important issue in the design of automatic control systems of successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; the addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-Based Fault Diagno...

  4. Risk management of PPP project in the preparation stage based on Fault Tree Analysis

    Science.gov (United States)

    Xing, Yuanzhi; Guan, Qiuling

    2017-03-01

    The risk management of a PPP (Public-Private Partnership) project can improve the level of risk control between government departments and private investors, so as to support more beneficial decisions, reduce investment losses, and achieve mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify the degree of risk impact on the basis of the risk identification. In addition, the order of importance of the risk factors is determined by calculating the unit structure importance for the PPP project preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' funds allocation, and the instability of market returns are the main factors generating shared risk in the project.
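
Unit structure importance, used above to rank the risk factors, counts the fraction of states of the other basic events in which a given event is critical to the top event. A sketch for a small illustrative fault tree (not the paper's PPP tree): top event T = C or (A and B).

```python
# Structural (Birnbaum-type) importance by state enumeration for a
# toy fault tree T = C or (A and B). An event is critical in a state
# of the others if flipping it flips the top event.
from itertools import product

COMPONENTS = ["A", "B", "C"]

def top(state):
    return state["C"] or (state["A"] and state["B"])

def structural_importance(comp):
    others = [c for c in COMPONENTS if c != comp]
    critical = 0
    for values in product([False, True], repeat=len(others)):
        state = dict(zip(others, values))
        up = dict(state, **{comp: True})
        down = dict(state, **{comp: False})
        critical += top(up) != top(down)
    return critical / 2 ** len(others)

ranking = sorted(COMPONENTS, key=structural_importance, reverse=True)
```

Here the single-event cut set C is critical in 3 of 4 states (importance 0.75), while A and B each score 0.25, so C tops the ranking, exactly the kind of ordering the paper derives for its risk factors.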

  5. Modeling of a Switched Reluctance Motor under Stator Winding Fault Condition

    DEFF Research Database (Denmark)

    Chen, Hao; Han, G.; Yan, Wei

    2016-01-01

    A new method for modeling a stator winding fault with one shorted coil in a switched reluctance motor (SRM) is presented in this paper. The method is based on an artificial neural network (ANN), incorporated with a simple analytical model in the electromagnetic analysis to estimate the flux ...

  6. Model-based fault detection for generator cooling system in wind turbines using SCADA data

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Kinnaert, Michel

    2016-01-01

    In this work, an early fault detection system for the generator cooling of wind turbines is presented and tested. It relies on a hybrid model of the cooling system. The parameters of the generator model are estimated by an extended Kalman filter. The estimated parameters are then processed by an ...

  7. Three-dimensional numerical modelling of static and transient Coulomb stress changes on intra-continental dip-slip faults

    OpenAIRE

    Meike Bagge

    2017-01-01

    Earthquakes on intra-continental faults pose substantial seismic hazard to populated areas. The interaction of faults is an important mechanism of earthquake triggering and can be investigated by the calculation of Coulomb stress changes. Using three-dimensional finite-element models, co- and postseismic stress changes and the effect of viscoelastic relaxation on dip-slip faults are investigated. The models include elastic and viscoelastic layers, gravity, ongoing regional deformation as well...

  8. How realistic are flat-ramp-flat fault kinematic models? Comparing mechanical and kinematic models

    Science.gov (United States)

    Cruz, L.; Nevitt, J. M.; Hilley, G. E.; Seixas, G.

    2015-12-01

    Rock within the upper crust appears to deform according to elasto-plastic constitutive rules, but structural geologists often employ kinematic descriptions that prescribe particle motions irrespective of these physical properties. In this contribution, we examine the range of constitutive properties approximately implied by kinematic models by comparing the deformations predicted by mechanical and kinematic models for identical fault geometric configurations. Specifically, we use the ABAQUS finite-element package to model a fault-bend-fold geometry using an elasto-plastic constitutive rule (the elastic component is linear, and plastic failure occurs according to a Mohr-Coulomb failure criterion). We varied the physical properties in the mechanical model (i.e., Young's modulus, Poisson ratio, cohesion yield strength, internal friction angle, sliding friction angle) to determine the impact of each on the observed deformations, which were then compared to the predictions of kinematic models parameterized with identical geometries. We found that only a limited subset of physical properties produced deformations similar to those predicted by the kinematic models. Specifically, mechanical models with low cohesion are required to allow the kink at the bottom of the flat-ramp geometry to remain stationary over time. Additionally, deformations produced by steep ramp geometries (30 degrees) are difficult to reconcile between the two types of models, while lower ramp gradients better conform to the geometric assumptions. These physical properties may fall within the range of those observed in laboratory experiments, suggesting that particle motions predicted by kinematic models may provide an approximate representation of those produced by a physically consistent model under some circumstances.
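
The plastic half of the constitutive rule above is the Mohr-Coulomb criterion: failure when |τ| ≥ c + σn·tan(φ). A minimal check with illustrative stresses (all values are assumptions, not from the study) shows why low cohesion lets material at the fault bend yield while a high-cohesion model stays elastic:

```python
# Mohr-Coulomb failure check in its simplest shear-stress form:
# a point fails when |tau| >= cohesion + sigma_n * tan(phi).
# Stress values below are illustrative assumptions.
import math

def mc_fails(tau, sigma_n, cohesion, friction_deg):
    return abs(tau) >= cohesion + sigma_n * math.tan(math.radians(friction_deg))

# Same shear and normal stress; only cohesion differs.
weak = mc_fails(tau=13e6, sigma_n=20e6, cohesion=1e6, friction_deg=30)    # yields
strong = mc_fails(tau=13e6, sigma_n=20e6, cohesion=20e6, friction_deg=30) # elastic
```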

  9. Sensor Network Data Fault Detection using Hierarchical Bayesian Space-Time Modeling

    OpenAIRE

    Ni, Kevin; Pottie, G J

    2009-01-01

    We present a new application of hierarchical Bayesian space-time (HBST) modeling: data fault detection in sensor networks primarily used in environmental monitoring situations. To show the effectiveness of HBST modeling, we develop a rudimentary tagging system to mark data that does not fit with given models. Using this, we compare HBST modeling against first order linear autoregressive (AR) modeling, which is a commonly used alternative due to its simplicity. We show that while HBST is mo...
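
    As a rough illustration of the AR baseline this record compares against, the sketch below tags samples whose one-step AR(1) prediction residual is anomalously large. The function name, threshold, and injected fault are hypothetical; the paper's HBST model is considerably richer than this baseline.

```python
import numpy as np

def ar1_fault_tags(x, threshold=3.0):
    """Tag samples whose one-step AR(1) prediction residual exceeds
    `threshold` standard deviations (a rudimentary fault tagger)."""
    x = np.asarray(x, dtype=float)
    # Least-squares AR(1) coefficient: x[t] ~ phi * x[t-1]
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    resid = x[1:] - phi * x[:-1]
    sigma = np.std(resid)
    tags = np.zeros(len(x), dtype=bool)
    tags[1:] = np.abs(resid) > threshold * sigma
    return tags

# Example: a slowly varying signal with one injected sensor fault
t = np.linspace(0, 10, 200)
signal = np.sin(t)
signal[120] += 5.0
print(ar1_fault_tags(signal).nonzero()[0])
```

    Note that the spike perturbs two consecutive residuals (the faulty sample itself and the prediction made from it), so a simple tagger like this flags both.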

  10. Modeling crustal deformation near active faults and volcanic centers: a catalog of deformation models and modeling approaches

    Science.gov (United States)

    Battaglia, Maurizio; Cervelli, Peter F.; Murray, Jessica R.

    2013-01-01

    This manual provides the physical and mathematical concepts for selected models used to interpret deformation measurements near active faults and volcanic centers. The emphasis is on analytical models of deformation that can be compared with data from Global Positioning System (GPS) receivers, interferometric synthetic aperture radar (InSAR), leveling surveys, tiltmeters and strainmeters. Source models include pressurized spherical, ellipsoidal, and horizontal penny-shaped geometries in an elastic, homogeneous, flat half-space. Vertical dikes and faults are described following the mathematical notation for rectangular dislocations in an elastic, homogeneous, flat half-space. All the analytical expressions were verified against numerical models developed with COMSOL Multiphysics, a finite element analysis package (http://www.comsol.com); in this way, typographical errors were identified and corrected. Matlab scripts are also provided to facilitate the application of these models.
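
    One of the catalogued sources, the pressurized spherical (Mogi) source in an elastic half-space, has a closed-form surface displacement that can be sketched as follows. This is a minimal illustration assuming the standard volume-change formulation; the function name and parameter values are hypothetical, not taken from the manual's scripts.

```python
import numpy as np

def mogi_surface_displacement(r, depth, dV, nu=0.25):
    """Surface displacement from a pressurized spherical (Mogi) source
    in an elastic, homogeneous half-space.

    r     : radial distance from the source axis (m)
    depth : source depth (m)
    dV    : source volume change (m^3)
    nu    : Poisson's ratio
    Returns (u_r, u_z) in metres.
    """
    R3 = (r**2 + depth**2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * r / R3, c * depth / R3

# Example: 1e6 m^3 of inflation at 3 km depth, observed 2 km off-axis
ur, uz = mogi_surface_displacement(2000.0, 3000.0, 1e6)
```

    Uplift is largest directly above the source and decays with radial distance, which is the basic pattern matched against GPS and InSAR data.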

  11. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    Science.gov (United States)

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the FT control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information already available in the control block is proposed; this method is simple, robust to common motor transients, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. To validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor.
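
    The core FCS-MPC loop described here (enumerate the finite set of inverter states, predict one step ahead with a plant model, pick the minimum-cost state) can be sketched for a deliberately simplified single-phase RL load. This illustrates the technique only, not the paper's five-phase drive; all names and parameter values are hypothetical.

```python
def fcs_mpc_step(i_meas, i_ref, vdc, R, L, Ts):
    """One FCS-MPC iteration for a simplified single-phase RL load:
    enumerate the finite set of inverter voltage levels, predict the
    next-step current with a forward-Euler model, and choose the level
    minimizing the current-tracking cost."""
    levels = (-vdc, 0.0, vdc)          # finite control set of a 3-level leg
    best_v, best_cost = None, float("inf")
    for v in levels:
        i_pred = i_meas + Ts / L * (v - R * i_meas)   # Euler prediction
        cost = (i_ref - i_pred) ** 2
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v

# Far below the reference, the controller applies full positive voltage
v = fcs_mpc_step(i_meas=0.0, i_ref=5.0, vdc=100.0, R=1.0, L=0.01, Ts=1e-4)
```

    Because the control set is finite, no modulator is needed; the chosen state is applied directly for one sampling period, which is what makes the method fast and flexible.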

  12. Robust fault diagnosis for non-Gaussian stochastic systems based on the rational square-root approximation model

    Institute of Scientific and Technical Information of China (English)

    YAO LiNa; WANG Hong

    2008-01-01

    The task of robust fault detection and diagnosis of stochastic distribution control (SDC) systems with uncertainties is to use the measured input and the system output PDFs to obtain possible fault information about the system. Using the rational square-root B-spline model to represent the dynamics between the output PDF and the input, in this paper a robust nonlinear adaptive observer-based fault diagnosis algorithm is presented to diagnose faults in the dynamic part of such systems with model uncertainties. When certain conditions are satisfied, the weight vector of the rational square-root B-spline model proves to be bounded. Convergence analysis is performed for the error dynamic system arising from the robust fault detection and fault diagnosis phases. Computer simulations are given to demonstrate the effectiveness of the proposed algorithm.

  13. Effective confidence interval estimation of fault-detection process of software reliability growth models

    Science.gov (United States)

    Fang, Chih-Chiang; Yeh, Chun-Wu

    2016-09-01

    The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for its fault detection. This provides helpful information to software developers and testers undertaking software development and software quality control. However, the variance estimation of software fault detection is not made transparent in previous studies, and this affects the derivation of the confidence interval about the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data-sets to show its flexibility.
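
    For a concrete, simplified picture of such interval estimation: under a nonhomogeneous Poisson process SRGM such as Goel-Okumoto, the variance of the cumulative fault count equals the mean value function, giving a normal-approximation interval. The parameters below are hypothetical, and this is not the paper's SDE-based construction.

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def fault_count_ci(t, a, b, z=1.96):
    """Normal-approximation confidence interval for the cumulative fault
    count of an NHPP model, using Var[N(t)] = m(t)."""
    m = go_mean_value(t, a, b)
    half = z * math.sqrt(m)
    return m - half, m + half

# Hypothetical parameters: a = 100 total faults, b = 0.1 per unit time
lo, hi = fault_count_ci(t=10.0, a=100.0, b=0.1)
```

    The width of the interval grows with m(t), which is the stochasticity of the mean value function that the abstract argues engineers need to see.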

  14. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    Science.gov (United States)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  15. Modeling Surface Subsidence from Hydrocarbon Production and Induced Fault Slip in the Louisiana Coastal Zone

    Science.gov (United States)

    Mallman, E. P.; Zoback, M. D.

    2005-12-01

    Coastal wetland loss in southern Louisiana poses a great threat to the ecological and economic stability of the region. In the region of interest, wetland loss is a combination of land subsidence along with eustatic sea level rise, sediment accumulation, erosion, filling and drainage. More than half of the land loss in coastal Louisiana between 1932 and 1990 was related to subsidence due to the complicated interaction of multiple natural and anthropogenic processes, including compaction of Holocene sediments in the Mississippi River delta, lithospheric flexure as a response to sediment loading, and natural episodic movement along regional growth faults. In addition to these mechanisms, it has recently been suggested that subsurface oil and gas production may be a large contributing factor to surface subsidence in the Louisiana Coastal Zone. We model the effect of fluid withdrawal from oil and gas fields in the Barataria Bay region of the Louisiana Coastal Zone on surface subsidence and its potential role in inducing fault slip on the region's growth faults. Along the western edge of Barataria Basin is a first-order leveling line to constrain our model of land subsidence. The rates for this leveling line show numerous locations of increased subsidence rate over the surrounding area, which tend to be located over the large oil and gas fields in the region. However, also located in the regions of high subsidence rate and oil and gas fields are the regional normal faults. Slip on these growth faults is important in two contexts: Regional subsidence would be expected along these faults as a natural consequence of naturally-occurring slip over time. In addition, slip along the faults can be exacerbated by production such that surface subsidence would be localized near the oil and gas fields. 
Using pressure data from wells in the Valentine, Golden Meadow, and Leeville oil and gas fields we estimate the amount of compaction of the various reservoirs, the resulting surface

  16. A Fault-based Crustal Deformation Model for UCERF3 and Its Implication to Seismic Hazard Analysis

    Science.gov (United States)

    Zeng, Y.; Shen, Z.

    2012-12-01

    We invert GPS data to determine slip rates on major California faults using a fault-based crustal deformation model with geological slip rate constraints. The model assumes buried elastic dislocations across the region using fault geometries defined by the Uniform California Earthquake Rupture Forecast version 3 (UCERF3) project, with fault segments slipping beneath their locking depths. GPS observations across California and neighboring states were obtained from the UNAVCO western US GPS velocity model and edited by the SCEC UCERF3 geodetic deformation working group. The geologic slip rates and fault style constraints were compiled by the SCEC UCERF3 geologic deformation working group. Continuity constraints are imposed on slip among adjacent fault segments to regulate slip variability and to simulate block-like motion. Our least-squares inversion shows that slip rates along the northern San Andreas fault system agree well with the geologic estimates provided by UCERF3, and slip rates for the Calaveras-Hayward-Maacama fault branch and the Greenville-Great Valley fault branch are slightly higher than those of the UCERF3 geologic model. The total slip rates across transects of the three fault branches in Northern California amount to 39 mm/yr. Slip rates determined for the Garlock fault closely match geologic rates. Slip rates for the Coachella Valley and Brawley segments of the San Andreas are nearly twice those of the San Jacinto fault branch. For the off-coast faults along the San Gregorio, Hosgri, Catalina, and San Clemente faults, slip rates are near their geologic lower bounds. Compared with the regional geologic slip-rate estimates, the GPS-based model shows a significant decrease of 6-14 mm/yr in slip rates along the San Andreas fault system from the central California creeping section through the Mojave to the San Bernardino Mountain segments, whereas the model indicates a significant increase of 1-3 mm/yr in slip rates for faults along the east California
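
    The style of inversion described, a least-squares fit to GPS velocities with geologic slip rates as regularization, can be sketched on a toy problem. The Green's functions and rate values below are made-up illustration numbers, not UCERF3 values.

```python
import numpy as np

def invert_slip_rates(G, d, m_geo, lam):
    """Least-squares slip-rate inversion with geologic-rate regularization:
    minimize ||G m - d||^2 + lam^2 ||m - m_geo||^2 by stacking the
    regularization rows onto the data equations."""
    n = len(m_geo)
    A = np.vstack([G, lam * np.eye(n)])
    b = np.concatenate([d, lam * m_geo])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Tiny synthetic example: two fault segments, two GPS velocity data
G = np.array([[1.0, 0.2], [0.3, 1.0]])      # geodetic Green's functions
m_true = np.array([30.0, 5.0])              # "true" slip rates (mm/yr)
d = G @ m_true                              # noise-free synthetic data
m_est = invert_slip_rates(G, d, m_geo=np.array([28.0, 6.0]), lam=0.1)
```

    With a small regularization weight the data dominate; raising `lam` pulls the solution toward the geologic rates, which is how such inversions trade off the two data sources.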

  17. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  18. Fault and dyke detectability in high resolution seismic surveys for coal: a view from numerical modelling

    Science.gov (United States)

    Zhou, Binzhong; Hatherly, Peter

    2014-10-01

    Modern underground coal mining requires certainty about geological faults, dykes and other structural features. Faults with throws of even just a few metres can create safety issues and lead to costly delays in mine production. In this paper, we use numerical modelling in an ideal, noise-free environment with homogeneous layering to investigate the detectability of small faults by seismic reflection surveying. If the layering is horizontal, faults with throws of 1/8 of the wavelength should be detectable in a 2D survey. In a coal mining setting where the seismic velocity of the overburden ranges from 3000 m/s to 4000 m/s and the dominant seismic frequency is ~100 Hz, this corresponds to a fault with a throw of 4-5 m. However, if the layers are dipping or folded, the faults may be more difficult to detect, especially when their throws oppose the trend of the background structure. In the case of 3D seismic surveying we suggest that faults with throws as small as 1/16 of wavelength (2-2.5 m) can be detectable because of the benefits offered by computer-aided horizon identification and the improved spatial coherence in 3D seismic surveys. With dykes, we find that Berkhout's definition of the Fresnel zone is more consistent with actual experience. At a depth of 500 m, which is typically encountered in coal mining, and a 100 Hz dominant seismic frequency, dykes less than 8 m in width are undetectable, even after migration.
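
    The quoted throw thresholds follow directly from the dominant wavelength, lambda = v/f. A minimal check of the arithmetic (the function name is ours):

```python
def detectable_throw(velocity_mps, dominant_freq_hz, fraction):
    """Smallest detectable fault throw as a fraction of the dominant
    wavelength, lambda = v / f."""
    return fraction * velocity_mps / dominant_freq_hz

# 2D surveys: ~1/8 wavelength; 3D surveys: ~1/16 wavelength
for v in (3000.0, 4000.0):
    print(detectable_throw(v, 100.0, 1.0 / 8.0),
          detectable_throw(v, 100.0, 1.0 / 16.0))
```

    With v = 3000-4000 m/s and f = 100 Hz, lambda is 30-40 m, so 1/8 of a wavelength gives the 4-5 m 2D threshold and 1/16 gives the 2-2.5 m 3D threshold cited in the abstract.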

  19. Rapid Assessment of Earthquakes with Radar and Optical Geodetic Imaging and Finite Fault Models (Invited)

    Science.gov (United States)

    Fielding, E. J.; Sladen, A.; Simons, M.; Rosen, P. A.; Yun, S.; Li, Z.; Avouac, J.; Leprince, S.

    2010-12-01

    Earthquake responders need to know where the earthquake has caused damage and what is the likely intensity of damage. The earliest information comes from global and regional seismic networks, which provide the magnitude and locations of the main earthquake hypocenter and moment tensor centroid and also the locations of aftershocks. Location accuracy depends on the availability of seismic data close to the earthquake source. Finite fault models of the earthquake slip can be derived from analysis of seismic waveforms alone, but the results can have large errors in the location of the fault ruptures and spatial distribution of slip, which are critical for estimating the distribution of shaking and damage. Geodetic measurements of ground displacements with GPS, LiDAR, or radar and optical imagery provide key spatial constraints on the location of the fault ruptures and distribution of slip. Here we describe the analysis of interferometric synthetic aperture radar (InSAR) and sub-pixel correlation (or pixel offset tracking) of radar and optical imagery to measure ground coseismic displacements for recent large earthquakes, and lessons learned for rapid assessment of future events. These geodetic imaging techniques have been applied to the 2010 Leogane, Haiti; 2010 Maule, Chile; 2010 Baja California, Mexico; 2008 Wenchuan, China; 2007 Tocopilla, Chile; 2007 Pisco, Peru; 2005 Kashmir; and 2003 Bam, Iran earthquakes, using data from ESA Envisat ASAR, JAXA ALOS PALSAR, NASA Terra ASTER and CNES SPOT5 satellite instruments and the NASA/JPL UAVSAR airborne system. For these events, the geodetic data provided unique information on the location of the fault or faults that ruptured and the distribution of slip that was not available from the seismic data and allowed the creation of accurate finite fault source models. In many of these cases, the fault ruptures were on previously unknown faults or faults not believed to be at high risk of earthquakes, so the area and degree of

  20. Characterized Fault Model of Scenario Earthquake Caused by the Itoigawa-Shizuoka Tectonic Line Fault Zone in Central Japan and Strong Ground Motion Prediction

    Science.gov (United States)

    Sato, T.; Dan, K.; Irikura, K.; Furumura, M.

    2001-12-01

    Based on existing ideas on characterizing complex fault rupture processes, we constructed four different characterized fault models for predicting strong motions from the most likely scenario earthquake along the active fault zone of the Itoigawa-Shizuoka Tectonic Line in central Japan. The Headquarters for Earthquake Research Promotion in the Japanese government (2001) estimated that the earthquake (M 8 +/- 0.5) has a total fault length of 112 km with four segments. We assumed that the characterized fault model consisted of two regions: asperity and background (Somerville et al., 1999; Irikura, 2000; Dan et al., 2000). The main differences among the four fault models were 1) how to determine a seismic moment Mo from a fault rupture area S, 2) the number of asperities N, 3) how to determine a stress parameter σ, and 4) fmax. We calculated broadband strong motions at three stations near the fault by a hybrid method of semi-empirical and theoretical approaches. A comparison between the results from the hybrid method and those from empirical attenuation relations showed that the hybrid method using the characterized fault model could evaluate near-fault rupture directivity effects more reliably than the empirical attenuation relations. We also discussed the characterized fault models and the strong motion characteristics. The Mo extrapolated from the empirical Mo-S relation by Somerville et al. (1999) was half of that determined from the mean value of the Wells and Coppersmith (1994) data. The latter Mo was consistent with that for the 1891 Nobi, Japan, earthquake, whose fault length was almost the same as that of the target earthquake. In addition, the fault model using the latter Mo produced a slip of about 6 m on the largest asperity, which was consistent with the displacement of 6 m to 9 m per event obtained from a trench survey. High-frequency strong motions were greatly influenced by the σ for the asperities (188 bars, 246 bars, 108 bars, and 134

  1. Discovery of previously unrecognised local faults in London, UK, using detailed 3D geological modelling

    Science.gov (United States)

    Aldiss, Don; Haslam, Richard

    2013-04-01

    In parts of London, faulting introduces lateral heterogeneity to the local ground conditions, especially where construction works intercept the Palaeogene Lambeth Group. This brings difficulties to the compilation of a ground model that is fully consistent with the ground investigation data, and so to the design and construction of engineering works. However, because bedrock in the London area is rather uniform at outcrop, and is widely covered by Quaternary deposits, few faults are shown on the geological maps of the area. This paper discusses a successful resolution of this problem at a site in east central London, where tunnels for a new underground railway station are planned. A 3D geological model was used to provide an understanding of the local geological structure, in faulted Lambeth Group strata, that had not been possible by other commonly-used methods. This model includes seven previously unrecognised faults, with downthrows ranging from about 1 m to about 12 m. The model was constructed in the GSI3D geological modelling software using about 145 borehole records, including many legacy records, in an area of 850 m by 500 m. The basis of a GSI3D 3D geological model is a network of 2D cross-sections drawn by a geologist, generally connecting borehole positions (where the borehole records define the level of the geological units that are present), and outcrop and subcrop lines for those units (where shown by a geological map). When the lines tracing the base of each geological unit within the intersecting cross-sections are complete and mutually consistent, the software is used to generate TIN surfaces between those lines, so creating a 3D geological model. Even where a geological model is constructed as if no faults were present, changes in apparent dip between two data points within a single cross-section can indicate that a fault is present in that segment of the cross-section. 
If displacements of similar size with the same polarity are found in a series

  2. Model-based robust estimation and fault detection for MEMS-INS/GPS integrated navigation systems

    Directory of Open Access Journals (Sweden)

    Miao Lingjuan

    2014-08-01

    In micro-electro-mechanical system based inertial navigation system (MEMS-INS)/global position system (GPS) integrated navigation systems, there exist unknown disturbances and abnormal measurements. In order to obtain high estimation accuracy and enhance detection sensitivity to faults in measurements, this paper deals with the problem of model-based robust estimation (RE) and fault detection (FD). A filter gain matrix and a post-filter are designed to obtain an RE and FD algorithm with current measurements, which is different from most of the existing a priori filters using measurements with a one-step delay. With the designed filter gain matrix, the H-infinity norm of the transfer function from noise inputs to estimation error outputs is limited within a certain range; with the designed post-filter, the residual signal is robust to disturbances but sensitive to faults. Therefore, the algorithm can guarantee small estimation errors in the presence of disturbances and have high sensitivity to faults. The proposed method is evaluated in an integrated navigation system, and the simulation results show that it is more effective in position estimation and fault signal detection than a priori RE and FD algorithms.

  3. Model-based robust estimation and fault detection for MEMS-INS/GPS integrated navigation systems

    Institute of Scientific and Technical Information of China (English)

    Miao Lingjuan; Shi Jing

    2014-01-01

    In micro-electro-mechanical system based inertial navigation system (MEMS-INS)/global position system (GPS) integrated navigation systems, there exist unknown disturbances and abnormal measurements. In order to obtain high estimation accuracy and enhance detection sensitivity to faults in measurements, this paper deals with the problem of model-based robust estimation (RE) and fault detection (FD). A filter gain matrix and a post-filter are designed to obtain an RE and FD algorithm with current measurements, which is different from most of the existing a priori filters using measurements with a one-step delay. With the designed filter gain matrix, the H-infinity norm of the transfer function from noise inputs to estimation error outputs is limited within a certain range; with the designed post-filter, the residual signal is robust to disturbances but sensitive to faults. Therefore, the algorithm can guarantee small estimation errors in the presence of disturbances and have high sensitivity to faults. The proposed method is evaluated in an integrated navigation system, and the simulation results show that it is more effective in position estimation and fault signal detection than a priori RE and FD algorithms.

  4. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model

    Science.gov (United States)

    Daout, S.; Barbot, S.; Peltzer, G.; Doin, M.-P.; Liu, Z.; Jolivet, R.

    2016-11-01

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below the metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology to estimate the tectonic loading and seismic potential of buried fault networks.

  5. Modeling the Non Linear Behavior of a Magnetic Fault Current Limiter

    Directory of Open Access Journals (Sweden)

    P. R. Wilson

    2015-11-01

    Fault current limiters are used in a wide array of applications, from small circuit protection at low power levels to large-scale, high-power applications which require superconductors and complex control circuitry. One advantage of passive fault current limiters (FCLs) is automatic behavior that depends on the intrinsic properties of the circuit elements rather than on a complex feedback control scheme, making this approach attractive for low-cost applications and for settings where reliability is critical. This paper describes the behavioral modeling of a passive magnetic FCL and its potential application in practical circuits.

  6. Prognosticating fault development rate in wind turbine generator bearings using local trend models

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Palou, Jonel; Sweeney, Christian Walsted;

    2016-01-01

    Generator bearing defects, e.g. ball, inner and outer race defects, are ranked among the most frequent mechanical failures encountered in wind turbines. Diagnosis and prognosis of bearing faults can be successfully implemented using vibration based condition monitoring systems, where tracking the signal energy between 10 Hz and 1000 Hz is utilized as a feature to characterize the severity of developing bearing faults. Furthermore, local trend models are employed to predict the progression of bearing defects from a vibration standpoint in accordance with the limits suggested in ISO 10816. Predictions of vibration trends from multi-megawatt wind turbine generators are presented, showing the effectiveness of the suggested approach in calculating the RUL and fault progression rate.
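
    A minimal sketch of the local-trend idea: fit a linear trend to recent vibration levels and extrapolate to an alarm limit to estimate remaining useful life (RUL). The limit and data below are hypothetical, and the actual ISO 10816 zone boundaries depend on machine class; the paper's local trend models are more elaborate than a single straight-line fit.

```python
import numpy as np

def rul_from_linear_trend(times, vib_levels, limit):
    """Remaining useful life estimate: fit a local linear trend to recent
    vibration levels and extrapolate to an alarm limit (e.g. an
    ISO 10816 zone boundary)."""
    slope, intercept = np.polyfit(times, vib_levels, 1)
    if slope <= 0:
        return float("inf")            # no degradation trend detected
    t_cross = (limit - intercept) / slope
    return max(t_cross - times[-1], 0.0)

# Hypothetical trend: vibration rising 0.1 mm/s per day, limit 7.1 mm/s
days = np.arange(10.0)
vib = 4.0 + 0.1 * days
print(rul_from_linear_trend(days, vib, 7.1))
```

    In practice the fit window is kept local so the estimate follows changes in the fault development rate rather than the whole vibration history.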

  7. Probabilistic SDG model description and fault inference for large-scale complex systems

    Institute of Scientific and Technical Information of China (English)

    Yang Fan; Xiao Deyun

    2006-01-01

    Large-scale complex systems feature large numbers of variables with complex relationships, for which the signed directed graph (SDG) model serves as a significant tool by describing the causal relationships among variables. Although a qualitative SDG expresses the causal effects between variables easily and clearly, it has many disadvantages and limitations. The probabilistic SDG proposed in this article describes the transfer relationships among faults and variables by conditional probabilities, which carries more information and has wider applicability. The article introduces the concepts and construction approaches of the probabilistic SDG, and presents inference approaches aimed at fault diagnosis in this framework, i.e. Bayesian inference with graph elimination or junction tree algorithms to compute fault probabilities. Finally, the probabilistic SDG of a typical example, a 65 t/h boiler system, is given.
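
    At its simplest, inference over a single arc of such a probabilistic SDG is just Bayes' rule; the probabilities below are hypothetical, and real probabilistic SDG inference applies graph elimination or junction tree algorithms over many arcs at once.

```python
def fault_posterior(p_fault, p_dev_given_fault, p_dev_given_ok):
    """Posterior probability of a fault given an observed variable
    deviation, by Bayes' rule over a single fault -> variable arc."""
    num = p_fault * p_dev_given_fault
    den = num + (1.0 - p_fault) * p_dev_given_ok
    return num / den

# Hypothetical numbers: a rare fault whose deviation is a strong indicator
p = fault_posterior(p_fault=0.01, p_dev_given_fault=0.9, p_dev_given_ok=0.05)
```

    Even with a strong indicator, the low prior keeps the single-observation posterior modest; combining evidence from several variables is what makes the graph-based inference informative.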

  8. Numerical model of the glacially-induced intraplate earthquakes and faults formation

    Science.gov (United States)

    Petrunin, Alexey; Schmeling, Harro

    2016-04-01

    According to plate tectonics, most earthquakes are caused by moving lithospheric plates and are located mainly at plate boundaries. However, some significant seismic events are located far away from these active areas, and the nature of such intraplate earthquakes remains unclear. It is assumed that the triggering of seismicity in eastern Canada and northern Europe might be a result of glacier retreat during a glacial-interglacial cycle (GIC). Previous numerical models show that the impact of glacial loading and the subsequent isostatic adjustment is able to trigger seismicity on pre-existing faults, especially during the deglaciation stage. However, these models do not explain strong historical glacially-induced earthquakes (M5-M7). Moreover, numerous studies report a connection between the location and age of major faults in regions glaciated during the last glacial maximum and the glacier dynamics. This probably implies that the GIC might be a reason for fault system formation. Our numerical model provides an analysis of the strain-stress evolution during the GIC using the finite volume approach realised in the numerical code Lapex 2.5D, which is able to operate with large strains and visco-elasto-plastic rheology. To simulate self-organizing faults, a damage rheology model is implemented within the code, which makes it possible not only to visualize faulting but also to estimate the energy release during the seismic cycle. The modeling domain includes a two-layered crust, the lithospheric mantle and the asthenosphere, making it possible to simulate the elasto-plastic response of the lithosphere to glaciation-induced loading (unloading) and viscous isostatic adjustment. We have considered three scenarios for the model: horizontal extension, compression and fixed boundary conditions. Modeling results generally confirm suppressed seismic activity during glaciation phases, whereas the retreat of a glacier triggers earthquakes for several thousand years. Tip of the glacier

  9. A Structural Model Decomposition Framework for Systems Health Management

    Science.gov (United States)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
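
    One simple way to picture structural decomposition is as finding connected components of the equation-variable bipartite graph: equations that share no variables can be analyzed in independent submodels. The sketch below, with a made-up three-tank-style equation set, is far simpler than the paper's framework, which extracts minimal submodels tailored to specific estimation, isolation and prediction tasks.

```python
def submodels(equations):
    """Split a set of equations (name -> set of variable names) into
    independent submodels: connected components of the bipartite
    equation-variable graph."""
    remaining = dict(equations)
    groups = []
    while remaining:
        name, vars_ = remaining.popitem()
        group, frontier = {name}, set(vars_)
        changed = True
        while changed:
            changed = False
            for eq, vs in list(remaining.items()):
                if vs & frontier:       # shares a variable with the group
                    group.add(eq)
                    frontier |= vs
                    del remaining[eq]
                    changed = True
        groups.append(group)
    return groups

# Hypothetical example: tanks 1 and 2 are coupled by flow q12, tank 3 is not
eqs = {"e1": {"h1", "q12"}, "e2": {"h2", "q12"}, "e3": {"h3"}}
print(submodels(eqs))
```

    Operating on the resulting submodels instead of the global model is what yields the scalability gain the abstract describes.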

  10. Fault Detection for Shipboard Monitoring – Volterra Kernel and Hammerstein Model Approaches

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2009-01-01

    In this paper nonlinear fault detection for in-service monitoring and decision support systems for ships will be presented. The ship is described as a nonlinear system, and the stochastic wave elevation and the associated ship responses are conveniently modelled in frequency domain...

  11. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Full Text Available Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine, they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of broken rotor bar faults using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing anomalies of machine local variables such as torque, magnetic flux, stator current and neutral voltage signature analysis. The aim of this research is to summarize the existing models and to develop new models of squirrel cage induction motors that take the neutral voltage into consideration, and to study the effect of broken rotor bars on different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing simulation and experimental results. The obtained results show the effectiveness of the model and allow detection and diagnosis of these defects.
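    One of the indicators mentioned, stator current signature analysis, exploits the fact that broken rotor bars induce sideband components at (1 ± 2s)·fs around the supply frequency fs, where s is the slip. A minimal synthetic sketch with made-up machine parameters (not the authors' model):

```python
import numpy as np

# Hypothetical machine: 50 Hz supply, 4% slip.
f_supply, slip = 50.0, 0.04
sidebands = [(1 - 2 * slip) * f_supply, (1 + 2 * slip) * f_supply]  # 46 Hz, 54 Hz

# Synthetic stator current: fundamental plus weak broken-bar sidebands.
rate, duration = 1000.0, 10.0
t = np.arange(0.0, duration, 1.0 / rate)
current = np.sin(2 * np.pi * f_supply * t)
for f in sidebands:
    current += 0.05 * np.sin(2 * np.pi * f * t)

# Amplitude spectrum: the fault shows up as peaks at the predicted sidebands.
spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / rate)
for f in sidebands:
    k = int(round(f * duration))  # bin index at 0.1 Hz resolution
    print(f"{freqs[k]:.1f} Hz amplitude: {spectrum[k]:.3f}")
```

In practice the sideband amplitudes (here an arbitrary 5% of the fundamental) grow with the number of broken bars, which is what makes them usable as a severity indicator.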

  12. Natural Environment Modeling and Fault-Diagnosis for Automated Agricultural Vehicle

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2008-01-01

    This paper presents results for an automatic navigation system for agricultural vehicles. The system uses stereo-vision, inertial sensors and GPS. Special emphasis has been placed on modeling the natural environment in conjunction with a fault-tolerant navigation system. The results are exemplified...

  13. Fault-tolerant and QoS based Network Layer for Security Management

    Directory of Open Access Journals (Sweden)

    Mohamed Naceur Abdelkrim

    2013-07-01

    Full Text Available Wireless sensor networks have profound effects on many application fields, like security management, which need immediate, fast and energy-efficient routes. In this paper, we define a fault-tolerant and QoS-based network layer for the security management of a chemical products warehouse, which can be classified as a real-time and mission-critical application. This application generates routine data packets as well as alert packets caused by unusual events, which impose high-reliability, short end-to-end delay and low packet-loss-rate constraints. After each node computes its hop count and builds its neighbor table in the initialization phase, packets can be routed to the sink. We use the FELGossiping protocol for routine data packets and a node-disjoint multipath routing protocol for alert packets. Furthermore, we utilize the information gathering phase of FELGossiping to update the neighbor table and detect failed nodes, and we adapt to network topology changes by rerunning the initialization phase when chemical units are added to or removed from the warehouse. Analysis shows that the network layer is energy efficient and can meet the QoS constraints of unusual-event packets.

  14. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    A numerical model based virtual prototype of a system can serve as a tool to generate a huge amount of data, replacing the dependence on expensive and often difficult to conduct experiments. However, the model must be accurate enough to substitute for the experiments. The abstraction level and the details considered during model development depend on the purpose for which the simulated data should be generated. This article concerns the development of simulation models for deep groove ball bearings, which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three different models with increasing levels of complexity are considered: a bearing kinematics based planar motion block diagram model developed in MATLAB Simulink which does not explicitly consider cage and traction dynamics, a planar motion model with cage, traction and contact dynamics developed using the multi-energy-domain bond graph formalism in the SYMBOLS software, and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using the ADAMS software. Experiments are conducted using a Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency domain characteristics of the simulated and experimental vibration signals for different bearing faults are compared, and conclusions are drawn regarding the usefulness of the developed models.
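    The vibration signatures such models aim to reproduce are commonly checked against the classical kinematic defect frequencies of a ball bearing. A short sketch with illustrative geometry (the dimensions below are stand-ins, not taken from the article):

```python
import math

def bearing_defect_frequencies(n_balls, d_ball, d_pitch, contact_deg, f_rot):
    """Classical kinematic defect frequencies for a rolling element bearing
    with a stationary outer race (frequencies in the same unit as f_rot)."""
    ratio = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    ftf = 0.5 * f_rot * (1.0 - ratio)                         # cage (fundamental train)
    bpfo = 0.5 * n_balls * f_rot * (1.0 - ratio)              # outer race defect
    bpfi = 0.5 * n_balls * f_rot * (1.0 + ratio)              # inner race defect
    bsf = 0.5 * (d_pitch / d_ball) * f_rot * (1.0 - ratio ** 2)  # ball spin
    return {"FTF": ftf, "BPFO": bpfo, "BPFI": bpfi, "BSF": bsf}

# Illustrative geometry: 9 balls, 7.9 mm ball diameter, 34.5 mm pitch
# diameter, 0 deg contact angle, shaft rotating at 30 Hz.
freqs = bearing_defect_frequencies(9, 7.9, 34.5, 0.0, 30.0)
for name, f in freqs.items():
    print(f"{name}: {f:.1f} Hz")
```

A useful sanity check on any implementation is that BPFO + BPFI equals the number of balls times the shaft frequency, and BPFO equals the ball count times FTF.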

  15. Two-Dimensional Boundary Element Method Application for Surface Deformation Modeling around Lembang and Cimandiri Fault, West Java

    Science.gov (United States)

    Mahya, M. J.; Sanny, T. A.

    2017-04-01

    The Lembang and Cimandiri faults are active faults in West Java that threaten people near the faults with earthquake and surface deformation risk. To determine the deformation, GPS measurements around the Lembang and Cimandiri faults were conducted, and the data were processed to obtain the horizontal velocity at each GPS station by the Graduate Research of Earthquake and Active Tectonics (GREAT) Department of the Geodesy and Geomatics Engineering Study Program, ITB. The purpose of this study is to model the displacement distribution as a deformation parameter in the area along the Lembang and Cimandiri faults using the 2-dimensional boundary element method (BEM), using as input the horizontal velocity corrected for the effect of the horizontal movement of the Sunda plate. The assumptions used at the modeling stage are that the deformation occurs in a homogeneous and isotropic medium, and that the stresses acting on the faults are in an elastostatic condition. The results of the modeling show that the Lembang fault has a left-lateral slip component and is divided into two segments. A lineament oriented in the southwest-northeast direction is observed near Tangkuban Perahu Mountain, separating the eastern and western segments of the Lembang fault. The displacement pattern of the Cimandiri fault shows that it is divided into an eastern segment with a right-lateral slip component and a western segment with a left-lateral slip component, separated by a northwest-southeast oriented lineament at the western part of Gede Pangrango Mountain. The displacement value between the Lembang and Cimandiri faults is nearly zero, indicating that the two faults are not connected to each other and that this area is relatively safe for infrastructure development.

  16. Smac–Fdi: A Single Model Active Fault Detection and Isolation System for Unmanned Aircraft

    Directory of Open Access Journals (Sweden)

    Ducard Guillaume J.J.

    2015-03-01

    Full Text Available This article presents a single model active fault detection and isolation system (SMAC-FDI) which is designed to efficiently detect and isolate a faulty actuator in a system, such as a small (unmanned) aircraft. This FDI system is based on a single and simple aerodynamic model of an aircraft in order to generate residuals as soon as an actuator fault occurs. These residuals are used to trigger an active strategy based on artificial exciting signals that searches within the residuals for the signature of an actuator fault. Fault isolation is carried out through an innovative mechanism that does not use the previous residuals but the actuator control signals directly. In addition, the paper presents a complete parameter-tuning strategy for this FDI system. The novel concepts are backed up by simulations of a small unmanned aircraft experiencing successive actuator failures. The robustness of the SMAC-FDI method is tested in the presence of model uncertainties, realistic sensor noise and wind gusts. Finally, the paper concludes with a discussion on the computational efficiency of the method and its ability to run on small microcontrollers.
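    The residual-generation step described above can be illustrated generically: compare the measured output against a model prediction and declare a fault only when the residual exceeds a threshold for several consecutive samples, to reject noise. This is a hedged, generic sketch, not the SMAC-FDI algorithm itself; all thresholds and data are made up:

```python
def detect_fault(measured, predicted, threshold=0.5, persistence=3):
    """Return the sample index at which a fault is declared, or None.
    A fault is declared after `persistence` consecutive samples whose
    residual |y - y_hat| exceeds `threshold`."""
    count = 0
    for i, (y, y_hat) in enumerate(zip(measured, predicted)):
        residual = abs(y - y_hat)
        count = count + 1 if residual > threshold else 0
        if count >= persistence:
            return i
    return None

# Hypothetical data: the model tracks well until an actuator jump at sample 5;
# the persistence counter delays the alarm by two samples to reject spikes.
predicted = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
measured  = [0.0, 0.1, 0.2, 0.3, 0.4, 1.4, 1.5, 1.6, 1.7, 1.8]
print(detect_fault(measured, predicted))  # fault declared at sample 7
```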

  17. Simulation of Fault Arc Based on Different Radiation Models in a Closed Tank

    Science.gov (United States)

    Li, Mei; Zhang, Junpeng; Hu, Yang; Zhang, Hantian; Wu, Yifei

    2016-05-01

    This paper focuses on the simulation of a fault arc in a closed tank based on the magneto-hydrodynamic (MHD) method, in which a comparative study of three radiation models, including net emission coefficients (NEC), a semi-empirical model based on NEC, and the P1 model, is developed. The pressure rises calculated by the three radiation models are compared to the measured results. In particular, when the semi-empirical model is used, the effect of different boundary temperatures of the re-absorption layer in the semi-empirical model on the pressure rise is examined. The results show that the re-absorption effect in the low-temperature region evidently affects the radiation transfer of fault arcs, and thus the internal pressure rise. Compared with the NEC model, the P1 model and the semi-empirical model with 0.7 < α < 0.83 are more suitable to calculate the pressure rise of the fault arc, where α is an adjusted parameter involving the boundary temperature of the re-absorption region in the semi-empirical model. Supported by National Key Basic Research Program of China (973 Program) (No. 2015CB251002), National Natural Science Foundation of China (Nos. 51221005, 51177124), the Fundamental Research Funds for the Central Universities, the Program for New Century Excellent Talents in University and Shaanxi Province Natural Science Foundation of China (No. 2013JM-7010)

  18. Simulation of Fault Arc Based on Different Radiation Models in a Closed Tank

    Institute of Scientific and Technical Information of China (English)

    LI Mei; ZHANG Junpeng; HU Yang; ZHANG Hantian; WU Yifei

    2016-01-01

    This paper focuses on the simulation of a fault arc in a closed tank based on the magneto-hydrodynamic (MHD) method, in which a comparative study of three radiation models, including net emission coefficients (NEC), a semi-empirical model based on NEC, and the P1 model, is developed. The pressure rises calculated by the three radiation models are compared to the measured results. In particular, when the semi-empirical model is used, the effect of different boundary temperatures of the re-absorption layer in the semi-empirical model on the pressure rise is examined. The results show that the re-absorption effect in the low-temperature region evidently affects the radiation transfer of fault arcs, and thus the internal pressure rise. Compared with the NEC model, the P1 model and the semi-empirical model with 0.7 < α < 0.83 are more suitable to calculate the pressure rise of the fault arc, where α is an adjusted parameter involving the boundary temperature of the re-absorption region in the semi-empirical model.

  19. Three-dimensional numerical modeling of the influence of faults on groundwater flow at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Andrew J.B. [Univ. of California, Berkeley, CA (United States)

    1999-06-01

    Numerical simulations of groundwater flow at Yucca Mountain, Nevada, are used to investigate how the faulted hydrogeologic structure influences groundwater flow from a proposed high-level nuclear waste repository. Simulations are performed using a 3-D model that has a unique grid block discretization to accurately represent the faulted geologic units, which have variable thicknesses and orientations. Irregular grid blocks enable explicit representation of these features. Each hydrogeologic layer is discretized into a single layer of irregular and dipping grid blocks, and faults are discretized such that they are laterally continuous and displacement varies along strike. In addition, the presence of altered fault zones is explicitly modeled, as appropriate. The model has 23 layers and 11 faults, and approximately 57,000 grid blocks and 200,000 grid block connections. In the past, field measurements of upward vertical head gradients and high water table temperatures near faults were interpreted as indicators of upwelling from a deep carbonate aquifer. Simulations show, however, that these features can be readily explained by the geometry of hydrogeologic layers, the variability of layer permeabilities and thermal conductivities, and by the presence of permeable fault zones or faults with displacement only. In addition, a moderate water table gradient can result from fault displacement or a laterally continuous low permeability fault zone, but not from a high permeability fault zone, as others postulated earlier. Large-scale macrodispersion results from the vertical and lateral diversion of flow near the contact of high and low permeability layers at faults, and from upward flow within high permeability fault zones. Conversely, large-scale channeling can occur due to groundwater flow into areas with minimal fault displacement. 
Contaminants originating at the water table can flow in a direction significantly different than that of the water table gradient, and isolated

  20. Physical Models of a Locked-to-Creeping Transition Along a Strike-Slip Fault: Comparison with the San Andreas Fault System in Central California

    Science.gov (United States)

    Ross, E. O.; Titus, S.; Reber, J. E.

    2016-12-01

    In central California, the plate boundary geometry of the San Andreas is relatively simple with several sub-parallel faults; however, slip behavior along the San Andreas fault changes from locked to creeping. In the SE, the fault is locked along the Carrizo segment, which last ruptured in the 1857 Fort Tejon earthquake. Towards the NW, the slip rates increase from 0 to 28 mm/yr along the creeping segment, before decreasing towards the locked segment that last ruptured in the 1906 San Francisco earthquake. Near the southern transition from locked behavior to creeping behavior, the GPS velocity field and simple elastic models predict a region of contraction NE of the fault. This region coincides with numerous well-developed folds in the borderlands as well as a series of off-fault earthquakes in the 1980s. Similarly, a region of extension is predicted SW of the transition. This area coincides with a large basin near the town of Paso Robles. In order to understand the development of these regions of contraction and extension and characterize the orientation of vectors in the velocity field, we model the transition from locked to creeping behavior using physical experiments. The model consists of a layer of silicone (PDMS SGM-36) and a layer of wet kaolin, mimicking the ductile lower crust and brittle upper crust. We cut and lubricate the silicone along one section of the basement fault, simulating creeping behavior, while leaving the rest of the silicone intact across the fault to represent the locked portion. With this simple alteration to experimental conditions, we are consistently able to produce a mountain-and-basin pair that forms on either side of the transition at a deformation speed of 0.22 mm/sec. To compare the physical model's results to the observed velocity field, we use particle image velocimetry (PIV) software in conjunction with strain computation software (SSPX). PIV analysis shows highly reproducible vectors, allowing us to examine off-fault deformation

  1. Modified Quasi-Steady State Model of DC System for Transient Stability Simulation under Asymmetric Faults

    Directory of Open Access Journals (Sweden)

    Jun Liu

    2015-01-01

    Full Text Available Since the classical quasi-steady state (QSS) model is not able to accurately simulate the dynamic characteristics of a DC transmission system and its controls in electromechanical transient stability simulation when an asymmetric fault occurs in the AC system, a modified quasi-steady state model (MQSS) is proposed. The paper first analyzes the calculation error induced by the classical QSS model under asymmetric commutation voltage, which is mainly caused by the zero offset of the commutation voltage and the resulting inaccurate calculation of the average DC voltage and the inverter extinction advance angle. The new MQSS model calculates the average DC voltage according to the actual half-cycle voltage waveform on the DC terminal after fault occurrence, and the extinction advance angle is derived accordingly, so as to avoid the negative effect of the asymmetric commutation voltage. Simulation experiments show that the new MQSS model proposed in this paper has higher simulation precision than the classical QSS model when an asymmetric fault occurs in the AC system, as verified by comparing both models with the results of a detailed electromagnetic transient (EMT) model of the DC transmission system and its controls.

  2. Fault maintenance trees: reliability centered maintenance via statistical model checking

    NARCIS (Netherlands)

    Ruijters, Enno; Guck, Dennis; Drolenga, Peter; Stoelinga, Mariëlle

    2016-01-01

    The current trend in infrastructural asset management is towards risk-based (a.k.a. reliability centered) maintenance, promising better performance at lower cost. By maintaining crucial components more intensively than less important ones, dependability increases while costs decrease. This requires

  3. Fault maintenance trees: reliability centered maintenance via statistical model checking

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Guck, Dennis; Drolenga, Peter; Stoelinga, Mariëlle Ida Antoinette

    The current trend in infrastructural asset management is towards risk-based (a.k.a. reliability centered) maintenance, promising better performance at lower cost. By maintaining crucial components more intensively than less important ones, dependability increases while costs decrease. This requires

  4. Empirical Verification of Fault Models for FPGAs Operating in the Subcritical Voltage Region

    DEFF Research Database (Denmark)

    Birklykke, Alex Aaen; Koch, Peter; Prasad, Ramjee

    2013-01-01

    fault models might provide insight that would allow subcritical scaling by changing digital design practices or by simply accepting errors if possible. To facilitate further work in this direction, we present probabilistic error models that allow us to link error behavior with statistical properties...... of the binary signals, and based on a two-FPGA setup we experimentally verify the correctness of candidate models. For all experiments, the observed error rates exhibit a polynomial dependency on outcome probability of the binary inputs, which corresponds to the behavior predicted by the proposed timing error...... model. Furthermore, our results show that the fault mechanism is fully deterministic - mimicking temporary stuck-at errors. As a result, given knowledge about a given signal, errors are fully predictable in the subcritical voltage region....

  5. Early FDI Based on Residuals Design According to the Analysis of Models of Faults: Application to DAMADICS

    Directory of Open Access Journals (Sweden)

    Yahia Kourd

    2011-01-01

    Full Text Available The increased complexity of plants and the development of sophisticated control systems have encouraged the parallel development of efficient and rapid fault detection and isolation (FDI) systems. FDI in industrial systems has lately become of great significance. This paper proposes a new technique for short-time fault detection and diagnosis in nonlinear dynamic systems with multiple inputs and multiple outputs. The main contribution of this paper is to develop an FDI scheme based on reference models of fault-free and faulty behaviors designed with neural networks. Fault detection is obtained from residuals that result from the comparison of measured signals with the outputs of the fault-free reference model. Then, the Euclidean distance from the outputs of the models of faults to the measurements leads to fault isolation. The advantage of this method is to provide not only early detection but also early diagnosis, thanks to the parallel computation of the models of faults and to the proposed decision algorithm. The effectiveness of this approach is illustrated with simulations on the DAMADICS benchmark.
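    The isolation step, choosing the nearest reference model in the Euclidean sense, can be sketched as follows; the model bank below holds stand-in output sequences rather than the paper's neural-network models:

```python
import math

def isolate_fault(measurement, model_outputs):
    """Return the label of the reference model whose output sequence is
    closest (in Euclidean distance) to the measured sequence."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model_outputs, key=lambda label: dist(measurement, model_outputs[label]))

# Hypothetical bank: a fault-free model plus two fault models, each
# represented here by a precomputed output sequence.
bank = {
    "fault-free": [1.0, 1.0, 1.0],
    "fault-1":    [1.0, 0.5, 0.2],   # e.g. a valve sticking closed
    "fault-2":    [1.3, 1.6, 1.9],   # e.g. a drifting sensor
}
print(isolate_fault([1.05, 0.55, 0.25], bank))  # closest to "fault-1"
```

Running the fault models in parallel with the plant is what yields the early diagnosis the abstract describes: the distances are available at every sample, not only after a separate identification stage.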

  6. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    Science.gov (United States)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
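    The finite-horizon dynamic programming step can be illustrated on a toy Markov fault-growth chain: each action trades immediate operating cost against the probability that the fault grows, and backward induction yields the policy minimizing expected cost over the horizon. All states, actions, and numbers below are illustrative, not the paper's models:

```python
# Minimal finite-horizon dynamic programming sketch over a Markov fault-growth
# chain. States are fault sizes 0..2 (2 = failed, absorbing); each action
# trades immediate cost against the risk of fault growth.

N_STATES = 3          # 0 = healthy, 1 = degraded, 2 = failed
ACTIONS = {
    # action: (immediate cost per step, probability the fault grows one state)
    "aggressive": (0.0, 0.4),   # full performance, faster fault growth
    "gentle":     (1.0, 0.1),   # derated operation, slower growth
}
FAILURE_COST = 20.0   # cost accrued per step spent in the failed state
HORIZON = 5

def backward_induction():
    """Return (expected cost-to-go per state, optimal first-step action per state)."""
    value = [0.0] * N_STATES
    policy = []
    for _ in range(HORIZON):
        new_value, step_policy = [], []
        for s in range(N_STATES):
            if s == N_STATES - 1:            # absorbing failure state
                new_value.append(FAILURE_COST + value[s])
                step_policy.append(None)
                continue
            best = min(
                (cost + p_grow * value[s + 1] + (1 - p_grow) * value[s], a)
                for a, (cost, p_grow) in ACTIONS.items()
            )
            new_value.append(best[0])
            step_policy.append(best[1])
        value, policy = new_value, step_policy
    return value, policy

value, policy = backward_induction()
print(policy, [round(v, 2) for v in value])
```

With these numbers the long-horizon policy derates the system ("gentle") even from the healthy state, because the expected failure cost dominates the small immediate cost of derating.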

  7. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. 
Risk reduction is addressed by working with other organizations such as S

  8. Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis

    Science.gov (United States)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang

    2016-12-01

    The bearing failure, generating harmful vibrations, is one of the most frequent reasons for machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most previous studies on rolling bearing fault diagnosis can be categorized into two main families: kurtosis-based filter methods and wavelet-based shrinkage methods. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands, resulting in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates the fault information in the information band, especially under heavy noise; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints, and most of the feature information energy is thus eliminated unreasonably. Therefore, exploiting two pieces of prior information (i.e., that the coefficient sequence of the fault information in the wavelet basis is sparse, and that the kurtosis of the envelope spectrum can accurately evaluate the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis, coined KurWSD, is proposed in this paper. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is employed to solve this problem, and the fault information is extracted through the estimated wavelet coefficients. Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and utilizes the advantages of both family
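    The kurtosis criterion at the heart of this approach rewards impulsive content: a signal dominated by repeated impacts has heavy-tailed samples and hence high kurtosis, while a smooth tone does not. A minimal sketch computing kurtosis of raw signals (not the full envelope-spectrum weighting used by KurWSD):

```python
import numpy as np

def kurtosis(x):
    """Standard (non-excess) kurtosis: E[(x - mu)^4] / sigma^4."""
    x = np.asarray(x, dtype=float)
    centered = x - x.mean()
    return (centered ** 4).mean() / (centered ** 2).mean() ** 2

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048, endpoint=False)

sine = np.sin(2 * np.pi * 50 * t)      # smooth signal: kurtosis is exactly 1.5
impacts = np.zeros_like(t)
impacts[::128] = 5.0                   # periodic impulses, fault-like signature
noisy_fault = impacts + 0.1 * rng.standard_normal(t.size)

print(round(kurtosis(sine), 2), round(kurtosis(noisy_fault), 2))
```

The large gap between the two values is what makes kurtosis usable as a band-selection or weighting criterion in impulsive-fault diagnosis.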

  9. A fault diagnosis methodology for rolling element bearings based on advanced signal pretreatment and autoregressive modelling

    Science.gov (United States)

    Al-Bugharbee, Hussein; Trendafilova, Irina

    2016-05-01

    This study proposes a methodology for rolling element bearing fault diagnosis which gives a complete and highly accurate identification of the faults present. It has two main stages: signal pretreatment, which is based on several signal analysis procedures, and diagnosis, which uses a pattern-recognition process. The first stage is principally based on linear time invariant autoregressive modelling. One of the main contributions of this investigation is the development of a pretreatment signal analysis procedure which subjects the signal to noise cleaning by singular spectrum analysis and then stationarisation by differencing. So the signal is transformed to bring it close to a stationary one, rather than complicating the model to bring it closer to the signal. This type of pretreatment allows the use of a linear time invariant autoregressive model and improves its performance when the original signals are non-stationary. This contribution is at the heart of the proposed method, and the high accuracy of the diagnosis is a result of this procedure. The methodology emphasises the importance of preliminary noise cleaning and stationarisation, and it demonstrates that the information needed for fault identification is contained in the stationary part of the measured signal. The methodology is further validated using three different experimental setups, demonstrating very high accuracy for all of the applications. It is able to correctly classify nearly 100 percent of the faults with regard to their type and size. This high accuracy is the other important contribution of this methodology. Thus, this research suggests a highly accurate methodology for rolling element bearing fault diagnosis which is based on relatively simple procedures. This is also an advantage, as the simplicity of the individual processes ensures easy application and the possibility of automating the entire process.
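    The stationarisation-plus-autoregression idea can be reduced to a few lines: stationarise the signal (here simply by differencing; the paper additionally uses singular spectrum analysis for noise cleaning), then fit a linear AR model by least squares and use its coefficients as features for the pattern-recognition stage. A toy sketch on synthetic data with known AR coefficients:

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of an AR(order) model: x[n] = a1*x[n-1] + ... + e[n]."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(1)
n = 20000
e = 0.1 * rng.standard_normal(n)

# Synthetic "bearing" signal: stationary AR(2) with known coefficients 1.2, -0.5.
# (A non-stationary measurement would first be differenced, e.g. np.diff(x).)
x = np.zeros(n)
for i in range(2, n):
    x[i] = 1.2 * x[i - 1] - 0.5 * x[i - 2] + e[i]

coeffs = fit_ar(x, order=2)   # AR coefficients serve as diagnostic features
print(np.round(coeffs, 2))    # close to [1.2, -0.5]
```

Because the AR coefficients characterise the signal's dynamics, shifts in the fitted coefficients (or in the residual variance) between healthy and faulty recordings are what the subsequent classifier exploits.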

  10. Fuzzy inferencing to identify degree of interaction in the development of fault prediction models

    Directory of Open Access Journals (Sweden)

    Rinkaj Goyal

    2017-01-01

    One related objective is the identification of influential metrics in the development of fault prediction models. A fuzzy rule intrinsically represents a form of interaction between fuzzified inputs. Analysis of these rules establishes that Low and NOT (High) levels of inheritance-based metrics contribute significantly to the F-measure estimate of the model. Further, the Lack of Cohesion of Methods (LCOM) metric was found to be insignificant in this empirical study.

  11. Nonlinear dynamic modeling of a helicopter planetary gear train for carrier plate crack fault diagnosis

    OpenAIRE

    Fan Lei; Wang Shaoping; Wang Xingjian; Han Feng; Lyu Huawei

    2016-01-01

    Planetary gear train plays a significant role in a helicopter operation and its health is of great importance for the flight safety of the helicopter. This paper investigates the effects of a planet carrier plate crack on the dynamic characteristics of a planetary gear train, and thus finds an effective method to diagnose crack fault. A dynamic model is developed to analyze the torsional vibration of a planetary gear train with a cracked planet carrier plate. The model takes into consideratio...

  12. An Empirical Investigation of Predicting Fault Count, Fix Cost and Effort Using Software Metrics

    Directory of Open Access Journals (Sweden)

    Raed Shatnawi

    2016-02-01

    Full Text Available Software fault prediction is important in the software engineering field. Fault prediction helps engineers manage their efforts by identifying the most complex parts of the software where errors concentrate. Researchers usually study fault-proneness at the module level because most modules have zero faults, while a minority contain most of the faults in a system. In this study, we present methods and models for the prediction of fault count, fault-fix cost, and fault-fix effort, and compare the effectiveness of the different prediction models. This research proposes using a set of procedural metrics to predict three fault measures: fault count, fix cost and fix effort. Five regression models are used to predict the three fault measures. The study reports on three data sets published by NASA. The models for each fault measure are evaluated using the Root Mean Square Error, and a comparison amongst fault measures is conducted using the Relative Absolute Error. The models show promising results and provide a practical guide to help software engineers allocate resources during software testing and maintenance. The fix-cost models show equal or better performance than the fault count and effort models.
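    The two evaluation measures named here are straightforward to compute; the Relative Absolute Error normalises the summed absolute error by that of a naive predict-the-mean baseline. A small sketch with made-up fault counts:

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error of a set of predictions."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def relative_absolute_error(actual, predicted):
    """Summed absolute error relative to a predict-the-mean baseline.
    Values below 1.0 mean the model beats the naive baseline."""
    mean = sum(actual) / len(actual)
    err = sum(abs(a - p) for a, p in zip(actual, predicted))
    baseline = sum(abs(a - mean) for a in actual)
    return err / baseline

# Hypothetical fault counts per module versus one model's predictions.
actual = [0, 0, 1, 2, 5, 8]
predicted = [0, 1, 1, 2, 4, 7]
print(round(rmse(actual, predicted), 3),
      round(relative_absolute_error(actual, predicted), 3))
```

RMSE is suitable for comparing models on the same fault measure, while the scale-free RAE allows the cross-measure comparison (count vs. cost vs. effort) the abstract describes.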

  13. Dynamic system identification and model-based fault diagnosis of an industrial gas turbine prototype

    Energy Technology Data Exchange (ETDEWEB)

    Simani, S. [Universita di Ferrara (Italy). Dipartimento di Ingegneria; Fantuzzi, C. [Universita di Modena e Reggio Emilia (Italy). Dipartimento di Scienze e Metodi per l' Ingegneria

    2006-07-15

    In this paper, a model-based procedure exploiting analytical redundancy for the detection and isolation of faults in a gas turbine process is presented. The main point of the present work is to exploit system identification schemes in connection with observer and filter design procedures for diagnostic purposes. Integrated approaches to fault diagnosis combining linear model identification (black-box modelling) and output estimation (dynamic observers and Kalman filters) are particularly advantageous in terms of solution complexity and performance. This scheme is especially useful when robust solutions are considered for minimising the effects of modelling errors and noise, while maximising fault sensitivity. A model of the process under investigation is obtained by identification procedures, whilst the residual generation task is achieved by means of output observers and Kalman filters designed under both noise-free and noisy assumptions. The proposed tools have been tested on a single-shaft industrial gas turbine prototype model and evaluated using non-linear simulations based on the gas turbine data. (author)

  14. Phenomenological models of vibration signals for condition monitoring and fault diagnosis of epicyclic gearboxes

    Science.gov (United States)

    Lei, Yaguo; Liu, Zongyao; Lin, Jing; Lu, Fanbo

    2016-05-01

    Condition monitoring and fault diagnosis of epicyclic gearboxes using vibration signals are not as straightforward as for fixed-axis gearboxes, since epicyclic gearboxes behave quite differently in many aspects, such as their spectral structures. Aiming to present the spectral structures of vibration signals of epicyclic gearboxes, phenomenological models of these signals are developed using algebraic equations, and the spectral structures of the models are deduced using Fourier series analysis. In the phenomenological models, all possible vibration transfer paths from the gear meshing points to a fixed transducer, as well as the effects of angular shifts of planet gears on the spectral structures, are considered. Accordingly, the time-varying vibration transfer paths from sun-planet/ring-planet gear meshing points to the fixed transducer due to carrier rotation are represented by window functions with different amplitudes, and an angular shift in one planet gear position is introduced in the modeling process. After the theoretical derivations, three experiments are conducted on an epicyclic gearbox test rig and the spectral structures of the collected vibration signals are analyzed. As a result, the effects of angular shifts of planet gears are verified, and the phenomenological models of vibration signals when a local fault occurs on the sun gear and on the planet gear are validated, respectively. The experimental results demonstrate that the established phenomenological models are helpful for the condition monitoring and fault diagnosis of epicyclic gearboxes.
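The carrier-rotation modulation described above can be reproduced in a few lines: a toy mesh tone weighted by a window that peaks once per carrier revolution acquires sidebands spaced at the carrier frequency. The frequencies and window shape below are illustrative assumptions, not values from the experiments:

```python
import math

f_mesh, f_c = 100.0, 4.0          # gear-mesh and carrier frequencies (illustrative)
fs, T = 2000.0, 1.0               # sampling rate and record length
N = int(fs * T)

def window(t):
    """Transfer-path weighting: strongest when a planet passes the transducer."""
    return 1.0 - math.cos(2 * math.pi * f_c * t)

signal = [window(k / fs) * math.cos(2 * math.pi * f_mesh * k / fs) for k in range(N)]

def amplitude(freq):
    """Single-frequency Fourier magnitude (2/N-normalised)."""
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / N

for f in (f_mesh - f_c, f_mesh, f_mesh + f_c):
    print(f, round(amplitude(f), 3))
```

The unit-amplitude mesh tone splits into a component at f_mesh plus half-amplitude sidebands at f_mesh ± f_c, which is the kind of spectral structure the phenomenological models predict.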

  15. Predictive Modeling of a Two-Stage Gearbox towards Fault Detection

    Directory of Open Access Journals (Sweden)

    Edward J. Diehl

    2016-01-01

    Full Text Available This paper presents a systematic approach to the modeling and analysis of a benchmark two-stage gearbox test bed to characterize gear fault signatures when processed with harmonic wavelet transform (HWT analysis. The eventual goal of condition monitoring is to be able to interpret vibration signals from nonstationary machinery in order to identify the type and severity of gear damage. To advance towards this goal, a lumped-parameter model that can be analyzed efficiently is developed which characterizes the gearbox vibratory response at the system level. The model parameters are identified through correlated numerical and experimental investigations. The model fidelity is validated first by spectrum analysis, using constant speed experimental data, and secondly by HWT analysis, using nonstationary experimental data. Model prediction and experimental data are compared for healthy gear operation and a seeded fault gear with a missing tooth. The comparison confirms that both the frequency content and the predicted, relative response magnitudes match with physical measurements. The research demonstrates that the modeling method in combination with the HWT data analysis has the potential for facilitating successful fault detection and diagnosis for gearbox systems.

  16. High level organizing principles for display of systems fault information for commercial flight crews

    Science.gov (United States)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  17. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence matrix (GLCM) texture, and histogram of oriented gradient (HOG), has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, diagonal crack, along with the non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for the HOG feature extraction technique towards non-destructive quality inspection with an appreciably low false alarm rate as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competitive industrial growth.

  18. 3D Fault modeling of the active Chittagong-Myanmar fold belt, Bangladesh

    Science.gov (United States)

    Peterson, D. E.; Hubbard, J.; Akhter, S. H.; Shamim, N.

    2013-12-01

    The Chittagong-Myanmar fold belt (CMFB), located in eastern Bangladesh, eastern India and western Myanmar, accommodates east-west shortening at the India-Burma plate boundary. Oblique subduction of the Indian Plate beneath the Burma Plate since the Eocene has led to the development of a large accretionary prism complex, creating a series of north-south trending folds. A continuous sediment record from ~55 Ma to the present has been deposited in the Bengal Basin by the Ganges-Brahmaputra-Meghna rivers, providing an opportunity to learn about the history of tectonic deformation and activity in this fold-and-thrust belt. Surface mapping indicates that the fold-and-thrust belt is characterized by extensive N-S-trending anticlines and synclines in a belt ~150-200 km wide. Seismic reflection profiles from the Chittagong and Chittagong Hill Tracts, Bangladesh, indicate that the anticlines mapped at the surface narrow with depth and extend to ~3.0 seconds TWTT (two-way travel time), or ~6.0 km. The folds of Chittagong and Chittagong Hill Tracts are characterized by doubly plunging box-shaped en-echelon anticlines separated by wide synclines. The seismic data suggest that some of these anticlines are cored by thrust fault ramps that extend to a large-scale décollement that dips gently to the east. Other anticlines may be the result of detachment folding from the same décollement. The décollement likely deepens to the east and intersects with the northerly-trending, oblique-slip Kaladan fault. The CMFB region is bounded to the north by the north-dipping Dauki fault and the Shillong Plateau. The tectonic transition from a wide band of E-W shortening in the south to a narrow zone of N-S shortening along the Dauki fault is poorly understood. We integrate surface and subsurface datasets, including topography, geological maps, seismicity, and industry seismic reflection profiles, into a 3D modeling environment and construct initial 3D surfaces of the major faults in this

  19. Laboratory measurements of the relative permeability of cataclastic fault rocks: An important consideration for production simulation modelling

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hinai, Suleiman; Fisher, Quentin J. [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Al-Busafi, Bader [Petroleum Development of Oman, MAF, Sultanate of Oman, Muscat (Oman); Guise, Phillip; Grattoni, Carlos A. [Rock Deformation Research Limited, School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2008-06-15

    It is becoming increasingly common practice to model the impact of faults on fluid flow within petroleum reservoirs by applying transmissibility multipliers, calculated from the single-phase permeability of fault rocks, to the grid blocks adjacent to faults in production simulations. The multi-phase flow properties (e.g. relative permeability and capillary pressure) of fault rocks are not considered, because special core analysis had never previously been conducted on fault rock samples. Here, we partially fill this knowledge gap by presenting data from the first experiments to measure the gas relative permeability (k{sub rg}) of cataclastic fault rocks. The cataclastic faults were collected from an outcrop of Permo-Triassic sandstone in the Moray Firth, Scotland; the fault rocks are similar to those found within Rotliegend gas reservoirs in the UK southern North Sea. The relative permeability measurements were made using a gas pulse-decay technique on samples whose water saturation was varied using vapour chambers. The measurements indicate that if the same fault rocks were present in gas reservoirs of the southern Permian Basin, they would have k{sub rg} values of <0.02. Failure to take relative permeability effects into account could therefore lead to an overestimation of the transmissibility of faults within gas reservoirs by several orders of magnitude. Incorporation of these new results into a simplified production simulation model can explain the pressure evolution of a compartmentalised Rotliegend gas reservoir in the southern North Sea, offshore Netherlands, which could not easily be explained using only single-phase permeability data from fault rocks. (author)
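To see why this matters at the simulation stage, consider the commonly used series (harmonic-average) form of the single-phase transmissibility multiplier; the permeabilities and dimensions below are illustrative assumptions, not the paper's measurements:

```python
def fault_trans_multiplier(k_matrix, k_fault, t_fault, cell_len):
    """Single-phase transmissibility multiplier for a grid block of length
    cell_len crossed by a fault-rock layer of thickness t_fault, derived
    from the harmonic average of permeabilities in series."""
    k_eff = cell_len / ((cell_len - t_fault) / k_matrix + t_fault / k_fault)
    return k_eff / k_matrix

# Illustrative values: 100 mD sandstone, 0.01 mD cataclastic fault rock,
# 0.1 m of fault rock in a 100 m grid block
tm = fault_trans_multiplier(k_matrix=100.0, k_fault=0.01, t_fault=0.1, cell_len=100.0)
print(round(tm, 4))

# A further multiplicative krg < 0.02, as measured in the paper, would lower
# the effective gas transmissibility by nearly two more orders of magnitude
print(round(tm * 0.02, 6))
```

This is the sense in which ignoring relative permeability can overestimate fault transmissibility by orders of magnitude.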

  20. Synthetic modeling of a fluid injection-induced fault rupture with slip-rate dependent friction coefficient

    Science.gov (United States)

    Urpi, Luca; Rinaldi, Antonio Pio; Rutqvist, Jonny; Cappa, Frédéric; Spiers, Christopher J.

    2016-04-01

    Poro-elastic stress and effective stress reduction associated with deep underground fluid injection can potentially trigger shear rupture along pre-existing faults. We modeled an idealized CO2 injection scenario to assess the effects on faults of the first phase of a generic CO2 aquifer storage operation. We used coupled multiphase fluid flow and geomechanical numerical modeling to evaluate the stress and pressure perturbations induced by fluid injection and the response of a nearby normal fault. Slip-rate dependent friction and inertial effects have been taken into account during rupture. Contact elements have been used to account for the frictional behavior of the rupture plane. We investigated different injection-rate scenarios to induce rupture on the fault, employing various fault rheologies. Published laboratory data on CO2-saturated intact and crushed rock samples, representative of a potential target aquifer, sealing formation and fault gouge, have been used to define a scenario where different fault rheologies apply at different depths. Nucleation of fault rupture takes place at the bottom of the reservoir, in agreement with analytical poro-elastic stress calculations, considering injection-induced reservoir inflation and the tectonic scenario. For the stress state considered here, the first triggered rupture always produces the largest rupture length and slip magnitude, correlated with the fault rheology. Velocity weakening produces larger ruptures and generates larger-magnitude seismic events. Heterogeneous faults have also been considered, including velocity-weakening or velocity-strengthening sections inside and below the aquifer, with the upper sections being velocity-neutral. Nucleation of rupture in a velocity-strengthening section results in a limited rupture extension, both in terms of maximum slip and rupture length. For a heterogeneous fault with nucleation in a velocity-weakening section, the rupture may propagate into the overlying velocity
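The velocity-weakening versus velocity-strengthening distinction can be illustrated with the steady-state form of rate-dependent friction, mu_ss = mu0 + (a - b)*ln(V/V0); the parameter values below are illustrative assumptions, not the published laboratory data used in the paper:

```python
import math

def steady_state_friction(v, mu0=0.6, a_minus_b=-0.004, v0=1e-6):
    """Steady-state rate-dependent friction coefficient.
    a_minus_b < 0: velocity weakening (can host unstable, seismic slip);
    a_minus_b > 0: velocity strengthening (tends to creep stably)."""
    return mu0 + a_minus_b * math.log(v / v0)

# Friction change when slip accelerates from a slow creep rate (~1e-9 m/s)
# to a seismic slip rate (~1 m/s) on a weakening fault section
slow = steady_state_friction(1e-9, a_minus_b=-0.004)
fast = steady_state_friction(1.0,  a_minus_b=-0.004)
print(round(slow, 4), round(fast, 4))   # weakening: friction drops as slip speeds up
```

The friction drop on a weakening section releases stress dynamically, which is why weakening rheologies in the paper produce larger ruptures and larger-magnitude events.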

  1. Bounding Ground Motions for Hayward Fault Scenario Earthquakes Using Suites of Stochastic Rupture Models

    Science.gov (United States)

    Rodgers, A. J.; Xie, X.; Petersson, A.

    2007-12-01

    The next major earthquake in the San Francisco Bay area is likely to occur on the Hayward-Rodgers Creek Fault system. Attention on the southern Hayward section is appropriate given the upcoming 140th anniversary of the 1868 M 7 rupture coinciding with the estimated recurrence interval. This presentation will describe ground motion simulations for large (M > 6.5) earthquakes on the Hayward Fault using a recently developed elastic finite difference code and high-performance computers at Lawrence Livermore National Laboratory. Our code easily reads the recent USGS 3D seismic velocity model of the Bay Area developed in 2005 and used for simulations of the 1906 San Francisco and 1989 Loma Prieta earthquakes. Previous work has shown that the USGS model performs very well when used to model intermediate period (4-33 seconds) ground motions from moderate (M ~ 4-5) earthquakes (Rodgers et al., 2008). Ground motions for large earthquakes are strongly controlled by the hypocenter location, spatial distribution of slip, rise time and directivity effects. These are factors that are impossible to predict in advance of a large earthquake and lead to large epistemic uncertainties in ground motion estimates for scenario earthquakes. To bound this uncertainty, we are performing suites of simulations of scenario events on the Hayward Fault using stochastic rupture models following the method of Liu et al. (Bull. Seism. Soc. Am., 96, 2118-2130, 2006). These rupture models have spatially variable slip, rupture velocity, rise time and rake constrained by characterization of inferred finite fault ruptures and expert opinion. Computed ground motions show variability due to the variability in rupture models and can be used to estimate the average and spread of ground motion measures at any particular site. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No.W-7405-Eng-48. This is

  2. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed.... The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFM Suite. MFM applications in nuclear power systems are described by two examples, a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about...

  3. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    Science.gov (United States)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike the conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational implementation.
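A back-of-the-envelope companion to such a model is the cohesionless Coulomb criterion with effective normal stress: slip initiates when tau = mu*(sigma_n - p). The sketch below resolves stresses on a dipping fault in 2D and computes the critical overpressure; all stress values are illustrative assumptions, and it ignores the tip stress concentrations and fault-length effects the paper's model adds:

```python
import math

def critical_pressure(sigma_v, sigma_h, p0, dip_deg, mu=0.6):
    """Overpressure needed to bring a cohesionless fault of given dip to
    Coulomb failure (2D, principal stresses vertical and horizontal,
    compression positive, units MPa)."""
    d = math.radians(dip_deg)
    # the fault-plane normal makes angle d with the vertical sigma_v axis
    sn = sigma_v * math.cos(d) ** 2 + sigma_h * math.sin(d) ** 2
    tau = (sigma_v - sigma_h) * math.sin(d) * math.cos(d)
    p_crit = sn - tau / mu           # slip when tau = mu * (sn - p)
    return p_crit - p0

# Illustrative normal-faulting stress state at roughly 1.5 km depth
dp = critical_pressure(sigma_v=36.0, sigma_h=24.0, p0=15.0, dip_deg=60.0)
print(round(dp, 2), "MPa of overpressure would reactivate the fault")
```

The paper's "fault reactivation factor" plays a similar role, but for the whole finite fault rather than a single plane element.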

  4. Sensor integritY Management and Prognostics Technology with On-line fault Mitigation (SYMPTOM) for Improved Flight Safety of Commercial Aircraft Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test the Sensor integritY Management and Prognostics Technology with On-line fault Mitigation (SYMPTOM) system. The SYMPTOM assures...

  5. Model-Based Fault Detection and Isolation of a Liquid-Cooled Frequency Converter on a Wind Turbine

    Directory of Open Access Journals (Sweden)

    Peng Li

    2012-01-01

    Full Text Available With the rapid development of wind energy technologies and growth of installed wind turbine capacity in the world, the reliability of the wind turbine becomes an important issue for wind turbine manufacturers, owners, and operators. The reliability of the wind turbine can be improved by implementing advanced fault detection and isolation schemes. In this paper, an observer-based fault detection and isolation method for the cooling system in a liquid-cooled frequency converter on a wind turbine, which is built up as a scaled version in the laboratory, is presented. A dynamic model of the scaled cooling system is derived based on an energy balance equation. A fault analysis is conducted to determine the severity and occurrence rate of possible component faults and their end effects in the cooling system. A method using an unknown input observer is developed in order to detect and isolate the faults based on the developed dynamical model. The designed fault detection and isolation algorithm is applied to a set of measured experimental data in which different faults are artificially introduced to the scaled cooling system. The experimental results show that the different faults are successfully detected and isolated.
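A minimal sketch of model-based residual generation for a cooling loop: a first-order energy-balance model is run in parallel with the "plant", and a simulated pump-degradation fault makes the residual exceed a threshold. The model and all parameters are illustrative assumptions, not the authors' identified model or their unknown input observer:

```python
# First-order energy balance for a liquid cooling loop (illustrative values):
#   C * dT/dt = P_loss - h * q * (T - T_amb),   q = coolant flow
C, h, T_amb, P_loss = 500.0, 200.0, 20.0, 400.0
dt = 1.0

def step(T, q):
    """Advance the loop temperature by one explicit-Euler step."""
    return T + dt / C * (P_loss - h * q * (T - T_amb))

q_nominal = 1.0
T_model = T_plant = 20.0
alarm_at = None
for k in range(600):
    q_actual = 0.5 if k >= 300 else q_nominal   # pump degradation fault at k = 300
    T_plant = step(T_plant, q_actual)           # "measured" plant temperature
    T_model = step(T_model, q_nominal)          # model running in parallel
    if alarm_at is None and abs(T_plant - T_model) > 0.5:
        alarm_at = k
print("fault detected at step", alarm_at)       # → fault detected at step 301
```

An unknown input observer refines this idea by decoupling the residual from disturbances (e.g. ambient temperature changes) so that only genuine faults trip the threshold.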

  6. Modeling of stress-triggered faulting and displacement magnitude along Agenor Linea, Europa

    Science.gov (United States)

    Nahm, A.; Cameron, M. E.; Smith-Konter, B. R.; Pappalardo, R. T.

    2012-12-01

    We investigate the relationship between shear and normal stresses at Agenor Linea (AL) to better understand the role of tidal stress sources and implications for faulting on Europa. AL is a ~1500 km long, E-W trending, 20-30 km wide zone of geologically young deformation located in the southern hemisphere, and it forks into two branches at its eastern end. Based on photogeological evidence and stress orientation predictions, AL is primarily a right-lateral strike slip fault and may have accommodated up to 20 km of right-lateral slip. We compute tidal shear and normal stresses along present-day AL using SatStress, a numerical code that calculates tidal stresses at any point on the surface of a satellite for both diurnal and non-synchronous rotation (NSR) stresses. We adopt model parameters appropriate for Europa with a spherically symmetric, 20 km thick ice shell underlain by a global subsurface ocean and assume a coefficient of friction μ = 0.6. Along AL, shear stresses are primarily right-lateral (~1.8 MPa), while normal stresses are predominantly compressive along the west side of the structure (~0.7 MPa) and tensile along the east side (~2.9 MPa). Failure along AL is assessed using the Coulomb failure criterion, which states that shear failure occurs when the shear stress exceeds the frictional resistance of the fault. Where fault segments meet these conditions for shear failure, coseismic displacements are determined (assuming complete stress drop). We calculate shallow displacements as large as ~50 m at 1 km depth and ~10 m at 3 km depth. Triggered stresses from coseismic fault slip may also contribute to the total slip. We investigate the role of stress triggering by computing the change in Coulomb failure stress (ΔCFS) along AL. Where slip has occurred, negative ΔCFS is calculated; positive ΔCFS values indicate segments where failure is promoted. Positive ΔCFS is calculated at the western tip and the intersection of the branches with the main fault at a
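The Coulomb failure stress change used in this analysis has a compact form, dCFS = d_tau + mu*d_sigma_n (with tension-positive normal stress, so unclamping promotes failure). The sketch below applies it with values of the order quoted above; the exact numbers and sign conventions are assumptions for illustration:

```python
def delta_cfs(d_tau, d_sigma_n, mu=0.6):
    """Change in Coulomb failure stress on a fault plane.
    d_tau: shear stress change resolved in the slip direction (positive
    promotes slip); d_sigma_n: normal stress change, tension positive
    (positive values unclamp the fault). Positive result promotes failure."""
    return d_tau + mu * d_sigma_n

# Illustrative values in MPa, of the order quoted for Agenor Linea
east_side = delta_cfs(1.8, 2.9)    # tensile side: failure strongly promoted
west_side = delta_cfs(1.8, -0.7)   # compressive side: partly clamped
print(round(east_side, 2), round(west_side, 2))   # → 3.54 1.38
```

Mapping where this quantity is positive after a slip event is exactly how segments promoted toward failure, such as the western tip and branch intersections, are identified.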

  7. Fault tolerant operation of switched reluctance machine

    Science.gov (United States)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also in electric auxiliary systems replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady-state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped-parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  8. A Ship Propulsion System Model for Fault-tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    The propulsion system model is presented in two versions: the first consists of one engine and one propeller, and the other consists of two engines and their corresponding propellers placed in parallel in the ship. The corresponding programs are developed and are available....

  9. Implementation of a Fractional Model-Based Fault Detection Algorithm into a PLC Controller

    Science.gov (United States)

    Kopka, Ryszard

    2014-12-01

    This paper presents results related to the implementation of model-based fault detection and diagnosis procedures in a typical PLC controller. To construct the mathematical model and to implement the PID regulator, non-integer-order (fractional) differential/integral calculus was used. Such an approach allows for more exact control of the process and more precise modelling, which is crucial in model-based diagnostic methods. The theoretical results were verified on a real object in the form of a supercapacitor connected to a PLC controller by a dedicated electronic circuit controlled directly from the PLC outputs.
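A common numerical route to such non-integer-order operators is the Grünwald-Letnikov approximation, whose binomial weights follow a simple recurrence well suited to resource-constrained targets like a PLC. The sketch below is an illustration, not the authors' implementation; it checks the half-derivative of f(t) = t against the known closed form 2*sqrt(t/pi):

```python
import math

def gl_fractional_derivative(f_vals, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of a
    sampled signal f_vals with step h; returns the value at the last sample.
    Weights follow the recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c, acc = 1.0, f_vals[-1]
    for k in range(1, len(f_vals)):
        c *= 1.0 - (alpha + 1.0) / k
        acc += c * f_vals[-1 - k]
    return acc / h ** alpha

# Half-derivative of f(t) = t at t = 1; the exact value is 2/sqrt(pi) ≈ 1.1284
h = 1e-3
ts = [k * h for k in range(int(1 / h) + 1)]
approx = gl_fractional_derivative(ts, alpha=0.5, h=h)
print(round(approx, 4), round(2 / math.sqrt(math.pi), 4))
```

With alpha = 1 the weights collapse to (1, -1, 0, ...), recovering the ordinary first difference, which makes the scheme easy to sanity-check on a controller.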

  10. The study of hybrid model identification, computation analysis and fault location for nonlinear dynamic circuits and systems

    Institute of Scientific and Technical Information of China (English)

    XIE Hong; HE Yi-gang; ZENG Guan-da

    2006-01-01

    This paper presents hybrid model identification for a class of nonlinear circuits and systems via a combination of the block-pulse function transform with the Volterra series. After discussing the method to establish the hybrid model and introducing the hybrid model identification, a set of related formulas is derived for calculating the hybrid model and computing the Volterra series solution of nonlinear dynamic circuits and systems. In order to significantly reduce the computation cost of fault location, the paper presents a new fault diagnosis method based on multiple preset models that can be realized online. An example of identification simulation and fault diagnosis is given. Results show that the method has high accuracy and efficiency for fault location in nonlinear dynamic circuits and systems.
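The block-pulse function transform amounts to projecting a signal onto piecewise-constant basis functions, i.e. replacing it by its interval averages. A minimal sketch of the expansion and reconstruction (illustrative only, not the authors' identification code):

```python
def block_pulse_coeffs(f, T, m, samples_per_block=100):
    """Block-pulse expansion coefficients of f on [0, T]: the i-th coefficient
    is the mean of f over the i-th of m equal subintervals (midpoint sampling)."""
    d = T / m
    coeffs = []
    for i in range(m):
        s = sum(f(i * d + (j + 0.5) * d / samples_per_block)
                for j in range(samples_per_block))
        coeffs.append(s / samples_per_block)
    return coeffs

def reconstruct(coeffs, T, t):
    """Evaluate the piecewise-constant block-pulse approximation at time t."""
    m = len(coeffs)
    i = min(int(t / T * m), m - 1)
    return coeffs[i]

# Four-block expansion of f(t) = t^2 on [0, 1]
cs = block_pulse_coeffs(lambda t: t * t, T=1.0, m=4)
print([round(c, 4) for c in cs])
```

Because block-pulse functions are orthogonal and integrate trivially, convolution-heavy Volterra terms reduce to matrix algebra on these coefficients, which is what makes the hybrid model cheap to compute.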

  11. Knowledge Model: Project Knowledge Management

    DEFF Research Database (Denmark)

    Durao, Frederico; Dolog, Peter; Grolin, Daniel

    2009-01-01

    The Knowledge model for project management serves several goals: Introducing relevant concepts of the project management area for software development (Section 1). Reviewing and understanding the real case requirements from the industrial perspective (Section 2). Giving some preliminary suggestions...

  13. Model-Based Fault Detection and Isolation of a Liquid-Cooled Frequency Converter on a Wind Turbine

    DEFF Research Database (Denmark)

    Li, Peng; Odgaard, Peter Fogh; Stoustrup, Jakob

    2012-01-01

    system is derived based on energy balance equation. A fault analysis is conducted to determine the severity and occurrence rate of possible component faults and their end effects in the cooling system. A method using unknown input observer is developed in order to detect and isolate the faults based......With the rapid development of wind energy technologies and growth of installed wind turbine capacity in the world, the reliability of the wind turbine becomes an important issue for wind turbine manufactures, owners, and operators. The reliability of the wind turbine can be improved by implementing...... advanced fault detection and isolation schemes. In this paper, an observer-based fault detection and isolation method for the cooling system in a liquid-cooled frequency converter on a wind turbine which is built up in a scalar version in the laboratory is presented. A dynamic model of the scale cooling...

  14. Fault diagnostics in power transformer model winding for different alpha values

    Directory of Open Access Journals (Sweden)

    G.H. Kusumadevi

    2015-09-01

    Full Text Available Transient overvoltages appearing at the line terminal of power transformer HV windings can cause failure of the winding insulation. The failure can be from winding to ground or between turns or sections of the winding. In most cases, failure from winding to ground can be detected by changes in the wave shape of the surge voltage appearing at the line terminal. However, detection of insulation failure between turns may be difficult due to the intricacies involved in the identification of faults. In this paper, simulation investigations carried out on a power transformer model winding for identifying faults between turns of the winding are reported. The power transformer HV winding has been represented by 8 sections, 16 sections and 24 sections. The neutral current waveform has been analyzed for the same model winding represented by different numbers of sections. The values of α (the square root of the ratio of total ground capacitance to total series capacitance of the winding) considered for the windings are 5, 10 and 20. A standard lightning impulse voltage (1.2/50 μs) wave shape has been considered for the analysis. Computer simulations have been carried out using the software PSPICE version 10.0. Neutral current and frequency response analysis methods have been used for identification of faults within sections of the transformer model winding.
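The role of α is visible in the classical initial (capacitive) voltage distribution of a grounded-neutral winding, u(x) = sinh(α(1 - x))/sinh(α): larger α concentrates the impulse stress on the line-end turns, where turn-to-turn faults tend to originate. A small sketch for the α values used in the paper (the formula is the standard textbook distribution, not taken from this article):

```python
import math

def initial_distribution(x, alpha):
    """Initial impulse-voltage distribution along a grounded-neutral winding;
    x = 0 at the line terminal, x = 1 at the neutral,
    alpha = sqrt(total ground capacitance / total series capacitance)."""
    return math.sinh(alpha * (1.0 - x)) / math.sinh(alpha)

for alpha in (5, 10, 20):
    # per-unit voltage across the first 10% of turns (0.1 p.u. if uniform)
    drop = initial_distribution(0.0, alpha) - initial_distribution(0.1, alpha)
    print(alpha, round(drop, 3))
```

At α = 20 most of the impulse voltage appears across the first tenth of the winding, which is why the inter-section fault signatures studied in the paper depend so strongly on α.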

  15. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

    Digitalization of instrumentation and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. It does, however, make the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components, and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several drawbacks. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and use model checking to automate the reasoning process of FTA.
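The flavour of a temporal gate can be conveyed by evaluating a fault tree over event failure times, where a priority-AND (PAND) gate fires only if its first input fails before its second. This toy evaluator uses made-up events and is an illustrative sketch, not the TCTL semantics introduced in the paper:

```python
import math

INF = math.inf   # "never fails"

# Gates map input failure times to the output failure time.
def g_or(*times):   return min(times)
def g_and(*times):  return max(times)
def g_pand(a, b):
    """Priority-AND: the output fails only if input a fails before input b."""
    return max(a, b) if a < b else INF

# Hypothetical system: top event = PAND(pump, sensor) OR AND(bus_a, bus_b)
pump, sensor = 3.0, 7.0
bus_a, bus_b = INF, 5.0
top = g_or(g_pand(pump, sensor), g_and(bus_a, bus_b))
print("top event occurs at t =", top)   # → top event occurs at t = 7.0
```

A static AND would count the reversed ordering (sensor before pump) as a failure too; encoding the ordering explicitly is exactly the ambiguity the paper's temporal gates remove.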

  16. A Hamiltonian Approach to Fault Isolation in a Planar Vertical Take–Off and Landing Aircraft Model

    Directory of Open Access Journals (Sweden)

    Rodriguez-Alfaro Luis H.

    2015-03-01

    Full Text Available The problem of fault detection and isolation in a class of nonlinear systems having a Hamiltonian representation is considered. In particular, a model of a planar vertical take-off and landing aircraft with sensor and actuator faults is studied. A Hamiltonian representation is derived from an Euler-Lagrange representation of the system model. In this form, nonlinear decoupling is applied in order to obtain subsystems with (as far as possible) specific fault sensitivity properties. The resulting decoupled subsystem is represented as a Hamiltonian system and observer-based residual generators are designed. The results are presented through simulations to show the effectiveness of the proposed approach.

  17. Using the 3D active fault model to estimate the surface deformation, a study on HsinChu area, Taiwan.

    Science.gov (United States)

    Lin, Y. K.; Ke, M. C.; Ke, S. S.

    2016-12-01

    A fault is commonly considered active if it has moved one or more times in the last 10,000 years and is likely to produce another earthquake in the future. The relationship between fault reactivation and surface deformation has received much attention since the 1999 Chi-Chi earthquake (M=7.2). Investigations of well-known disastrous earthquakes in recent years indicate that surface deformation is controlled by the 3D geometric shape of the fault. Surface deformation can severely damage critical infrastructure such as buildings, roads, and power, water and gas lines, so pre-disaster risk assessment based on a 3D active fault model is very important for reducing the economic losses, injuries and deaths caused by large earthquakes. Building the 3D active fault model involves (1) field investigation, (2) digitizing profile data and (3) constructing the 3D model. In this research, we first tracked the location of the fault scarp in the field, then combined balanced seismic profiles and historical earthquake data to build the underground fault plane model using the SKUA-GOCAD program. Finally, we compared the results from a trishear model (written by Richard W. Allmendinger, 2012) and the PFC-3D program (Itasca) and obtained the calculated extent of the deformation area. From the analysis of the surface deformation area produced by the Hsin-Chu Fault, we conclude that the damage zone approaches 68 286 m, the magnitude is 6.43 and the offset is 0.6 m; based on these values we estimate the population casualties and building damage from an M=6.43 earthquake in the Hsin-Chu area, Taiwan. In the future, in order to apply the model accurately to earthquake disaster prevention, we need to further consider groundwater effects and the soil-structure interaction induced by faulting.

  18. The Derivation of Fault Volumetric Properties from 3D Trace Maps Using Outcrop Constrained Discrete Fracture Network Models

    Science.gov (United States)

    Hodgetts, David; Seers, Thomas

    2015-04-01

    Fault systems are important structural elements within many petroleum reservoirs, acting as potential conduits, baffles or barriers to hydrocarbon migration. Large, seismic-scale faults often serve as reservoir-bounding seals, forming structural traps which have proved to be prolific plays in many petroleum provinces. Though inconspicuous within most seismic datasets, smaller subsidiary faults, commonly within the damage zones of parent structures, may also play an important role. These smaller faults typically form narrow, tabular low-permeability zones which serve to compartmentalize the reservoir, negatively impacting hydrocarbon recovery. Though considerable improvements have been made in the visualization of reservoir-scale fault systems with the advent of 3D seismic surveys, the occlusion of smaller-scale faults in such datasets is a source of significant uncertainty during prospect evaluation. The limited capacity of conventional subsurface datasets to probe the spatial distribution of these smaller-scale faults has given rise to a large number of outcrop-based studies, allowing their intensity, connectivity and size distributions to be explored in detail. Whilst these studies have yielded an improved theoretical understanding of the style and distribution of sub-seismic-scale faults, the ability to transform observations from outcrop to quantities that are relatable to reservoir volumes remains elusive. These issues arise from the fact that outcrops essentially offer a pseudo-3D window into the rock volume, making the extrapolation of surficial fault properties such as areal density (fracture length per unit area: P21) to equivalent volumetric measures (i.e. fracture area per unit volume: P32) applicable to fracture modelling extremely challenging. Here, we demonstrate an approach which harnesses advances in the extraction of 3D trace maps from surface reconstructions using calibrated image sequences, in combination with a novel semi
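
    The P21-to-P32 link mentioned above can be made concrete with a small Monte-Carlo check (this is a generic stereology sketch, not the paper's method): for penny-shaped fractures orthogonal to the sampling plane, areal intensity P21 equals volumetric intensity P32 exactly, i.e. the conversion factor C in P32 = C * P21 is 1; oblique orientation distributions give C between 1 and pi/2. All geometry values are hypothetical.

```python
import math
import random

random.seed(42)

L = 100.0   # cube side (m), hypothetical sampling volume
r = 2.0     # fracture (disc) radius (m)
n = 20000   # number of fractures, all orthogonal to the plane z = 0

volume = L ** 3
p32 = n * math.pi * r**2 / volume  # exact volumetric intensity (area/volume)

# Sample the trace map on the plane z = 0: a disc whose centre lies at
# height z with |z| < r leaves a chord (trace) of length 2*sqrt(r^2 - z^2).
total_trace = 0.0
for _ in range(n):
    z = random.uniform(-L / 2, L / 2)
    if abs(z) < r:
        total_trace += 2.0 * math.sqrt(r**2 - z**2)
p21 = total_trace / L**2           # areal intensity (trace length/area)

print(f"P32 = {p32:.4f}, P21 estimate = {p21:.4f}, C = {p32 / p21:.3f}")
```

The Monte-Carlo C estimate should land within a few percent of 1; with field orientation distributions the constant must instead be computed from the measured fracture poles.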

  19. Modeling the evolution of the lower crust with laboratory derived rheological laws under an intraplate strike slip fault

    Science.gov (United States)

    Zhang, X.; Sagiya, T.

    2015-12-01

    The earth's crust can be divided into the brittle upper crust and the ductile lower crust based on the deformation mechanism. Observations show that heterogeneities in the lower crust are associated with fault zones. One candidate mechanism for strain concentration is shear heating in the lower crust, which has been considered by theoretical studies of interplate faults [e.g. Thatcher & England 1998, Takeuchi & Fialko 2012]. On the other hand, almost no studies have been done for intraplate faults, which are generally much less mature than interplate faults and are characterized by their finite lengths and slow displacement rates. To understand the structural characteristics of the lower crust and its temporal evolution on a geological time scale, we conduct a 2-D numerical experiment on an intraplate strike-slip fault. The lower crust is modeled as a 20 km thick viscous layer overlain by a rigid upper crust that has a steady relative motion across a vertical strike-slip fault. Strain rate in the lower crust is assumed to be the sum of dislocation creep and diffusion creep components, each of which follows an experimental flow law. The geothermal gradient is assumed to be 25 K/km. We have tested different total velocities in the model: for the intraplate fault, the total velocity is less than 1 mm/yr, and for comparison we use 30 mm/yr for interplate faults. Results show that at a low slip rate, dislocation creep dominates in the shear zone near the intraplate fault's deeper extension while diffusion creep dominates outside the shear zone. This differs from the case of interplate faults, where dislocation creep dominates the whole region. Because of the power-law effect of dislocation creep, the effective viscosity in the shear zone under an intraplate fault is much higher than that under an interplate fault; the shear zone under an intraplate fault therefore has a much higher viscosity and lower shear stress. Viscosity contrast between
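
    The mechanism competition described above can be sketched with a composite flow law, eps_total = B_disl * sigma**n + B_diff * sigma (dislocation power-law plus Newtonian diffusion creep). The constants below are hypothetical (temperature and grain size folded in), tuned only to place the mechanism crossover near ~10 MPa; they are not the study's values.

```python
import math

n = 3.5            # power-law stress exponent (dislocation creep)
B_disl = 1.6e-40   # Pa^-n s^-1, hypothetical lumped prefactor
B_diff = 5.0e-23   # Pa^-1 s^-1, hypothetical lumped prefactor

def stress_for(rate, lo=1.0, hi=1e10):
    """Invert the composite law for stress (Pa) by bisection in log space."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if B_disl * mid**n + B_diff * mid < rate:
            lo = mid
        else:
            hi = mid
    return mid

for label, rate in [("interplate (~30 mm/yr)", 1e-14),
                    ("intraplate (<1 mm/yr)", 1e-17)]:
    s = stress_for(rate)                  # stress sustaining this strain rate
    disl = B_disl * s**n                  # dislocation-creep contribution
    diff = B_diff * s                     # diffusion-creep contribution
    dominant = "dislocation" if disl > diff else "diffusion"
    eta = s / (2.0 * rate)                # effective viscosity
    print(f"{label}: stress {s/1e6:.1f} MPa, {dominant} creep dominates, "
          f"eta ~ {eta:.1e} Pa s")
```

At the fast (interplate-like) strain rate the power-law term dominates, while at the slow (intraplate-like) rate the linear diffusion term takes over at much lower stress, mirroring the mechanism switch reported in the abstract.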

  20. Nucleation process of magnitude 2 repeating earthquakes on the San Andreas Fault predicted by rate-and-state fault models with SAFOD drill core data

    Science.gov (United States)

    Kaneko, Yoshihiro; Carpenter, Brett M.; Nielsen, Stefan B.

    2017-01-01

    Recent laboratory shear-slip experiments conducted on a nominally flat frictional interface reported the intriguing details of a two-phase nucleation of stick-slip motion that precedes the dynamic rupture propagation. This behavior was subsequently reproduced by a physics-based model incorporating laboratory-derived rate-and-state friction laws. However, applying the laboratory and theoretical results to the nucleation of crustal earthquakes remains challenging due to poorly constrained physical and friction properties of fault zone rocks at seismogenic depths. Here we apply the same physics-based model to simulate the nucleation process of crustal earthquakes using unique data acquired during the San Andreas Fault Observatory at Depth (SAFOD) experiment and new and existing measurements of friction properties of SAFOD drill core samples. Using this well-constrained model, we predict what the nucleation phase will look like for magnitude ~2 repeating earthquakes on segments of the San Andreas Fault at a 2.8 km depth. We find that despite up to 3 orders of magnitude difference in the physical and friction parameters and stress conditions, the behavior of the modeled nucleation is qualitatively similar to that of laboratory earthquakes, with the nucleation consisting of two distinct phases. Our results further suggest that precursory slow slip associated with the earthquake nucleation phase may be observable in the hours before the occurrence of the magnitude ~2 earthquakes by strain measurements close (a few hundred meters) to the hypocenter, in a position reached by the existing borehole.
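
    The essential ingredient of such models, accelerating pre-seismic slip under rate-and-state friction, can be sketched with a quasi-static spring-slider using the aging law: below the critical stiffness, the slider's velocity runs away from the loading rate. All parameter values below are illustrative lab-scale numbers, not SAFOD-constrained data.

```python
import math

sigma = 10e6         # normal stress, Pa
a, b = 0.005, 0.010  # rate-and-state parameters (rate-weakening: b > a)
Dc = 1e-5            # characteristic slip distance, m
f0, v0 = 0.6, 1e-6   # reference friction coefficient and velocity
vl = 1e-6            # loading velocity, m/s

k_crit = (b - a) * sigma / Dc
k = 0.5 * k_crit     # spring softer than critical -> unstable sliding

theta = 2.0 * Dc / vl                                 # perturbed state (steady: Dc/vl)
tau0 = sigma * (f0 + b * math.log(v0 * theta / Dc))   # chosen so v(0) = vl
t, slip, v = 0.0, 0.0, vl

while v < 1e-2 and t < 1e6:   # stop once slip rate reaches a "seismic" level
    tau = tau0 + k * (vl * t - slip)                  # elastic loading stress
    v = v0 * math.exp((tau / sigma - f0 - b * math.log(v0 * theta / Dc)) / a)
    dt = 0.05 * min(Dc / v, theta)                    # adaptive explicit step
    slip += v * dt
    theta += (1.0 - v * theta / Dc) * dt              # aging law
    t += dt

print(f"nucleation: v reached {v:.2e} m/s after {t:.0f} s, slip {slip*1e3:.2f} mm")
```

The run exits when the slip rate has accelerated four orders of magnitude above the loading rate, the quasi-static analogue of the nucleation-to-rupture transition discussed in the abstract.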

  1. Toward a Model-Based Approach for Flight System Fault Protection

    Science.gov (United States)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).

  3. Data-driven technology for engineering systems health management design approach, feature construction, fault diagnosis, prognosis, fusion and decisions

    CERN Document Server

    Niu, Gang

    2017-01-01

    This book introduces condition-based maintenance (CBM)/data-driven prognostics and health management (PHM) in detail, first explaining the PHM design approach from a systems engineering perspective, then summarizing and elaborating on the data-driven methodology for feature construction, as well as feature-based fault diagnosis and prognosis. The book includes a wealth of illustrations and tables to help explain the algorithms, as well as practical examples showing how to use this tool to solve situations for which analytic solutions are poorly suited. It equips readers to apply the concepts discussed in order to analyze and solve a variety of problems in PHM system design, feature construction, fault diagnosis and prognosis.

  4. Two methods for modeling vibrations of planetary gearboxes including faults: Comparison and validation

    Science.gov (United States)

    Parra, J.; Vicuña, Cristián Molina

    2017-08-01

    Planetary gearboxes are important components of many industrial applications. Vibration analysis can increase their lifetime and prevent expensive repairs and safety concerns. However, an effective analysis is only possible if the vibration features of planetary gearboxes are properly understood. In this paper, models are used to study the frequency content of planetary gearbox vibrations under non-fault and different fault conditions. Two different models are considered: a phenomenological model, which is an analytical-mathematical formulation based on observation, and a lumped-parameter model, which is based on the solution of the equations of motion of the system. The results of the two models are not directly comparable, because the phenomenological model provides the vibration in a fixed radial direction, as measured by a vibration sensor mounted on the outer part of the ring gear, whereas the lumped-parameter model provides the vibrations in a rotating reference frame fixed to the carrier. To overcome this, a function that transforms the lumped-parameter model solutions to the fixed reference frame is presented. Finally, comparisons of the results from both model perspectives and experimental measurements are presented.
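
    The spectral lines that such models place (and modulate) in the fixed-sensor spectrum follow from standard planetary kinematics. The following computes the characteristic frequencies for a single-stage gearbox with a stationary ring gear; the tooth counts and input speed are hypothetical (chosen to satisfy the assembly constraint Zr = Zs + 2*Zp).

```python
Zs, Zp, Zr = 24, 30, 84   # sun, planet, ring tooth counts (illustrative; 84 = 24 + 2*30)
P = 4                     # number of planets
f_sun = 20.0              # sun (input) shaft frequency, Hz

f_carrier = f_sun * Zs / (Zs + Zr)   # carrier rotation (ring fixed)
f_mesh = f_carrier * Zr              # gear mesh frequency
f_sun_fault = P * f_mesh / Zs        # local fault on the sun gear (P planet passes)
f_planet_fault = f_mesh / Zp         # local fault on one planet gear
f_ring_fault = P * f_mesh / Zr       # local fault on the ring gear

print(f"carrier {f_carrier:.2f} Hz, mesh {f_mesh:.2f} Hz")
print(f"fault lines: sun {f_sun_fault:.2f}, planet {f_planet_fault:.2f}, "
      f"ring {f_ring_fault:.2f} Hz")
```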

  5. A Parallel Decision Model Based on Support Vector Machines and Its Application to Fault Diagnosis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu(阎威武); Shao Huihe

    2004-01-01

    Many industrial process systems are becoming more and more complex and are characterized by distributed features. To ensure that such a system operates in working order, distributed parameter values are often inspected from subsystems or at different points in order to judge the working conditions of the system and make global decisions. In this paper, a parallel decision model based on Support Vector Machines (PDMSVM) is introduced and applied to distributed fault diagnosis in industrial processes. PDMSVM is convenient for information fusion in distributed systems and performs well in fault diagnosis with distributed features. It makes decisions based on the synthesized information of the subsystems and takes advantage of the Support Vector Machine. Decisions made by PDMSVM are therefore highly reliable and accurate.
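
    The fusion idea can be sketched without the SVM machinery: train one classifier per subsystem and sum the signed decision margins for the global verdict. Here perceptrons stand in for the per-subsystem SVMs, and the data, features and shifts are synthetic; the paper's PDMSVM uses true support vector machines.

```python
import random

random.seed(0)

def make_subsystem_data(shift, n=200):
    """Synthetic 2-feature samples per subsystem; label -1 = normal, +1 = fault."""
    data = []
    for _ in range(n):
        label = random.choice([-1, 1])
        x = (label * shift + random.gauss(0, 0.5),
             label * 1.0 + random.gauss(0, 0.5))
        data.append((x, label))
    return data

def train_perceptron(data, epochs=20):
    """Minimal linear classifier standing in for a subsystem SVM."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, y in data:
            if y * (w[0] * x[0] + w[1] * x[1] + bias) <= 0:
                w[0] += y * x[0]
                w[1] += y * x[1]
                bias += y
    return w, bias

subsystems = [make_subsystem_data(s) for s in (0.5, 1.0, 1.5)]
models = [train_perceptron(d) for d in subsystems]

def fused_decision(xs):
    """Sum the per-subsystem margins; the sign gives the global fault verdict."""
    score = 0.0
    for (w, bias), x in zip(models, xs):
        score += w[0] * x[0] + w[1] * x[1] + bias
    return 1 if score > 0 else -1

# Classify a clearly faulty (+1) sample as seen by all three subsystems.
print(fused_decision([(1.0, 1.2), (1.1, 0.9), (1.4, 1.0)]))
```

Summing margins (rather than majority-voting hard labels) lets a confident subsystem outweigh uncertain ones, which is one common fusion choice for distributed diagnosis.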

  6. FAULT DIAGNOSIS APPROACH BASED ON HIDDEN MARKOV MODEL AND SUPPORT VECTOR MACHINE

    Institute of Scientific and Technical Information of China (English)

    LIU Guanjun; LIU Xinmin; QIU Jing; HU Niaoqing

    2007-01-01

    Aiming to solve the problems of machine learning in fault diagnosis, a diagnosis approach is proposed based on the hidden Markov model (HMM) and the support vector machine (SVM). HMM usually describes the intra-class measure well and is good at dealing with continuous dynamic signals. SVM expresses inter-class difference effectively and has excellent classification ability. The approach is built on the merits of both HMM and SVM. An experiment was conducted on the transmission system of a helicopter: with features extracted from the vibration signals of the gearbox, the HMM-SVM based diagnostic approach was trained and used to monitor and diagnose the gearbox's faults. The results show that this method achieves higher diagnostic accuracy than HMM-based and SVM-based diagnosis methods with small training samples.

  7. Model based, detailed fault analysis in the CERN PS complex equipment

    CERN Document Server

    Beharrell, M; Bouché, J M; Cupérus, J; Lelaizant, M; Mérard, L

    1995-01-01

    In the CERN PS Complex of accelerators, about a thousand pieces of equipment of various types (power converters, RF cavities, beam measurement devices, vacuum systems, etc.) are controlled using the so-called Control Protocol, already described at previous Conferences. This Protocol, a model-based equipment access standard, provides, amongst other facilities, a uniform and structured fault description and reporting feature. The faults are organized in categories according to their severity and are presented at two levels: the first level is global and identical for all devices; the second level is very detailed and adapted to the peculiarities of each single device. All the relevant information is provided by the equipment specialists and is appropriately stored in static and real-time databases; in this way a unique set of data-driven application programs can always cope with existing and newly added equipment. Two classes of applications have been implemented, the first one intended for control room alarm purposes,...

  8. Feature Extraction Method of Rolling Bearing Fault Signal Based on EEMD and Cloud Model Characteristic Entropy

    Directory of Open Access Journals (Sweden)

    Long Han

    2015-09-01

    Full Text Available The randomness and fuzziness that exist in rolling bearings when faults occur result in uncertainty in the acquired signals and reduce the accuracy of signal feature extraction. To solve this problem, this study proposes a new method in which cloud model characteristic entropy (CMCE) is used as the signal's characteristic eigenvalue. This approach overcomes the parameter-selection complexity of traditional entropies when solving uncertainty problems. First, the acoustic emission signals collected experimentally under normal and damaged rolling bearing states are decomposed via ensemble empirical mode decomposition. The mutual information method is then used to select the sensitive intrinsic mode functions that can reflect the signal characteristics, in order to reconstruct the signal and eliminate noise interference. Subsequently, CMCE is taken as the eigenvalue of the reconstructed signal. Finally, through experimental comparison with sample entropy and root mean square, the results show that CMCE can better represent the characteristic information of the fault signal.

  9. A physical model for aftershocks triggered by dislocation on a rectangular fault

    CERN Document Server

    Console, R

    2005-01-01

    We find the static displacement, stress, strain and the modified Coulomb failure stress produced in an elastic medium by a finite-size rectangular fault after its dislocation, with uniform stress drop but a non-uniform dislocation on the source. The time-dependent rate of triggered earthquakes is estimated by a rate-state model applied to a uniformly distributed population of faults whose equilibrium is perturbed by the stress change caused by the first dislocation only. The rate of triggered events in our simulations is exponentially proportional to the stress change, but the time at which the maximum rate begins to decrease varies from fractions of an hour for positive stress changes of the order of some MPa up to more than a year for smaller stress changes. As a consequence, the final number of triggered events is proportional to the stress change. The model predicts that the total number of events triggered on a plane containing the fault is proportional to the 2/3 power of the seismic moment. Indeed, th...
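
    The rate-state response to a static stress step described above is commonly written in the Dieterich (1994) form, sketched below with illustrative parameter values: the instantaneous rate jump is exponential in the stress change, and the perturbation relaxes back to the background rate over a characteristic aftershock time.

```python
import math

r = 1.0           # background seismicity rate, events/day
A_sigma = 0.05e6  # A * sigma_n, Pa (hypothetical constitutive parameter)
t_a = 365.0       # aftershock relaxation time, days

def rate(t, dtau):
    """Seismicity rate at time t (days) after a static stress step dtau (Pa)."""
    gamma = (math.exp(-dtau / A_sigma) - 1.0) * math.exp(-t / t_a)
    return r / (1.0 + gamma)

for dtau in (0.1e6, 1.0e6):  # 0.1 MPa and 1 MPa stress steps
    print(f"step {dtau/1e6:.1f} MPa: rate x{rate(0.0, dtau)/r:.1f} at t=0, "
          f"x{rate(30.0, dtau)/r:.2f} after 30 days")
```

At t = 0 the rate is amplified by exp(dtau / A_sigma), matching the "exponentially proportional" statement in the abstract, and for t much larger than t_a the rate returns to the background value r.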

  10. Fault Tree Model for Failure Path Prediction of Bolted Steel Tension Member in a Structural System

    Directory of Open Access Journals (Sweden)

    Biswajit Som

    2015-06-01

    Full Text Available A fault tree is a graphical representation of the various sequential combinations of events that lead to the failure of a system, such as a structural system. In this paper it is shown that a fault tree model is also applicable to a critical element of a complex structural system. This helps to identify the different failure modes of a particular structural element, any of which might eventually trigger a progressive collapse of the whole structural system. A non-redundant tension member is generally regarded as a Fracture Critical Member (FCM) in a complex structural system, especially in bridges, the failure of which may lead to immediate collapse of the structure. Limit state design is governed by the failure behavior of a structural element at its ultimate state. Globally, in the condition assessment of existing structural systems, particularly bridges, Fracture Critical Inspection has proved very effective and is mandatory in some countries. The fault tree model of a tension member presented in this paper can be conveniently used to identify flaws, if any, in an FCM in an existing structural system, and also as a checklist for the new design of tension members.
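
    Since the top event "member failure" is an OR over the limit-state modes, the governing mode is simply the one with least resistance. The following toy check evaluates that logic for a bolted tension member; the section, bolt and strength values are hypothetical and partial safety factors are omitted for clarity.

```python
# Material and section data (hypothetical).
fy, fu = 250.0, 400.0   # yield / ultimate strength, MPa
A_gross = 1200.0        # gross cross-sectional area, mm^2
A_net = 980.0           # net area at the bolt holes, mm^2
n_bolts, A_bolt, f_bolt = 4, 314.0, 300.0  # bolts, shank area mm^2, strength MPa

# OR-gate branches of the top event: each limit-state mode with its resistance (N).
resistances = {
    "gross-section yielding": fy * A_gross,
    "net-section fracture":   fu * A_net,
    "bolt shear":             n_bolts * 0.6 * f_bolt * A_bolt,
}

# The first branch reached under increasing load is the one with least resistance.
governing = min(resistances, key=resistances.get)
capacity = resistances[governing]
print(f"governing mode: {governing}, capacity {capacity/1e3:.0f} kN")
```

With these numbers the bolt-shear branch governs (226 kN versus 300 kN gross yield and 392 kN net fracture), which is exactly the kind of mode ranking the fault tree is meant to expose for inspection checklists.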

  11. Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection

    Science.gov (United States)

    Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang

    2017-07-01

    It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, owing to the periodic repetition mechanism of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix; the highlight of the partition window is that it accumulates all local feature information and aligns it. All columns of the data matrix then share similar waveforms, and a core physical phenomenon arises: the singular values of the data matrix demonstrate a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more of the information volume of the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence, with adaptively tuned weights inversely proportional to singular amplitude, is adopted to guarantee the distributional consistency of large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to search for a satisfactory stationary solution by alternately applying a proximal operator and least-squares fitting. Moreover, the sensitivity and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which show that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to the
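
    The weighting rationale can be made explicit with the standard weighted singular-value thresholding step (notation assumed here, not the paper's exact symbols): given the data matrix Y with SVD Y = U diag(sigma_i) V^T, one proximal update is

```latex
% Weighted singular-value thresholding (one proximal step):
X = U \,\operatorname{diag}\!\big(\max(\sigma_i - \lambda w_i,\; 0)\big)\, V^{T},
\qquad
w_i = \frac{C}{\sigma_i + \varepsilon},
% with lambda the regularization strength and C, epsilon tuning constants.
```

    so that large singular values (which carry the impulsive-feature energy) are shrunk less, while small, noise-dominated singular values are suppressed, which is exactly the asymmetry the uniform nuclear norm lacks.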

  12. The 2016 central Italy earthquake sequence: surface effects, fault model and triggering scenarios

    Science.gov (United States)

    Chatzipetros, Alexandros; Pavlides, Spyros; Papathanassiou, George; Sboras, Sotiris; Valkaniotis, Sotiris; Georgiadis, George

    2017-04-01

    This paper presents the results of fieldwork performed during the 2016 earthquake sequence around the karstic basins of Norcia and La Piana di Castelluccio, at an altitude of 1400 m, on Monte Vettore (altitude 2476 m) and Vettoretto, as well as the three mapped seismogenic faults, striking NNW-SSE. Surface co-seismic ruptures were observed along the Vettore and Vettoretto segment of the fault for several kilometres (~7 km) at high altitudes in the August earthquakes, and were re-activated and expanded northwards during the October earthquakes. Coseismic ruptures and the neotectonic Mt. Vettore fault zone were modelled in detail using images acquired from specifically planned UAV (drone) flights. Ruptures, typically with displacements of up to 20 cm, were observed after the August event both in the scree and weathered mantle (eluvium) and in the bedrock, which consists mainly of fragmented carbonate rocks with small tectonic surfaces. These fractures expanded and new ones formed during the October events, typically with displacements of up to 50 cm, although locally higher displacements of up to almost 2 m were observed. Hundreds of rock falls and landslides were mapped through satellite imagery, using pre- and post-earthquake Sentinel 2A images; several were also verified in the field. Based on the field mapping results and seismological information, the causative faults were modelled. The model consists of five seismogenic sources, each associated with a strong event in the sequence. The visualisation of the seismogenic sources follows INGV's DISS standards for the Individual Seismogenic Sources (ISS) layer, while the strike, dip and rake of the seismic sources are obtained from selected focal mechanisms. Based on this model, the ground deformation pattern was inferred using Okada's dislocation formulae, which show that the maximum calculated vertical displacement is 0.53 m. This is in good agreement with the statistical analysis of the

  13. Using a coupled hydro-mechanical fault model to better understand the risk of induced seismicity in deep geothermal projects

    Science.gov (United States)

    Abe, Steffen; Krieger, Lars; Deckert, Hagen

    2017-04-01

    The changes of fluid pressure associated with the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. An important aspect in the planning and operation of such projects, particularly in densely populated regions such as the Upper Rhine Graben in Germany, is therefore the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of the hydraulic properties of the underground, the mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we therefore employ a numerical model to investigate the impact of fluid pressure changes on the dynamics of faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well, based on a 3D finite difference discretisation of the Darcy equation, with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach consists of two successive steps. First, the fault model is run with a fixed deformation rate for a given duration, without the influence of the hydraulic model, in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault; therefore, multiple snapshots of the fault's stress and displacement state are generated from the fault model. In a second step, these snapshots are used as initial conditions in a set of coupled hydro-mechanical model runs that include the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of
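
    The hydraulic side of such a coupling can be sketched in one dimension: pore pressure diffuses from the injection point (explicit finite differences on the diffusion form of the Darcy equation), and the overpressure arriving at a fault raises the Coulomb failure stress by roughly mu * dp through the reduced effective normal stress. All hydraulic values below are hypothetical.

```python
D = 0.1                          # hydraulic diffusivity, m^2/s
dx = 10.0                        # grid spacing, m
dt = 0.4 * dx**2 / (2.0 * D)     # explicit stability limit (factor 0.4 < 0.5)
nx = 201
p = [0.0] * nx                   # overpressure along the profile, Pa
p[0] = 5e6                       # injection held at 5 MPa overpressure

fault_ix = 50                    # fault located 500 m from the well
mu = 0.6                         # friction coefficient on the fault

t = 0.0
while t < 30 * 86400.0:          # simulate 30 days of injection
    new = p[:]
    for i in range(1, nx - 1):   # interior update: dp/dt = D * d2p/dx2
        new[i] = p[i] + D * dt / dx**2 * (p[i + 1] - 2 * p[i] + p[i - 1])
    new[0] = 5e6                 # fixed-pressure injection boundary
    p = new
    t += dt

dcfs = mu * p[fault_ix]          # Coulomb stress increase from unclamping
print(f"overpressure at fault after 30 days: {p[fault_ix]/1e6:.2f} MPa, "
      f"dCFS ~ {dcfs/1e6:.2f} MPa")
```

In the full study this pressure history would drive the block-slider model; here the point is only that the fault feels a delayed, diffusion-smoothed version of the injection pressure.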

  14. Interaction of small repeating earthquakes in a rate and state fault model

    Science.gov (United States)

    Lapusta, N.; Chen, T.

    2010-12-01

    Small repeating earthquake sequences can be located very close to each other; for example, the San Andreas Fault Observatory at Depth (SAFOD) target repeating clusters "San Francisco" and "Los Angeles" are separated by only about 50 m. These two repeating sequences are also close in occurrence time, indicating substantial interaction. Modeling the interaction of repeating sequences and comparing the results with observations would help us understand the physics of fault slip. Here we conduct numerical simulations of two asperities in a rate-and-state fault model (Chen and Lapusta, JGR, 2009), with the asperities being rate-weakening and the rest of the fault rate-strengthening. One of our goals is to create a model of the observed interaction between the "San Francisco" and "Los Angeles" clusters. The studies of Chen and Lapusta (JGR, 2009) and Chen et al. (accepted by EPSL, 2010) showed that this approach can reproduce the behavior of isolated repeating earthquake sequences, in particular the scaling of their moment versus recurrence time and their response to accelerated postseismic creep. In this work, we investigate the effect of the distance between asperities and of asperity size on the interaction, in terms of occurrence time, seismic moment and rupture pattern. The fault is governed by the aging version of rate-and-state friction. To account for the relatively high stress drops inferred seismically for Parkfield SAFOD target earthquakes (Dreger et al., 2007), we also conduct simulations that include enhanced dynamic weakening during seismic events. As expected from prior studies (e.g., Kato, JGR, 2004; Kaneko et al., Nature Geoscience, 2010), the two asperities act like one asperity if they are close enough, and they behave like isolated asperities when they are sufficiently separated. Motivated by the SAFOD target repeaters that rupture separately but show evidence of interaction, we concentrate on the intermediate distance between asperities. In that regime, the

  15. Numerical modeling of fracking fluid and methane migration through fault zones in shale gas reservoirs

    Science.gov (United States)

    Taherdangkoo, Reza; Tatomir, Alexandru; Sauter, Martin

    2017-04-01

    Hydraulic fracturing operations in shale gas reservoirs have gained growing interest over the last few years. Groundwater contamination is one of the most important environmental concerns that have emerged surrounding shale gas development (Reagan et al., 2015). The potential impacts of hydraulic fracturing can be studied through the possible pathways for subsurface migration of contaminants towards overlying aquifers (Kissinger et al., 2013; Myers, 2012). The intent of this study is to investigate, by means of numerical simulation, two failure scenarios based on the presence of a fault zone that penetrates the full thickness of the overburden and connects the shale gas reservoir to an aquifer. Scenario 1 addresses the potential transport of fracturing fluid from the shale into the subsurface; this scenario was modeled with the COMSOL Multiphysics software. Scenario 2 deals with the leakage of methane from the reservoir into the overburden; the numerical modeling of this scenario was implemented in DuMux, a free and open-source discrete fracture model (DFM) simulator (Tatomir, 2012). The modeling results are used to evaluate the influence of several important parameters (reservoir pressure, aquifer-reservoir separation thickness, fault zone inclination, porosity, permeability, etc.) that could affect fluid transport through the fault zone. Furthermore, we determined the main transport mechanisms and the circumstances that would allow fracturing fluid or methane to migrate through the fault zone into overlying geological layers. The results show that the presence of a conductive fault can reduce the contaminant travel time and that significant contaminant leakage, under certain hydraulic conditions, is most likely to occur. Bibliography: Kissinger, A., Helmig, R., Ebigbo, A., Class, H., Lange, T., Sauter, M., Heitfeld, M., Klünker, J., Jahnke, W., 2013. Hydraulic fracturing in unconventional gas reservoirs: risks in the geological system, part 2. Environ Earth Sci 70, 3855

  16. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    Science.gov (United States)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power circuit protection strategies, which have inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that limits the rise of the DC-link voltage, whose dynamics directly influence the characteristics of the rotor voltage, especially during grid faults. The model results so obtained compare favorably with simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current control bandwidth and the filter inductance.
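
    The underlying energy balance can be illustrated with a minimal simulation of the DC link in a back-to-back converter, C*V*dV/dt = P_rotor - P_grid: during the dip the grid-side converter can export less power, so the link charges, and a simple controlled chopper (the damping-resistor path) caps the voltage. All ratings below are hypothetical and the converter dynamics are grossly simplified.

```python
C = 0.02            # DC-link capacitance, F
V_nom = 1200.0      # nominal DC-link voltage, V
V_max = 1.1 * V_nom # chopper activation threshold
R_chop = 2.0        # damping (chopper) resistor, ohm
P_rotor = 0.3e6     # slip power injected by the rotor-side converter, W

dt = 1e-5
V = V_nom
peak = V
for step in range(int(0.2 / dt)):           # simulate 200 ms
    t = step * dt
    fault = 0.05 <= t < 0.15                # 100 ms grid fault window
    P_grid = 0.05e6 if fault else P_rotor   # exportable power collapses in the dip
    P_chop = V**2 / R_chop if (fault and V > V_max) else 0.0
    dV = (P_rotor - P_grid - P_chop) / (C * V) * dt  # energy balance
    V += dV
    peak = max(peak, V)

print(f"peak DC-link voltage {peak:.0f} V ({peak/V_nom:.2f} pu)")
```

Without the chopper term the surplus power would charge the link without bound for the duration of the dip; with it, the voltage chatters just above the 1.1 pu threshold.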

  17. Considering the Fault Dependency Concept with Debugging Time Lag in Software Reliability Growth Modeling Using a Power Function of Testing Time

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Since the early 1970s, tremendous growth has been seen in the research of software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon, and new models continue to be proposed that fit a greater number of reliability growth curves. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique, and so on. Thus, a detected fault need not be immediately removed, and removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and show how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the power function of testing time concept, we propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed upon a failure being observed. Dependent faults, however, are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity, and applicability.
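    A classic example of the debugging-lag idea the abstract describes is the delayed S-shaped SRGM, whose mean value function builds the detection-to-removal delay into the curve shape. The model and parameters below (total fault content a, detection rate b) are a standard textbook illustration, not one of the paper's four new models.

```python
import math

# Hedged sketch: delayed S-shaped SRGM, a standard way to model the lag
# between fault detection and removal. Parameters are illustrative.
def delayed_s_shaped(t, a, b):
    """Expected cumulative removed faults m(t) = a*(1 - (1 + b*t)*exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

a, b = 100.0, 0.1   # assumed total fault content and fault detection rate
for t in (10, 50, 200):
    print(t, round(delayed_s_shaped(t, a, b), 1))
```

The (1 + b*t) factor is what makes removal initially lag detection, producing the S-shaped growth curve instead of the exponential curve of immediate-repair models.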

  18. Heterogeneous slip and rupture models of the San Andreas fault zone based upon three-dimensional earthquake tomography

    Energy Technology Data Exchange (ETDEWEB)

    Foxall, William [Univ. of California, Berkeley, CA (United States)]

    1992-11-01

    Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the Central Creeping Section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.

  19. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    Science.gov (United States)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
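    The core construction, a weighted sum of a smoothed-seismicity density and a fault moment-release density, can be sketched in one dimension. Locations, bandwidth, and the weight w are all synthetic assumptions for illustration; the paper optimizes these with likelihood-based experiments.

```python
import numpy as np

# Hedged sketch: combine a past-earthquake density and a fault-based density
# with a weight w, as in hybrid smoothed-seismicity forecasts (synthetic 1-D data).
def gaussian_kde_1d(x_eval, samples, bandwidth):
    """Normalized Gaussian kernel density estimate on a 1-D grid."""
    d = (x_eval[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(0, 10, 201)
eq_locs = np.array([2.0, 2.5, 3.0, 7.5])    # past epicentres (assumed)
fault_locs = np.array([7.0, 7.5, 8.0])       # moment-release points on a mapped fault

w = 0.6                                      # weight on the earthquake density (assumed)
density = (w * gaussian_kde_1d(grid, eq_locs, 0.5)
           + (1 - w) * gaussian_kde_1d(grid, fault_locs, 0.5))
cell = grid[1] - grid[0]
print(f"density integrates to ~ {density.sum() * cell:.2f}")
```

Because both components are normalized densities, the weighted sum stays a density; magnitude dependence enters by letting w vary with the magnitude range considered.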

  20. Thrust fault modeling and Late-Noachian lithospheric structure of the circum-Hellas region, Mars

    Science.gov (United States)

    Egea-Gonzalez, Isabel; Jiménez-Díaz, Alberto; Parro, Laura M.; López, Valle; Williams, Jean-Pierre; Ruiz, Javier

    2017-05-01

    The circum-Hellas area of Mars borders Hellas Planitia, a giant impact basin, ∼4.0-4.2 Ga old, that forms the deepest and broadest depression on Mars, and is characterized by a complex pattern of fracture sets, lobate scarps, grabens, and volcanic plains. The numerous lobate scarps in the circum-Hellas region mainly formed in the Late Noachian and, except for Amenthes Rupes, have been scarcely studied. In this work, we study the mechanical behavior and thermal structure of the crust in the circum-Hellas region at the time of lobate scarp formation, through modeling of the depth of faulting beneath several prominent lobate scarps. We obtain faulting depths between ∼13 and 38 km, depending on the lobate scarp and accounting for uncertainty. These results indicate low surface and mantle heat flows in Noachian to Early Hesperian times, in agreement with heat flow estimates derived from lithospheric strength for several regions of similar age on Mars. Also, faulting depths and associated heat flows do not depend on the local crustal thickness, which supports a stratified crust in the circum-Hellas region, with heat-producing elements concentrated in an upper layer that is thinner than the whole crust.

  1. Numerical modeling of fracking fluid migration through fault zones and fractures in the North German Basin

    Science.gov (United States)

    Pfunt, Helena; Houben, Georg; Himmelsbach, Thomas

    2016-09-01

    Gas production from shale formations by hydraulic fracturing has raised concerns about the effects on the quality of fresh groundwater. The migration of injected fracking fluids towards the surface was investigated in the North German Basin, based on the known standard lithology. This included cases with natural preferential pathways such as permeable fault zones and fracture networks. Conservative assumptions were applied in the simulation of flow and mass transport triggered by a high pressure boundary of up to 50 MPa excess pressure. The results show no significant fluid migration for a case with undisturbed cap rocks and a maximum of 41 m vertical transport within a permeable fault zone during the pressurization. Open fractures, if present, strongly control the flow field and migration; here vertical transport of fracking fluids reaches up to 200 m during hydraulic fracturing simulation. Long-term transport of the injected water was simulated for 300 years. The fracking fluid rises vertically within the fault zone up to 485 m due to buoyancy. Progressively, it is transported horizontally into sandstone layers, following the natural groundwater flow direction. In the long-term, the injected fluids are diluted to minor concentrations. Despite the presence of permeable pathways, the injected fracking fluids in the reported model did not reach near-surface aquifers, either during the hydraulic fracturing or in the long term. Therefore, the probability of impacts on shallow groundwater by the rise of fracking fluids from a deep shale-gas formation through the geological underground to the surface is small.

  2. Late Quaternary sinistral slip rate along the Altyn Tagh fault and its structural transformation model

    Institute of Scientific and Technical Information of China (English)

    XU Xiwei; P. Tapponnier; J. Van Der Woerd; F. J. Ryer

    2005-01-01

    Based on technical processing of high-resolution SPOT images and aerophotos, detailed mapping of offset landforms in combination with field examination and displacement measurement, and dating of offset geomorphic surfaces using radiocarbon (14C), cosmogenic nuclide (10Be+26Al) and thermoluminescence (TL) methods, the Holocene sinistral slip rates on different segments of the Altyn Tagh Fault (ATF) are obtained. The slip rates reach 17.5±2 mm/a on the central and western segments west of Aksay Town, 11±3.5 mm/a on the Subei-Shibaocheng segment, 4.8±1.0 mm/a on the Sulehe segment and only 2.2±0.2 mm/a on the Kuantanshan segment, the easternmost segment of the ATF. The points of abrupt decrease in sinistral slip rate are located at the Subei, Shibaocheng and Shulehe triple junctions, where NW-trending active thrust faults splay from the ATF and propagate southeastward. Slip vector analyses indicate that the loss of sinistral slip rate from west to east across a triple junction has been structurally transformed into local crustal shortening perpendicular to the active thrust faults and strong uplift of the thrust sheets to form the NW-trending Danghe Nanshan, Daxueshan and Qilianshan Ranges. Therefore, the eastward extrusion of the northern Qinghai-Tibetan Plateau is limited, which accords with "the imbricated thrusting transformation-limited extrusion model".

  3. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    Science.gov (United States)

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they operate without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on some real-world data show the effectiveness of the proposed algorithms.
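    The entropy-as-uniformity idea can be shown directly: normalize the exhaust temperature readings to a distribution and compute Shannon entropy, which is maximal for an even spread and drops when one burner or probe deviates. The temperature values below are made-up illustrations, not the paper's data or its kernelized extension.

```python
import math

# Hedged sketch: Shannon entropy as a uniformity measure of exhaust-gas
# temperatures around the turbine annulus (illustrative readings).
def temperature_entropy(temps):
    """Normalize readings to a probability distribution; return entropy in nats."""
    total = sum(temps)
    probs = [t / total for t in temps]
    return -sum(p * math.log(p) for p in probs if p > 0)

healthy = [510, 512, 508, 511, 509, 510]   # even temperature spread
faulty  = [510, 512, 430, 511, 509, 510]   # one cold burner or probe

h_max = math.log(len(healthy))             # entropy of a perfectly uniform spread
print(f"healthy: {temperature_entropy(healthy):.4f} (max {h_max:.4f})")
print(f"faulty:  {temperature_entropy(faulty):.4f}")
```

A monitoring rule can then simply threshold the gap between the measured entropy and the uniform maximum log(n).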

  4. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    Directory of Open Access Journals (Sweden)

    Weiying Wang

    2014-01-01

    Full Text Available Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they operate without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on some real-world data show the effectiveness of the proposed algorithms.

  5. Blind identification of threshold auto-regressive model for machine fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Zhinong; HE Yongyong; CHU Fulei; WU Zhaotong

    2007-01-01

    A blind identification method was developed for the threshold auto-regressive (TAR) model. The method had good identification accuracy and rapid convergence, especially for higher order systems. The proposed method was then combined with the hidden Markov model (HMM) to determine the auto-regressive (AR) coefficients for each interval used for feature extraction, with the HMM as a classifier. Fault diagnoses during the speed-up and speed-down processes of rotating machinery have been successfully completed. The experimental results show that the proposed method is practical and effective.
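    The kind of feature vector such a scheme feeds to an HMM classifier is a set of AR coefficients per signal interval. A minimal sketch of AR coefficient estimation by least squares is below; this is ordinary AR fitting on synthetic data, not the paper's blind TAR identifier.

```python
import numpy as np

# Hedged sketch: least-squares AR coefficient estimation on a signal segment,
# the kind of feature an HMM-based classifier could consume (synthetic AR(2) data).
def fit_ar(x, order):
    """Solve x[t] = sum_k a[k]*x[t-k] + e[t] by least squares; return a."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
true_a = [1.5, -0.75]                      # stable AR(2) process (assumed)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + rng.normal(scale=0.1)

print(fit_ar(x, 2))                        # recovers roughly [1.5, -0.75]
```

A TAR model applies a separate AR fit per amplitude regime; the blind identification problem is deciding the regimes and thresholds without labels.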

  6. Examining the Evolution of the Peninsula Segment of the San Andreas Fault, Northern California, Using a 4-D Geologic Model

    Science.gov (United States)

    Horsman, E.; Graymer, R. W.; McLaughlin, R. J.; Jachens, R. C.; Scheirer, D. S.

    2008-12-01

    Retrodeformation of a three-dimensional geologic model allows us to explore the tectonic evolution of the Peninsula segment of the San Andreas Fault and adjacent rock bodies in the San Francisco Bay area. By using geological constraints to quantitatively retrodeform specific surfaces (e.g. unfolding paleohorizontal horizons, removing fault slip), we evaluate the geometric evolution of rock bodies and faults in the study volume and effectively create a four-dimensional model of the geology. The three-dimensional map is divided into fault-bounded blocks and subdivided into lithologic units. Surface geologic mapping provides the foundation for the model. Structural analysis and well data allow extrapolation to a few kilometers depth. Geometries of active faults are inferred from double-difference relocated earthquake hypocenters. Gravity and magnetic data provide constraints on the geometries of low density Cenozoic deposits on denser basement, highly magnetic marker units, and adjacent faults. Existing seismic refraction profiles constrain the geometries of rock bodies with different seismic velocities. Together these datasets and others allow us to construct a model of first-order geologic features in the upper ~15 km of the crust. Major features in the model include the active San Andreas Fault surface; the Pilarcitos Fault, an abandoned strand of the San Andreas; an active NE-vergent fold and thrust belt located E of the San Andreas Fault; regional relief on the basement surface; and several Cenozoic syntectonic basins. Retrodeformation of these features requires constraints from all available datasets (structure, geochronology, paleontology, etc.). Construction of the three-dimensional model and retrodeformation scenarios are non-unique, but significant insights follow from restricting the range of possible geologic histories. 
For example, we use the model to investigate how the crust responded to migration of the principal slip surface from the Pilarcitos Fault

  7. Novel methods for earth fault management in medium voltage distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Nikander, A.; Jaerventausta, P. [Tampere Univ. of Technology (Finland)]

    1998-08-01

    Customers have become less and less tolerant of even short interruptions of supply. Rapid autoreclosures are especially harmful for those commercial and private customers whose equipment is disturbed by these interruptions of under half a second. Mainly due to the increasing use of distribution automation (e.g. remote-controlled switching devices, fault detectors, computational fault location), the average interruption period per customer has been reduced. Simultaneously, the amount of equipment sensitive to short voltage breaks or dips has increased. Therefore, reducing the number of interruptions has become an essential target

  8. A fault runs through it: Modeling the influence of rock strength and grain-size distribution in a fault-damaged landscape

    Science.gov (United States)

    Roy, S. G.; Tucker, G. E.; Koons, P. O.; Smith, S. M.; Upton, P.

    2016-10-01

    We explore two ways in which the mechanical properties of rock potentially influence fluvial incision and sediment transport within a watershed: rock erodibility is inversely proportional to rock cohesion, and fracture spacing influences the initial grain sizes produced upon erosion. Fault-weakened zones show these effects well because of the sharp strength gradients associated with localized shear abrasion. A natural example of fault erosion is used to motivate our calibration of a generalized landscape evolution model. Numerical experiments are used to study the sensitivity of river erosion and transport processes to variable degrees of rock weakening. In the experiments, rapid erosion and transport of fault gouge steers surface runoff, causing high-order channels to become confined within the structure of weak zones when the relative degree of rock weakening exceeds 1 order of magnitude. Erosion of adjacent, intact bedrock produces relatively coarser grained gravels that accumulate in the low relief of the eroded weak zone. The thickness and residence time of sediments stored there depends on the relief of the valley, which in these models depends on the degree of rock weakening. The frequency with which the weak zone is armored by bed load increases with greater weakening, causing the bed load to control local channel slope. Conversely, small tributaries feeding into the weak zone are predominantly detachment limited. Our results indicate that mechanical heterogeneity can exert strong controls on rates and patterns of erosion and should be considered in future landscape evolution studies to better understand the role of heterogeneity in structuring landscapes.
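    The first mechanism the abstract names, erodibility inversely proportional to rock cohesion, can be sketched with the stream-power incision law. The constants and cohesion values below are illustrative assumptions, not the calibrated landscape evolution model.

```python
# Hedged sketch: stream-power incision E = K * A^m * S^n with erodibility K
# taken inversely proportional to rock cohesion, so a fault-weakened zone
# erodes faster than adjacent intact bedrock (illustrative constants).
def incision_rate(drainage_area, slope, cohesion, k0=1e-6, m=0.5, n=1.0):
    erodibility = k0 / cohesion            # weaker rock -> larger K (assumed form)
    return erodibility * drainage_area**m * slope**n

intact = incision_rate(1e6, 0.1, cohesion=50.0)   # intact bedrock (assumed)
gouge  = incision_rate(1e6, 0.1, cohesion=1.0)    # fault gouge (assumed)
print(f"weak zone erodes {gouge / intact:.0f}x faster")
```

With more than an order of magnitude contrast in effective erodibility, channels preferentially lock onto the weak zone, which is the steering behavior the numerical experiments report.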

  9. Sensor and Actuator Fault Detection and Isolation in Nonlinear System using Multi Model Adaptive Linear Kalman Filter

    Directory of Open Access Journals (Sweden)

    M. Manimozhi

    2014-05-01

    Full Text Available Fault Detection and Isolation (FDI) using a Linear Kalman Filter (LKF) is not sufficient for effective monitoring of nonlinear processes. Most chemical plants are nonlinear in nature when the plant is operated over a wide range of process variables. In this study we present an approach for designing a Multi Model Adaptive Linear Kalman Filter (MMALKF) for Fault Detection and Isolation (FDI) of a nonlinear system. The approach uses a bank of adaptive Kalman filters, with each model based on a different fault hypothesis. In this study the effectiveness of the MMALKF has been demonstrated on a spherical tank system. The proposed method detects and isolates sensor and actuator soft faults that occur sequentially or simultaneously.
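    The multi-model idea can be shown with a toy scalar example: run one Kalman filter per fault hypothesis and score each by the likelihood of its innovations; the hypothesis that best explains the data wins. The level-process numbers and the three hypotheses below are assumptions for illustration, not the paper's spherical-tank design.

```python
import math

# Hedged sketch of a multi-model FDI bank: one scalar Kalman filter per
# hypothesis (nominal, sensor bias, actuator loss); pick the hypothesis
# whose filter best explains the measurements.
def run_filter(measurements, u, bias, gain_loss, q=0.01, r=0.04):
    """Scalar KF for x' = x + gain_loss*u, z = x + bias; returns log-likelihood."""
    x, p, loglik = 0.0, 1.0, 0.0
    for z in measurements:
        x, p = x + gain_loss * u, p + q            # predict
        innov, s = z - (x + bias), p + r           # innovation and its variance
        loglik += -0.5 * (math.log(2 * math.pi * s) + innov**2 / s)
        k = p / s                                  # update
        x, p = x + k * innov, (1 - k) * p
    return loglik

# Simulate a sensor stuck with a +0.5 bias while the plant behaves nominally.
truth, u, z_seq = 0.0, 0.1, []
for _ in range(50):
    truth += u
    z_seq.append(truth + 0.5)

hypotheses = {"nominal": (0.0, 1.0), "sensor_bias": (0.5, 1.0), "actuator_loss": (0.0, 0.5)}
scores = {name: run_filter(z_seq, u, b, g) for name, (b, g) in hypotheses.items()}
print(max(scores, key=scores.get))   # the sensor-bias hypothesis should win
```

Isolation falls out of the same machinery as detection: the winning hypothesis names the faulty component.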

  10. Micromechanics and statistics of slipping events in a granular seismic fault model

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, L de [Department of Information Engineering and CNISM, Second University of Naples, Aversa (Italy); Ciamarra, M Pica [CNR-SPIN, Dipartimento di Scienze Fisiche, Universita di Napoli Federico II (Italy); Lippiello, E; Godano, C, E-mail: dearcangelis@na.infn.it [Department of Environmental Sciences and CNISM, Second University of Naples, Caserta (Italy)

    2011-09-15

    Stick-slip is investigated in a seismic fault model made of a confined granular system under shear stress via three-dimensional molecular dynamics simulations. We study the statistics of slipping events and, in particular, the dependence of the distribution on model parameters. The distribution consistently exhibits two regimes: an initial power law and a bump at large slips. The initial power-law decay is in agreement with the Gutenberg-Richter law characterizing real seismic occurrence. The exponent of the initial regime is quite independent of model parameters and its value is in agreement with experimental results. Conversely, the position of the bump is solely controlled by the ratio of the drive elastic constant to the system size. Large slips also become less probable in the absence of fault gouge and tend to disappear for stiff drives. A two-time force-force correlation function, and a susceptibility related to the system response to pressure changes, characterize the micromechanics of slipping events. The correlation function unveils the micromechanical changes occurring both during microslips and slips. The mechanical susceptibility encodes the magnitude of the incoming microslip. Numerical results for the cellular-automaton version of the spring-block model confirm the parameter dependence observed for the size distribution in the granular model.
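    The Gutenberg-Richter comparison invoked here rests on estimating the power-law exponent from an event catalogue. A standard way to do this for magnitudes is Aki's maximum-likelihood b-value estimator, sketched below on synthetic data (b = 1, completeness magnitude Mc = 2 are assumptions for the demonstration).

```python
import math
import random

# Hedged sketch: Aki's maximum-likelihood b-value estimate,
# b = log10(e) / (mean(M) - Mc), on synthetic Gutenberg-Richter magnitudes.
def b_value(mags, mc):
    mean_excess = sum(m - mc for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess

random.seed(1)
b_true, mc = 1.0, 2.0
beta = b_true * math.log(10)               # GR magnitudes are exponential above Mc
mags = [mc + random.expovariate(beta) for _ in range(20000)]
print(f"estimated b ~ {b_value(mags, mc):.2f}")
```

For a simulated fault model one applies the same estimator to the slip-size catalogue (after converting sizes to an equivalent magnitude) to compare its power-law regime against real seismicity.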

  11. A Dynamic Slack Management Technique for Real-Time Distributed Embedded System with Enhanced Fault Tolerance and Resource Constraints

    Directory of Open Access Journals (Sweden)

    Santhi Baskaran

    2011-01-01

    Full Text Available This project work aims to develop a dynamic slack management technique for real-time distributed embedded systems that reduces total energy consumption while meeting timing, precedence, and resource constraints. The proposed slack distribution technique considers a modified Feedback Control Scheduling (FCS) algorithm. This algorithm schedules dependent tasks effectively under precedence and resource constraints. It further minimizes the schedule length and utilizes the available slack to increase energy efficiency. A fault-tolerant mechanism using a deferred-active-backup scheme increases schedulability and provides reliability to the system.

  12. Enterprise Risk Management Models

    CERN Document Server

    Olson, David L

    2010-01-01

    Enterprise risk management has always been important. However, the events of the 21st Century have made it even more critical. The top level of business management became suspect after scandals at ENRON, WorldCom, and other business entities. Financially, many firms experienced difficulties from bubbles. The problems of interacting cultures demonstrated risk from terrorism as well, with numerous terrorist attacks, including 9/11 in the U.S. Risks can arise in many facets of business. Businesses in fact exist to cope with risk in their area of specialization. Financial risk management has focu

  13. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    Science.gov (United States)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty of both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox which incorporates the possibility of simulating tooth faults, as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor, it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements from a shaft-mounted encoder validates this finding. Comparisons between experiments and theory highlight the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
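    The synchronous averaging step itself is simple: slice the signal into once-per-revolution segments (in practice resampled against the encoder angle) and average them, so components locked to shaft rotation survive while asynchronous noise cancels as 1/sqrt(N). The signal below is synthetic, not the paper's epicyclic processing chain.

```python
import numpy as np

# Hedged sketch of synchronous signal averaging on an angle-sampled signal.
def synchronous_average(signal, samples_per_rev):
    """Average the signal over whole revolutions; returns one averaged revolution."""
    n_revs = len(signal) // samples_per_rev
    segments = signal[:n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return segments.mean(axis=0)

rng = np.random.default_rng(42)
spr, revs = 256, 400
angle = 2 * np.pi * np.arange(spr * revs) / spr
tooth_fault = 0.2 * np.sin(5 * angle)            # component locked to rotation
noise = rng.normal(scale=1.0, size=angle.size)   # asynchronous disturbance
avg = synchronous_average(tooth_fault + noise, spr)

residual = avg - 0.2 * np.sin(5 * angle[:spr])
print(f"noise std before: 1.0, after averaging 400 revs: {np.std(residual):.2f}")
```

After 400 revolutions the buried 0.2-amplitude fault component stands far above the residual noise floor, which is why the technique works on noisy current measurements.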

  14. Fault diagnosis of locomotive electro-pneumatic brake through uncertain bond graph modeling and robust online monitoring

    Science.gov (United States)

    Niu, Gang; Zhao, Yajun; Defoort, Michael; Pecht, Michael

    2015-01-01

    To improve reliability, safety and efficiency, advanced methods of fault detection and diagnosis become increasingly important for many technical fields, especially for safety related complex systems like aircraft, trains, automobiles, power plants and chemical plants. This paper presents a robust fault detection and diagnostic scheme for a multi-energy domain system that integrates a model-based strategy for system fault modeling and a data-driven approach for online anomaly monitoring. The developed scheme uses LFT (linear fractional transformations)-based bond graph for physical parameter uncertainty modeling and fault simulation, and employs AAKR (auto-associative kernel regression)-based empirical estimation followed by SPRT (sequential probability ratio test)-based threshold monitoring to improve the accuracy of fault detection. Moreover, pre- and post-denoising processes are applied to eliminate the cumulative influence of parameter uncertainty and measurement uncertainty. The scheme is demonstrated on the main unit of a locomotive electro-pneumatic brake in a simulated experiment. The results show robust fault detection and diagnostic performance.
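    The SPRT stage of such a scheme can be illustrated on a residual stream: accumulate the log-likelihood ratio between a healthy hypothesis (zero-mean residual) and a faulty one (shifted mean) until it crosses a decision boundary. The residual statistics and error rates below are assumptions for the sketch, not the paper's AAKR front end or its tuned thresholds.

```python
import math
import random

# Hedged sketch of SPRT threshold monitoring on Gaussian residuals:
# H0 mean 0 vs H1 mean mu1, common sigma; alpha/beta set the boundaries.
def sprt(residuals, mu1, sigma, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)     # cross above: declare fault (H1)
    lower = math.log(beta / (1 - alpha))     # cross below: declare healthy (H0)
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += (mu1 * r - 0.5 * mu1**2) / sigma**2   # Gaussian log-likelihood ratio
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            return "healthy", i
    return "undecided", len(residuals)

random.seed(0)
faulty = [random.gauss(0.5, 0.2) for _ in range(200)]   # residual mean shifted to mu1
print(sprt(faulty, mu1=0.5, sigma=0.2))
```

Compared with a fixed threshold on single residuals, the SPRT trades a short decision delay for explicit control of false-alarm and missed-detection rates.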

  15. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission conditions and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  16. From detachment to transtensional faulting: A model for the Lake Mead extensional domain based on new ages and correlation of subbasins

    Science.gov (United States)

    Beard, L.; Umhoefer, P. J.; Martin, K. L.; Blythe, N.

    2007-12-01

    New studies of selected basins in the Miocene extensional belt of the northern Lake Mead domain suggest a new model for the early extensional history of the region (lower Horse Spring Formation and correlative strata). Critical data are from (i) the Longwell Ridges area west of Overton Arm and within the Lake Mead fault system, (ii) Salt Spring Wash basin in the hanging wall of the South Virgin-White Hills detachment (SVWHD) fault, and (iii) previously studied subbasins of the south Virgin Mountains in the Gold Butte step-over region. The basins and faulting patterns suggest two stages of basin development related to two distinct faulting episodes: an early period of detachment faulting followed by a switch to faulting mainly along the Lake Mead transtensional fault system while detachment faulting waned. Apatite fission track ages suggest the footwall block of the SVWHD was cooling at 18-17 Ma, but the only evidence for basin deposition at that time is in the Gold Butte step-over, where slow rates of sedimentation and facies patterns make faulting on the north side of the Gold Butte block ambiguous. The first basin stage was ca. 16.5 to 15.5 Ma, during which there was slow to moderate faulting and subsidence in a basin along the SVWHD and north of the Gold Butte block in the Gold Butte step-over basin; the step-over basin had complex fluvial and lacustrine facies and was synchronous with landslides and debris flows in front of the SVWHD. At ca. 15.5-14.5 Ma, there was a dramatic increase in sedimentation rate related to formation of the Gold Butte fault, a change from lacustrine to widespread fluvial, playa, and local landslide facies in the step-over basin, and the peak of exhumation and faulting rates on the SVWHD. The simple step-over basin broke up into numerous subbasins as initial faults of the Lake Mead fault system formed.
From 14.5 to 14.0 Ma, the major change from dominantly detachment faulting to dominantly transtensional faulting was completed

  17. Tsunamigenic earthquakes in the Gulf of Cadiz: fault model and recurrence

    Directory of Open Access Journals (Sweden)

    L. M. Matias

    2013-01-01

    Full Text Available The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of large earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat caused by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose a generation model of destructive tsunamis to be applied in tsunami warnings. The derived fault model is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012) from thin-sheet neotectonic modelling. Finally we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.
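    Estimating recurrence from slip rates typically comes down to a seismic moment balance: the characteristic event's moment divided by the moment accumulation rate of the fault. The sketch below uses the standard moment-magnitude relation with illustrative fault dimensions and slip rate; these are assumptions, not the paper's Gulf of Cadiz fault parameters.

```python
# Hedged sketch: recurrence interval of a characteristic earthquake from the
# moment balance T = M0 / (mu * A * slip_rate). All numbers are illustrative.
def recurrence_interval_yr(magnitude, area_m2, slip_rate_m_per_yr, mu=3e10):
    m0 = 10 ** (1.5 * magnitude + 9.1)            # scalar seismic moment, N*m
    moment_rate = mu * area_m2 * slip_rate_m_per_yr   # accumulation rate, N*m/yr
    return m0 / moment_rate

# A Mw 8.5 event on an assumed 200 km x 50 km thrust accumulating 4 mm/yr.
t = recurrence_interval_yr(8.5, area_m2=200e3 * 50e3, slip_rate_m_per_yr=0.004)
print(f"recurrence ~ {t:.0f} years")
```

Millennial recurrence from millimetre-per-year slip rates is why such estimates lean on neotectonic modelling rather than the short instrumental catalogue alone.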

  18. Nonlinear dynamic modeling of a helicopter planetary gear train for carrier plate crack fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    Fan Lei; Wang Shaoping; Wang Xingjian; Han Feng; Lyu Huawei

    2016-01-01

    Planetary gear train plays a significant role in a helicopter operation and its health is of great importance for the flight safety of the helicopter. This paper investigates the effects of a planet carrier plate crack on the dynamic characteristics of a planetary gear train, and thus finds an effective method to diagnose crack fault. A dynamic model is developed to analyze the torsional vibration of a planetary gear train with a cracked planet carrier plate. The model takes into consideration nonlinear factors such as the time-varying meshing stiffness, gear backlash and viscous damping. Investigation of the deformation of the cracked carrier plate under static stress is performed in order to simulate the dynamic effects of the planet carrier crack on the angular displacement of carrier posts. Validation shows good accuracy of the developed dynamic model in predicting dynamic characteristics of a planetary gear train. Fault features extracted from predictions of the model reveal the correspondence between vibration characteristics and the conditions (length and position) of a planet carrier crack clearly.

  19. Nonlinear dynamic modeling of a helicopter planetary gear train for carrier plate crack fault diagnosis

    Directory of Open Access Journals (Sweden)

    Fan Lei

    2016-06-01

    Full Text Available Planetary gear train plays a significant role in a helicopter operation and its health is of great importance for the flight safety of the helicopter. This paper investigates the effects of a planet carrier plate crack on the dynamic characteristics of a planetary gear train, and thus finds an effective method to diagnose crack fault. A dynamic model is developed to analyze the torsional vibration of a planetary gear train with a cracked planet carrier plate. The model takes into consideration nonlinear factors such as the time-varying meshing stiffness, gear backlash and viscous damping. Investigation of the deformation of the cracked carrier plate under static stress is performed in order to simulate the dynamic effects of the planet carrier crack on the angular displacement of carrier posts. Validation shows good accuracy of the developed dynamic model in predicting dynamic characteristics of a planetary gear train. Fault features extracted from predictions of the model reveal the correspondence between vibration characteristics and the conditions (length and position) of a planet carrier crack clearly.

  20. Geomechanical Modeling of Fault Responses and the Potential for Notable Seismic Events during Underground CO2 Injection

    Science.gov (United States)

    Rutqvist, J.; Cappa, F.; Mazzoldi, A.; Rinaldi, A.

    2012-12-01

    The importance of geomechanics associated with large-scale geologic carbon storage (GCS) operations is now widely recognized. There are concerns related to the potential for triggering notable (felt) seismic events and how such events could impact the long-term integrity of a CO2 repository (as well as how they could impact the public perception of GCS). In this context, we review a number of modeling studies and field observations related to the potential for injection-induced fault reactivations and seismic events. We present recent model simulations of CO2 injection and fault reactivation, including both aseismic and seismic fault responses. The model simulations were conducted using a slip-weakening fault model enabling sudden (seismic) fault rupture, and some of the numerical analyses were extended to fully dynamic modeling of the seismic source, wave propagation, and ground motion. The model simulations illustrated what it would take to create a magnitude 3 or 4 earthquake that would not result in any significant damage at the ground surface, but could raise concerns in the local community and could also affect the deep containment of the stored CO2. The analyses show that the local in situ stress field, fault orientation, fault strength, and injection-induced overpressure are critical factors in determining the likelihood and magnitude of such an event. We would like to clarify, though, that in our modeling we had to apply very high injection pressure to be able to intentionally induce any fault reactivation. Consequently, our model simulations represent extreme cases, which in a real GCS operation could be avoided by estimating the maximum sustainable injection pressure and carefully controlling the injection pressure. In fact, no notable seismic event has been reported from any of the current CO2 storage projects, although some unfelt microseismic activity has been detected by geophones. 
On the other hand, potential future commercial GCS operations from large power plants

  1. Imprecise Computation Based Real-time Fault Tolerant Implementation for Model Predictive Control

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Model predictive control (MPC) is hard to deploy in real-time control systems because its computation time is not well defined. A real-time fault tolerant implementation algorithm based on imprecise computation is proposed for MPC, according to the solving process of the quadratic programming (QP) problem. In this algorithm, system stability is guaranteed even when computation resources are not sufficient to finish the optimization completely. Through this kind of graceful degradation, the behavior of the real-time control system remains predictable and determinate. The algorithm is demonstrated by experiments on a servomotor, and the simulation results show its effectiveness.
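The graceful-degradation idea can be illustrated with an anytime QP solver that returns its best feasible iterate when the deadline expires, so the controller always receives some input on time. This is a generic sketch, not the paper's algorithm; the projected-gradient solver, the box constraints, and all parameter values are stand-ins:

```python
import time
import numpy as np

def anytime_qp(H, g, lb, ub, deadline_s, step=None):
    """Minimise 0.5 x'Hx + g'x subject to box bounds lb <= x <= ub by
    projected gradient descent, stopping when the wall-clock deadline
    expires. Every iterate is feasible, so whatever is available at the
    deadline can be applied, possibly suboptimal but safe to use.
    """
    n = len(g)
    x = np.clip(np.zeros(n), lb, ub)          # feasible starting point
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)     # safe step from spectral norm
    t_end = time.monotonic() + deadline_s
    while time.monotonic() < t_end:
        grad = H @ x + g
        x = np.clip(x - step * grad, lb, ub)  # project back into the box
    return x

# Tiny 2-variable QP with a 50 ms budget; unconstrained optimum is
# [1.0, 2.0], clipped by the upper bound to [1.0, 1.5]
H = np.array([[2.0, 0.0], [0.0, 4.0]])
g = np.array([-2.0, -8.0])
x = anytime_qp(H, g, lb=np.array([-1.0, -1.0]),
               ub=np.array([1.5, 1.5]), deadline_s=0.05)
```

Because each iterate is feasible, shrinking the deadline degrades optimality but never produces an unusable input, which is the essence of imprecise computation.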

  2. Fault tree modeling of AAC power source in multi-unit nuclear power plants PSA

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Lim, Ho-Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Dependencies between units are important for estimating the risk of a multi-unit site. One such dependency is a shared system such as an alternate AC (AAC) power source. Because one AAC can support only a single unit, it is necessary to appropriately treat this behavior of the AAC in multi-unit probabilistic safety assessment (PSA). The behavior of the AAC in a multi-unit site shows dynamic characteristics. For example, several units may require the AAC at the same time, and it is hard to decide which unit the AAC is connected to. The connection can vary depending on the timing of station blackout (SBO), with a time delay when emergency diesel generators fail while running. Such dynamic behavior is not easy to handle with the static fault tree methodology. A typical way of estimating the multi-unit risk with respect to the AAC is to assume that only one unit has the AAC and the others do not. Kim calculates the risk for each unit and uses the average value of the results. Jung derives an equation to calculate the SBO frequency by considering all combinations of loss of offsite power and failure of emergency diesel generators in a multi-unit site; it is also assumed that the AAC is connected to a pre-decided unit. We are developing a PSA model for a multi-unit site for internal and external events. An extreme external hazard may result in loss of all offsite power in a site, where appropriate modeling of an AAC becomes important. The static fault tree methodology is not suited to dynamic situations, but the problem becomes simple if one assumption is made: the connecting order of the AAC is pre-decided. This study provides an idea of how to model the AAC for each unit in the form of a fault tree, assuming the connecting order of the AAC is given. It illustrates how to model a fault tree for the AAC in a multi-unit site and provides an idea of how to handle a shared system in multi-unit PSA, for such a case as loss of all offsite power in a site due to an extreme external hazard.
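Under the stated assumption that the connecting order of the AAC is pre-decided (here, to the lowest-numbered unit in station blackout), the otherwise dynamic allocation reduces to static logic that can be enumerated directly. The following is a hypothetical illustration with invented failure probabilities, not the paper's actual fault tree:

```python
from itertools import product

def blackout_prob_last_unit(p_edg_fail, p_aac_fail, n_units=4):
    """P(the highest-numbered unit is left without AC power) after a
    site-wide loss of offsite power, when a single shared AAC is always
    connected to the lowest-numbered unit that is in station blackout.

    EDG failures are assumed independent across units; all probabilities
    are invented for illustration.
    """
    p_total = 0.0
    for edg_states in product([0, 1], repeat=n_units):   # 1 = EDG failed
        if edg_states[-1] == 0:
            continue                  # own EDG works: unit keeps AC power
        p = 1.0
        for s in edg_states:
            p *= p_edg_fail if s else 1.0 - p_edg_fail
        if any(edg_states[:-1]):
            p_total += p              # AAC taken by a lower-numbered unit
        else:
            p_total += p * p_aac_fail # AAC is free but may itself fail
    return p_total

# Two-unit site: 0.9*0.1*0.05 (AAC free but fails) + 0.1*0.1 (AAC taken)
p = blackout_prob_last_unit(0.1, 0.05, n_units=2)
```

The fixed connecting order is exactly what lets the shared-resource logic be written as static AND/OR gates per unit instead of a dynamic model.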

  3. Effect of Pore Pressure on Slip Failure of an Impermeable Fault: A Coupled Micro Hydro-Geomechanical Model

    Science.gov (United States)

    Yang, Z.; Juanes, R.

    2015-12-01

    The geomechanical processes associated with subsurface fluid injection/extraction are of central importance for many industrial operations related to energy and water resources. However, the mechanisms controlling the stability and slip motion of a preexisting geologic fault remain poorly understood and are critical for the assessment of seismic risk. In this work, we develop a coupled hydro-geomechanical model to investigate the effect of injection-induced pressure perturbation on the slip behavior of a sealing fault. The model couples single-phase flow in the pores with mechanics of the solid phase. Granular packs (see example in Fig. 1a) are numerically generated, where the grains can be either bonded or not, depending on the degree of cementation. A pore network is extracted for each granular pack, with pore body volumes and pore throat conductivities calculated rigorously based on the geometry of the local pore space. The pore fluid pressure is solved via an explicit scheme, taking into account the effect of deformation of the solid matrix. The mechanics part of the model is solved using the discrete element method (DEM). We first test the validity of the model against the classical one-dimensional consolidation problem, for which an analytical solution exists. We then demonstrate the ability of the coupled model to reproduce rock deformation behavior measured in triaxial laboratory tests under the influence of pore pressure. We proceed to study fault stability in the presence of a pressure discontinuity across the impermeable fault, which is implemented as a plane whose intersected pore throats are deactivated, thus obstructing fluid flow (Fig. 1b, c). We focus on the onset of shear failure along preexisting faults. We discuss the fault stability criterion in light of the numerical results obtained from the DEM simulations coupled with pore fluid flow. The implications for how faults should be treated in a large-scale continuum model are also discussed.
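At the continuum scale, the mechanism studied here is commonly summarised by the Mohr-Coulomb criterion with the effective-stress law: injection raises pore pressure, lowers the effective normal stress, and can bring a fault to failure. A minimal sketch with illustrative stress values (not taken from the paper):

```python
def fault_slips(tau, sigma_n, pore_pressure, friction=0.6, cohesion=0.0):
    """Mohr-Coulomb slip check with the effective-stress law:
    slip occurs when tau > c + mu * (sigma_n - p)."""
    return tau > cohesion + friction * (sigma_n - pore_pressure)

# 25 MPa shear and 80 MPa normal stress: stable at 30 MPa pore pressure
# (strength 0.6 * 50 = 30 MPa), reactivated once injection raises the
# pressure to 45 MPa (strength 0.6 * 35 = 21 MPa)
before = fault_slips(25e6, 80e6, 30e6)
after = fault_slips(25e6, 80e6, 45e6)
```

The grain-scale DEM model in the abstract resolves the same competition between shear stress and pressure-weakened frictional strength, but without assuming this continuum criterion a priori.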

  4. Kinematic source model for simulation of near-fault ground motion field using explicit finite element method

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiaozhi; Hu Jinjun; Xie Lili; Wang Haiyun

    2006-01-01

    This paper briefly reviews the characteristics and major processes of the explicit finite element method in modeling the near-fault ground motion field. The emphasis is on the finite element-related problems in finite fault source modeling. A modified kinematic source model is presented, in which vibration with some high-frequency components is introduced into the traditional slip time function to ensure that the source and ground motion include sufficient high-frequency components. The model presented is verified through a simple modeling example. It is shown that the predicted near-fault ground motion field exhibits characteristics similar to those observed in strong motion records, such as the hanging wall effect, vertical effect, fling step effect and velocity pulse effect.
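The idea of enriching a slip time function with high-frequency components can be sketched generically; the smooth ramp, the oscillation frequency, and the linear taper below are assumptions for illustration, not the paper's actual source model:

```python
import math

def modified_slip(t, rise_time, final_slip, hf_amp=0.05, hf_freq=10.0):
    """Slip time history: a smooth cosine ramp carrying the final slip,
    plus a small decaying high-frequency oscillation so that the radiated
    wavefield is not deficient at high frequencies.

    hf_amp is the oscillation amplitude as a fraction of final slip, and
    the (1 - t/rise_time) taper guarantees slip ends exactly at final_slip.
    """
    if t <= 0.0:
        return 0.0
    if t >= rise_time:
        return final_slip
    ramp = final_slip * 0.5 * (1.0 - math.cos(math.pi * t / rise_time))
    vib = (hf_amp * final_slip
           * math.sin(2.0 * math.pi * hf_freq * t)
           * (1.0 - t / rise_time))
    return ramp + vib

u_mid = modified_slip(0.5, rise_time=1.0, final_slip=3.0)
u_end = modified_slip(2.0, rise_time=1.0, final_slip=3.0)
```

Setting `hf_amp=0` recovers a conventional smooth slip function, which makes the contribution of the added high-frequency term easy to isolate in a synthetic test.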

  5. Hierarchical Fault Diagnosis for a Hybrid System Based on a Multidomain Model

    Directory of Open Access Journals (Sweden)

    Jiming Ma

    2015-01-01

    Full Text Available The diagnosis procedure is performed by integrating three steps: multidomain modeling, event identification, and failure event classification. The multidomain model can describe the normal and fault behaviors of hybrid systems efficiently and can meet the diagnosis requirements of hybrid systems. The multidomain model is then used to simulate and obtain responses under different failure events; the responses are further utilized as a priori information when training the event identification library. Finally, a brushless DC motor is selected as the case study. The experimental results indicate that the proposed method can identify both known and unknown failure events of the studied system. In particular, for a system with less response information under a failure event, the accuracy of diagnosis tends to be higher. The presented method integrates the advantages of current quantitative and qualitative diagnostic procedures and can distinguish between failures caused by parametric and abrupt structural faults. Another advantage of our method is that it can remember unknown failure types and automatically extend the adaptive resonance theory neural network library, which is extremely useful for complex hybrid systems.

  6. Estimation of fault parameters using GRACE observations and analytical model. Case study: The 2010 Chile earthquake

    Science.gov (United States)

    Fatolazadeh, Farzam; Naeeni, Mehdi Raoofian; Voosoghi, Behzad; Rahimi, Armin

    2017-07-01

    In this study, an inversion method is used to constrain the fault parameters of the 2010 Chile earthquake using gravimetric observations. The formulation uses monthly geopotential coefficients from GRACE observations in conjunction with the analytical model of Okubo (1992), which accounts for the gravity changes resulting from an earthquake. At first, it is necessary to eliminate the hydrological and oceanic effects from the GRACE monthly coefficients; then a spatio-spectral localization analysis, based on wavelet local analysis, is used to filter the GRACE observations and better refine the tectonic signal. Finally, the corrected GRACE observations are compared with the analytical model using a nonlinear inversion algorithm. Our results show discernible differences between the average slip computed from gravity observations and that predicted from other co-seismic models. In this study, fault parameters such as length, width, depth, dip, strike and slip are computed using the changes in gravity and gravity gradient components. Using the variations of the gravity gradient components, these parameters are determined as 428 ± 6 km, 203 ± 5 km, 5 km, 10°, 13° and 8 ± 1.2 m, respectively. Moreover, the values of the seismic moment and moment magnitude are 2.09 × 10^22 N·m and Mw 8.88, respectively, which show only small differences from the values reported by the USGS (1.8 × 10^22 N·m and Mw 8.83).
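The reported moment is consistent with the standard relation M0 = μLWu applied to the inverted fault dimensions, assuming a crustal rigidity of 3 × 10^10 Pa (the abstract does not state the rigidity used, so that value is an assumption here):

```python
import math

def seismic_moment(length_m, width_m, slip_m, mu=3.0e10):
    """Scalar seismic moment M0 = mu * L * W * u, in N*m."""
    return mu * length_m * width_m * slip_m

def moment_magnitude(m0):
    """Mw from M0 via the standard Hanks-Kanamori relation."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Fault dimensions from the abstract: 428 km x 203 km with 8 m of slip
m0 = seismic_moment(428e3, 203e3, 8.0)
mw = moment_magnitude(m0)
```

With μ = 3 × 10^10 Pa this reproduces the quoted 2.09 × 10^22 N·m, and the corresponding Mw is within a tenth of a unit of the reported 8.88, the residual being sensitive to the rigidity assumed.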

  7. Electrical and thermal finite element modeling of arc faults in photovoltaic bypass diodes.

    Energy Technology Data Exchange (ETDEWEB)

    Bower, Ward Isaac; Quintana, Michael A.; Johnson, Jay

    2012-01-01

    Arc faults in photovoltaic (PV) modules have caused multiple rooftop fires. The arc generates a high-temperature plasma that ignites surrounding materials and subsequently spreads the fire to the building structure. While there are many possible locations in PV systems and PV modules where arcs could initiate, bypass diodes have been suspected of triggering arc faults in some modules. In order to understand the electrical and thermal phenomena associated with these events, a finite element model of a busbar and diode was created. Thermoelectrical simulations found that Joule and internal diode heating from normal operation would not normally cause bypass diode or solder failures. However, if corrosion increased the contact resistance in the solder connection between the busbar and the diode leads, enough voltage could be established to arc across micron-scale electrode gaps. Lastly, an analytical arc radiation model based on observed data was employed to predict polymer ignition times. The model predicted that polymer materials in the area adjacent to the diode and junction box ignite in less than 0.1 seconds.

  8. Fault detection and identification in dynamic systems with noisy data and parameter/modeling uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Dinca, Laurian; Aldemir, Tunc; Rizzoni, Giorgio

    1999-06-01

    A probabilistic approach is presented which can be used for the estimation of system parameters and unmonitored state variables towards model-based fault diagnosis in dynamic systems. The method can be used with any type of input-output model and can accommodate noisy data and/or parameter/modeling uncertainties. The methodology is based on Markovian representation of system dynamics in discretized state space. The example system used for the illustration of the methodology focuses on the intake, fueling, combustion and exhaust components of internal combustion engines. The results show that the methodology is capable of estimating the system parameters and tracking the unmonitored dynamic variables within user-specified magnitude intervals (which may reflect noise in the monitored data, random changes in the parameters or modeling uncertainties in general) within data collection time and hence has potential for on-line implementation.

  9. A numerical modelling approach to investigate the surface processes response to normal fault growth in multi-rift settings

    Science.gov (United States)

    Pechlivanidou, Sofia; Cowie, Patience; Finch, Emma; Gawthorpe, Robert; Attal, Mikael

    2016-04-01

    This study uses a numerical modelling approach to explore structural controls on erosional/depositional systems within rifts that are characterized by complex multiphase extensional histories. Multiphase rift-related topography is generated by a 3D discrete element model (Finch et al., Basin Res., 2004) of normal fault growth and is used to drive the landscape evolution model CHILD (Tucker et al., Comput. Geosci., 2001). Fault populations develop spontaneously in the discrete element model and grow by both tip propagation and segment linkage. We conduct a series of experiments to simulate the evolution of the landscape (55 x 40 km) produced by two extensional phases that differ in the direction and in the amount of extension. In order to isolate the effects of fault propagation on the drainage network development, we conduct experiments where uplift/subsidence rates vary both in space and time as the fault array evolves and compare these results with experiments using a fixed fault array geometry with uplift/subsidence rates that vary only spatially. In many cases, areas of sediment deposition become uplifted, and vice versa, due to complex elevation changes with respect to sea level as the fault array develops. These changes from subaerial (erosional) to submarine (depositional) processes have implications for sediment volumes and sediment caliber as well as for the sediment routing systems across the rift. We also explore the consequences of changing the angle between the two phases of extension on the depositional systems and we make a comparison with single-phase rift systems. Finally, we discuss the controls of different erodibilities on sediment supply and detachment-limited versus transport-limited end-member models for river erosion. Our results provide insights into the nature and distribution of sediment source areas and the sediment routing in rift systems where pre-existing rift topography and normal fault growth exert a fundamental control on

  10. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: what elements of the fault source model impact most upon the hazard at a site, and when does this matter? 
Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  11. Modeling the Fault Tolerant Capability of a Flight Control System: An Exercise in SCR Specification

    Science.gov (United States)

    Alexander, Chris; Cortellessa, Vittorio; DelGobbo, Diego; Mili, Ali; Napolitano, Marcello

    2000-01-01

    In life-critical and mission-critical applications, it is important to make provisions for a wide range of contingencies, by providing means for fault tolerance. In this paper, we discuss the specification of a flight control system that is fault tolerant with respect to sensor faults. Redundancy is provided by analytical relations that hold between sensor readings; depending on the conditions, this redundancy can be used to detect, identify and accommodate sensor faults.

  12. An objective mechanical modelling approach for estimating the distribution of fault creep and locking from geodetic data

    Science.gov (United States)

    Funning, Gareth; Burgmann, Roland

    2017-04-01

    Knowledge of the extents of locked areas on faults is a critical input to seismic hazard assessments, defining possible asperities for future earthquakes. On partially creeping faults, such as those found in California, Turkey and in several major subduction zones, these locked zones can be identified by studying the distribution and extent of creep on those faults. Such creep produces surface deformation that can be measured geodetically (e.g. by InSAR and GPS), and used as a constraint on geophysical models. We present a Markov Chain Monte Carlo method, based on mechanical boundary element modelling of geodetic data, for finding the extents of creeping fault areas. In our scheme, the surface of a partially-creeping fault is represented as a mesh of triangular elements, each of which is modelled as either locked or creeping (freely-slipping) using the boundary element code poly3d. Slip on the creeping elements of our fault mesh, and therefore elastic deformation of the surface, is driven by stresses imparted by semi-infinite faults beneath the base of the mesh (and any other faults in the region of interest) that slip at their geodetic interseismic slip rates. Starting from a random distribution of locked and unlocked patches, a modified Metropolis algorithm is used to propose changes to the locking state (i.e., from locked to creeping, or vice-versa) of randomly selected elements, retaining or discarding these based on a geodetic data misfit criterion; the succession of accepted models forms a Markov chain of model states. After a 'burn-in' period of a few hundred samples, these Markov chains sample a region of parameter space close to the minimum misfit configuration. By computing Markov chains of a million samples, we can realise multiple such well-fitting models, and look for robustly resolved features (i.e., features common to a majority of the models, and/or present in the mean of those models). 
We apply this method to a combination of persistent scatterer
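The Metropolis step over binary locking states can be sketched on a small synthetic linear problem; the forward operator, noise level, and chain length below are invented purely for illustration, and the real method drives slip mechanically with poly3d rather than through a fixed linear operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear problem: surface velocities d = G @ s, where s_i = 1
# for a creeping element and 0 for a locked one. G and the true locking
# pattern are made up purely to exercise the sampler.
n_elem, n_obs, sigma = 8, 40, 0.01
G = rng.normal(size=(n_obs, n_elem))
s_true = (rng.random(n_elem) < 0.5).astype(float)
d_obs = G @ s_true + rng.normal(scale=sigma, size=n_obs)

def misfit(s):
    """Sum of squared residuals against the observed velocities."""
    r = d_obs - G @ s
    return float(r @ r)

# Metropolis over locking states: flip one randomly chosen element and
# accept with the usual Boltzmann rule on the chi-square change.
s = (rng.random(n_elem) < 0.5).astype(float)
chi2 = misfit(s)
for _ in range(20000):
    trial = s.copy()
    k = rng.integers(n_elem)
    trial[k] = 1.0 - trial[k]                 # locked <-> creeping
    chi2_trial = misfit(trial)
    if rng.random() < np.exp(min(0.0, -(chi2_trial - chi2) / (2.0 * sigma**2))):
        s, chi2 = trial, chi2_trial
```

After burn-in the chain concentrates on well-fitting locking configurations; collecting many accepted states, as the abstract describes, is what allows averaging and the identification of robustly resolved locked patches.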

  13. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    Science.gov (United States)

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
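The analytical Bayesian update that makes such a real-time approach feasible is the closed-form Gaussian posterior for a linear forward model. A toy sketch under assumed Gaussian noise and a zero-mean Gaussian slip prior (the paper's actual parameterisation and priors are not reproduced here):

```python
import numpy as np

def gaussian_posterior(G, d, sigma_d, sigma_m):
    """Closed-form Bayesian inversion for d = G m with independent
    Gaussian noise (std sigma_d) and a zero-mean Gaussian prior on the
    model (std sigma_m): returns the posterior mean and covariance.
    No iteration is needed, which is what permits real-time use.
    """
    n = G.shape[1]
    prec = G.T @ G / sigma_d**2 + np.eye(n) / sigma_m**2  # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (G.T @ d) / sigma_d**2
    return mean, cov

# Two-parameter toy problem: recover slip [1.0, 0.5] from noisy offsets
rng = np.random.default_rng(1)
G = rng.normal(size=(50, 2))
m_true = np.array([1.0, 0.5])
d = G @ m_true + rng.normal(scale=0.05, size=50)
m_post, C_post = gaussian_posterior(G, d, sigma_d=0.05, sigma_m=10.0)
```

Because the posterior covariance comes out of the same linear-algebra step, uncertainty estimates arrive at no extra cost, matching the abstract's point that uncertainties can be computed in real time.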

  14. Tectonic role of margin-parallel and margin-transverse faults during oblique subduction in the Southern Volcanic Zone of the Andes: Insights from Boundary Element Modeling

    Science.gov (United States)

    Stanton-Yonge, A.; Griffith, W. A.; Cembrano, J.; St. Julien, R.; Iturrieta, P.

    2016-09-01

    Obliquely convergent subduction margins develop trench-parallel faults shaping the regional architecture of orogenic belts and partitioning intraplate deformation. However, transverse faults are also common along most orogenic belts and have been largely neglected in slip partitioning analysis. Here we constrain the sense of slip and slip rates of differently oriented faults to assess whether and how transverse faults accommodate plate-margin slip arising from oblique subduction. We implement a forward 3-D boundary element method model of subduction at the Chilean margin evaluating the elastic response of intra-arc faults during different stages of the Andean subduction seismic cycle (SSC). Our model results show that the margin-parallel, NNE striking Liquiñe-Ofqui Fault System accommodates dextral-reverse slip during the interseismic period of the SSC, with oblique slip rates ranging between 1 and 7 mm/yr. NW striking faults exhibit sinistral-reverse slip during the interseismic phase of the SSC, displaying a maximum oblique slip of 1.4 mm/yr. ENE striking faults display dextral strike slip, with a slip rate of 0.85 mm/yr. During the SSC coseismic phase, all modeled faults switch their kinematics: NE striking faults become sinistral, whereas NW striking faults are normal dextral. Because coseismic tensile stress changes on NW faults reach 0.6 MPa at 10-15 km depth, it is likely that they can serve as transient magma pathways during this phase of the SSC. Our model challenges the existing paradigm wherein only margin-parallel faults account for slip partitioning: transverse faults are also capable of accommodating a significant amount of plate-boundary slip arising from oblique convergence.

  15. 3D Modelling of Seismically Active Parts of Underground Faults via Seismic Data Mining

    Science.gov (United States)

    Frantzeskakis, Theofanis; Konstantaras, Anthony

    2015-04-01

    During the last few years, rapid steps have been taken towards drilling for oil in the western Mediterranean Sea. Since most of the countries in the region benefit mainly from tourism, and considering that the Mediterranean is a closed sea that replenishes its water only once every ninety years, careful measures are being taken to ensure safe drilling. In that context, this research work attempts to derive a three-dimensional model of the seismically active parts of the underlying underground faults in areas of petroleum interest. For that purpose, seismic spatio-temporal clustering has been applied to seismic data to identify potential distinct seismic regions in the area of interest. Results have been coalesced with two-dimensional maps of underground faults from past surveys, and seismic epicentres, having undergone careful relocation processing, have been used to provide information regarding the vertical extent of multiple underground faults in the region of interest. The end product is a three-dimensional map of the possible underground location and extent of the seismically active parts of underground faults. Indexing terms: underground faults modelling, seismic data mining, 3D visualisation, active seismic source mapping, seismic hazard evaluation, dangerous phenomena modelling Acknowledgment This research work is supported by the ESPA Operational Programme, Education and Life Long Learning, Students Practical Placement Initiative. References [1] Alves, T.M., Kokinou, E. and Zodiatis, G.: 'A three-step model to assess shoreline and offshore susceptibility to oil spills: The South Aegean (Crete) as an analogue for confined marine basins', Marine Pollution Bulletin, In Press, 2014 [2] Ciappa, A., Costabile, S.: 'Oil spill hazard assessment using a reverse trajectory method for the Egadi marine protected area (Central Mediterranean Sea)', Marine Pollution Bulletin, vol. 84 (1-2), pp. 
44-55, 2014 [3] Ganas, A., Karastathis, V., Moshou, A., Valkaniotis, S., Mouzakiotis

  16. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has been challenging despite the huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is recognized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of software would provide a better basis for reliability estimation of safety-critical software. Digitalization of the reactor protection system of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world, because digital I and C systems have many technical advantages over analog I and C systems, such as easier configurability and maintainability. Digital I and C systems are also drift-free, and incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs have been established and are practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of the I and C system in an NPP introduces uncertainty into the reliability analysis of the digital systems/components, because software failure mechanisms are still unclear.

  17. Performance analysis of a dependable scheduling strategy based on a fault-tolerant grid model

    Institute of Scientific and Technical Information of China (English)

    WANG Yuanzhuo; LIN Chuang; YANG Yang; SHAN Zhiguang

    2007-01-01

    The grid provides an integrated computing platform composed of differentiated and distributed systems whose resources are dynamic and heterogeneous. In this paper, a novel fault-tolerant grid-scheduling model based on Stochastic Petri Nets (SPN) is presented to account for the heterogeneity and dynamism of the grid system. A new grid-scheduling strategy, the dependable strategy for the shortest expected accomplishing time (DSEAT), is also put forward, in which a dependability factor is introduced into the task-dispatching strategy. Finally, the performance of the scheduling strategy based on the fault-tolerant grid-scheduling model is analyzed with a software package named SPNP. The numerical results show that dynamic resources increase the response time for all classes of tasks to differing degrees. Compared with the shortest expected accomplishing time (SEAT) strategy, the DSEAT strategy can reduce the negative effects of dynamic and autonomous resources to some extent, so as to guarantee a high quality of service (QoS).
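
    The idea of weighting expected completion time by a dependability factor can be sketched as follows (a minimal stand-in for DSEAT, not the paper's SPN formulation; resource names and the restart-on-failure assumption are hypothetical):

```python
def dseat_pick(resources, task_time):
    """Dependable Shortest-Expected-Accomplishing-Time dispatch (sketch).
    Each resource: (name, speed, queue_time, availability in (0, 1]).
    Expected accomplishing time = (queue + task_time/speed) / availability,
    i.e. a failed attempt is assumed to be re-dispatched from scratch."""
    def eat(resource):
        name, speed, queue, avail = resource
        return (queue + task_time / speed) / avail
    return min(resources, key=eat)

resources = [
    # a plain SEAT strategy (ignoring availability) would pick this one:
    ("fast-but-flaky", 2.0, 0.0, 0.4),    # EAT = (0 + 5/2) / 0.4  = 6.25
    ("slow-but-stable", 1.0, 0.5, 0.95),  # EAT = (0.5 + 5) / 0.95 ~ 5.79
]
best = dseat_pick(resources, task_time=5.0)
```

    Dividing by availability penalizes undependable resources, so the stable resource wins even though its raw completion time is longer.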

  18. Numerical Modeling of Exploitation Relics and Faults Influence on Rock Mass Deformations

    Science.gov (United States)

    Wesołowski, Marek

    2016-12-01

    This article presents numerical modeling results on the influence of fault planes and exploitation relics on the magnitude and distribution of rock mass and surface deformations. Numerical calculations were performed using the finite difference program FLAC. To assess the changes taking place in the rock mass, an anisotropic elasto-plastic ubiquitous-joint model was used, into which the Coulomb-Mohr strength (plasticity) condition was implemented. The article takes as an example the actual exploitation of the longwall 225 area in seam 502wg of the "Pokój" coal mine. Computer simulations have shown that it is possible to determine the influence of fault planes and exploitation relics on the magnitude and distribution of rock mass and surface deformation. The main factor causing additional deformation of the surface is the abandoned workings in seam 502wd. These abandoned workings are the activation factor that caused additional subsidence, and, due to their significant dip, they also form a layer along which the rock mass slides down towards the extracted space. These factors are not taken into account by the geometrical and integral theories.

  19. Backpropagation Neural Network Modeling for Fault Location in Transmission Line 150 kV

    Directory of Open Access Journals (Sweden)

    Azriyenni Narwan

    2014-03-01

    Full Text Available This research applies a backpropagation neural network to detect fault locations on the 150 kV transmission line between substations. The distance relay is one of the principal protective and safety devices used on 150 kV transmission lines, and distance relay protection equipment is used against disturbances in the power system. However, loads keep increasing and network systems are becoming ever more complex. The protection system uses digital control in order to avoid errors in calculating the distance relay impedance settings and to make the time spent more efficient. A backpropagation neural network is a computational model trained through a learning process that can be used to overcome the working limitations of distance protection relays, since it is not constrained by a fixed impedance-range setting. If the output gives a wrong result, the weights are corrected so that the error is minimized and the network response approaches the correct value. Ultimately, the backpropagation neural network model is expected to detect the fault location and to identify the operating output current of the tripped circuit breaker. The tests are performed on the interconnected 150 kV system of the Riau Region.
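
    A minimal backpropagation sketch of the idea: a small network learns the mapping from the relay's apparent reactance to fault distance. The training data, network size, and the ~0.4 ohm/km line assumption are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (hypothetical 150 kV line):
#   input  = apparent reactance seen by the relay, scaled to [0, 1]
#   target = fault distance, scaled to [0, 1] (near-linear mapping with a
#            mild nonlinearity standing in for line and infeed effects)
x = rng.uniform(0.0, 1.0, (40, 1))
y = x + 0.02 * np.sin(6 * x)

# 1-8-1 network: tanh hidden layer, linear output, plain batch backpropagation
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)            # forward pass
    out = h @ W2 + b2
    err = out - y                       # dL/dout for mean-squared-error loss
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)      # backpropagate through tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

pred = np.tanh(x @ W1 + b1) @ W2 + b2
max_err = float(np.abs(pred - y).max())   # worst-case normalized distance error
```

    After training, the worst-case error on this toy mapping is small, which is the property a relay-assisting locator needs.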

  20. Storm Water Management Model (SWMM)

    Science.gov (United States)

    EPA's Storm Water Management Model (SWMM) is used throughout the world for planning, analysis and design related to stormwater runoff, combined and sanitary sewers, and other drainage systems in urban areas.

  1. Design Methods and Practices for Fault Prevention and Management in Spacecraft

    Science.gov (United States)

    Tumer, Irem Y.

    2005-01-01

    Integrated Systems Health Management (ISHM) is intended to become a critical capability for all space, lunar and planetary exploration vehicles and systems at NASA. Monitoring and managing the health state of diverse components, subsystems, and systems is a difficult task that will become more challenging when implemented for long-term, evolving deployments. A key technical challenge will be to ensure that the ISHM technologies are reliable, effective, and low cost, resulting in turn in safe, reliable, and affordable missions. To ensure safety and reliability, ISHM functionality, decisions and knowledge have to be incorporated into the product lifecycle as early as possible, and ISHM must be considered as an essential element of models developed and used in various stages during system design. During early stage design, many decisions and tasks are still open, including sensor and measurement point selection, modeling and model-checking, diagnosis, signature and data fusion schemes, presenting the best opportunity to catch and prevent potential failures and anomalies in a cost-effective way. Using appropriate formal methods during early design, the design teams can systematically explore risks without committing to design decisions too early. However, the nature of ISHM knowledge and data is detailed, relying on high-fidelity, detailed models, whereas the earlier stages of the product lifecycle utilize low-fidelity, high-level models of systems and their functionality. We currently lack the tools and processes necessary for integrating ISHM into the vehicle system/subsystem design. As a result, most existing ISHM-like technologies are retrofits that were done after the system design was completed. It is very expensive, and sometimes futile, to retrofit a system health management capability into existing systems. Last-minute retrofits result in unreliable systems, ineffective solutions, and excessive costs (e.g., Space Shuttle TPS monitoring which was considered

  3. Design of sensor and actuator multi model fault detection and isolation system using state space neural networks

    Science.gov (United States)

    Czajkowski, Andrzej

    2015-11-01

    This paper deals with the application of a state space neural network model to the design of a Fault Detection and Isolation (FDI) diagnostic system. The work describes an approach based on a multi-model solution, in which the SIMO process is decomposed into simple models (SISO and MISO). With such models it is possible to generate different residual signals, which can then be evaluated with a simple thresholding method into diagnostic signals. These diagnostic signals, with the application of a Binary Diagnostic Table (BDT), can then be used for fault isolation. All data used in the experiments are obtained from the simulator of the real-time laboratory stand of the Modular Servo under the Matlab/Simulink environment.
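
    The residual-thresholding and BDT steps can be sketched in a few lines (the fault signatures and threshold values below are hypothetical, not the paper's):

```python
# Residual evaluation and fault isolation with a Binary Diagnostic Table.
def evaluate_residuals(residuals, thresholds):
    """Turn residual signals into binary diagnostic signals by thresholding."""
    return tuple(int(abs(r) > t) for r, t in zip(residuals, thresholds))

# Hypothetical BDT: each row maps a binary signature to a fault hypothesis.
BDT = {
    (0, 0): "no fault",
    (1, 0): "sensor fault",
    (0, 1): "actuator fault",
    (1, 1): "sensor + actuator fault",
}

diag = evaluate_residuals(residuals=[0.02, 0.31], thresholds=[0.05, 0.05])
fault = BDT.get(diag, "unknown signature")
```

    Isolation works only if each fault produces a distinct signature row; residuals that react identically to two faults make those faults indistinguishable.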

  4. A Poisson-Fault Model for Testing Power Transformers in Service

    Directory of Open Access Journals (Sweden)

    Dengfu Zhao

    2014-01-01

    Full Text Available This paper presents a method for assessing the instantaneous failure rate of a power transformer under different working conditions. The method can be applied to a dataset from a power transformer under periodic inspection and maintenance. We use a Poisson-fault model to describe failures of a power transformer. When investigating a Bayes estimate of the instantaneous failure rate under the model, we find that the complexity of a classical method and of a Monte Carlo simulation is unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a quick algorithm for the Bayes estimate of the instantaneous failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators, and the simulation results reveal that the suggested algorithm is the quickest among the three candidates.
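
    The simplest Bayes estimate for a Poisson failure process is the conjugate Gamma-Poisson update, sketched below (this is the textbook baseline, not the paper's filtered estimator; the inspection counts are invented):

```python
def gamma_poisson_posterior(a0, b0, fault_counts, exposure_times):
    """Conjugate Bayes update for a Poisson failure process:
    prior rate ~ Gamma(a0, b0); after observing k_i faults in time t_i,
    the posterior is Gamma(a0 + sum(k), b0 + sum(t))."""
    a = a0 + sum(fault_counts)
    b = b0 + sum(exposure_times)
    return a, b, a / b   # posterior shape, rate parameter, and mean failure rate

# Hypothetical records: faults found in four yearly inspection intervals
a, b, rate = gamma_poisson_posterior(1.0, 1.0, [0, 1, 0, 2], [1.0, 1.0, 1.0, 1.0])
```

    The posterior mean (here 0.8 faults/year) blends the prior with the observed counts; as exposure time grows the data dominate.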

  5. Takagi-Sugeno fuzzy-model-based fault detection for networked control systems with Markov delays.

    Science.gov (United States)

    Zheng, Ying; Fang, Huajing; Wang, Hua O

    2006-08-01

    A Takagi-Sugeno (T-S) model is employed to represent a networked control system (NCS) with different network-induced delays. Compared with existing NCS modeling methods, this approach does not require knowledge of the exact values of the network-induced delays; instead, it addresses situations involving all possible network-induced delays. Moreover, the approach also handles data-packet loss. As an application of the T-S-based modeling method, a parity-equation approach and a fuzzy-observer-based approach for fault detection of an NCS were developed. An example of a two-link inverted pendulum is used to illustrate the utility and viability of the proposed approaches.
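
    The core of any parity-style fault detector is comparing the plant output against an analytically redundant model. A minimal linear sketch (not the paper's T-S fuzzy formulation; the plant matrices and actuator bias are invented):

```python
import numpy as np

# Model-based residual for a discrete-time system x+ = A x + B u, y = C x.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def simulate(u_seq, fault_at=None, fault_bias=0.5):
    """Simulate outputs; optionally add an actuator bias fault from step fault_at."""
    x = np.zeros((2, 1)); ys = []
    for k, u in enumerate(u_seq):
        u_eff = u + (fault_bias if fault_at is not None and k >= fault_at else 0.0)
        x = A @ x + B * u_eff
        ys.append(float(C @ x))
    return ys

u = [1.0] * 40
y_plant = simulate(u, fault_at=20)   # plant with actuator bias from step 20
y_model = simulate(u)                # fault-free analytical redundancy
residual = [abs(p - m) for p, m in zip(y_plant, y_model)]
```

    The residual stays at zero until the fault occurs, then grows as the bias propagates to the measured output; thresholding it yields the diagnostic signal.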

  6. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    Science.gov (United States)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages to other group members in a distributed environment, using an underlying IP Multicast medium, even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  7. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Directory of Open Access Journals (Sweden)

    Fang Liu

    2014-05-01

    Full Text Available A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects.

  8. Wayside bearing fault diagnosis based on a data-driven Doppler effect eliminator and transient model analysis.

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-05-05

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects.
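
    The Doppler-elimination step described in both records above amounts to a time-domain resampling once the kinematic parameters are known. A simplified numeric sketch (pass-by geometry, speeds, and the 100 Hz tone are invented; the real method identifies the kinematics from the signal itself):

```python
import numpy as np

# Doppler-effect elimination by resampling: a 100 Hz tone from a source
# passing a wayside microphone is warped because reception time is
# t_r = t_e + distance(t_e)/c; knowing v, L, c we can undo the warp.
fs, f0 = 4000.0, 100.0          # sample rate (Hz), emitted tone (Hz)
v, L, c = 30.0, 3.0, 340.0      # source speed, closest distance, sound speed

t_e = np.arange(-0.4, 0.4, 1 / fs)                 # uniform emission times (s)
t_r = t_e + np.sqrt(L**2 + (v * t_e) ** 2) / c     # reception times (monotonic)

t = np.arange(t_r[0], t_r[-1], 1 / fs)             # uniform recording grid
t_e_of_t = np.interp(t, t_r, t_e)                  # invert the time warp
recorded = np.sin(2 * np.pi * f0 * t_e_of_t)       # Doppler-distorted recording

# Elimination: resample the recording at the reception times corresponding
# to a uniform emission-time grid -> tone is restored to a constant 100 Hz.
corrected = np.interp(t_r, t, recorded)

freqs = np.fft.rfftfreq(len(corrected), 1 / fs)
spectrum = np.abs(np.fft.rfft(corrected * np.hanning(len(corrected))))
peak = freqs[spectrum.argmax()]
```

    Before correction the spectral peak smears between roughly f0(1 - v/c) and f0(1 + v/c); after resampling it collapses back to a sharp line at f0, which is what makes envelope-based bearing fault detection feasible again.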

  9. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)
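
    At its core, model checking a safety property is exhaustive reachability analysis over a state machine. A toy sketch (the two-channel voting model and its property are invented for illustration; real tools such as those used in the project handle vastly larger state spaces symbolically):

```python
from collections import deque

# Toy model: state = (ch1_failed, ch2_failed, tripped). Either channel may
# fail silently; a trip can be issued only while some channel is healthy.
# Safety property: "never reach a state where both channels have failed
# without a trip having been issued."

def transitions(state):
    ch1, ch2, trip = state
    succ = {(True, ch2, trip), (ch1, True, trip)}   # either channel fails
    if not (ch1 and ch2):                           # a healthy channel trips
        succ.add((ch1, ch2, True))
    return succ

def check_safety(initial, bad):
    """Breadth-first reachability: return a shortest counterexample path to
    a bad state, or None if the property holds in every reachable state."""
    queue, parent = deque([initial]), {initial: None}
    while queue:
        s = queue.popleft()
        if bad(s):
            path = []
            while s is not None:
                path.append(s); s = parent[s]
            return path[::-1]
        for n in transitions(s):
            if n not in parent:
                parent[n] = s; queue.append(n)
    return None

cex = check_safety((False, False, False),
                   bad=lambda s: s[0] and s[1] and not s[2])
```

    Here the checker finds a three-state counterexample (both channels fail before any trip), exactly the kind of fault scenario that detailed hardware fault models expose.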

  10. Progressive Development of Riedel-Shear on Overburden Soil by Strike-Slip Faulting: Insights from Analogue Model

    Science.gov (United States)

    Chan, Pei-Chen; Wong, Pei-Syuan; Lin, Ming-Lang

    2015-04-01

    According to investigations of well-known disastrous earthquakes in recent years, ground deformation (ground strain and surface rupture) induced by faulting is one of the causes of damage to engineered structures, in addition to strong ground motion. The development and propagation of the shear zone are affected by increasing amounts of basal slip on the fault. Therefore, the mechanisms of near-surface deformation due to faulting, and its effect on engineering structures within the influenced zone, are worthy of further study. In strike-slip fault models, the type of rupture propagation and the width of the shear zone (W) are primarily affected by the material properties (M) and depth (H) of the overburden layer and by the distance of fault slip (Sy) (Lin, A., and Nishikawa, M., 2011; Narges K. et al., 2014). There has been little research on the development and propagation of the trace tip, trace length, and rupture spacing. In this research, we used a sandbox model to study the progressive development of Riedel shears on overburden soil under strike-slip faulting. The model can be used to investigate the controlling factors of the deformation characteristics (such as the evolution of surface rupture) and to understand the deformation characteristics (including development and propagation of the trace tip (Tt), trace length (Tl), and rupture spacing (Ts)) during the early stages of deformation by faulting. We found that an increase in fault slip Sy results in a greater W, trace length, and rupture density, and we propose a Tl/H versus Sy/H relationship. The progressive development of Riedel shears showed a trend similar to the literature, in that increasing fault slip reduces Ts; however, the trend reverses after a peak value of W is reached. These approaches enhance our understanding of how fault-tip propagation affects the width of the deformation zone near the ground surface of the soil/rock mass and the spatial distribution of strain and stress within the influenced zone.

  11. Product Knowledge Modelling and Management

    DEFF Research Database (Denmark)

    Zhang, Y.; MacCallum, K. J.; Duffy, Alex

    1996-01-01

    The term Product Knowledge is used to refer to two related but distinct concepts: the knowledge of a specific product (Specific Product Knowledge) and the knowledge of a product domain (Product Domain Knowledge). Modelling and managing Product Knowledge is an essential part of carrying out design...... function-oriented design. Both Specific Product Knowledge and Product Domain Knowledge are modelled at two levels, a meta-model and an information level. Following that, a computer-based scheme to manage the proposed product knowledge models within a dynamically changing environment is presented.

  12. A renormalization group model for the stick-slip behavior of faults

    Science.gov (United States)

    Smalley, R. F., Jr.; Turcotte, D. L.; Solla, S. A.

    1983-01-01

    A fault is treated as an array of asperities with a prescribed statistical distribution of strengths. For a linear array the stress of a failed asperity is transferred to a single adjacent asperity, and for a two-dimensional array to three adjacent asperities. It is shown that the solutions bifurcate at a critical applied stress. At stresses less than the critical stress virtually no asperities fail on a large scale and the fault is locked. At the critical stress the solution bifurcates and asperity failure cascades away from the nucleus of failure. It is found that the stick-slip behavior of most faults can be attributed to the distribution of asperities on the fault. This explains the observation of stick-slip behavior on faults rather than stable sliding, why the observed level of seismicity on a locked fault is very small, and why the stress on a fault is less than that predicted by a standard value of the coefficient of friction.
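
    The locked-versus-cascading behavior can be illustrated with a crude one-dimensional simulation (a loose caricature of the stress-transfer idea, not the paper's renormalization-group calculation; the strength distribution and transfer rule are invented):

```python
import random

# 1-D asperity cascade: each asperity has a random strength; when one fails,
# its load is transferred to the next unbroken neighbour, which may fail too.
def cascade_fraction(sigma, strengths):
    failed = [s < sigma for s in strengths]   # asperities failing directly
    load = 0.0
    for i, s in enumerate(strengths):
        if failed[i] or sigma + load >= s:
            failed[i] = True
            load += sigma        # failed asperity's share is passed downstream
        else:
            load = 0.0           # an unbroken asperity arrests the cascade
    return sum(failed) / len(strengths)

random.seed(1)
strengths = [random.uniform(0.5, 1.5) for _ in range(10000)]
low = cascade_fraction(0.4, strengths)    # below all strengths: fault locked
high = cascade_fraction(0.9, strengths)   # above many strengths: runaway cascade
```

    Below the weakest asperity strength essentially nothing fails (the fault is locked); once the applied stress exceeds a sizeable fraction of the strengths, a single failure triggers a system-spanning cascade, mirroring the bifurcation described in the abstract.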

  13. Trying to understand management models

    DEFF Research Database (Denmark)

    Jonker, Jan; van Pijkeren, Michel; Eskildsen, Jacob Kjær

    2009-01-01

    In the previous chapters a number of management models have been presented. What ties them together is the fact that companies have created them in order to address present and future organisational challenges. We have chosen to show them as they are, with as much as possible respect to differences...... in cultural, contextual and linguistic backgrounds. Now that this volume comes to a close, this final chapter will be used to shed some light on the nature of the models presented previously. We therefore set out to explore their character and functionality, thus enabling to identify what these models have...... in common but also how they can be distinguished from each other. Our aim of this exploration is to grasp the more fundamental, conceptual and theoretical aspects of management models. This exploration is guided by a number of questions. How can management models be classified and categorised? What...

  14. Trying to understand management models

    DEFF Research Database (Denmark)

    van Pijkeren, Michel; Eskildsen, Jacob Kjær

    2009-01-01

    In the previous chapters a number of management models have been presented. What ties them together is the fact that companies have created them in order to address present and future organisational challenges. We have chosen to show them as they are, with as much as possible respect to differences...... in common but also how they can be distinguished from each other. Our aim of this exploration is to grasp the more fundamental, conceptual and theoretical aspects of management models. This exploration is guided by a number of questions. How can management models be classified and categorised? What...... principles are underpinning? What role do these models play in organisations and what are different functions they might have? All these questions drill down to: "which theoretical perspectives can be distilled when analysing the models presented in this volume?" For sure we will not be able to elaborate all...

  15. Quantizing the Complexity of the Western United States Fault System with Geodetically and Geologically Constrained Block Models

    Science.gov (United States)

    Evans, E. L.; Meade, B. J.

    2014-12-01

    Geodetic observations of interseismic deformation provide constraints on microplate rotations, earthquake cycle processes, slip partitioning, and the geometric complexity of the Pacific-North America plate boundary. Paleoseismological observations in the western United States provide a complementary dataset of Quaternary fault slip rate estimates. These measurements may be integrated and interpreted using block models, in which the upper crust is divided into microplates bounded by mapped faults, with slip rates defined by the differential relative motions of adjacent microplates. The number and geometry of microplates are typically defined with boundaries representing a limited subset of the large number of potentially seismogenic faults. An alternative approach is to include a large number of potentially active faults in a dense array of microplates, and then deterministically estimate the boundaries at which strain is localized, while simultaneously satisfying interseismic geodetic and geologic observations. This approach is made possible by total variation regularization (TVR), which simultaneously minimizes the L2 norm of the data residuals and the L1 norm of the variation in the estimated state vector. Applied to three-dimensional spherical block models, TVR reduces the total variation between estimated rotation vectors, creating groups of microplates that rotate together as larger blocks and localizing fault slip on the boundaries of these larger blocks. Here we consider a suite of block models containing 3-137 microplates, where the active block boundaries have been determined by TVR optimization constrained by both interseismic GPS velocities and geologic slip rate estimates.
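
    The basic block-model quantity, a fault slip rate defined by the differential rotation of two adjacent microplates, can be sketched directly from v = omega x r (the Euler rotation vectors and boundary location below are hypothetical, not fitted to data):

```python
import numpy as np

# Slip-rate budget on a block boundary from two microplate Euler vectors.
R_EARTH = 6371e3  # mean Earth radius (m)

def surface_velocity(omega, point):
    """Rigid-rotation surface velocity v = omega x r (omega in rad/yr)."""
    return np.cross(omega, point)

# Boundary point at roughly 36N, 118W (Eastern California shear zone area)
lat, lon = np.radians(36.0), np.radians(-118.0)
r = R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                        np.cos(lat) * np.sin(lon),
                        np.sin(lat)])

# Hypothetical rotation vectors for the two microplates (deg/yr -> rad/yr)
omega_a = np.radians(1.0e-7) * np.array([0.0, 0.0, 1.0])
omega_b = np.radians(1.2e-7) * np.array([0.0, 0.0, 1.0])

dv = surface_velocity(omega_b - omega_a, r)     # differential motion (m/yr)
slip_rate_mm_yr = 1e3 * np.linalg.norm(dv)      # total slip budget on boundary
```

    When TVR drives the difference omega_b - omega_a to zero, the two microplates merge into one rigid block and the boundary's slip rate vanishes; boundaries where the difference survives are the ones carrying localized strain.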

  16. A Doppler Transient Model Based on the Laplace Wavelet and Spectrum Correlation Assessment for Locomotive Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Changqing Shen

    2013-11-01

    Full Text Available The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high-speed movement, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters of the simulated transients, in particular the periods between them, from the acoustic signals. Thus, localized bearing faults can be detected successfully based on the identified parameters, particularly the period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. The proposed method is also used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully.
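
    The correlation-assessment idea, scanning a parameterized transient model against the measured signal and keeping the best-matching parameters, can be sketched as follows. A damped sinusoid stands in for the Laplace wavelet, and the signal, frequency grid, and noise level are all invented:

```python
import numpy as np

# Correlation-based parameter identification (simplified): choose the
# transient-model frequency whose waveform correlates best with the
# measured transient.
fs = 10000                      # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)  # one 20 ms transient window

def transient(f, zeta=0.05):
    """Damped sinusoid standing in for the Laplace wavelet."""
    return np.exp(-zeta * 2 * np.pi * f * t) * np.sin(2 * np.pi * f * t)

# "Measured" transient: a 1800 Hz resonance buried in noise
measured = transient(1800.0) + 0.1 * np.random.default_rng(0).normal(size=t.size)

candidates = np.arange(1000.0, 3000.0, 50.0)
corrs = [abs(np.corrcoef(measured, transient(f))[0, 1]) for f in candidates]
f_best = float(candidates[int(np.argmax(corrs))])
```

    The correlation peaks at (or next to) the true resonance frequency; in the paper the same scan runs over the full parameter set, including the impact period that carries the fault signature.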

  17. Modeling of fluid injection and withdrawal induced fault activation using discrete element based hydro-mechanical and dynamic coupled simulator

    Science.gov (United States)

    Yoon, Jeoung Seok; Zang, Arno; Zimmermann, Günter; Stephansson, Ove

    2016-04-01

    Operations involving fluid injection into and withdrawal from the subsurface for various purposes have been known to induce earthquakes. Such operations include hydraulic fracturing for shale gas extraction, hydraulic stimulation for Enhanced Geothermal System development, and wastewater disposal. Several damaging earthquakes have been reported in the USA, in particular in areas of high-rate injection of massive amounts of wastewater [1], mostly in the presence of natural fault systems. Oil and gas production has also been known to induce earthquakes where the pore fluid pressure decreases, in some cases by several tens of megapascals. One recent seismic event occurred in November 2013 near Azle, Texas, where a series of earthquakes began along a mapped ancient fault system [2]. Studies showed that a combination of brine production and wastewater injection near the fault generated subsurface pressures sufficient to induce earthquakes on near-critically stressed faults. This numerical study aims at investigating the occurrence mechanisms of such earthquakes induced by fluid injection [3] and withdrawal using a hydro-geomechanically coupled dynamic simulator (Itasca's Particle Flow Code 2D). Generic models are set up to investigate the sensitivity of the fault-system response and the activation magnitude to several parameters, including the fault orientation, frictional properties, distance from the injection well to the fault, and amount of fluid withdrawal around the injection well. Fault slip movement over time in relation to the diffusion of pore pressure is analyzed in detail. Moreover, correlations between the spatial distribution of pore pressure change and the locations of induced seismic events and the fault slip rate are investigated. References [1] Keranen KM, Weingarten M, Albers GA, Bekins BA, Ge S, 2014. Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection, Science 345, 448, DOI: 10.1126/science.1255802.
    [2] Hornbach MJ, DeShon HR
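
    The mechanism of near-critically stressed faults activated by rising pore pressure reduces to the standard Coulomb failure criterion with effective normal stress; a minimal sketch with illustrative stress values (not from the study):

```python
# Coulomb failure check for a fault under changing pore pressure:
# slip is promoted when CFS = tau - mu * (sigma_n - p) >= cohesion.
def coulomb_failure(tau, sigma_n, pore_pressure, mu=0.6, cohesion=0.0):
    """Return True if the fault satisfies the Coulomb failure criterion."""
    effective_normal = sigma_n - pore_pressure
    return tau - mu * effective_normal >= cohesion

tau, sigma_n = 30e6, 60e6   # shear / normal stress resolved on the fault (Pa)
before = coulomb_failure(tau, sigma_n, pore_pressure=5e6)    # ambient: stable
after = coulomb_failure(tau, sigma_n, pore_pressure=12e6)    # after injection
```

    A modest 7 MPa pore-pressure rise flips the fault from stable to failing without any change in the tectonic loading, which is why the spatial diffusion of pore pressure maps so directly onto the locations of induced events.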

  18. Performance Measurement Analysis of Logistical Fault Management

    Institute of Scientific and Technical Information of China (English)

    施鹏; 钱国涌; 俞之学; 陶力; 金磊; 张群仁; 汪昕

    2011-01-01

    Objective: To analyze the performance of fault management in the logistics management of a comprehensive Grade 3A hospital. Method: Based on the hospital's fault management information system, we collected fault management data for a surgical building from Aug. 1 to 31, 2010, and analyzed the effectiveness, efficiency, and cost of the response using descriptive statistics and the activity-based costing method. Results: The median (P25, P75) of daily service calls was 14 (8, 19). The rework rate was 8%, and the rate of work finished on the same day was 96.2%. The response times of the three teams (team A: electricity and water supply; team B: air conditioner repair; team C: bed and vehicle repair) were 23 min, 39 min, and 40 min, respectively; the average costs per work order were 52.7 RMB, 80.0 RMB, and 38.4 RMB, respectively. Labor cost accounted for 75% of the total in teams A and C and 95% in team B, the remainder being material cost. Conclusion: Performance measurement analysis of fault management through the logistics information system can strengthen fault management services and raise the refinement and scientific level of hospital logistics management.

  19. Safety in the event of an Internal fault: modelling or tests?

    Energy Technology Data Exchange (ETDEWEB)

    Duquerroy, P. [Electricite de France (France); Friberg, G.; Pietsch, G. [Aachen Univ. of Tech. (Germany); Herault, C.; Chevrier, P. [Schneider Electric (France)

    1997-12-31

    To facilitate the integration into the environment of the public distribution MV/LV substations installed by EDF, their size has been divided by 5 since 1950. In addition to this trend, there is a notable increase in short-circuit power, to improve the quality of electricity, and a desire for constant improvement of the safety of persons and property. In this context, the development of new MV/LV substations and MV switchgear cubicles, and EDF's aim to generalise and harmonise internal fault protections, led us to carry out an investigation into internal arcing phenomena and their consequences. In collaboration with RWTH Aachen University of Technology and Schneider Electric, we created different forms of modelling, which were validated by a series of power tests. This enabled us to gain a better understanding of the physical phenomena associated with internal faults, in particular the increase in pressure, in order to specify electrical devices and their type tests with maximum efficiency. (UK)

  20. Fault-tree Models of Accident Scenarios of RoPax Vessels

    Institute of Scientific and Technical Information of China (English)

    Pedro Antão; C. Guedes Soares

    2006-01-01

    Ro-Ro vessels for cargo and passengers (RoPax) are a relatively new concept that has proven popular in the Mediterranean region and is becoming more widespread in Northern Europe. Because of their design characteristics and the number of passengers carried, although fewer than on a regular passenger liner, accidents involving RoPax vessels have far-reaching consequences for both the economy and human life. The objective of this paper is to identify hazards related to casualties of RoPax vessels. The terminal casualty events chosen are related to accident and incident statistics for this type of vessel. The paper focuses on identifying the basic events that can lead to an accident and on the performance requirements. The hazard identification is carried out as the first step of a Formal Safety Assessment (FSA), and the relations between the relevant events are modelled using Fault Tree Analysis (FTA). The conclusions of this study are recommendations for the later steps of the FSA rather than for decision making (Step 5 of the FSA). These recommendations focus on possible design shortcomings identified during the fault tree analysis through its cut sets. The role of human factors is also analysed through a sensitivity analysis, which shows that their influence is highest for groundings and collisions, where an increase in the initial probability changes the accident occurrence by almost 90%.

  1. Major Fault Patterns in Zanjan State of Iran Based of GECO Global Geoid Model

    Science.gov (United States)

    Beheshty, Sayyed Amir Hossein; Abrari Vajari, Mohammad; Raoufikelachayeh, SeyedehSusan

    2016-04-01

    A new Earth Gravitational Model (GECO) to degree 2190 has been developed that incorporates EGM2008 and the latest GOCE-based satellite solutions. Satellite gradiometry data are more sensitive to the long- and medium-wavelength components of the gravity field than conventional satellite tracking data. Hence, by utilizing this new technique, more accurate, reliable and higher-degree/order spherical harmonic expansions of the gravity field can be achieved. Gravity gradients can also be useful in geophysical interpretation and prospecting. We present the concept of gravity gradients with some simple interpretations. MATLAB-based computer programs were developed and utilized to determine the gravity and gradient components of the gravity field using the GGMs, followed by a case study in Zanjan State of Iran. Our numerical studies show strong (more than 72%) correlations between gravity anomalies and the diagonal elements of the gradient tensor. Strong correlations were also revealed between the components of the deflection of the vertical and the off-diagonal elements, as well as between the horizontal gradient and the magnitude of the deflection of the vertical. Based on this information we clearly distinguished two big faults north and south of Zanjan city, and several minor faults were detected in the study area. Therefore, the same geophysical interpretation can be stated for the gravity gradient components too. Our mathematical derivations support some of these correlations.

  2. Model Management in Virtual Reality Research

    Directory of Open Access Journals (Sweden)

    Jungang Xu

    2007-12-01

    In this paper, a Bi-angle Model Management method (BiMM) is proposed to manage models in virtual reality research. One angle is based on the model itself, which includes the model, model scheme, and texture; the other is based on the model sort, where each sort has child sorts except for leaf nodes. Based on this method, we have developed a model management application with the following major functions: model management, model sort management, model query, model statistics, and registration of models into the database as a whole. With this method, researchers can manage model data conveniently and efficiently.

  3. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    We introduce a simulation system for mobile robots that allows realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and a compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, the camera sensor, and wireless communication have been implemented. Progressively increasing the error levels for an application program allows testing and improving its robustness and fault tolerance.
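A progressive sensor error model of the kind described can be sketched as follows; the dropout and noise parameters here are illustrative assumptions, not the EyeBot simulator's actual values:

```python
import random

def noisy_distance(true_m, level, rng=random):
    """Distance-sensor error model for robustness testing: zero-mean
    Gaussian noise whose standard deviation grows with the error level,
    plus occasional dropouts (returned as None). All parameters are
    illustrative, not the simulator's actual values."""
    if rng.random() < 0.01 * level:            # dropout probability grows with level
        return None
    reading = rng.gauss(true_m, 0.005 * level)  # noise std: 5 mm per error level
    return max(reading, 0.0)                    # a distance cannot be negative

# Stress-testing a control program under a progressively higher error level
rng = random.Random(1)
samples = [noisy_distance(1.0, 3, rng) for _ in range(100)]
```

At level 0 the sensor is ideal; raising the level exposes how gracefully the application degrades, which is the point of the error-injection approach.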

  4. Expert systems applied to fault isolation and energy storage management, phase 2

    Science.gov (United States)

    1987-01-01

    A user's guide for the Fault Isolation and Energy Storage (FIES) II system is provided. Included are a brief discussion of the background and scope of this project, a discussion of basic and advanced operating installation and problem determination procedures for the FIES II system and information on hardware and software design and implementation. A number of appendices are provided including a detailed specification for the microprocessor software, a detailed description of the expert system rule base and a description and listings of the LISP interface software.

  5. Comparative modeling of fault reactivation and seismicity in geologic carbon storage and shale-gas reservoir stimulation

    Science.gov (United States)

    Rutqvist, Jonny; Rinaldi, Antonio; Cappa, Frederic

    2016-04-01

    The potential for fault reactivation and induced seismicity are issues of concern related to both geologic CO2 sequestration and stimulation of shale-gas reservoirs. It is well known that underground injection may cause induced seismicity depending on site-specific conditions, such as stress, rock properties, and injection parameters. To date no sizeable seismic event that could be felt by the local population has been documented in association with CO2 sequestration activities. In the case of shale-gas fracturing, only a few cases of felt seismicity have been documented out of hundreds of thousands of hydraulic fracturing stimulation stages. In this paper we summarize and review numerical simulations of injection-induced fault reactivation and induced seismicity associated with both underground CO2 injection and hydraulic fracturing of shale-gas reservoirs. The simulations were conducted with TOUGH-FLAC, a simulator for coupled multiphase flow and geomechanical modeling. We employed both 2D and 3D models with an explicit representation of a fault. A strain-softening Mohr-Coulomb model was used to model slip-weakening fault slip behavior, enabling modeling of sudden slip that was interpreted as a seismic event, with a moment magnitude evaluated using formulas from seismology. In the case of CO2 sequestration, injection rates corresponding to expected industrial-scale CO2 storage operations were used, raising the reservoir pressure until the fault was reactivated. For the assumed model settings, it took a few months of continuous injection to increase the reservoir pressure sufficiently to cause the fault to reactivate. In the case of shale-gas fracturing we considered that the injection fluid during one typical 3-hour fracturing stage was channelized into a fault along with the hydraulic fracturing process. Overall, the analysis shows that while geologic CO2 sequestration in deep sedimentary formations is capable of producing notable events (e
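The conversion from simulated fault slip to a moment magnitude uses standard seismological relations; a minimal sketch (the rigidity and rupture dimensions below are illustrative values, not the paper's model settings):

```python
import math

def seismic_moment(shear_modulus_pa, rupture_area_m2, mean_slip_m):
    """Scalar seismic moment M0 = mu * A * d, in N*m."""
    return shear_modulus_pa * rupture_area_m2 * mean_slip_m

def moment_magnitude(m0_newton_m):
    """Hanks-Kanamori moment magnitude: Mw = (2/3) * (log10(M0) - 9.1),
    with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

# Illustrative rupture: 1 km x 1 km fault patch, 5 cm mean slip, mu = 30 GPa
m0 = seismic_moment(30e9, 1.0e6, 0.05)
mw = moment_magnitude(m0)
```

For these assumed numbers the event comes out at roughly magnitude 4, the scale of "notable" induced events discussed in the abstract.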

  6. A Kinematic Fault Network Model of Crustal Deformation for California and Its Application to the Seismic Hazard Analysis

    Science.gov (United States)

    Zeng, Y.; Shen, Z.; Harmsen, S.; Petersen, M. D.

    2010-12-01

    We invert GPS observations to determine the slip rates on major faults in California based on a kinematic fault model of crustal deformation with geological slip rate constraints. Assuming an elastic half-space, we interpret secular surface deformation using a kinematic fault network model with each fault segment slipping beneath a locking depth. This model simulates both block-like deformation and elastic strain accumulation within each bounding block. Each fault segment is linked to its adjacent elements with slip continuity imposed at fault nodes or intersections. The GPS observations across California and its neighbors are obtained from the SCEC WGCEP project of California Crustal Motion Map version 1.0 and SCEC Crustal Motion Map 4.0. Our fault models are based on the SCEC UCERF 2.0 fault database, a previous southern California block model by Shen and Jackson, and the San Francisco Bay area block model by d'Alessio et al. Our inversion shows a slip rate ranging from 20 to 26 mm/yr for the northern San Andreas from the Santa Cruz Mountain to the Peninsula segment. Slip rates vary from 8 to 14 mm/yr along the Hayward to the Maacama segment, and from 17 to 6 mm/yr along the central Calaveras to West Napa. For the central California creeping section, we find a depth-dependent slip rate with an average slip rate of 23 mm/yr across the upper 5 km and 30 mm/yr underneath. Slip rates range from 30 mm/yr along the Parkfield and central California creeping section of the San Andreas to an average of 6 mm/yr on the San Bernardino Mountain segment. On the southern San Andreas, slip rates vary from 21 to 30 mm/yr from the Coachella Valley to the Imperial Valley, and from 7 to 16 mm/yr along the San Jacinto segments. The shortening rate across the greater Los Angeles region is consistent with the regional tectonics and crustal thickening in the area. We are now in the process of applying the result to seismic hazard evaluation. Overall the geodetic and geological derived

  7. Management models for the future

    DEFF Research Database (Denmark)

    Jonker, Jan; Eskildsen, Jacob Kjær

    2007-01-01

    In the last decades a growing number of generic management models (e.g. EFQM, INK, ISO 9000:2000) has emerged. All these models are based on the ambition to stipulate the road to conventional and contemporary forms of organizational excellence. Some of the models aim to do so with regard to one aspect of the company's operations such as processes; others are based on a holistic view of the organisation. This paper is based on a book project (2006-2007) entitled "Management Models for the Future" (Springer Verlag, Heidelberg - Germany) aiming to harvest twelve new company-based models from… and inspiring set of models together with an analysis thus showing the building blocks of meaningful and applicable models. Knowledge does not simply lie around waiting to be picked up. It must be concisely carved out of a continuous stream of ongoing events in reality, perceived within a specific frame…

  8. Clustering diagnosis of rolling element bearing fault based on integrated Autoregressive/Autoregressive Conditional Heteroscedasticity model

    Science.gov (United States)

    Wang, Guofeng; Liu, Chang; Cui, Yinhu

    2012-09-01

    Feature extraction plays an important role in clustering analysis. In this paper an integrated Autoregressive (AR)/Autoregressive Conditional Heteroscedasticity (ARCH) model is proposed to characterize the vibration signal, and the model coefficients are adopted as feature vectors to realize clustering diagnosis of rolling element bearings. The main characteristic is that the AR item and ARCH item are interrelated with each other so that the excess kurtosis and volatility clustering information in the vibration signal can be depicted more accurately than with a two-stage AR/ARCH model. To verify the correctness, four kinds of bearing signals are adopted for parametric modeling using the integrated and two-stage AR/ARCH models. The variance analysis of the model coefficients shows that the integrated AR/ARCH model yields a more concentrated distribution. Taking these coefficients as feature vectors, K-means clustering is utilized to realize automatic classification of bearing fault status. The results show that the proposed method achieves more accurate results than the two-stage model and discrete wavelet decomposition.
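The coefficients-as-features idea can be sketched with a pure-AR least-squares fit followed by a simple K-means step; this omits the ARCH component of the authors' integrated model, and all signal parameters below are made up for illustration:

```python
import numpy as np

def ar_coeffs(x, p=2):
    """Least-squares AR(p) fit: x[t] ~ a1*x[t-1] + ... + ap*x[t-p].
    The fitted coefficient vector serves as the feature vector."""
    n = len(x)
    X = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def kmeans(F, k=2, iters=50):
    """Minimal K-means on a feature matrix F (one row per signal)."""
    # Spread-out deterministic initialization (sufficient for a sketch)
    centers = F[np.linspace(0, len(F) - 1, k).astype(int)].copy()
    labels = np.zeros(len(F), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(F[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    return labels

def simulate_ar2(a1, a2, n=400, seed=0):
    """Synthetic AR(2) vibration-like signal driven by unit-variance noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()
    return x

# Two hypothetical bearing conditions with distinct AR dynamics
signals = ([simulate_ar2(1.5, -0.7, seed=s) for s in range(3)]
           + [simulate_ar2(0.2, 0.1, seed=s + 10) for s in range(3)])
features = np.array([ar_coeffs(x) for x in signals])
labels = kmeans(features, k=2)
```

Because the two conditions have well-separated AR coefficients, the clustering recovers the two fault classes without labels, which is the essence of the diagnostic scheme.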

  9. New insights on stress rotations from a forward regional model of the San Andreas fault system near its Big Bend in southern California

    Science.gov (United States)

    Fitzenz, D.D.; Miller, S.A.

    2004-01-01

    Understanding the stress field surrounding and driving active fault systems is an important component of mechanistic seismic hazard assessment. We develop and present results from a time-forward three-dimensional (3-D) model of the San Andreas fault system near its Big Bend in southern California. The model boundary conditions are assessed by comparing model and observed tectonic regimes. The model of earthquake generation along two fault segments is used to target measurable properties (e.g., stress orientations, heat flow) that may allow inferences on the stress state on the faults. It is a quasi-static model, where GPS-constrained tectonic loading drives faults modeled as mostly sealed viscoelastic bodies embedded in an elastic half-space subjected to compaction and shear creep. A transpressive tectonic regime develops southwest of the model bend as a result of the tectonic loading and migrates toward the bend because of fault slip. The strength of the model faults is assessed on the basis of stress orientations, stress drop, and overpressures, showing a departure in the behavior of 3-D finite faults compared to models of 1-D or homogeneous infinite faults. At a smaller scale, stress transfers from fault slip transiently induce significant perturbations in the local stress tensors (where the slip profile is very heterogeneous). These stress rotations disappear when subsequent model earthquakes smooth the slip profile. Maps of maximum absolute shear stress emphasize both that (1) future models should include a more continuous representation of the faults and (2) that hydrostatically pressured intact rock is very difficult to break when no material weakness is considered. Copyright 2004 by the American Geophysical Union.

  10. Modelling Hospital Materials Management Processes

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-06-01

    integrated and detailed analysis and description model for hospital materials management data and tasks, which is able to handle information from patient requirements to usage, and from replenishment requests to supplying and handling activities. The model takes account of medical risk reduction, traceability, and streamlined-process perspectives. Second, the paper translates this information into a business process model and a mathematical formalization. The study provides a useful guide to the various relevant technology-related, management, and business issues, laying the foundations of an efficient reengineering of the supply chain to reduce healthcare costs and improve the quality of care.

  11. Management models for the future

    DEFF Research Database (Denmark)

    Eskildsen, Jacob Kjær; van Pijkeren, Michel

    2009-01-01

    outline will be provided of each of the twelve business contributions in this volume. The experiences recorded in the following chapters are wide-ranging. They cover know-how with national quality award models; management models for fair trade, corporate social responsibility, organisational excellence… theoretical framework that can be used to observe, create and assess a real life organizational 'situation' in order to make desired (future) improvements. We also argue that five common requirements can be used to appraise the applicability of a framework claiming to be a management model. Thereafter a brief… and various aspects of an organisations' value chain. The volume makes available an intriguing journey into the application of management models in different organizational and environmental contexts - a great learning experience for anyone who undertakes it….

  12. Discrete element modeling of Martian pit crater formation in response to extensional fracturing and dilational normal faulting

    Science.gov (United States)

    Smart, Kevin J.; Wyrick, Danielle Y.; Ferrill, David A.

    2011-04-01

    Pit craters, circular to elliptical depressions that lack a raised rim or ejecta deposits, are common on the surface of Mars. Similar structures are also found on Earth, Venus, the Moon, and smaller planetary bodies, including some asteroids. While it is generally accepted that these pits form in response to material drainage into a subsurface void space, the primary mechanism(s) responsible for creating the void is a subject of debate. Previously proposed mechanisms include collapse into lava tubes, dike injection, extensional fracturing, and dilational normal faulting. In this study, we employ two-dimensional discrete element models to assess both extensional fracturing and dilational normal faulting as mechanisms for forming pit craters. We also examine the effect of mechanical stratigraphy (alternating strong and weak layers) and variation in regolith thickness on pit morphology. Our simulations indicate that both extensional fracturing and dilational normal faulting are viable mechanisms. Both mechanisms lead to generally convex (steepening downward) slope profiles; extensional fracturing results in generally symmetric pits, whereas dilational normal faulting produces strongly asymmetric geometries. Pit width is established early, whereas pit depth increases later in the deformation history. Inclusion of mechanical stratigraphy results in wider and deeper pits, particularly for dilational normal faulting, and the presence of strong near-surface layers leads to pits with distinct edges as observed on Mars. The modeling results suggest that a thicker regolith leads to wider but shallower pits that are less distinct and may be more difficult to detect in areas of thick regolith.

  13. Understanding interaction of small repeating earthquakes through models of rate-and-state faults

    Science.gov (United States)

    Chen, T.; Lui, K.; Lapusta, N.

    2012-12-01

    Due to their short recurrence times and known locations, small repeating earthquakes are widely used to study earthquake physics. Some of the repeating sequences are located close to each other and appear to interact. For example, the "San Francisco" (SF) and "Los Angeles" (LA) repeating sequences, which are targets of the San Andreas Fault Observatory at Depth (SAFOD), have a lateral separation of less than 70 m. The LA events tend to occur within 24 hours after the SF events, suggesting a triggering effect. Our goal is to study the interaction of repeating earthquakes in the framework of rate-and-state fault models, in which repeating earthquakes occur on velocity-weakening patches embedded in a larger velocity-strengthening fault area. Such models can reproduce the behavior of isolated repeating earthquake sequences, in particular the scaling of their moment versus recurrence time and the response to accelerated postseismic creep (Chen and Lapusta, 2009; Chen et al., 2010). Our studies of the interaction of seismic events on two patches reveal a variety of interesting behaviors. As expected from intuition and prior studies (e.g., Kato, JGR, 2004; Kaneko et al., Nature Geoscience, 2010), the two patches behave independently when they are far apart and rupture together if they are next to each other. In the intermediate range of distances, we observe triggering effects, with ruptures on the two patches clustering in time, but also other patterns, including supercycles that alternate between events that rupture a single asperity and events that rupture both asperities at the same time. When triggering occurs, smaller events tend to trigger larger events, since the nucleation of smaller events tends to be more frequent. To overcome such a pattern, and have larger events trigger smaller events as observed for the SF-LA interaction, the patch for the smaller event needs to be of the order of the nucleation size, so that the smaller event has difficulty nucleating by

  14. Predictive permeability model of faults in crystalline rocks; verification by joint hydraulic factor (JH) obtained from water pressure tests

    Indian Academy of Sciences (India)

    Hamidreza Rostami Barani; Gholamreza Lashkaripour; Mohammad Ghafoori

    2014-08-01

    In the present study, a new model is proposed to predict the permeability per fracture in fault zones using a new parameter named the joint hydraulic factor (JH). JH is obtained from Water Pressure Tests (WPT) and modified by the degree of fracturing. The results of JH correspond with quantitative fault zone descriptions, qualitative fracture, and fault rock properties. In this respect, a case study was carried out based on data collected from the Seyahoo dam site located in the east of Iran to provide a permeability prediction model of fault zone structures. Datasets including scan-lines, drill cores, and water pressure tests in a terrain of andesite and basalt rocks were used to analyse the variability of in-situ relative permeability over a range from fault zones to host rocks. The rock mass joint permeability quality, therefore, is defined by the JH. JH data analysis showed that the background sub-zone commonly had > 3 Lu (less than 5 × 10−5 m3/s) per fracture, whereas the fault core had permeability characteristics nearly as low as the outer damage zone, represented by 8 Lu (1.3 × 10−4 m3/s) per fracture, with occasional peaks towards 12 Lu (2 × 10−4 m3/s) per fracture. The maximum JH value belongs to the inner damage zone, marginal to the fault core, with 14–22 Lu (2.3 × 10−4 –3.6 × 10−4 m3/s) per fracture, locally exceeding 25 Lu (4.1 × 10−4 m3/s) per fracture. This gives a proportional relationship for JH of approximately 1:4:2 between the fault core, inner damage zone, and outer damage zone of extensional fault zones in crystalline rocks. The results of the verification exercise revealed that the new approach is efficient and that the JH parameter is a reliable scale for fracture permeability change. It can be concluded that using short-duration hydraulic tests (WPTs) and fracture frequency (FF) to calculate the JH parameter provides a possibility to describe a complex situation and compare, discuss, and weigh the hydraulic quality to make
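The relation between a water pressure test and a per-fracture permeability index can be sketched as follows. The Lugeon formula is standard; the exact form by which the authors normalize it with fracture frequency is not given in the abstract, so dividing by the fracture count is an assumed, illustrative form of JH:

```python
def lugeon(q_l_per_min, test_length_m, pressure_bar):
    """Standard Lugeon value: water take in litres per minute per metre
    of test section, normalized to a reference pressure of 10 bar."""
    return (q_l_per_min / test_length_m) * (10.0 / pressure_bar)

def joint_hydraulic_factor(lu, fracture_frequency_per_m, test_length_m):
    """Hypothetical JH: the Lugeon value distributed over the fractures
    intersecting the test interval (assumed form, for illustration only)."""
    n_fractures = max(fracture_frequency_per_m * test_length_m, 1.0)
    return lu / n_fractures

# Example WPT: 30 L/min over a 5 m interval at 10 bar, with FF = 0.6 /m
lu = lugeon(30.0, 5.0, 10.0)               # 6 Lugeons for the interval
jh = joint_hydraulic_factor(lu, 0.6, 5.0)  # 2 Lu per fracture
```

The normalization to 10 bar is what makes short-duration tests at different pressures comparable, which is why the WPT is a practical basis for this kind of index.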

  15. A fault diagnosis based reconfigurable longitudinal control system for managing loss of air data sensors for a civil aircraft

    OpenAIRE

    Varga, Andreas; Ossmann, Daniel; Joos, Hans-Dieter

    2014-01-01

    An integrated fault diagnosis based fault tolerant longitudinal control system architecture is proposed for civil aircraft which can accommodate partial or total losses of angle of attack and/or calibrated airspeed sensors. A triplex sensor redundancy is assumed for the normal operation of the aircraft using a gain scheduled longitudinal normal control law. The fault isolation functionality is provided by a bank of 6 fault detection filters, which individually monitor each of the 6 sensors us...

  16. Management models for the future

    DEFF Research Database (Denmark)

    Jonker, Jan; Eskildsen, Jacob Kjær

    2007-01-01

    In the last decades a growing number of generic management models (e.g. EFQM, INK, ISO 9000:2000) has emerged. All these models are based on the ambition to stipulate the road to conventional and contemporary forms of organizational excellence. Some of the models aim to do so with regard to one aspect of the company's operations such as processes; others are based on a holistic view of the organisation. This paper is based on a book project (2006-2007) entitled "Management Models for the Future" (Springer Verlag, Heidelberg - Germany) aiming to harvest twelve new company-based models from around the globe. Each of these models is described in a structured company-based story thus creating the backbone for the book at hand. The aim is to analyse these different kinds of institutional frameworks of excellence and discuss their nature, content and enactability. The result is a rich…

  17. Fault-Related CO2 Degassing, Geothermics, and Fluid Flow in Southern California Basins--Physiochemical Evidence and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Garven, Grant [Tufts University]

    2015-08-11

    Our studies have had an important impact on societal issues. Experimental and field observations show that CO2 degassing, such as might occur from stored CO2 reservoir gas, can result in significant stable-isotopic disequilibrium. In the offshore South Ellwood field of the Santa Barbara channel, we show how oil production has reduced natural seep rates in the area, thereby reducing greenhouse gases. Permeability is calculated to be ~20-30 millidarcys for km-scale fault-focused fluid flow, using changes in natural gas seepage rates from well production and poroelastic changes in formation pore-water pressure. In the Los Angeles (LA) basin, our characterization of formation water chemistry, including stable isotopic studies, allows the distinction between deep and shallow formation waters. Our multiphase computational modeling of petroleum migration demonstrates the important role of major faults in geological-scale fluid migration in the LA basin and shows how petroleum was dammed up against the Newport-Inglewood fault zone in a “geologically fast” interval of time (less than 0.5 million years). Furthermore, these fluid studies will also allow evaluation of potential cross-formational mixing of formation fluids. Lastly, our new study of helium isotopes in the LA basin shows significant leakage of mantle helium along the Newport-Inglewood fault zone (NIFZ), at flow rates up to 2 cm/yr. Crustal-scale fault permeability (~60 microdarcys) and advective versus conductive heat transport rates have been estimated using the observed helium isotopic data. The NIFZ is an important deep-seated fault that may crosscut a proposed basin decollement fault in this heavily populated area, and appears to allow seepage of helium from mantle sources about 30 km beneath Los Angeles. The helium study has recently been widely cited by the news media, both on radio and on numerous web sites.

  18. Revenue models in managed competition.

    Science.gov (United States)

    Mischler, N E

    1993-01-01

    As physicians and medical centers move into a changing reimbursement era, it is valuable for physician executives to have tools to help physicians understand the relationships among costs, revenues, and utilization. These relationships differ within the fee-for-service, prepaid, and managed fee-for-service revenue models. This article describes these different revenue models and highlights the benefits and issues associated with each model.

  19. Evaluation of chiller modeling approaches and their usability for fault detection

    Energy Technology Data Exchange (ETDEWEB)

    Sreedharan, Priya [Univ. of California, Berkeley, CA (United States)

    2001-05-01

    Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, commonly known as chillers. Three different models were studied: two are based on first principles and the third is empirical in nature. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model as implemented in CoolTools™ was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The CoolTools package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to
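The cited advantage of a model that is linear in its parameters can be illustrated with ordinary least squares on synthetic data; the regressors and coefficients below are placeholders, not the actual Gordon-Ng terms:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "measured" chiller data; the regressors are placeholders,
# not the actual Gordon-Ng model terms.
n = 200
t_evap = rng.uniform(275.0, 285.0, n)   # evaporator-side temperature (K)
t_cond = rng.uniform(298.0, 308.0, n)   # condenser-side temperature (K)
true_beta = np.array([0.5, -0.002, 0.004])

X = np.column_stack([np.ones(n), t_evap, t_cond])
y = X @ true_beta + rng.normal(0.0, 1e-4, n)   # e.g. a 1/COP-like response

# Because the model is linear in its parameters, ordinary least squares
# gives a robust closed-form fit, and the residuals support estimation
# of parameter uncertainty -- the advantage noted in the abstract.
beta_hat, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
```

A nonlinear model (like the Toolkit model calibrated by direct search) requires iterative optimization with no such closed-form solution, which is precisely the calibration-effort trade-off the study evaluates.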

  20. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    Science.gov (United States)

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
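Fault-tree quantification of a discharge probability can be sketched with independent basic events combined through OR and AND gates; all event names and probabilities below are hypothetical, not the authors' actual tree:

```python
def or_gate(*probs):
    """P(at least one of several independent events) = 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

def and_gate(*probs):
    """P(all of several independent events) = prod(p_i)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical basic events (annual probabilities, for illustration)
p_corrosion_breach    = 0.02
p_trawler_impact      = 0.005
p_anchor_strike       = 0.001
p_structural_collapse = 0.01

# OR gate: any one of the hazardous events opens the hull
p_opening = or_gate(p_corrosion_breach, p_trawler_impact,
                    p_anchor_strike, p_structural_collapse)

# AND gate: discharge requires an opening AND hazardous cargo remaining
p_cargo_present = 0.8
p_discharge = and_gate(p_opening, p_cargo_present)
```

Because the top-event probability is an explicit function of the basic-event probabilities, uncertainty and sensitivity analyses reduce to re-evaluating the tree over sampled or perturbed inputs, which is the transparency the abstract emphasizes.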

  1. Numerical Simulation of TSP Fault Model

    Institute of Scientific and Technical Information of China (English)

    林义; 刘争平; 王朝令; 肖缔

    2015-01-01

    During tunnel excavation, various geological problems may be encountered, most often faults and weak zones; at present, tunnel geological prediction is mainly carried out with the TSP (tunnel seismic prediction) system. Although TSP technology is widely applied, research on it has so far focused mainly on engineering application cases, with little work based on forward modelling. We use the finite element method to simulate the tunnel seismic wave field, combine wave-field snapshots with time records to study how faults affect the propagation of the tunnel seismic wave field, and invert the time records of a fault-bearing model, obtaining the velocity image and reflection horizons of the numerical model. The data processing results show that the fault position in the velocity image obtained with the default settings of the TSPwin software agrees with the position set in the model, and, according to the reflection horizons, P-wave prediction is more accurate for a layered model with an abnormal velocity zone. The study also shows that the TSP system has good noise immunity. These conclusions from the numerical simulation are verified by processing an engineering case.

  2. On-line Fault Diagnosis in Industrial Processes Using Variable Moving Window and Hidden Markov Model

    Institute of Scientific and Technical Information of China (English)

    周韶园; 谢磊; 王树青

    2005-01-01

    An integrated framework is presented to represent and classify process data for on-line identification of abnormal operating conditions. It is based on pattern recognition principles and consists of a feature extraction step, in which wavelet transform and principal component analysis are used to capture the inherent characteristics of process measurements, followed by a similarity assessment step using a hidden Markov model (HMM) for pattern comparison. In most previous work, a fixed-length moving window was employed to track dynamic data; it often failed to capture enough information for each fault and sometimes even degraded diagnostic performance. A variable moving window, whose length is modified over time, is introduced in this paper, and case studies on the Tennessee Eastman process illustrate the potential of the proposed method.
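The similarity assessment step can be sketched with a small discrete HMM: each fault class is represented by one HMM, and a (variable-length) window of quantized features is assigned to the class whose model yields the highest forward log-likelihood. The two-state models and the 3-symbol alphabet below are illustrative, not the paper's parameters.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return loglik

# Two illustrative 2-state HMMs over a 3-symbol alphabet.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B_normal = np.array([[0.8, 0.15, 0.05], [0.6, 0.3, 0.1]])   # favors symbol 0
B_fault  = np.array([[0.1, 0.2, 0.7],  [0.2, 0.3, 0.5]])    # favors symbol 2

window = [2, 2, 1, 2, 2]   # quantized features from the current moving window
scores = {name: forward_loglik(window, pi, A, B)
          for name, B in [("normal", B_normal), ("fault", B_fault)]}
print(max(scores, key=scores.get))   # -> fault
```

With a variable window, only the length of `window` changes between evaluations; the per-class scoring is unchanged.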

  3. Fault-Tolerant Technique in the Cluster Computation of the Digital Watershed Model

    Institute of Scientific and Technical Information of China (English)

    SHANG Yizi; WU Baosheng; LI Tiejian; FANG Shenguang

    2007-01-01

    This paper describes a parallel computing platform built on existing facilities for the digital watershed model. A distributed multi-layered structure is applied to the computer cluster system, and MPI-2 is adopted as a mature parallel programming standard. An agent is introduced that enables multi-level fault tolerance in software development, and a communication protocol based on a checkpointing and rollback-recovery mechanism supports transaction reprocessing. Compared with a conventional platform, the new system makes better use of computing resources. Experimental results show that the speedup of the platform is almost four times that of the conventional one, demonstrating the high efficiency and good performance of the new approach.
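The checkpointing and rollback-recovery idea can be sketched as follows. The file name and state layout are illustrative, not from the paper: worker state is saved after each completed step, and after a failure the computation resumes from the last checkpoint rather than restarting from scratch.

```python
import os
import pickle
import tempfile

CKPT = os.path.join(tempfile.gettempdir(), "watershed_ckpt.pkl")
if os.path.exists(CKPT):
    os.remove(CKPT)                       # start from a clean state

def save_checkpoint(state):
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "accum": 0}        # no checkpoint: start from scratch

def run(n_steps, fail_at=None):
    state = load_checkpoint()
    while state["step"] < n_steps:
        if fail_at is not None and state["step"] == fail_at:
            raise RuntimeError("simulated node failure")
        state["accum"] += state["step"]   # one unit of model computation
        state["step"] += 1
        save_checkpoint(state)            # checkpoint after each completed step
    return state["accum"]

try:
    run(10, fail_at=6)                    # crashes mid-run at step 6...
except RuntimeError:
    pass
print(run(10))                            # ...then resumes from the checkpoint: 45
```

In a real MPI-2 deployment the agent would checkpoint per process and coordinate rollback across ranks; the single-process sketch only shows the save/restore contract.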

  4. Analogue Modeling of Oblique Convergent Strike-Slip Faulting and Application to The Seram Island, Eastern Indonesia

    Directory of Open Access Journals (Sweden)

    Benyamin Sapiie

    2014-12-01

    DOI: 10.17014/ijog.v1i3.189. The sandbox experiment is a type of analogue modeling in the geological sciences whose main purpose is to simulate the deformation style and structural evolution of a sedimentary basin. Sandbox modeling is an effective way to model physically and evaluate the complex deformation of sedimentary rocks. The main purpose of this paper is to evaluate the structural geometry and deformation history of oblique convergent deformation, using an integrated analogue sandbox modeling technique applied to the deformation of the Seram Fold-Thrust Belt (SFTB) in the Seram Island, Eastern Indonesia. Oblique convergent strike-slip deformation notoriously generates areas of structurally complex geometry and pattern, resulting from the role of various local parameters that control stress distributions; a special technique is therefore needed to understand such problems, in particular to relate 3D fault geometry to its evolution. The results of the four modeling settings (Cases 1 to 4) indicate that two modeling variables clearly affected the sandbox results: lithological variation (mainly the stratigraphy of Seram Island) and pre-existing basement fault geometry (basement configuration). Lithological variation mainly affected the total number of faults developed, whereas pre-existing basement fault geometry strongly influenced the end results, particularly fault style and pattern, as demonstrated in the Case 4 modeling. The study concludes that deformation in the Seram Island is best described by an oblique convergent strike-slip (transpression) stress system.

  6. Causal Models for Risk Management

    Directory of Open Access Journals (Sweden)

    Neysis Hernández Díaz

    2013-12-01

    This work surveys how risk management is handled in the world's major project management schools. The project management tools in use worldwide highlight the need to redefine risk management processes. Based on the information obtained, the use of causal models for risk analysis is proposed: starting from project or company information, they state the risks and their influence on costs, human capital, and project requirements, and detect the harm caused by tasks that do not contribute to the development of the project. A study is presented on the use of causal models as techniques for representing causal knowledge, among them Fuzzy Cognitive Maps (FCM) and Bayesian networks; FCM proves the most favorable technique because it allows modeling the risk information without requiring an itemized knowledge base.
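A Fuzzy Cognitive Map of the kind the abstract favors can be sketched in a few lines: a signed causal weight matrix links risk concepts, and concept activations are iterated through a squashing function until they settle. The concepts and weights below are illustrative, not from the paper.

```python
import numpy as np

def fcm_step(a, W, lam=1.0):
    """One FCM update: a' = sigmoid(a + a @ W)."""
    x = a + a @ W
    return 1.0 / (1.0 + np.exp(-lam * x))

concepts = ["staff turnover", "schedule pressure", "defect rate", "cost overrun"]
# W[i, j] = causal influence of concept i on concept j, in [-1, 1].
W = np.array([
    [0.0, 0.4, 0.5, 0.3],   # turnover raises pressure, defects, cost
    [0.0, 0.0, 0.6, 0.4],   # pressure raises defects and cost
    [0.0, 0.0, 0.0, 0.7],   # defects raise cost
    [0.0, 0.0, 0.0, 0.0],
])

a = np.array([0.8, 0.2, 0.1, 0.0])   # initial scenario: high staff turnover
for _ in range(20):                   # iterate toward a fixed point
    a = fcm_step(a, W)
print(dict(zip(concepts, a.round(2))))
```

Reading off the settled activations shows which downstream risks (here, cost overrun) a scenario drives hardest, without needing an itemized knowledge base.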

  7. The seismogenic Gole Larghe Fault Zone (Italian Southern Alps): quantitative 3D characterization of the fault/fracture network, mapping of evidences of fluid-rock interaction, and modelling of the hydraulic structure through the seismic cycle

    Science.gov (United States)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2016-12-01

    The Gole Larghe Fault Zone (GLFZ) was exhumed from 8 km depth, where it was characterized by seismic activity (pseudotachylytes) and hydrous fluid flow (alteration halos and precipitation of hydrothermal minerals in veins and cataclasites). Thanks to glacier-polished outcrops exposing the 400 m-thick fault zone over a continuous area > 1.5 km2, the fault zone architecture has been quantitatively described in unprecedented detail, providing a rich dataset for generating 3D Discrete Fracture Network (DFN) models and simulating the fault zone's hydraulic properties. The fault and fracture network has been characterized by combining > 2 km of scanlines with semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain robust probability density functions for the parameters of fault and fracture sets: orientation, fracture intensity and density, spacing, persistence, length, thickness/aperture, and termination. The spatial distribution of fractures (random, clustered, anticlustered…) has been characterized with geostatistics. Evidence of fluid/rock interaction (alteration halos, hydrothermal veins, etc.) has been mapped on the same outcrops, revealing sectors of the fault zone strongly impacted by fluid/rock interaction, and others completely unaffected, separated by convolute infiltration fronts. Field and microstructural evidence revealed that higher permeability was obtained in the syn- to early post-seismic period, when fractures were (re)opened by off-fault deformation. We have developed a parametric hydraulic model of the GLFZ and calibrated it, varying the fraction of faults/fractures that were open in the post-seismic period, with the goal of obtaining realistic fluid flow and permeability values and a flow pattern consistent with the observed alteration/mineralization pattern. The fraction of open fractures is very close to the percolation threshold of the DFN, and the permeability tensor is strongly anisotropic.
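The percolation idea behind the DFN calibration can be illustrated with a toy 2D model: fractures are random line segments, only a fraction of them is "open", and flow requires a connected open cluster spanning the domain. The geometry and parameters are hypothetical, not the GLFZ dataset.

```python
import math
import random

def segments_intersect(s1, s2):
    """Proper crossing test for two segments (touching endpoints excluded)."""
    (ax, ay), (bx, by) = s1
    (cx, cy), (dx, dy) = s2
    def cross(ox, oy, px, py, qx, qy):
        return (px - ox) * (qy - oy) - (py - oy) * (qx - ox)
    d1 = cross(cx, cy, dx, dy, ax, ay)
    d2 = cross(cx, cy, dx, dy, bx, by)
    d3 = cross(ax, ay, bx, by, cx, cy)
    d4 = cross(ax, ay, bx, by, dx, dy)
    return d1 * d2 < 0 and d3 * d4 < 0

def spans_domain(open_fracs):
    """True if an open cluster connects x=0 to x=1 (union-find over segments)."""
    n = len(open_fracs)
    parent = list(range(n + 2))           # two extra nodes: left/right walls
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    LEFT, RIGHT = n, n + 1
    for i, s in enumerate(open_fracs):
        xs = (s[0][0], s[1][0])
        if min(xs) <= 0.0:
            union(i, LEFT)
        if max(xs) >= 1.0:
            union(i, RIGHT)
        for j in range(i):
            if segments_intersect(s, open_fracs[j]):
                union(i, j)
    return find(LEFT) == find(RIGHT)

# Random fracture set in a unit square, then sweep the open fraction.
random.seed(0)
fracs = []
for _ in range(300):
    x, y, ang = random.uniform(-0.1, 1.1), random.random(), random.uniform(0, math.pi)
    dx, dy = 0.15 * math.cos(ang), 0.15 * math.sin(ang)
    fracs.append(((x - dx, y - dy), (x + dx, y + dy)))

for frac_open in (0.2, 0.5, 0.9):
    sample = random.sample(fracs, int(frac_open * len(fracs)))
    print(frac_open, spans_domain(sample))
```

Near the percolation threshold, small changes in the open fraction flip the spanning result, which is why the calibrated bulk permeability is so sensitive to it.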

  8. A 3-D velocity model for earthquake location from combined geological and geophysical data: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    Science.gov (United States)

    Latorre, Diana; Lupattelli, Andrea; Mirabella, Francesco; Trippetta, Fabio; Valoroso, Luisa; Lomax, Anthony; Di Stefano, Raffaele; Collettini, Cristiano; Chiaraluce, Lauro

    2014-05-01

    Accurate hypocenter location at the crustal scale strongly depends on our knowledge of the 3D velocity structure. The integration of geological and geophysical data, when available, should contribute to a reliable seismic velocity model in order to guarantee high-quality earthquake locations as well as their consistency with the geological structure. Here we present a 3D, P- and S-wave velocity model of the Upper Tiber valley region (Northern Apennines) retrieved by combining an extremely robust dataset of surface and sub-surface geological data (seismic reflection profiles and boreholes), in situ and laboratory velocity measurements, and earthquake data. The study area is a portion of the Apennine belt undergoing active extension, where a set of high-angle normal faults detaches on the Altotiberina low-angle normal fault (ATF). Since 2010, this area has hosted a scientific infrastructure (the Alto Tiberina Near Fault Observatory, TABOO; http://taboo.rm.ingv.it/), consisting of a dense array of multi-sensor stations, devoted to studying the earthquake preparatory phase and the deformation processes along the ATF fault system. The proposed 3D velocity model is a layered model in which irregularly shaped surfaces define the boundaries between the main lithological units. The model has been constructed by interpolating depth-converted seismic horizons interpreted along 40 seismic reflection profiles (down to 4 s two-way travel time) that have been calibrated with 6 deep boreholes (down to 5 km depth) and constrained by detailed geological maps and structural survey data. The layers of the model are characterized by similar rock types and seismic velocity properties. The P- and S-wave velocities for each layer have been derived from velocity measurements coming from both boreholes (sonic logs) and the laboratory, where measurements have been performed on analogous natural samples at increasing confining pressure in order to simulate crustal conditions.
In order to test the 3D velocity
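The core lookup in such a layered model can be sketched simply: each layer is bounded by an interface depth and carries its own P- and S-wave velocities. The depths and velocities below are illustrative placeholders, not the TABOO model values.

```python
import bisect

# (bottom-interface depth in km, Vp in km/s, Vs in km/s), shallow to deep.
layers = [
    (1.0, 3.0, 1.6),           # e.g. post-orogenic sediments
    (4.0, 5.4, 3.0),           # e.g. carbonate multilayer
    (9.0, 6.1, 3.5),           # e.g. evaporites / phyllitic basement
    (float("inf"), 6.8, 3.9),  # deep crust
]

def velocity_at(depth_km):
    """Return (Vp, Vs) for a depth by locating the enclosing layer."""
    bottoms = [b for b, _, _ in layers]
    i = bisect.bisect_left(bottoms, depth_km)
    _, vp, vs = layers[i]
    return vp, vs

print(velocity_at(0.5))   # -> (3.0, 1.6)
print(velocity_at(6.0))   # -> (6.1, 3.5)
```

A real location code would query such a model along candidate ray paths; with irregular 3D interfaces the lookup becomes a surface-intersection test rather than a 1D bisect, but the layer-to-velocity contract is the same.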

  9. Reliability Growth Modeling and Optimal Release Policy Under Fuzzy Environment of an N-version Programming System Incorporating the Effect of Fault Removal Efficiency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Failure of a safety-critical system can lead to big losses. Very high software reliability is required for automating systems such as aircraft controllers and nuclear reactor controller software. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using schemes such as fault recovery (the recovery block scheme), fault masking (N-version programming (NVP)), or a combination of both (the hybrid scheme). Such software incorporates the ability of the system to survive even a failure. Many researchers in the field of software engineering have done excellent work on the reliability of fault-tolerant systems, but most of them consider stable system reliability; few attempts have been made in reliability modeling to study the reliability growth of an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In that model, a proportion of the number of failures is taken as a measure of fault generation, whereas a more appropriate measure of fault generation is the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effects of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide measures of debugging effectiveness and of the additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of the software to improve its performance in terms of competition and cost. In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss a fuzzy optimization technique for solving the problem, with a numerical illustration.
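The release-time trade-off behind such models can be sketched with a simplified exponential SRGM that includes a fault removal efficiency p. This is a generic textbook-style cost model, not the paper's fuzzy formulation, and all parameter values are illustrative.

```python
import math

a, b, p = 100.0, 0.05, 0.9     # initial faults, detection rate, removal efficiency
c_test, c_field, c_time = 1.0, 5.0, 0.5   # cost per fault fixed in test / in the
                                          # field, and per unit of testing time

def m(t):
    """Expected faults removed by testing time t (exponential SRGM)."""
    return a * (1.0 - math.exp(-p * b * t))

def total_cost(T):
    """Fixing during testing + fixing in the field + testing effort."""
    return c_test * m(T) + c_field * (a - m(T)) + c_time * T

# Crude numeric search for the optimal release time T*: releasing earlier
# leaves more (expensive) field faults; releasing later costs testing time.
T_star = min((total_cost(T), T) for T in range(1, 500))[1]
print(T_star, round(total_cost(T_star), 1))
```

In the paper's setting the cost coefficients become fuzzy numbers, so the crisp minimum above is replaced by a fuzzy optimization, but the shape of the trade-off is the same.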

  10. Product Knowledge Modelling and Management

    DEFF Research Database (Denmark)

    Zhang, Y.; MacCallum, K. J.; Duffy, Alex

    1996-01-01

    The term Product Knowledge is used to refer to two related but distinct concepts: the knowledge of a specific product (Specific Product Knowledge) and the knowledge of a product domain (Product Domain Knowledge). Modelling and managing Product Knowledge is an essential part of carrying out design… function-oriented design. Both Specific Product Knowledge and Product Domain Knowledge are modelled at two levels, a meta-model and an information level. Following that, a computer-based scheme to manage the proposed product knowledge models within a dynamically changing environment is presented… A scheme is presented in this paper to model, i.e. classify, structure and formalise, the product knowledge for the purpose of supporting function-oriented design. The product design specification and four types of required attributes of a specific product have been identified to form the Specific
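The two-level split the abstract describes can be sketched as follows. All class and attribute names are assumptions for illustration, not the paper's scheme: Product Domain Knowledge acts as the meta-model defining which attributes a product class must carry, while Specific Product Knowledge holds the information-level values for one concrete design.

```python
from dataclasses import dataclass, field

@dataclass
class ProductDomainKnowledge:
    """Meta-model level: attributes that products in this domain must define."""
    domain: str
    required_attributes: list

    def validate(self, product):
        """Return the required attributes the product has not yet specified."""
        return [attr for attr in self.required_attributes
                if attr not in product.attributes]

@dataclass
class SpecificProductKnowledge:
    """Information level: attribute values for one concrete product."""
    name: str
    attributes: dict = field(default_factory=dict)

pumps = ProductDomainKnowledge("centrifugal pumps",
                               ["flow_rate", "head", "power", "material"])
p1 = SpecificProductKnowledge("P-100",
                              {"flow_rate": 40, "head": 25, "power": 5.5})
print(pumps.validate(p1))   # -> ['material']
```

Managing such models in a dynamically changing environment then amounts to keeping the domain-level attribute lists and the product-level values consistent as both evolve.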