WorldWideScience

Sample records for fault model aliasing

  1. Method of anti-aliasing with the use of the new pixel model

    Science.gov (United States)

    Romanyuk, Olexander N.; Pavlov, Sergii V.; Melnyk, Olexander V.; Romanyuk, Sergii O.; Smolarz, Andrzej; Bazarova, Madina

    2015-12-01

    The paper proposes additional evaluation functions that mark the area of the pixel segment cut off by a straight line, which is used to determine the color intensity of the pixel. For anti-aliasing purposes a twelve-angle (dodecagonal) pixel model is suggested. The additional evaluation functions used to identify the pixel color intensity can be calculated independently. A device structure is proposed for the hardware implementation of anti-aliasing.
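
The record does not give the evaluation functions themselves; as a rough illustration of the underlying idea (setting a pixel's colour intensity from the area of the pixel segment cut off by a straight edge), here is a minimal Python sketch that estimates that coverage fraction by supersampling a square pixel against a half-plane. The twelve-angle pixel model and the hardware structure from the paper are not reproduced, and the function names are illustrative only.

```python
import numpy as np

def coverage_fraction(a, b, c, n=16):
    """Fraction of the unit pixel [0,1]x[0,1] lying on the side
    a*x + b*y + c <= 0 of a straight edge, estimated from an n x n
    ordered grid of sub-pixel samples."""
    s = (np.arange(n) + 0.5) / n            # sub-sample centres
    xs, ys = np.meshgrid(s, s)
    inside = (a * xs + b * ys + c) <= 0.0
    return inside.mean()

# Example: an edge running diagonally through the pixel (x + y <= 1)
alpha = coverage_fraction(1.0, 1.0, -1.0)
print(f"coverage used as colour intensity: {alpha:.3f}")   # ~0.5
```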

  2. Modeling of Present-Day Atmosphere and Ocean Non-Tidal De-Aliasing Errors for Future Gravity Mission Simulations

    Science.gov (United States)

    Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.

    2015-12-01

    A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995 -- 2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10 -- 30, 3 -- 10, and 1 -- 3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation. Reference: Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.

  3. Aliasing as noise - A quantitative and qualitative assessment

    Science.gov (United States)

    Park, Stephen K.; Hazra, Rajeeb

    1993-01-01

    We present a model-based argument that, for the purposes of system design and digital image processing, aliasing should be treated as signal-dependent additive noise. By using a computational simulation based on this model, we process (high resolution images of) natural scenes in a way which enables the 'aliased component' of the reconstructed image to be isolated unambiguously. We demonstrate that our model-based argument leads naturally to system design metrics which quantify the extent of aliasing. And, by illustrating several aliased component images, we provide a qualitative assessment of aliasing as noise.
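
The paper's simulation pipeline operates on high-resolution images of natural scenes; the 1-D numpy sketch below only illustrates the general idea of isolating an "aliased component": downsample a signal once with an ideal low-pass pre-filter and once without, and treat the difference as the aliasing, which then behaves like signal-dependent additive noise. The signal, downsampling factor and filter choices are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 4                      # high-resolution length, downsampling factor
x = rng.standard_normal(N)
x = np.convolve(x, np.ones(9) / 9, mode="same")   # a smooth-ish "scene"

X = np.fft.rfft(x)
cut = N // (2 * M)                  # index of the new Nyquist bin after downsampling

# Band-limited path: remove everything above the new Nyquist before decimating
X_bl = X.copy()
X_bl[cut:] = 0.0
x_bl = np.fft.irfft(X_bl, N)[::M]   # alias-free samples

x_al = x[::M]                       # naive decimation: out-of-band energy folds in

aliased_component = x_al - x_bl     # the signal-dependent "noise" term
print("aliasing energy fraction:",
      np.sum(aliased_component**2) / np.sum(x_al**2))
```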

  4. Audible Aliasing Distortion in Digital Audio Synthesis

    Directory of Open Access Journals (Sweden)

    J. Schimmel

    2012-04-01

    This paper deals with aliasing distortion in digital audio signal synthesis of classic periodic waveforms with infinite Fourier series, for electronic musical instruments. When these waveforms are generated in the digital domain, aliasing appears due to their unlimited bandwidth. There are several techniques for the synthesis of these signals that have been designed to avoid or reduce the aliasing distortion; however, these techniques have high computing demands. One can say that today's computers have enough computing power to use these methods. However, we have to realize that today's computer-aided music production requires tens of multi-timbre voices generated simultaneously by software synthesizers, and most of the computing power must be reserved for the hard-disc recording subsystem and real-time audio processing of many audio channels with a lot of audio effects. Trivially generated classic analog synthesizer waveforms are therefore still effective for sound synthesis. We cannot avoid the aliasing distortion, but the spectral components produced by the aliasing can be masked by harmonic components and thus made inaudible if a sufficient oversampling ratio is used. This paper deals with the assessment of audible aliasing distortion with the help of a psychoacoustic model of simultaneous masking and compares the computing demands of trivial generation using oversampling with those of other methods.
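
As a small side illustration of why oversampling helps here, the sketch below folds the harmonic frequencies of a trivially generated sawtooth back into the base band for two sampling rates: at the higher rate the folded partials land far above the audio band, where they are absent or easily masked. It is not the psychoacoustic masking model used in the paper; the note frequency, harmonic count and 20 kHz bound are illustrative assumptions.

```python
import numpy as np

def folded_frequencies(f0, fs, n_harmonics=200):
    """Frequencies at which the harmonics k*f0 of an ideal sawtooth
    appear after sampling at fs (they fold about multiples of fs/2)."""
    k = np.arange(1, n_harmonics + 1)
    f = k * f0
    folded = np.abs(((f + fs / 2) % fs) - fs / 2)
    return f, folded

f0 = 1567.98                       # G6, a fairly high note (Hz)
for fs in (44100.0, 8 * 44100.0):
    f, folded = folded_frequencies(f0, fs)
    aliased = folded[f > fs / 2]              # partials that wrapped around
    in_band = aliased[aliased < 20000.0]      # crude audibility bound only
    print(f"fs = {fs / 1000:6.1f} kHz: {in_band.size} aliased partials below 20 kHz")
```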

  5. De-Aliasing in Satellite Gravimetry - revisited

    Science.gov (United States)

    Murböck, Michael; Gruber, Thomas; Daras, Ilias; Pail, Roland

    2016-04-01

    Temporal aliasing of high-frequency signals is a dominant error source in satellite gravimetry. The reduction of temporal aliasing errors is required in order to be sensitive to the global time-variable gravity signal dominated by continental hydrology and ice. Within the gravity field processing of the Gravity Recovery And Climate Experiment (GRACE) and its follow-on mission, temporal aliasing errors are often reduced by subtracting atmospheric and oceanic induced high-frequency signal content on observation level. This is done by using independent tidal and non-tidal models. Hereafter this is called the classical de-aliasing approach. In addition to the classical approach, the co-estimation of high-frequency, low-resolution gravity field parameters proposed by Wiese et al. (2011) further reduces temporal aliasing errors. In order to be independent of external models, the optimum de-aliasing procedure for temporal gravity retrieval would dispense with the classical approach. The philosophy then changes: the high-frequency signal content is not reduced any more but has to be observed. To improve the observability of high-frequency signals on a global scale, a second pair is required in addition to a single GRACE-like pair. On the one hand the observation geometry is improved by having a second pair in an inclined orbit besides the polar pair. On the other hand this automatically doubles the temporal resolution of the retrieved gravity fields. In this study we discuss the different aspects of classical de-aliasing versus observation of high-frequency signals based on full-scale closed-loop simulations. We focus on the analysis and validation of high-frequency gravity field estimates from single and double pairs and the capability of this approach to replace the classical de-aliasing approach, at least for the non-tidal part. Wiese D, Visser P, Nerem R (2011) Estimating low resolution gravity fields at short time intervals to reduce temporal aliasing errors. Advances in Space

  6. Aliasing-tolerant color Doppler quantification of regurgitant jets.

    Science.gov (United States)

    Stewart, S F

    1998-07-01

    Conservation of momentum transfer in regurgitant cardiac jets can be used to calculate the flow rate from color Doppler velocities. In this study, turbulent jets were simulated by finite elements; pseudocolor Doppler images were interpolated from the computations, with aliasing introduced artificially. Jets were also imaged by color Doppler in an in vitro flow system. To suppress aliasing errors, jet velocities were fitted iteratively to a fluid mechanical model constrained to match the orifice velocity (measured without aliasing by continuous-wave Doppler). At each iteration, the model was used to detect aliased velocities, which were excluded during the next iteration. Iteration continued until the flow rate calculated by the model and number of calculated nonaliased pixels were unchanged. The good correlations between measured and calculated flow rates in the experimental (R2 = 0.933) and computational studies (R2 = 0.990) suggest that this may be a clinically useful approach even in aliased images. Published by Elsevier Science Inc.
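
The study fits the velocities to a fluid-mechanical jet model constrained by the continuous-wave orifice velocity; the sketch below mimics only the iteration logic with a generic least-squares stand-in: fit, flag samples whose misfit exceeds the Nyquist velocity as aliased, exclude them, and repeat until the set of non-aliased samples stops changing. The velocity profile, the cubic design matrix and all numbers are illustrative assumptions, not from the paper.

```python
import numpy as np

def fit_excluding_aliases(x, v_meas, design, v_nyq, max_iter=20):
    """Iteratively fit v ~ design(x) @ coeffs, excluding samples whose
    residual exceeds v_nyq (treated as aliased) from the next fit."""
    A = design(x)
    keep = np.ones(len(v_meas), dtype=bool)
    coeffs = np.zeros(A.shape[1])
    for _ in range(max_iter):
        coeffs, *_ = np.linalg.lstsq(A[keep], v_meas[keep], rcond=None)
        new_keep = np.abs(v_meas - A @ coeffs) < v_nyq   # aliased points jump by ~2*v_nyq
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return coeffs, keep

# Toy example: a decaying jet-like velocity profile with wrap-around aliasing
x = np.linspace(0.0, 1.0, 200)
v_true = 2.0 * np.exp(-3.0 * x)                      # m/s
v_nyq = 1.0                                          # colour-Doppler Nyquist velocity
v_meas = ((v_true + v_nyq) % (2 * v_nyq)) - v_nyq    # aliased measurement
design = lambda x: np.vander(x, 4)                   # cubic polynomial stand-in model
coeffs, keep = fit_excluding_aliases(x, v_meas, design, v_nyq)
print(f"{keep.sum()} of {len(x)} samples kept as non-aliased")
```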

  7. Synchronous Pendulums and Aliasing

    Directory of Open Access Journals (Sweden)

    Jia-Cherng Chong

    2014-12-01

    We present the construction, mathematical relation and simulation of a 16-bob pendulum wave machine (PWM). Following the small-angle approximation, the PWM is a useful teaching aid to demonstrate free oscillations and periodic motion; it can also illustrate effects of aliasing in physics and engineering. The duration of the pattern cycle is adjusted with a set of pendulum lengths dictated by a non-linear function, followed by an outline of the PWM design aspects.
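
The record does not spell out the non-linear length function; a common PWM design (assumed here, not necessarily the authors') gives bob i exactly N+i oscillations per pattern period T, so each length follows from the small-angle pendulum period. The sketch also shows the stroboscopic aliasing idea: sampled once per pattern period, every bob appears frozen.

```python
import numpy as np

g = 9.81          # m/s^2
T = 60.0          # pattern period in seconds (assumed)
N = 50            # slowest bob completes N oscillations per period (assumed)
n_bobs = 16

i = np.arange(n_bobs)
periods = T / (N + i)                        # bob i has period T / (N + i)
lengths = g * (periods / (2 * np.pi)) ** 2   # L = g * (T_i / (2*pi))^2

print("pendulum lengths (cm):", np.round(lengths * 100, 1))

# Stroboscopic aliasing: sample each bob's phase once per pattern period T.
t = np.arange(5) * T                         # frame times
phases = 2 * np.pi * np.outer(t, (N + i) / T)
print("bobs appear frozen at these frames:", np.allclose(np.cos(phases), 1.0))
```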

  8. Correcting interperspective aliasing in autostereoscopic displays.

    Science.gov (United States)

    Moller, Christian N; Travis, Adrian R L

    2005-01-01

    An image presented on an autostereoscopic system should not contain discontinuities between adjacent views. A viewer should experience a continuous scene when moving from one view to the next. If corresponding points in two perspectives do not spatially abut, a viewer will experience jumps in the scene. This is known as interperspective aliasing. Interperspective aliasing is caused by object features far away from the stereoscopic screen being too small, which results in visual artifacts. By modeling a 3D point as a defocused image point, we can adapt Fourier analysis to devise a depth-dependent filter kernel that allows filtering of a stereoscopic 3D image. For synthetic 3D data, we use a simpler approach, which is to smear the data by a distance proportional to its depth.

  9. Mathematical modelling on instability of shear fault

    Institute of Scientific and Technical Information of China (English)

    范天佑

    1996-01-01

    A study on the mathematical modelling of fault instability is reported. Fracture mechanics and fracture dynamics form the basis of the discussion, and the method of complex variable functions (including conformal mapping and approximate conformal mapping) is employed; some analytic solutions of the problem are found in closed form. The fault-body concept is emphasized and the characteristic size of the fault body is introduced. The effect of the finite size of the fault body and the effect of the fault propagation speed (especially at high speed) on fault instability are discussed. These results help explain the low stress-drop phenomena observed at earthquake sources.

  10. Adaptive Modeling for Security Infrastructure Fault Response

    Institute of Scientific and Technical Information of China (English)

    CUI Zhong-jie; YAO Shu-ping; HU Chang-zhen

    2008-01-01

    Based on the analysis of inherent limitations in existing security response decision-making systems, a dynamic adaptive model of fault response is presented. Several security fault levels are defined, comprising the basic level, the equipment level and the mechanism level. Fault damage cost is calculated using the analytic hierarchy process. Meanwhile, the model evaluates the impact of different responses upon fault repair and normal operation. Response operation cost and response negative cost are introduced through quantitative calculation. The model makes a comprehensive response decision for a security fault according to three principles (the maximum and minimum principle, the timeliness principle and the acquiescence principle), which ensure that the optimal response countermeasure is selected for different situations. Experimental results show that the proposed model has good self-adaptation ability, timeliness and cost sensitivity.
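
The abstract names the analytic hierarchy process for the fault damage cost but gives no matrices; as a generic illustration only (the criteria and judgments below are hypothetical), the sketch computes AHP priority weights as the principal eigenvector of a pairwise comparison matrix and reports Saaty's consistency ratio.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalised principal eigenvector of the pairwise
    comparison matrix, plus Saaty's consistency ratio CI/RI."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # random-index table (partial)
    return w, ci / ri

# Hypothetical comparison of three damage criteria (illustrative judgments)
A = [[1,     3,     5],
     [1 / 3, 1,     2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```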

  11. Workflow Fault Tree Generation Through Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2014-01-01

    We present a framework for the automated generation of fault trees from models of realworld process workflows, expressed in a formalised subset of the popular Business Process Modelling and Notation (BPMN) language. To capture uncertainty and unreliability in workflows, we extend this formalism...... of the system being modelled. From these calculations, a comprehensive fault tree is generated. Further, we show that annotating the model with rewards (data) allows the expected mean values of reward structures to be calculated at points of failure....

  12. Modeling fault among motorcyclists involved in crashes.

    Science.gov (United States)

    Haque, Md Mazharul; Chin, Hoong Chor; Huang, Helai

    2009-03-01

    Singapore crash statistics from 2001 to 2006 show that the motorcyclist fatality and injury rates per registered vehicle are higher than those of other motor vehicles by 13 and 7 times, respectively. The crash involvement rate of motorcyclists as victims of other road users is also about 43%. The objective of this study is to identify the factors that contribute to the fault of motorcyclists involved in crashes. This is done by using the binary logit model to differentiate between at-fault and not-at-fault cases and the analysis is further categorized by the location of the crashes, i.e., at intersections, on expressways and at non-intersections. A number of explanatory variables representing roadway characteristics, environmental factors, motorcycle descriptions, and rider demographics have been evaluated. Time trend effect shows that not-at-fault crash involvement of motorcyclists has increased with time. The likelihood of night time crashes has also increased for not-at-fault crashes at intersections and expressways. The presence of surveillance cameras is effective in reducing not-at-fault crashes at intersections. Wet-road surfaces increase at-fault crash involvement at non-intersections. At intersections, not-at-fault crash involvement is more likely on single-lane roads or on median lane of multi-lane roads, while on expressways at-fault crash involvement is more likely on the median lane. Roads with higher speed limit have higher at-fault crash involvement and this is also true on expressways. Motorcycles with pillion passengers or with higher engine capacity have higher likelihood of being at-fault in crashes on expressways. Motorcyclists are more likely to be at-fault in collisions involving pedestrians and this effect is higher at night. In multi-vehicle crashes, motorcyclists are more likely to be victims than at-fault. Young and older riders are more likely to be at-fault in crashes than middle-aged group of riders. The findings of this study will help

  13. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang;

    2009-01-01

    Fault-propagation folding has already been the topic of a large number of empirical studies as well as physical and computational model experiments. However, with the newly developed Stress-based Discrete Element Method (SDEM), we have, for the first time, explored computationally the link between self-emerging fault patterns...... and variations in Mohr-Coulomb parameters including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset...... on the master fault. The SDEM modelling enables us to evaluate quantitatively the rate of strain. A high strain rate and a steep gradient indicate the presence of an active fault, whereas a low strain rate and a low gradient indicate no or very low deformation intensity. The strain-rate evolution thus gives...

  14. Mechanical Models of Fault-Related Folding

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were to provide a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs that have graphical user interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  15. Anti-Aliased Rendering of Water Surface

    Institute of Scientific and Technical Information of China (English)

    Xue-Ying Qin; Eihachiro Nakamae; Wei Hua; Yasuo Nagai; Qun-Sheng Peng

    2004-01-01

    Water surface is one of the most important components of landscape scenes. When rendering spacious water scenes, aliasing artifacts appear on the water surface far from the viewpoint. This is because the water surface consists of stochastic water waves which are usually modeled by periodic bump mapping. The incident rays on the water surface are scattered by the bumped waves; we estimate the solid angle of the reflected rays and trace these rays. An image-based accelerating method is adopted so that the contribution of each reflected ray can be quickly obtained without elaborate intersection calculation. We also demonstrate anti-aliased shadows of sunlight and skylight on the water surface. Both the rendered images and animations show excellent effects on the water surface of a reservoir.

  16. Fault Detection under Fuzzy Model Uncertainty

    Institute of Scientific and Technical Information of China (English)

    Marek Kowal; Józef Korbicz

    2007-01-01

    The paper tackles the problem of robust fault detection using Takagi-Sugeno fuzzy models. A model-based strategy is employed to generate residuals in order to make a decision about the state of the process. Unfortunately, such a method is corrupted by model uncertainty due to the fact that in real applications there exists a model-reality mismatch. In order to ensure reliable fault detection the adaptive threshold technique is used to deal with the mentioned problem. The paper focuses also on fuzzy model design procedure. The bounded-error approach is applied to generating the rules for the model using available measurements. The proposed approach is applied to fault detection in the DC laboratory engine.

  17. Simple model of stacking-fault energies

    DEFF Research Database (Denmark)

    Stokbro, Kurt; Jacobsen, Lærke Wedel

    1993-01-01

    A simple model for the energetics of stacking faults in fcc metals is constructed. The model contains third-nearest-neighbor pairwise interactions and a term involving the fourth moment of the electronic density of states. The model is in excellent agreement with recently published local-density calculations of stacking-fault energies, and gives a simple way of understanding the calculated energy contributions from the different atomic layers in the stacking-fault region. The two parameters in the model describe the relative energy contributions of the s and d electrons in the noble and transition metals, and thereby explain the pronounced differences in energetics in these two classes of metals. The model is discussed in the framework of the effective-medium theory where it is possible to find a functional form for the pair potential and relate the contribution associated with the fourth moment...

  18. Modeling Fluid Flow in Faulted Basins

    Directory of Open Access Journals (Sweden)

    Faille I.

    2014-07-01

    This paper presents a basin simulator designed to better take faults into account, either as conduits or as barriers to fluid flow. It computes hydrocarbon generation, fluid flow and heat transfer on the 4D (space and time) geometry obtained by 3D volume restoration. Contrary to classical basin simulators, this calculator does not require a structured mesh based on vertical pillars nor a multi-block structure associated to the fault network. The mesh follows the sediments during the evolution of the basin. It deforms continuously with respect to time to account for sedimentation, erosion, compaction and kinematic displacements. The simulation domain is structured in layers, in order to handle properly the corresponding heterogeneities and to follow the sedimentation processes (thickening of the layers). In each layer, the mesh is unstructured: it may include several types of cells such as tetrahedra, hexahedra, pyramids, prisms, etc. However, a mesh composed mainly of hexahedra is preferred as they are well suited to the layered structure of the basin. Faults are handled as internal boundaries across which the mesh is non-matching. Different models are proposed for fault behavior such as impervious fault, flow across fault or conductive fault. The calculator is based on a cell-centered Finite Volume discretisation, which ensures conservation of physical quantities (mass of fluid, heat) at a discrete level and which accounts properly for heterogeneities. The numerical scheme handles the non-matching meshes and guarantees appropriate connection of cells across faults. Results on a synthetic basin demonstrate the capabilities of this new simulator.

  19. FSN-based fault modelling for fault detection and troubleshooting in CANDU stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, E., E-mail: elnara.nasimi@brucepower.com [Bruce Power LLP, Tiverton, Ontario (Canada)]; Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)]

    2013-07-01

    An accurate fault modeling and troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities of current and future generation of CANDU designs. This paper presents fault modeling approach using Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

  20. Stator Fault Modelling of Induction Motors

    DEFF Research Database (Denmark)

    Thomsen, Jesper Sandberg; Kallesøe, Carsten

    2006-01-01

    In this paper a model of an induction motor affected by stator faults is presented. Two different types of faults are considered, these are: disconnection of a supply phase, and inter-turn and turn-turn short circuits inside the stator. The output of the derived model is compared to real measurements from a specially designed induction motor. With this motor it is possible to simulate both terminal disconnections, inter-turn and turn-turn short circuits. The results show good agreement between the measurements and the simulated signals obtained from the model. In the tests focus is on the phase currents and the star point voltage as these signals are often used for fault detection.

  1. Spatial aliasing and distortion of energy distribution in the wave vector domain under multi-spacecraft measurements

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2009-08-01

    Aliasing is a general problem in the analysis of any measurements that make sampling at discrete points. Sampling in the spatial domain results in a periodic pattern of spectra in the wave vector domain. This effect is called spatial aliasing, and it is of particular importance for multi-spacecraft measurements in space. We first present the theoretical background of aliasing problems in the frequency domain and generalize it to the wave vector domain, and then present model calculations of spatial aliasing. The model calculations are performed for various configurations of the reciprocal vectors and energy spectra or distribution that are placed at different positions in the wave vector domain, and exhibit two effects on aliasing. One is weak aliasing, in which the true spectrum is distorted because of non-uniform aliasing contributions in the Brillouin zone. It is demonstrated that the energy distribution becomes elongated in the shortest reciprocal lattice vector direction in the wave vector domain. The other effect is strong aliasing, in which aliases have a significant contribution in the Brillouin zone and the energy distribution shows a false peak. These results give a caveat in multi-spacecraft data analysis in that spectral anisotropy obtained by a measurement has in general two origins: (1) natural and physical origins like anisotropy imposed by a mean magnetic field or a flow direction; and (2) aliasing effects that are imposed by the configuration of the measurement array (or the set of reciprocal vectors). This manuscript also discusses a possible method to estimate aliasing contributions in the Brillouin zone based on the measured spectrum and to correct the spectra for aliasing.
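
As a companion to the reciprocal-vector discussion, the sketch below computes the reciprocal vectors of a four-spacecraft (tetrahedral) array from its relative position vectors, using the usual barycentric construction scaled by 2*pi (an assumption here; the paper's exact convention may differ). The positions are illustrative, not from any mission.

```python
import numpy as np

def reciprocal_vectors(r):
    """Reciprocal vectors q_a of a four-point array:
    q_a = 2*pi * (d_bc x d_bd) / (d_ba . (d_bc x d_bd)),
    where d_xy = r_y - r_x, so that q_a . (r_x - r_b) = 2*pi * delta_xa."""
    r = np.asarray(r, dtype=float)
    q = np.zeros((4, 3))
    for a, b, c, d in [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]:
        dbc, dbd, dba = r[c] - r[b], r[d] - r[b], r[a] - r[b]
        q[a] = 2 * np.pi * np.cross(dbc, dbd) / np.dot(dba, np.cross(dbc, dbd))
    return q

# Illustrative tetrahedron with ~1000 km separations
positions_km = [[0, 0, 0], [1000, 0, 0], [500, 900, 0], [500, 300, 800]]
q = reciprocal_vectors(positions_km)
# The shortest reciprocal vector marks the direction along which the measured
# energy distribution tends to be elongated (the weak-aliasing effect).
print("|q| in rad/km:", np.round(np.linalg.norm(q, axis=1), 5))
```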

  2. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost effectively.

  3. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, they provide the essential and basic guidelines and related contents to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally the main objective of system analysis is to assess the reliability of the systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for the system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to the mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. This document identifies and describes the definitions and the general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, the guidelines for FTA will be able to guide the FTA to the level of capability category II of the ASME PRA standard.
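
The guideline document itself is not reproduced here; the sketch below only shows the elementary FTA arithmetic that such fault trees rest on: combining independent basic-event probabilities through AND and OR gates up to a top event. CCFs, human errors and test/maintenance outages mentioned in the abstract are not modelled, and the mini tree is hypothetical.

```python
from functools import reduce

def and_gate(probs):
    """All inputs must fail (independent basic events)."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """At least one input fails (independent basic events)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical mini fault tree: TOP = OR( AND(pump_A, pump_B), valve_stuck )
pump_a, pump_b, valve_stuck = 1e-2, 1e-2, 1e-4
top = or_gate([and_gate([pump_a, pump_b]), valve_stuck])
print(f"top event probability ~ {top:.2e}")   # ~2.0e-04
```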

  4. Superresolution images reconstructed from aliased images

    Science.gov (United States)

    Vandewalle, Patrick; Susstrunk, Sabine E.; Vetterli, Martin

    2003-06-01

    In this paper, we present a simple method to almost quadruple the spatial resolution of aliased images. From a set of four low resolution, undersampled and shifted images, a new image is constructed with almost twice the resolution in each dimension. The resulting image is aliasing-free. A small aliasing-free part of the frequency domain of the images is used to compute the exact subpixel shifts. When the relative image positions are known, a higher resolution image can be constructed using the Papoulis-Gerchberg algorithm. The proposed method is tested in a simulation where all simulation parameters are well controlled, and where the resulting image can be compared with its original. The algorithm is also applied to real, noisy images from a digital camera. Both experiments show very good results.
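
The paper estimates the relative shifts from a small aliasing-free part of the spectrum and then reconstructs with the Papoulis-Gerchberg algorithm; the sketch below covers only the first step, in 1-D for brevity: estimating a sub-sample shift from the phase slope of the lowest (assumed alias-free) frequency bins. The reconstruction step is omitted and the test signal is synthetic.

```python
import numpy as np

def estimate_shift(a, b, n_low=8):
    """Estimate the (circular) translation of b relative to a from the phase
    slope of the lowest frequency bins of the cross-spectrum."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    k = np.arange(1, n_low + 1)                          # skip the DC bin
    phase = np.unwrap(np.angle(B[k] * np.conj(A[k])))    # = -2*pi*k*shift/N
    slope = np.polyfit(k, phase, 1)[0]
    return -slope * len(a) / (2 * np.pi)

rng = np.random.default_rng(2)
N, true_shift = 256, 3.37
x = np.cumsum(rng.standard_normal(N))
x -= x.mean()
# Apply a non-integer circular shift in the Fourier domain
k_full = np.arange(N // 2 + 1)
y = np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * k_full * true_shift / N), N)
print("estimated shift:", round(estimate_shift(x, y), 3))   # ~3.37
```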

  5. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction. Thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources with the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  6. Physiochemical Evidence of Faulting Processes and Modeling of Fluid in Evolving Fault Systems in Southern California

    Energy Technology Data Exchange (ETDEWEB)

    Boles, James [Professor

    2013-05-24

    Our study targets recent (Plio-Pleistocene) faults and young (Tertiary) petroleum fields in southern California. Faults include the Refugio Fault in the Transverse Ranges, the Ellwood Fault in the Santa Barbara Channel, and most recently the Newport- Inglewood in the Los Angeles Basin. Subsurface core and tubing scale samples, outcrop samples, well logs, reservoir properties, pore pressures, fluid compositions, and published structural-seismic sections have been used to characterize the tectonic/diagenetic history of the faults. As part of the effort to understand the diagenetic processes within these fault zones, we have studied analogous processes of rapid carbonate precipitation (scaling) in petroleum reservoir tubing and manmade tunnels. From this, we have identified geochemical signatures in carbonate that characterize rapid CO2 degassing. These data provide constraints for finite element models that predict fluid pressures, multiphase flow patterns, rates and patterns of deformation, subsurface temperatures and heat flow, and geochemistry associated with large fault systems.

  7. Completing fault models for abductive diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E. [Los Alamos National Lab., NM (United States); Cox, P.T.; Pietrzykowski, T. [Technical Univ., NS (Canada)

    1992-11-05

    In logic-based diagnosis, the consistency-based method is used to determine the possible sets of faulty devices. If the fault models of the devices are incomplete or nondeterministic, then this method does not necessarily yield abductive explanations of system behavior. Such explanations give additional information about faulty behavior and can be used for prediction. Unfortunately, system descriptions for the consistency-based method are often not suitable for abductive diagnosis. Methods for completing the fault models for abductive diagnosis have been suggested informally by Poole and by Cox et al. Here we formalize these methods by introducing a standard form for system descriptions. The properties of these methods are determined in relation to consistency-based diagnosis and compared to other ideas for integrating consistency-based and abductive diagnosis.

  8. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    Science.gov (United States)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
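
The sketch below is a heavily stripped-down, discrete-time version of the idea (the paper uses continuous-time Kalman-Bucy filters and a full bank of failure patterns): one filter assumes no fault, a second appends a constant sensor-bias state, and the cumulative innovation log-likelihood decides which hypothesis explains the data better. The scalar plant and all numbers are illustrative assumptions.

```python
import numpy as np

def kalman_loglik(zs, F, H, Q, R, x0, P0):
    """Run a linear Kalman filter and return the cumulative innovation
    log-likelihood, the usual multiple-model scoring quantity."""
    x, P, ll = x0.copy(), P0.copy(), 0.0
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + R                              # innovation covariance
        e = z - H @ x                                    # innovation
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S)) + e @ np.linalg.solve(S, e))
        K = P @ H.T @ np.linalg.inv(S)                   # update
        x, P = x + K @ e, (np.eye(len(x)) - K @ H) @ P
    return ll

rng = np.random.default_rng(3)
n, bias = 200, 0.8
xs, x_true = [], 0.0
for _ in range(n):                                       # simulate a scalar plant
    x_true = 0.95 * x_true + 0.1 * rng.standard_normal()
    xs.append(x_true)
z = (np.array(xs) + bias + 0.05 * rng.standard_normal(n))[:, None]   # biased sensor

R = np.array([[0.0025]])
# H0: healthy sensor (state = plant state only)
ll0 = kalman_loglik(z, np.array([[0.95]]), np.array([[1.0]]),
                    np.array([[0.01]]), R, np.zeros(1), np.eye(1))
# H1: constant sensor bias appended as a second (nearly static) state
ll1 = kalman_loglik(z, np.array([[0.95, 0.0], [0.0, 1.0]]), np.array([[1.0, 1.0]]),
                    np.diag([0.01, 1e-6]), R, np.zeros(2), np.eye(2))
print("decision:", "sensor bias fault" if ll1 > ll0 else "no fault")
```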

  9. Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.

    Science.gov (United States)

    Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi

    2017-05-01

    When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency-domain aliasing analyses, we carry out a spatial-domain analysis to reveal whether the angular aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing versus non-aliasing regions and angular aliasing removal. Experiments on both synthetic scenes and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.

  10. Faults

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  11. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  12. A Fault-Cored Anticline Boundary Element Model Incorporating the Combined Fault Slip and Buckling Mechanisms

    Directory of Open Access Journals (Sweden)

    Wen-Jeng Huang

    2016-02-01

    We develop a folding boundary element model in a medium containing a fault and elastic layers to show that anticlines growing over slipping reverse faults can be significantly amplified by mechanical layer buckling under horizontal shortening. Previous studies suggested that folds over blind reverse faults grow primarily during deformation increments associated with slips on the fault during and immediately after earthquakes. Under this assumption, the potential for earthquakes on blind faults can be determined directly from fold geometry, because the amount of slip on the fault can be estimated from the fold geometry using the solution for a dislocation in an elastic half-space. Studies that assume folds grow solely by slip on a fault may therefore significantly overestimate fault slip. Our boundary element technique demonstrates that the fold amplitude produced in a medium containing a fault and elastic layers with free slip and subjected to layer-parallel shortening can grow to more than twice the fold amplitude produced in homogeneous media without mechanical layering under the same amount of shortening. In addition, the fold wavelengths produced by the combined fault slip and buckling mechanisms may be narrower than folds produced by fault slip in an elastic half-space by a factor of two. We also show that the subsurface fold geometry of the Kettleman Hills Anticline in Central California inferred from a seismic reflection image is consistent with a model that incorporates layer buckling over a dipping, blind reverse fault, and that the coseismic uplift pattern produced during a 1985 earthquake centered over the anticline forelimb is predicted by the model.

  13. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...... is capable of detecting four different faults in the mechanical and hydraulic parts of the pump.

  14. RAY TRACING RENDERING USING FRAGMENT ANTI-ALIASING

    Directory of Open Access Journals (Sweden)

    Febriliyan Samopa

    2008-07-01

    Rendering is generating surface and three-dimensional effects on an object displayed on a monitor screen. Ray tracing, as a rendering method that traces a ray for each image pixel, has a drawback, that is, aliasing (the jaggies effect). There are some methods for performing anti-aliasing. One of those methods is OGSS (Ordered Grid Super Sampling). OGSS is able to perform anti-aliasing well. However, this method requires more computation time since the sampling of all pixels in the image will be increased. Fragment Anti-Aliasing (FAA) is a new alternative method that can cope with this drawback. FAA checks the image when rendering a scene. The jaggies effect only occurs at curved and gradient objects; therefore, only this part of the object will undergo sampling magnification. After this sampling magnification and the pixel values are computed, downsampling is performed to retrieve the original pixel values. Experimental results show that the software can implement ray tracing well in order to form images, and it can implement the FAA and OGSS techniques to perform anti-aliasing. In general, rendering using FAA is faster than using OGSS.
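
FAA's edge detection is not reproduced here; the short sketch below only shows the OGSS baseline the article compares against: shade an n x n ordered grid of sub-samples per pixel and box-filter them back down to one value per pixel. The disc "scene" is a stand-in for a real ray tracer.

```python
import numpy as np

def shade(u, v):
    """Toy scene: 1 inside a disc, 0 outside; a worst case for jaggies."""
    return ((u - 0.5) ** 2 + (v - 0.5) ** 2 < 0.16).astype(float)

def ogss(width, height, factor):
    """Ordered Grid Super Sampling: evaluate factor x factor sub-samples per
    pixel on a regular grid, then average each block down to one pixel."""
    n = factor
    u = (np.arange(width * n) + 0.5) / (width * n)
    v = (np.arange(height * n) + 0.5) / (height * n)
    img_hi = shade(u[None, :], v[:, None])
    return img_hi.reshape(height, n, width, n).mean(axis=(1, 3))

aliased = ogss(32, 32, 1)    # one sample per pixel: hard 0/1 edges (jaggies)
smooth = ogss(32, 32, 4)     # 4x4 OGSS: fractional coverage along the edge
print("distinct edge levels:", np.unique(aliased).size, "->", np.unique(smooth).size)
```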

  15. Bond graph model-based fault diagnosis of hybrid systems

    CERN Document Server

    Borutzky, Wolfgang

    2015-01-01

    This book presents a bond graph model-based approach to fault diagnosis in mechatronic systems appropriately represented by a hybrid model. The book begins by giving a survey of the fundamentals of fault diagnosis and failure prognosis, then recalls state-of-art developments referring to latest publications, and goes on to discuss various bond graph representations of hybrid system models, equations formulation for switched systems, and simulation of their dynamic behavior. The structured text: • focuses on bond graph model-based fault detection and isolation in hybrid systems; • addresses isolation of multiple parametric faults in hybrid systems; • considers system mode identification; • provides a number of elaborated case studies that consider fault scenarios for switched power electronic systems commonly used in a variety of applications; and • indicates that bond graph modelling can also be used for failure prognosis. In order to facilitate the understanding of fault diagnosis and the presented...

  16. Simple model for fault-charged hydrothermal systems

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Miller, C.W.; Benson, S.M.

    1981-06-01

    A two-dimensional transient model of fault-charged hydrothermal systems has been developed. The model can be used to analyze temperature data from fault-charged hydrothermal systems, estimate the recharge rate from the fault, and determine how long the system has been under natural development. The model can also be used for theoretical studies of the development of fault-controlled hydrothermal systems. The model has been tentatively applied to the low-temperature hydrothermal system at Susanville, California. A reasonable match was obtained with the observed temperature data, and a hot water recharge rate of 9 x 10^-6 m^3/s per m was calculated.

  17. Fault Diagnosis of Nonlinear Systems Using Structured Augmented State Models

    Institute of Scientific and Technical Information of China (English)

    Jochen Aßfalg; Frank Allgöwer

    2007-01-01

    This paper presents an internal model approach for modeling and diagnostic functionality design for nonlinear systems operating subject to single and multiple faults. We therefore provide the framework of structured augmented state models. Fault characteristics are considered to be generated by dynamical exosystems that are switched via equality constraints to overcome the augmented state observability limiting the number of diagnosable faults. Based on the proposed model, the fault diagnosis problem is specified as an optimal hybrid augmented state estimation problem. Sub-optimal solutions are motivated and exemplified for the fault diagnosis of the well-known three-tank benchmark. As the considered class of fault diagnosis problems is large, the suggested approach is not only of theoretical interest but also of high practical relevance.

  18. Fault Modeling of ECL for High Fault Coverage of Physical Defects

    Directory of Open Access Journals (Sweden)

    Sankaran M. Menon

    1996-01-01

    Bipolar Emitter Coupled Logic (ECL) devices can now be fabricated at higher densities and consume much lower power. The behaviour of simple and complex ECL gates is examined in the presence of physical faults. The effectiveness of the classical stuck-at model in representing physical failures in ECL gates is examined. It is shown that the conventional stuck-at fault model cannot represent a majority of circuit-level faults. A new augmented stuck-at fault model is presented which provides a significantly higher coverage of physical failures. The model may be applicable to other logic families that use logic gates with both true and complementary outputs. A design for testability approach is suggested for on-line detection of certain error conditions occurring in gates with true and complementary outputs, which is a normal implementation for ECL devices.
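
The augmented stuck-at model proposed in the paper is not reproduced; the sketch below only illustrates conventional stuck-at fault simulation on a two-input gate that, like an ECL gate, drives both a true and a complementary output, reporting which single stuck-at faults each input vector detects. The gate choice and fault list are illustrative.

```python
from itertools import product

def or_nor_gate(a, b, stuck=None):
    """2-input ECL-style OR/NOR: returns (true_output, complement_output).
    `stuck` optionally forces an output line: ('y', 0 or 1) or ('yn', 0 or 1)."""
    y = a | b
    yn = 1 - y
    if stuck is not None:
        line, value = stuck
        if line == "y":
            y = value
        else:
            yn = value
    return y, yn

faults = [(line, v) for line in ("y", "yn") for v in (0, 1)]
for vec in product((0, 1), repeat=2):
    good = or_nor_gate(*vec)
    detected = [f for f in faults if or_nor_gate(*vec, stuck=f) != good]
    print("input", vec, "detects stuck-at faults", detected)
```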

  19. Sensor Fault Tolerant Generic Model Control for Nonlinear Systems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A modified Strong Tracking Filter (STF) is used to develop a new approach to sensor fault tolerant control. Generic Model Control (GMC) is used to control the nonlinear process while the process runs normally because of its robust control performance. If a fault occurs in the sensor, a sensor bias vector is then introduced to the output equation of the process model. The sensor bias vector is estimated on-line during every control period using the STF. The estimated sensor bias vector is used to develop a fault detection mechanism to supervise the sensors. When a sensor fault occurs, the conventional GMC is switched to a fault tolerant control scheme, which is, in essence, a state estimation and output prediction based GMC. The laboratory experimental results on a three-tank system demonstrate the effectiveness of the proposed Sensor Fault Tolerant Generic Model Control (SFTGMC) approach.

  1. Diagnosing process faults using neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  2. Model-based fault diagnosis in PEM fuel cell systems

    Energy Technology Data Exchange (ETDEWEB)

    Escobet, T.; de Lira, S.; Puig, V.; Quevedo, J. [Automatic Control Department (ESAII), Universitat Politecnica de Catalunya (UPC), Rambla Sant Nebridi 10, 08222 Terrassa (Spain); Feroldi, D.; Riera, J.; Serra, M. [Institut de Robotica i Informatica Industrial (IRI), Consejo Superior de Investigaciones Cientificas (CSIC), Universitat Politecnica de Catalunya (UPC) Parc Tecnologic de Barcelona, Edifici U, Carrer Llorens i Artigas, 4-6, Planta 2, 08028 Barcelona (Spain)

    2009-07-01

    In this work, a model-based fault diagnosis methodology for PEM fuel cell systems is presented. The methodology is based on computing residuals, indicators that are obtained by comparing measured inputs and outputs with analytical relationships obtained by system modelling. The innovation of this methodology is based on the characterization of the relative residual fault sensitivity. To illustrate the results, a non-linear fuel cell simulator proposed in the literature is used, with modifications, to include a set of fault scenarios proposed in this work. Finally, the diagnosis results corresponding to these fault scenarios are presented. It is remarkable that with this methodology it is possible to diagnose and isolate all the faults in the proposed set, in contrast with other well-known methodologies which use the binary signature matrix of analytical residuals and faults. (author)
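
The fuel-cell model and the relative-sensitivity characterisation are not reproduced; the sketch below only shows the baseline isolation logic that the abstract contrasts with: binarise the residuals against thresholds and match the observed pattern against a binary fault signature matrix. Residual values, thresholds, the matrix and the fault names are all placeholders.

```python
import numpy as np

# Hypothetical binary fault signature matrix: rows = residuals, columns = faults
faults = ["compressor", "humidifier", "stack_temp"]
signature = np.array([[1, 0, 1],
                      [1, 1, 0],
                      [0, 1, 1]])

def isolate(residuals, thresholds):
    """Binarise residuals and return the faults whose signature column matches."""
    observed = (np.abs(residuals) > thresholds).astype(int)
    matches = [f for j, f in enumerate(faults)
               if np.array_equal(signature[:, j], observed)]
    return observed, matches

observed, matches = isolate(np.array([0.9, 0.7, 0.05]), np.array([0.5, 0.5, 0.5]))
print("observed signature:", observed, "-> candidate fault(s):", matches)
```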

  3. Fuzzy delay model based fault simulator for crosstalk delay fault test generation in asynchronous sequential circuits

    Indian Academy of Sciences (India)

    S Jayanthy; M C Bhuvaneswari

    2015-02-01

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, a growing number of new parameters affects the delay of a component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps. The fault simulator based on fuzzy delay detects unstable states, oscillations and non-confluence of settling states in asynchronous sequential circuits. The fuzzy delay model based fault simulator is used to validate the test patterns produced by the Elitist Non-dominated sorting Genetic Algorithm (ENGA) based test generator for detecting crosstalk delay faults in asynchronous sequential circuits. The multi-objective genetic algorithm ENGA targets two objectives: maximizing fault coverage and minimizing the number of transitions. Experimental results are tabulated for SIS benchmark circuits for three gate delay models, namely the unit delay model, the rise/fall delay model and the fuzzy delay model. Experimental results indicate that test validation using the fuzzy delay model is more accurate than using the unit delay model or the rise/fall delay model.

  4. Model Based Incipient Fault Detection for Gear Drives

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a method of model-based incipient fault detection for gear drives; the method is based on the parity space approach. It can generate a robust residual that is maximally sensitive to faults caused by parameter changes. A simulation example shows the application of the method; the residual waveforms have different characteristics for different parameter changes, so one can detect and isolate the fault based on these characteristics.
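
The gear-drive model itself is not given in the record; the sketch below shows a generic parity-space residual generator for a small linear state-space model (illustrative matrices, not a gear drive): stack the outputs over a short window, project onto the left null space of the stacked observability matrix, and the residual stays near zero until a fault (here a sensor bias) appears.

```python
import numpy as np

# Illustrative discrete-time plant (not a gear-drive model)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
s = 2                                        # parity window uses s+1 samples

# Stacked observability matrix O and input Toeplitz matrix T
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
T = np.zeros((s + 1, s + 1))
for i in range(s + 1):
    for j in range(i):
        T[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ B)[0, 0]

# Parity vector w with w^T O = 0, taken from the left null space of O
w = np.linalg.svd(O.T)[2][-1]

def residual(y_win, u_win):
    """Parity-space residual: ~0 for the healthy model, nonzero under faults."""
    return w @ (np.asarray(y_win) - T @ np.asarray(u_win))

# Simulate healthy data, then inject a sensor bias fault halfway through
rng = np.random.default_rng(4)
n, x = 40, np.zeros(2)
u, y = rng.standard_normal(n), np.zeros(n)
for k in range(n):
    y[k] = (C @ x)[0]
    x = A @ x + B[:, 0] * u[k]
y[n // 2:] += 0.5                            # sensor bias fault

r = [residual(y[k:k + s + 1], u[k:k + s + 1]) for k in range(n - s)]
print("max |r| before fault:", round(max(abs(v) for v in r[:15]), 6))
print("max |r| after fault: ", round(max(abs(v) for v in r[20:]), 3))
```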

  5. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  6. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  7. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    Science.gov (United States)

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, which is a ∼1 mm/yr increase over the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19
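
The UCERF3 dislocation kernels and data are far beyond a snippet; as a schematic only, the sketch below shows the shape of such an inversion: a damped least-squares fit of slip rates to GPS velocities with a penalty pulling the solution toward preferred geologic rates. The kernel, velocities and rates are made-up numbers, not the study's model.

```python
import numpy as np

# minimise ||G m - d||^2 + lam^2 * ||m - m_geo||^2  for slip rates m (mm/yr)
G = np.array([[0.8, 0.1],        # made-up kernel: 2 faults, 4 GPS stations
              [0.5, 0.3],
              [0.2, 0.6],
              [0.1, 0.9]])
d = np.array([18.0, 14.0, 10.0, 8.0])    # "observed" GPS velocities (mm/yr)
m_geo = np.array([20.0, 5.0])            # preferred geologic slip rates (mm/yr)
lam = 0.5                                # weight of the geologic constraint

A = np.vstack([G, lam * np.eye(2)])
b = np.concatenate([d, lam * m_geo])
m, *_ = np.linalg.lstsq(A, b, rcond=None)
print("inverted slip rates (mm/yr):", np.round(m, 1))
```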

  8. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland); Seppaenen, M. [North-Carelian Power Co (Finland); Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined, in order to find the actual fault point, with the data obtained from the fault indicators in the line branching points. As a third technique, in the absence of better fault location data, statistical information of line section fault frequencies can also be used. For combining the different fault location information, fuzzy logic is used. As a result, the probability weights for the fault being located in different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. Then the supply is restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter summarizes the practical experiences gained during the trial period. The benefits of this kind of automation are also assessed and future developments are outlined.

  9. Fault evolution-test dependency modeling for mechanical systems

    Institute of Scientific and Technical Information of China (English)

    Xiao-dong TAN; Jian-lu LUO; Qing LI; Bing LU; Jing QIU

    2015-01-01

    Tracking the process of fault growth in mechanical systems using a range of tests is important to avoid catastrophic failures. So, it is necessary to study the design for testability (DFT). In this paper, to improve the testability performance of mechanical systems for tracking fault growth, a fault evolution-test dependency model (FETDM) is proposed to implement DFT. A testability analysis method that considers fault trackability and predictability is developed to quantify the testability performance of mechanical systems. Results from experiments on a centrifugal pump show that the proposed FETDM and testability analysis method can provide guidance to engineers to improve the testability level of mechanical systems.

  10. Anti-Aliasing filter for reverse-time migration

    KAUST Repository

    Zhan, Ge

    2012-01-01

    We develop an anti-aliasing filter for reverse-time migration (RTM). It is similar to the traditional anti-aliasing filter used for Kirchhoff migration in that it low-pass filters the migration operator so that the dominant wavelength in the operator is greater than two times the trace sampling interval, except that it is applied to both primary and multiple reflection events. Instead of applying this filter to the data in the traditional RTM operation, we apply the anti-aliasing filter to the generalized diffraction-stack migration operator. This gives the same migration image as computed by anti-aliased RTM.
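
    A sketch of the wavelength criterion behind such a filter: keep only temporal frequencies whose wavelength lambda = velocity / f exceeds twice the trace spacing. The velocity, trace spacing and sampling rate below are assumed values for illustration, not parameters from the paper, and a generic Butterworth low-pass stands in for the paper's operator filtering.

```python
# Low-pass a trace so that lambda = velocity / f >= 2 * trace_dx (anti-aliasing criterion).
# Velocity, spacing and sampling values are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def antialias_lowpass(trace, fs, velocity, trace_dx, order=4):
    """Apply a low-pass filter with cutoff f_max = velocity / (2 * trace_dx)."""
    f_max = velocity / (2.0 * trace_dx)     # cutoff frequency, Hz
    f_max = min(f_max, 0.45 * fs)           # keep the cutoff below Nyquist
    b, a = butter(order, f_max, btype="low", fs=fs)
    return filtfilt(b, a, trace)

fs = 500.0                                  # samples per second
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)
filtered = antialias_lowpass(trace, fs, velocity=2000.0, trace_dx=25.0)  # cutoff = 40 Hz
```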

  11. Development of a fault test experimental facility model using Matlab

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez; Moraes, Davi Almeida, E-mail: martinez@ipen.br, E-mail: dmoraes@dk8.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The Fault Test Experimental Facility can be operated to generate normal and fault data; faults can be introduced with an initially small magnitude that is then increased gradually. This work presents the Fault Test Experimental Facility model developed using the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, which consists of a set of functions designed to create interfaces in an easy and fast way. The system model is based on the mass and energy inventory balance equations. Physical as well as operational aspects are taken into consideration. The interface layout looks like a process flowchart and the user can set the input variables. Besides the normal operation conditions, there is the possibility to choose a faulty variable from a list. The program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both under normal and fault conditions, with different noise levels added to the input variables. Data generated by the model will be compared with Fault Test Experimental Facility data. The Fault Test Experimental Facility theoretical model results will be used for the development of a Monitoring and Fault Detection System. (author)

  12. Modeling and Fault Simulation of Propellant Filling System

    Science.gov (United States)

    Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo

    2012-05-01

    The propellant filling system is one of the key ground facilities at launch sites for rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meeting it. Driven by the need for more fault information for FMEA, and because of the high cost of propellant filling, the working process of the propellant filling system under fault conditions was studied in this paper by simulation based on AMESim. Firstly, based on an analysis of its structure and function, the filling system was decomposed into modules and the mathematical models of every module were given, from which the whole filling system was modeled in AMESim. Secondly, a general method of injecting faults into a dynamic system was proposed and, as an example, two typical faults - leakage and blockage - were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulation was performed and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults and can be used to provide guidance for the maintenance and improvement of the filling system.

  13. Modelling earthquake ruptures with dynamic off-fault damage

    Science.gov (United States)

    Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban

    2017-04-01

    Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium and the friction law operating on the fault. It is necessary to consider all of the above complexities of a fault system to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry such as fault branches and kinks and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric scale, corresponding to the off-fault damage and complex fault geometry respectively. We used the FDEM-based software tool HOSSedu (Hybrid Optimization Software Suite - Educational Version), developed by Los Alamos National Laboratory, for the earthquake rupture modelling. We first conducted a cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM) and the boundary integral equation method (BIEM), to evaluate the accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for

  14. Research and application of hierarchical model for multiple fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    An Ruoming; Jiang Xingwei; Song Zhengji

    2005-01-01

    The computational complexity of multiple-fault diagnosis in complex systems remains a long-standing challenge. Based on the well-known approach of Mozetic, a novel hierarchical model-based diagnosis methodology is put forward for improving the efficiency of multi-fault recognition and localization. Structural abstraction and weighted fault propagation graphs are combined to build the diagnosis model. The graphs have weighted arcs carrying fault propagation probabilities and propagation strengths. To solve the problem of coupled faults, two diagnosis strategies are used: one is Lagrangian relaxation with primal heuristic algorithms; the other is the method of propagation strength. Finally, an applied example shows the applicability of the approach, and experimental results are given to show the superiority of the presented technique.

  15. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    Science.gov (United States)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from

  16. Extended Approximate String Matching Algorithms To Detect Name Aliases

    DEFF Research Database (Denmark)

    Shaikh, Muniba; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    An extension to widely used approximate string matching (ASM) algorithms is proposed to detect name aliases that arise as a result of transliteration. This paper aims to improve the accuracy of the basic ASM algorithms in order to detect correct aliases. The experimental evaluation shows that the proposed extension increases...
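
    A baseline approximate string matching check of the kind the paper extends, using plain Levenshtein distance and a similarity threshold. The threshold value and example names are assumptions; the paper's transliteration-specific extension is not reproduced here.

```python
# Baseline ASM alias check via Levenshtein distance; threshold is an assumed value.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_alias(name1: str, name2: str, threshold: float = 0.75) -> bool:
    a, b = name1.lower(), name2.lower()
    similarity = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    return similarity >= threshold

print(is_alias("Mohammed", "Muhammad"))   # True at this threshold
```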

  17. Anti-aliasing optical method for Shack Hartmann WFSs

    Science.gov (United States)

    Herriot, Glen; Véran, Jean-Pierre

    2016-07-01

    Measurement errors due to aliasing in a Shack-Hartmann WFS are typically 40% larger in variance than the fitting error of an AO system. On bright stars, aliasing is the dominant error within the control radius of the deformable mirror. Wavefront spatial frequencies beyond the WFS' Nyquist frequency corrupt measurements below this frequency. A common misconception is to think that aliasing primarily affects the higher spatial frequency measurements. But in fact aliasing propagates to the lowest order modes, and corrupts even tip/tilt. There are many examples including the observation that the temporal power spectrum of measured tip/tilt from a WFS does not correspond to Kolmogorov theory. We propose a simple optical modification to a SH WFS (borrowed from the digital video camera industry), and present simulation results showing that the aliasing errors are reduced.

  18. Stochastic finite-fault modelling of strong earthquakes in Narmada South Fault, Indian Shield

    Indian Academy of Sciences (India)

    P Sengupta

    2012-06-01

    The Narmada South Fault in the Indian peninsular shield region is associated with moderate-to-strong earthquakes. The prevailing hazard evidenced by the earthquake-related fatalities in the region imparts significance to the investigations of the seismogenic environment. In the present study, the prevailing seismotectonic conditions specified by parameters associated with source, path and site conditions are appraised. Stochastic finite-fault models are formulated for each scenario earthquake. The simulated peak ground accelerations for the rock sites from the possible mean maximum earthquake of magnitude 6.8 go as high as 0.24 g, while a fault rupture of magnitude 7.1 exhibits a maximum peak ground acceleration of 0.36 g. The results suggest that the present hazard specification of the Bureau of Indian Standards is inadequate. The present study is expected to facilitate development of ground motion models for deterministic and probabilistic seismic hazard analysis of the region.

  19. Fault Management: Degradation Signature Detection, Modeling, and Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault to Failure Progression (FFP) signature modeling and processing is a new method for applying condition-based signal data to detect degradation, to identify...

  20. Autoregressive modelling for rolling element bearing fault diagnosis

    Science.gov (United States)

    Al-Bugharbee, H.; Trendafilova, I.

    2015-07-01

    In this study, time series analysis and pattern recognition analysis are used effectively for the purposes of rolling bearing fault diagnosis. The main part of the suggested methodology is the autoregressive (AR) modelling of the measured vibration signals. This study suggests the use of a linear AR model applied to the signals after they are stationarized. The obtained coefficients of the AR model are further used to form pattern vectors which are in turn subjected to pattern recognition for differentiating among different faults and different fault sizes. This study explores the behavior of the AR coefficients and their changes with the introduction and the growth of different faults. The idea is to gain more understanding about the process of AR modelling for roller element bearing signatures and the relation of the coefficients to the vibratory behavior of the bearings and their condition.
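
    A sketch of the central step described above: fitting a linear AR(p) model to a (stationarized) vibration signal by least squares and using the coefficients as a feature vector. The model order p and the synthetic AR(2) "vibration" signal are assumptions for illustration.

```python
# AR(p) coefficients as a pattern vector for bearing fault diagnosis (sketch).
# Order p and the synthetic signal are illustrative assumptions.
import numpy as np

def ar_coefficients(signal, p):
    """Least-squares AR(p) fit: x[n] ~ a_1*x[n-1] + ... + a_p*x[n-p]."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(1)
n = 2048
signal = rng.normal(size=n)
for i in range(2, n):                        # synthetic AR(2) "vibration"
    signal[i] += 0.6 * signal[i - 1] - 0.3 * signal[i - 2]

features = ar_coefficients(signal, p=8)      # feature vector for pattern recognition
print(np.round(features[:4], 3))
```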

  1. Fault Model for Testable Reversible Toffoli Gates

    Directory of Open Access Journals (Sweden)

    Yu Pang

    2012-09-01

    Techniques of reversible circuits can be used in low-power microchips and quantum communications. Most current work focuses on the synthesis of reversible circuits but seldom on fault testing, which is an important step in any robust implementation. In this study, we propose a Universal Toffoli Gate (UTG) with four inputs, which can realize all basic Boolean functions. All single stuck-at faults are analyzed and a test set with a minimum number of test vectors is given. Using the proposed UTG, it is easy to implement a complex reversible circuit and test all stuck-at faults of the circuit. The experiments show that reversible circuits constructed from UTGs have a lower quantum cost and fewer test vectors compared to other works.
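
    Because the four-input UTG is not specified in this record, the sketch below uses the standard three-input Toffoli (CCNOT) gate as an assumed stand-in and enumerates which input vectors detect each single stuck-at fault on its input lines, which is the kind of analysis the test-set construction relies on.

```python
# Single stuck-at fault detection on a standard Toffoli (CCNOT) gate.
# The 3-input gate is an assumption; the paper's 4-input UTG is not reproduced.
from itertools import product

def toffoli(a, b, c):
    """Fault-free Toffoli gate: (a, b, c XOR (a AND b))."""
    return (a, b, c ^ (a & b))

def outputs_with_fault(vec, line, value):
    bits = list(vec)
    bits[line] = value              # force the input line to the stuck value
    return toffoli(*bits)

for line in range(3):
    for value in (0, 1):
        tests = [v for v in product((0, 1), repeat=3)
                 if outputs_with_fault(v, line, value) != toffoli(*v)]
        print(f"stuck-at-{value} on input line {line}: detected by {len(tests)} vectors")
```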

  2. Salt movements and faulting of the overburden - can numerical modeling predict the fault patterns above salt structures?

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Wesenberg, Rasmus

    Faulting of the overburden above salt structures affects, among other things, the productivity due to the segmentation of the reservoir (Stewart 2006). 3D seismic data above salt structures can map such fault patterns in great detail, and studies have shown that a variety of fault patterns exists. Yet, most patterns fall between two end members: concentric and radiating fault patterns. Here we use a modified version of the numerical spring-slider model introduced by Malthe-Sørenssen et al. (1998a) for simulating the emergence of small-scale faults and fractures above a rising salt structure. The three-dimensional spring-slider model enables us to control [...]. The modeling shows that purely vertical movement of the salt introduces a mesh of concentric normal faults in the overburden, and that the frequency of radiating faults increases with the amount of lateral movement across the salt-overburden interface. The two end-member fault patterns (concentric vs. radiating) ...

  3. Hidden Markov Model Based Automated Fault Localization for Integration Testing

    OpenAIRE

    Ge, Ning; NAKAJIMA, SHIN; Pantel, Marc

    2013-01-01

    Integration testing is an expensive activity in software testing, especially for fault localization in complex systems. Model-based diagnosis (MBD) provides various benefits in terms of scalability and robustness. In this work, we propose a novel MBD approach for automated fault localization in integration testing. Our method is based on a Hidden Markov Model (HMM), which is an abstraction of a system's component used to simulate the component's behaviour. The core of this method ...

  4. Nonlinear sensor fault diagnosis using mixture of probabilistic PCA models

    Science.gov (United States)

    Sharifi, Reza; Langari, Reza

    2017-02-01

    This paper presents a methodology for sensor fault diagnosis in nonlinear systems using a Mixture of Probabilistic Principal Component Analysis (MPPCA) models. This methodology separates the measurement space into several locally linear regions, each of which is associated with a Probabilistic PCA (PPCA) model. Using the transformation associated with each PPCA model, a parity relation scheme is used to construct a residual vector. Bayesian analysis of the residuals forms the basis for detection and isolation of sensor faults across the entire range of operation of the system. The resulting method is demonstrated in its application to sensor fault diagnosis of a fully instrumented HVAC system. The results show accurate detection of sensor faults under the assumption that a single sensor is faulty.
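
    A simplified stand-in for the residual-generation idea above: a single (non-mixture, non-probabilistic) PCA model fitted to normal data, with the squared prediction error of each sample used as a fault indicator. The data, the number of retained components and the threshold quantile are assumptions; the paper's mixture of PPCA models and Bayesian isolation step are not reproduced.

```python
# PCA residual (squared prediction error) check for sensor fault detection.
# Data, retained components and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 3))                      # 3 latent factors seen by 8 sensors
train = rng.normal(size=(500, 3)) @ W.T + 0.05 * rng.normal(size=(500, 8))

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
P = Vt[:3].T                                     # retained principal directions

def spe(x):
    """Squared prediction error of a sample w.r.t. the PCA model."""
    r = (x - mean) - P @ (P.T @ (x - mean))
    return float(r @ r)

threshold = np.quantile([spe(x) for x in train], 0.99)

test = rng.normal(size=3) @ W.T + 0.05 * rng.normal(size=8)
faulty = test.copy()
faulty[4] += 2.0                                 # bias fault on sensor 4
print(spe(test) < threshold, spe(faulty) > threshold)   # typically: True True
```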

  5. Accuracy of flow convergence estimates of mitral regurgitant flow rates obtained by use of multiple color flow Doppler M-mode aliasing boundaries: an experimental animal study.

    Science.gov (United States)

    Zhang, J; Jones, M; Shandas, R; Valdes-Cruz, L M; Murillo, A; Yamada, I; Kang, S U; Weintraub, R G; Shiota, T; Sahn, D J

    1993-02-01

    The proximal flow convergence method of multiplying color Doppler aliasing velocity by flow convergence surface area has yielded a new means of quantifying flow rate by noninvasively derived measurements. Unlike previous methods of visualizing the turbulent jet of mitral regurgitation on color flow Doppler mapping, flow convergence methods are less influenced by machine factors because of the systematic structure of the laminar flow convergence region. However, recent studies have demonstrated that the flow rate calculated from the first aliasing boundary of color flow Doppler imaging is dependent on orifice size, flow rate, aliasing velocity and therefore on the distance from the orifice chosen for measurement. In this study we calculated the regurgitant flow rates acquired by use of multiple proximal aliasing boundaries on color Doppler M-mode traces and assessed the effect of distances of measurement and aliasing velocities on the calculated regurgitant flow rate. Six sheep with surgically induced mitral regurgitation were studied. The distances from the mitral valve leaflet M-mode line to the first, second, and third sequential aliasing boundaries on color Doppler M-mode traces were measured and converted to the regurgitant flow rates calculated by applying the hemispheric flow equation and averaging instantaneous flow rates throughout systole. The flow rates that were calculated from the first, second, and third aliasing boundaries correlated well with the actual regurgitant flow rates (r = 0.91 to 0.96). The mean percentage errors relative to the actual flow rates were 151% for the first aliasing boundary, 7% for the second aliasing boundary, and -43% for the third aliasing boundary; and the association between aliasing velocities and calculated flow rates indicates an inverse relationship, which suggests that in this model, there were limited velocity-distance combinations that fit with a hemispheric assumption for flow convergence geometry. The second aliasing
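
    The hemispheric flow-convergence relation used above multiplies the aliasing velocity by the surface area of a hemisphere of radius r, i.e. Q = 2*pi*r^2*v_aliasing. A minimal sketch follows; the centimetre/centimetre-per-second units and the example numbers are assumptions, not values from the study.

```python
# Hemispheric flow-convergence (PISA) relation: Q = 2 * pi * r^2 * v_aliasing.
# Units (cm, cm/s) and the example values are illustrative assumptions.
import math

def hemispheric_flow_rate(r_cm, v_alias_cm_s):
    """Instantaneous regurgitant flow rate in cm^3/s (mL/s)."""
    return 2.0 * math.pi * r_cm ** 2 * v_alias_cm_s

# e.g. an aliasing boundary 0.8 cm from the orifice at a 38 cm/s aliasing velocity
print(round(hemispheric_flow_rate(0.8, 38.0), 1))   # ~152.8 mL/s
```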

  6. Fault Tolerant Control Using Gaussian Processes and Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Yang Xiaoke

    2015-03-01

    Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.

  7. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) data with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial for identifying the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.

  8. Stator Fault Detection in Induction Motors by Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    Francisco M. Garcia-Guevara

    2016-01-01

    This study introduces a novel methodology for early detection of stator short circuit faults in induction motors by using an autoregressive (AR) model. The proposed algorithm is based on the instantaneous space phasor (ISP) module of the stator currents, which are mapped to the α-β stator-fixed reference frame; then, the module is obtained, and the coefficients of the AR model for this module are estimated, evaluated by an order selection criterion, and used as the fault signature. For comparative purposes, a spectral analysis of the ISP module by the Discrete Fourier Transform (DFT) is performed and a comparison of both methodologies is obtained. To demonstrate the suitability of the proposed methodology for detecting and quantifying incipient short circuit stator faults, an induction motor was altered to induce different-degree fault scenarios during experimentation.
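
    A sketch of the signal-preparation step described above: mapping three-phase stator currents to the α-β frame and forming the ISP module. The amplitude-invariant Clarke transform is assumed as the mapping, and the 50 Hz balanced currents are synthetic; the AR fitting step would then operate on `module`.

```python
# Alpha-beta (Clarke) mapping of stator currents and the ISP module (sketch).
# The amplitude-invariant transform and the synthetic currents are assumptions.
import numpy as np

def isp_module(ia, ib, ic):
    """Module of the instantaneous space phasor in the alpha-beta frame."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (ib - ic)
    return np.sqrt(i_alpha ** 2 + i_beta ** 2)

t = np.linspace(0.0, 0.2, 2000)
ia = 10 * np.cos(2 * np.pi * 50 * t)
ib = 10 * np.cos(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = 10 * np.cos(2 * np.pi * 50 * t + 2 * np.pi / 3)
module = isp_module(ia, ib, ic)   # ~constant (10) for a healthy, balanced machine
# AR coefficients of `module` would then serve as the fault signature.
```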

  9. Phase response curves for models of earthquake fault dynamics

    CERN Document Server

    Franović, Igor; Perc, Matjaz; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-01-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a ...

  10. Phase response curves for models of earthquake fault dynamics

    Science.gov (United States)

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-06-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  11. Modeling Technology in Traveling-Wave Fault Location

    Directory of Open Access Journals (Sweden)

    Tang Jinrui

    2013-06-01

    Theoretical research and equipment development for traveling-wave fault location depend heavily on digital simulation. Moreover, the fault-generated transient traveling wave must pass through the transmission line, instrument transformers (mutual inductors) and the secondary circuit before it can be used. This paper therefore analyzes and summarizes the modeling of transmission lines and instrument transformers on the basis of existing research results. Firstly, several transmission line models (multiple Π or T line models, the Bergeron line model and the frequency-dependent line model) are compared with respect to the wave-front characteristics and characteristic frequency of the traveling wave. Then, modeling methods for current transformers, potential transformers, capacitive voltage transformers, special traveling-wave sensors and secondary cables are given. Finally, based on the remaining difficulties and the latest research achievements, future trends in modeling for traveling-wave fault location are discussed.
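
    For orientation, a commonly used double-ended travelling-wave location relation (not specific to this paper): with wave-front arrival times t_A and t_B at the two line terminals, line length L and propagation speed v, the fault distance from terminal A is d = (L + v*(t_A - t_B)) / 2. The numbers below are illustrative assumptions.

```python
# Double-ended travelling-wave fault location: d = (L + v * (t_A - t_B)) / 2.
# Line length, wave speed and arrival times are illustrative assumptions.
def fault_distance(L_km, v_km_per_s, t_a_s, t_b_s):
    return 0.5 * (L_km + v_km_per_s * (t_a_s - t_b_s))

L = 100.0                       # line length, km
v = 2.95e5                      # wave propagation speed, km/s
t_a, t_b = 120.0e-6, 220.0e-6   # wave-front arrival times, s
print(round(fault_distance(L, v, t_a, t_b), 2), "km from terminal A")   # 35.25 km
```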

  12. Reduction of ocean tide aliasing in the context of a next generation gravity field mission

    Science.gov (United States)

    Hauk, Markus; Daras, Ilias; Pail, Roland

    2017-04-01

    Ocean tide aliasing is currently one of the main limiting factors for temporal gravity field determination and the derivation of mass transport processes in the Earth system. This will be true even more for future gravity field missions with improved measurement technology, which cannot be fully exploited due to this dominant systematic error source. In several previous studies it has been shown that temporal aliasing, related to tidal and non-tidal sources, can be significantly reduced by double-pair formations, e.g., in a so-called Bender configuration, and its effects can be migrated to higher frequencies by an optimum orbit choice, especially the orbit altitude (Murböck et al. 2013). Improved processing strategies and extended parameter models should be able to further reduce the problem. Concerning non-tidal aliasing, it could be shown that the parameterization of short-period long-wavelength gravity field signals, the so-called Wiese approach, is a powerful method for aliasing reduction (Wiese et al. 2013), but it does not really work for the very short-period signals of ocean tides with mainly semi-diurnal and diurnal periods (Daras 2015). In this contribution, several methods dealing with the reduction of ocean tide aliasing are investigated both from a methodological and a numerical point of view. One of the promising strategies is the co-estimation of selected tidal constituents over long time periods, also considering the basic orbit frequencies of the satellites. These improved estimates for ocean tide signals can then be used in a second step as an enhanced de-aliasing product for the computation of short-period temporal gravity fields. From a number of theoretical considerations and numerical case-studies, recommendations for an optimum orbit selection with respect to reduction of ocean tide aliasing shall be derived for two main mission scenarios. The first one is a classical Bender configuration being composed of a (near-) polar and an inclined in

  13. Overview of the Southern San Andreas Fault Model

    Science.gov (United States)

    Weldon, Ray J.; Biasi, Glenn P.; Wills, Chris J.; Dawson, Timothy E.

    2008-01-01

    This appendix summarizes the data and methodology used to generate the source model for the southern San Andreas fault. It is organized into three sections: 1) a section-by-section review of the geological data in the format of past Working Groups, 2) an overview of the rupture model, and 3) a manuscript by Biasi and Weldon (in review, Bulletin of the Seismological Society of America) that describes the correlation methodology that was used to help develop the "geologic insight" model. The goal of the Biasi and Weldon methodology is to quantify the insight that went into developing all A faults; as such it is in concept consistent with all other A faults but applied in a more quantitative way. The most rapidly slipping fault and the only known source of M~8 earthquakes in southern California is the San Andreas fault. As such it plays a special role in the seismic hazard of California, and has received special attention in the current Working Group. The underlying philosophy of the current Working Group is to model the recurrence behavior of large, rapidly slipping faults like the San Andreas from observed data on the size, distribution and timing of past earthquakes with as few assumptions about underlying recurrence behavior as possible. In addition, we wish to carry the uncertainties in the data and the range of reasonable extrapolations from the data to the final model. To accomplish this for the Southern San Andreas fault we have developed an objective method to combine all of the observations of size, timing, and distribution of past earthquakes into a comprehensive set of earthquake scenarios that each represent a possible history of earthquakes for the past ~1400 years. The scenarios are then ranked according to their overall consistency with the data, and then the frequencies of all of the ruptures permitted by the current Working Group's segmentation model are calculated. We also present 30-yr conditional probabilities by segment and compare to previous

  14. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

    To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify a single open-switch fault is proposed in this paper. The introduced fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to conventional modulation strategies such as carrier-based modulation ...

  15. Inverse Problems for Matrix Exponential in System Identification: System Aliasing

    OpenAIRE

    Yue, Zuogong; Thunberg, Johan; Goncalves, Jorge

    2016-01-01

    This note addresses identification of the $A$-matrix in continuous-time linear dynamical systems in state-space form. If this matrix is partially known or known to have a sparse structure, such knowledge can be used to simplify the identification. We begin by introducing some general conditions for solvability of the inverse problem for the matrix exponential. Next, we introduce "system aliasing" as an issue in the identification of slowly sampled systems. Such aliasing gives rise to non-unique matrix ...
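
    A small numerical illustration of system aliasing (not taken from the note itself): two different continuous-time A-matrices whose sampled matrices exp(A*h) coincide, because their oscillation frequencies differ by 2*pi/h. The sampling period and frequencies are assumed values.

```python
# Two distinct continuous-time A-matrices with identical sampled-data matrices
# exp(A * h): an illustration of "system aliasing".  Values are assumptions.
import numpy as np
from scipy.linalg import expm

h = 0.5                              # sampling period
w1 = 1.0
w2 = w1 + 2.0 * np.pi / h            # frequency aliased by the sampling

def rotation_generator(w):
    return np.array([[0.0, w], [-w, 0.0]])

Ad1 = expm(rotation_generator(w1) * h)
Ad2 = expm(rotation_generator(w2) * h)
print(np.allclose(Ad1, Ad2))         # True: the sampled systems are indistinguishable
```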

  16. Modelling fault surface roughness and fault rocks thickness evolution with slip: calibration based on field and laboratory data

    Science.gov (United States)

    Bistacchi, A.; Tisato, N.; Spagnuolo, E.; Nielsen, S. B.; Di Toro, G.

    2012-12-01

    The architecture and physical properties of fault zones evolve with slip and time. Such evolution, which progressively modifies the type and thickness of fault rocks, the fault surface roughness, etc., controls the rheology of fault zones (seismic vs. aseismic) and earthquakes (main shock magnitude, coseismic slip distribution, stress drop, foreshock and aftershock sequence evolution, etc.). Seismogenic faults exhumed from 2-10 km depth and hosted in different rocks (carbonates, granitoids, etc.) show (1) a self-affine surface roughness (Hurst exponent H) [...]. We use a definition of "wear" that includes every process that destroys geometrical asperities and produces fault rocks. The output roughness and fault rock thickness depend on two parameters: (1) wear rate and (2) wear products (fault rocks) accumulation rate. To test the model we used surface roughness, fault rock thickness, and slip data collected in the field (Gole Larghe Fault Zone, Italian Southern Alps) and in the lab (rotary shear experiments on different rocks). The model was successful in predicting the first-order evolution of roughness and of fault rock thickness with slip in both natural and experimental datasets. Differences in best-fit model parameters (wear rate and wear products accumulation rate) were satisfactorily explained in terms of different deformation processes (e.g. frictional melting vs. cataclasis) and experimental conditions (unconfined vs. confined). Since the model is based on geometrical and volume-conservation considerations (and not on a particular deformation mechanism), we conclude that the surface roughness and fault-rock thickness after some slip are mostly determined by the initial roughness (measured over several orders of magnitude in wavelength), rather than by the particular deformation process (cataclasis, melting, etc.) activated during faulting. Conveniently, since the model can be applied (under certain conditions) to surfaces which depart from self-affine roughness, the model parameters can be

  17. New Frontiers in Fault Model Visualization and Interaction

    Science.gov (United States)

    van Aalsburg, J.; Yikilmaz, M. B.; Kreylos, O.; Kellogg, L. H.; Rundle, J. B.

    2009-12-01

    Previously we introduced an interactive, 3D fault editor for virtual reality (VR) environments. This application is designed to provide an intuitive environment for visualizing and editing fault model data. It is being developed at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://www.keckcaves.org). By utilizing high resolution Digital Elevation Models (DEM), georeferenced active tectonic fault maps and earthquake hypocenters, users can accurately position fault segments, including the dip angle. Once a model has been created or modified it can be written to an XML file; from there the data may be easily converted into various formats required by the analysis software or simulation. To demonstrate this we have written a simple program which generates a KML file from the program output for visualization of the model in Google Earth. Our current research has focused on the addition of new tools which enable the user to associate meta-data with individual fault segments or groups of segments (e.g. slip rate). We have also added enhanced mapping abilities such as creating closed polygons for defining geologic formations. The program is designed to take full advantage of immersive environments such as a CAVE (walk-in VR environment), but works in a wide range of other environments including desktop systems and GeoWalls. This software is open source and can be freely downloaded (Debian packages are also available).

  18. Constitutive models of faults in the viscoelastic lithosphere

    Science.gov (United States)

    Moresi, Louis; Muhlhaus, Hans; Mansour, John; Miller, Meghan

    2013-04-01

    Moresi and Muhlhaus (2006) presented an algorithm for describing shear band formation and evolution as a coalescence of small, planar, friction-failure surfaces. This algorithm assumed that sliding initially occurs at the angle to the maximum compressive stress dictated by Anderson faulting theory and demonstrated that shear bands form with the same angle as the microscopic angle of initial failure. Here we utilize the same microscopic model to generate frictional slip on prescribed surfaces which represent faults of arbitrary geometry in the viscoelastic lithosphere. The faults are actually represented by anisotropic weak zones of finite width, but they are instantiated from a 2D manifold represented by a cloud of points with associated normals and mechanical/history properties. Within the hybrid particle / finite-element code, Underworld, this approach gives a very flexible mechanism for describing complex 3D geometrical patterns of faults with no need to mirror this complexity in the thermal/mechanical solver. We explore a number of examples to demonstrate the strengths and weaknesses of this particular approach, including a 3D model of the deformation of Southern California which accounts for the major fault systems. L. Moresi and H.-B. Mühlhaus, Anisotropic viscous models of large-deformation Mohr-Coulomb failure. Philosophical Magazine, 86:3287-3305, 2006.

  19. An Approach to Computer Modeling of Geological Faults in 3D and an Application

    Institute of Scientific and Technical Information of China (English)

    ZHU Liang-feng; HE Zheng; PAN Xin; WU Xin-cai

    2006-01-01

    3D geological modeling, one of the most important applications in geosciences of 3D GIS, forms the basis and is a prerequisite for visualized representation and analysis of 3D geological data. Computer modeling of geological faults in 3D is currently a topical research area. Structural modeling techniques of complex geological entities containing reverse faults are discussed and a series of approaches are proposed. The geological concepts involved in computer modeling and visualization of geological fault in 3D are explained, the type of data of geological faults based on geological exploration is analyzed, and a normative database format for geological faults is designed. Two kinds of modeling approaches for faults are compared: a modeling technique of faults based on stratum recovery and a modeling technique of faults based on interpolation in subareas. A novel approach, called the Unified Modeling Technique for stratum and fault, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults and faults terminated within geological models. A case study of a fault model of bed rock in the Beijing Olympic Green District is presented in order to show the practical result of this method. The principle and the process of computer modeling of geological faults in 3D are discussed and a series of applied technical proposals established. It strengthens our profound comprehension of geological phenomena and the modeling approach, and establishes the basic techniques of 3D geological modeling for practical applications in the field of geosciences.

  20. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

    Given the vast variety of fuzzy model-based observers reported in the literature, which would be the proper one to use for fault detection in a class of chemical reactors? In this study, four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions.
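
    To make the observer-based residual idea concrete, here is a plain (non-fuzzy) discrete-time Luenberger observer used as a residual generator, with an additive sensor fault injected partway through the simulation. The system matrices, observer gain and fault size are placeholder assumptions; the paper's fuzzy Takagi-Sugeno designs are not reproduced.

```python
# Discrete-time Luenberger observer as a residual generator for sensor faults.
# A, B, C, L, the input and the fault magnitude are placeholder assumptions.
import numpy as np

A = np.array([[0.95, 0.05], [0.00, 0.90]])
B = np.array([[0.1], [0.05]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4], [0.1]])              # observer gain (assumed stabilizing)

x = np.array([[1.0], [0.5]])              # true plant state
xh = np.zeros((2, 1))                     # observer state
residuals = []
for k in range(100):
    u = np.array([[1.0]])
    y = C @ x
    if k >= 60:
        y = y + 0.5                       # additive sensor fault injected at k = 60
    residuals.append((y - C @ xh).item()) # residual r = y - C*xh
    x = A @ x + B @ u
    xh = A @ xh + B @ u + L * residuals[-1]

print(max(abs(r) for r in residuals[40:55]) < 0.1,   # small before the fault
      max(abs(r) for r in residuals[60:]) > 0.2)     # jumps when the fault appears
```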

  1. Numerical modelling of fault reactivation in carbonate rocks under fluid depletion conditions - 2D generic models with a small isolated fault

    Science.gov (United States)

    Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel

    2016-12-01

    This generic 2D elastic-plastic modelling investigated the reactivation of a small isolated and critically-stressed fault in carbonate rocks at a reservoir depth level for fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stresses with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.

  2. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Fault diagnosis in rotating machinery is significant to avoid serious accidents; thus, an accurate and timely diagnosis method is necessary. With the breakthrough in deep learning algorithms, some intelligent methods, such as the deep belief network (DBN) and the deep convolutional neural network (DCNN), have been developed with satisfactory performance for machinery fault diagnosis. However, only a few of these methods properly deal with the noise that exists in practical situations, and the denoising methods require extensive professional experience. Accordingly, rethinking the fault diagnosis method based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. In this study, an integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE) is applied both to denoise random noise in the raw signals and to represent fault features in fault pattern diagnosis for both rolling bearing faults and gearbox faults, and is trained in a greedy layer-wise fashion. Finally, the experimental validation demonstrates that the proposed method has better diagnosis accuracy than the DBN, particularly in the presence of noise, with an advantage of approximately 7% in fault diagnosis accuracy.

  3. Model-based fault detection and diagnosis in ALMA subsystems

    Science.gov (United States)

    Ortiz, José; Carrasco, Rodrigo A.

    2016-07-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) observatory, with its 66 individual telescopes and other central equipment, generates a massive set of monitoring data every day, collecting information on the performance of a variety of critical and complex electrical, electronic and mechanical components. This data is crucial for most troubleshooting efforts performed by engineering teams. More than 5 years of accumulated data and expertise allow for a more systematic approach to fault detection and diagnosis. This paper presents model-based fault detection and diagnosis techniques to support corrective and predictive maintenance in a 24/7 minimum-downtime observatory.

  4. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement a more effective fault diagnosis and condition monitoring ...

  5. Stresses, deformation, and seismic events on scaled experimental faults with heterogeneous fault segments and comparison to numerical modeling

    Science.gov (United States)

    Buijze, Loes; Guo, Yanhuang; Niemeijer, André R.; Ma, Shengli; Spiers, Christopher J.

    2017-04-01

    Faults in the upper crust cross-cut many different lithologies, which causes the composition of the fault rocks to vary. Each different fault rock segment may have specific mechanical properties, e.g. there may be stronger and weaker segments, and segments prone to unstable slip or creeping. This leads to heterogeneous deformation and stresses along such faults, and a heterogeneous distribution of seismic events. We address the influence of fault variability on stress, deformation, and seismicity using a combination of scaled laboratory fault experiments and numerical modeling. A vertical fault was created along the diagonal of a 30 x 20 x 5 cm block of PMMA, along which a 2 mm thick gouge layer was deposited. Gouge materials of different characteristics were used to create various segments along the fault: quartz (average strength, stable sliding), kaolinite (weak, stable sliding), and gypsum (average strength, unstable sliding). The sample assembly was placed in a horizontal biaxial deformation apparatus, and shear displacement was enforced along the vertical fault. Multiple observations were made: 1) Acoustic emissions were continuously recorded at 3 MHz to observe the occurrence of stick-slips (micro-seismicity), 2) Photo-elastic effects (indicative of the differential stress) were recorded in the transparent set of PMMA wall-rocks using a high-speed camera, and 3) particle tracking was conducted on a speckle-painted set of PMMA wall-rocks to study the deformation in the wall-rocks flanking the fault. All three observation methods show how the heterogeneous fault gouge exerts a strong control on the fault behavior. For example, for a strong, unstable segment of gypsum flanked by two weaker kaolinite segments, strong stress concentrations develop near the edges of the strong segment, while at the same time most of the acoustic emissions are located at the edge of this strong segment. The measurements of differential stress, strain and acoustic emissions provide a strong means

  6. On the aliasing of the solar cycle in the lower stratospheric tropical temperature

    Science.gov (United States)

    Kuchar, Ales; Ball, William T.; Rozanov, Eugene V.; Stenke, Andrea; Revell, Laura; Miksovsky, Jiri; Pisoft, Petr; Peter, Thomas

    2017-09-01

    The double-peaked response of the tropical stratospheric temperature profile to the 11 year solar cycle (SC) has been well documented. However, there are concerns about the origin of the lower peak due to potential aliasing with volcanic eruptions or the El Niño-Southern Oscillation (ENSO) detected using multiple linear regression analysis. We confirm the aliasing using the results of the chemistry-climate model (CCM) SOCOLv3 obtained in the framework of the International Global Atmospheric Chemisty/Stratosphere-troposphere Processes And their Role in Climate Chemistry-Climate Model Initiative phase 1. We further show that even without major volcanic eruptions included in transient simulations, the lower stratospheric response exhibits a residual peak when historical sea surface temperatures (SSTs)/sea ice coverage (SIC) are used. Only the use of climatological SSTs/SICs in addition to background stratospheric aerosols removes volcanic and ENSO signals and results in an almost complete disappearance of the modeled solar signal in the lower stratospheric temperature. We demonstrate that the choice of temporal subperiod considered for the regression analysis has a large impact on the estimated profile signal in the lower stratosphere: at least 45 consecutive years are needed to avoid the large aliasing effect of SC maxima with volcanic eruptions in 1982 and 1991 in historical simulations, reanalyses, and observations. The application of volcanic forcing compiled for phase 6 of the Coupled Model Intercomparison Project (CMIP6) in the CCM SOCOLv3 reduces the warming overestimation in the tropical lower stratosphere and the volcanic aliasing of the temperature response to the SC, although it does not eliminate it completely.

  7. On the improved correlative prediction scheme for aliased electrocardiogram (ECG) data compression.

    Science.gov (United States)

    Gao, Xin

    2012-01-01

    An improved scheme for aliased electrocardiogram (ECG) data compression has been constructed, in which the predictor exploits the correlative characteristics of adjacent QRS waveforms. Twin-R correlation prediction combined with the lifting wavelet transform (LWT) for periodic ECG waves proves feasible and highly efficient, achieving lower distortion rates at realizable compression ratios (CR); grey predictions via the GM(1,1) model have been adopted to evaluate the parametric performance of the ECG data compression. Simulation results demonstrate the validity of our approach.

  8. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    Science.gov (United States)

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate commonly changes during the testing phase; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model that incorporates the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
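
    For context, a minimal NHPP SRGM sketch using the classical Goel-Okumoto mean value function m(t) = a*(1 - exp(-b*t)) fitted to synthetic cumulative fault counts. This is only an illustration of the NHPP framework; the paper's specific model, which incorporates testing coverage and imperfect removal efficiency, is not reproduced, and the data are placeholders.

```python
# Illustrative NHPP software reliability growth model (Goel-Okumoto form).
# The mean value function and the synthetic data are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 21, dtype=float)                       # testing weeks (placeholder)
faults = mean_value(weeks, 120.0, 0.15) \
         + np.random.default_rng(3).normal(0, 2, 20)        # synthetic cumulative counts

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, faults, p0=(100.0, 0.1))
print(round(a_hat, 1), round(b_hat, 3))   # estimated total fault content and detection rate
```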

  9. Modeling and characterization of partially inserted electrical connector faults

    Science.gov (United States)

    Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.

    2016-03-01

    Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.

  10. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...

  11. A FINITE ELEMENT MODEL FOR SEISMICITY INDUCED BY FAULT INTERACTION

    Institute of Scientific and Technical Information of China (English)

    Chen Huaran; Li Yiqun; He Qiaoyun; Zhang Jieqing; Ma Hongsheng; Li Li

    2003-01-01

    On the basis of the interaction between faults, a finite element model for Southwest China is constructed, and the stress adjustment due to strong earthquake occurrence in this region is studied. The preliminary results show that many strong earthquakes occurred in areas of increased stress in the model. Though the results are preliminary, the quasi-3D finite element model is meaningful for strong earthquake prediction.

  13. Fault-diagnosis applications. Model-based condition monitoring. Acutators, drives, machinery, plants, sensors, and fault-tolerant systems

    Energy Technology Data Exchange (ETDEWEB)

    Isermann, Rolf [Technische Univ. Darmstadt (DE). Inst. fuer Automatisierungstechnik (IAT)]

    2011-07-01

    Supervision, condition monitoring, fault detection, fault diagnosis and fault management play an increasing role for technical processes and vehicles in order to improve reliability, availability, maintenance and lifetime. For safety-related processes, fault-tolerant systems with redundancy are required in order to reach comprehensive system integrity. This book is a sequel to the book ''Fault-Diagnosis Systems'' published in 2006, where the basic methods were described. After a short introduction to fault-detection and fault-diagnosis methods, the book shows how these methods can be applied to a selection of 20 real technical components and processes, such as electrical drives (DC, AC), electrical actuators, fluidic actuators (hydraulic, pneumatic), centrifugal and reciprocating pumps, pipelines (leak detection), industrial robots, machine tools (main and feed drive, drilling, milling, grinding), and heat exchangers. Fault-tolerant systems realized for electrical drives, actuators and sensors are also presented. The book describes why and how the various signal-model-based and process-model-based methods were applied and which experimental results could be achieved. In several cases a combination of different methods was most successful. The book is dedicated to graduate students of electrical, mechanical and chemical engineering and computer science, and to engineers. (orig.)

  14. Modeling of Stress Triggered Faulting at Agenor Linea, Europa

    Science.gov (United States)

    Nahm, A. L.; Cameron, M. E.; Smith-Konter, B. R.; Pappalardo, R. T.

    2012-04-01

    To better understand the role of tidal stress sources and implications for faulting on Europa, we investigate the relationship between shear and normal stresses at Agenor Linea (AL), a ~1500 km long, E-W trending, 20-30 km wide zone of geologically young deformation located in the southern hemisphere of Europa, which forks into two branches at its eastern end. The orientation of AL is consistent with tensile stresses resulting from long-term decoupled ice shell rotation (non-synchronous rotation [NSR]) as well as dextral shear stresses due to diurnal flexure of the ice shell. Its brightness and lack of cross-cutting features make AL a candidate for recent or current activity. Several observations indicate that right-lateral strike-slip faulting has occurred, such as left-stepping en echelon fractures in the northern portion of AL and the presence of an imbricate fan or horsetail complex at AL's western end. To calculate tidal stresses on Europa, we utilize SatStress, a numerical code that calculates tidal stresses at any point on the surface of a satellite for both diurnal and NSR stresses. We adopt SatStress model parameters appropriate to a spherically symmetric ice shell of thickness 20 km, underlain by a global subsurface ocean: shear modulus G = 3.5 GPa, Poisson ratio ν = 0.33, gravity g = 1.32 m/s^2, ice density ρ = 920 kg/m^3, satellite radius R = 1.56 x 10^3 km, satellite mass M = 4.8 x 10^22 kg, semimajor axis a = 6.71 x 10^5 km, and eccentricity e = 0.0094. In this study we assume a coefficient of friction μ = 0.6 and consider a range of vertical fault depths z to 6 km. To assess shear failure at AL, we adopt a model based on the Coulomb failure criterion. This model balances stresses that promote and resist the motion of a fault, simultaneously accounting for both normal and shear tidal and NSR stresses, the coefficient of friction of ice, and additional stress at depth due to the overburden pressure. In this model, tidal shear stresses drive strike-slip motions
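
    The Coulomb-type shear-failure condition described in this record can be summarized as follows (compression taken positive here; sign conventions vary between studies):

```latex
\[
  \tau_{\mathrm{shear}} \;\ge\; \mu\,(\sigma_n + \rho g z)
\]
```

    where τ_shear and σ_n are the shear and normal tractions resolved on the fault from the combined diurnal and NSR stress fields, ρgz is the overburden pressure at depth z, and μ = 0.6 is the assumed coefficient of friction; strike-slip motion is promoted wherever the driving shear stress on the left exceeds the frictional resistance on the right.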

  15. Experimental Modeling of Dynamic Shallow Dip-Slip Faulting

    Science.gov (United States)

    Uenishi, K.

    2010-12-01

    In our earlier study (AGU 2005, SSJ 2005, JPGU 2006), using a finite difference technique, we conducted numerical simulations related to the source dynamics of shallow dip-slip earthquakes, and suggested the possible existence of corner waves, i.e., shear waves that carry concentrated kinematic energy and generate extremely strong particle motions on the hanging wall of a nonvertical fault. In the numerical models, a dip-slip fault is located in a two-dimensional, monolithic linear elastic half space, and the fault plane dips either vertically or 45 degrees. We have investigated the seismic wave field radiated by crack-like rupture of this straight fault. If the fault rupture, initiated at depth, arrests just below or reaches the free surface, four Rayleigh-type pulses are generated: two propagating along the free surface in opposite directions to the far field, the other two moving back along the ruptured fault surface (interface) downwards into depth. These downward interface pulses may largely control the stopping phase of the dynamic rupture, and in the case where the fault plane is inclined, the interface pulse and the outward-moving Rayleigh surface pulse interact with each other on the hanging wall and the corner wave is induced. On the footwall, the ground motion is dominated simply by the weaker Rayleigh pulse propagating along the free surface because of the much smaller interaction between this Rayleigh pulse and the interface pulse. The generation of the downward interface pulses and the corner wave may play a crucial role in understanding the effects of geometrical asymmetry on the strong motion induced by shallow dip-slip faulting, but it has not been well recognized so far, partly because those waves are not expected for a fault that is located and ruptures only at depth. However, the seismological recordings of the 1999 Chi-Chi, Taiwan, the 2004 Niigata-ken Chuetsu, Japan, earthquakes as well as a more recent one in Iwate-Miyagi Inland

  16. Model-Based Methods for Fault Diagnosis: Some Guide-Lines

    DEFF Research Database (Denmark)

    Patton, R.J.; Chen, J.; Nielsen, S.B.

    1995-01-01

    This paper provides a review of model-based fault diagnosis techniques. Starting from basic principles, the properties...

  17. Fault diagnosis model based on multi-manifold learning and PSO-SVM for machinery

    Institute of Scientific and Technical Information of China (English)

    Wang Hongjun; Xu Xiaoli; Rosen B G

    2014-01-01

    Fault diagnosis technology plays an important role in industry, because an unexpected fault of a machine can bring heavy losses to people and companies. A fault diagnosis model based on multi-manifold learning and particle swarm optimization support vector machine (PSO-SVM) is studied. The model is applied to a rolling bearing experiment with three kinds of faults. The results verify that the model based on multi-manifold learning and PSO-SVM acquires fault-sensitive features effectively and with good accuracy.

  18. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.

  19. Multiple Local Reconstruction Model-based Fault Diagnosis for Continuous Processes

    Institute of Scientific and Technical Information of China (English)

    ZHAO Chun-Hui; LI Wen-Qing; SUN You-Xian; GAO Fu-Rong

    2013-01-01

    In the present work, the multiplicity of fault characteristics is proposed and analyzed to improve the fault diagnosis performance. It is based on the recognition that the underlying fault characteristics in general do not stay constant but will present changes along the time direction. That is, the fault process reveals different variable correlations across different time periods. To analyze the multiplicity of fault characteristics, a fault division algorithm is developed to divide the fault process into multiple local time periods where the fault characteristics are deemed similar within the same local time period. Then a representative fault decomposition model is built in each local time period to reveal the relationships between the fault and normal operation status. In this way, these different fault characteristics can be modeled respectively. The proposed method gives an interesting insight into the fault evolvement behaviors and a more accurate from-fault-to-normal reconstruction result can be expected for fault diagnosis. The feasibility and performance of the proposed fault diagnosis method are illustrated with the Tennessee Eastman process.

  20. Transposing an active fault database into a seismic hazard fault model for nuclear facilities - Part 1: Building a database of potentially active faults (BDFA) for metropolitan France

    Science.gov (United States)

    Jomard, Hervé; Cushing, Edward Marc; Palumbo, Luigi; Baize, Stéphane; David, Claire; Chartier, Thomas

    2017-09-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. BDFA reports to date 136 faults and represents a first step toward the implementation of seismic source models that would be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15 % of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  1. Sampling rate and aliasing on a virtual laboratory

    Directory of Open Access Journals (Sweden)

    Mihai Bogdan

    2009-10-01

    Full Text Available The sampling frequency determines the quality of the analog signal that is converted. A higher sampling frequency achieves better conversion of the analog signal. The minimum sampling frequency required to represent the signal should be at least twice the maximum frequency of the analog signal under test (this is called the Nyquist rate). In the following virtual instrument, an example of sampling is shown. If the sampling frequency is equal to or less than twice the frequency of the input signal, a signal of lower frequency is generated from such a process (this is called aliasing). The goal of this paper is to teach students the basic concepts of sampling rate and aliasing and to make them familiar with these concepts.
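
    A minimal Python sketch of the effect described here: sampling a 9 Hz sine at only 10 Hz (values chosen purely for illustration, well below the 18 Hz Nyquist rate) produces samples indistinguishable from those of a 1 Hz sine.

```python
import numpy as np

f_signal = 9.0                                   # Hz, illustrative input frequency
f_sample = 10.0                                  # Hz, below the Nyquist rate of 18 Hz
t = np.arange(0, 2.0, 1.0 / f_sample)            # sample instants over two seconds

sampled = np.sin(2 * np.pi * f_signal * t)                # samples of the 9 Hz sine
alias = np.sin(2 * np.pi * (f_sample - f_signal) * t)     # a 1 Hz sine at the same instants

# The apparent (aliased) frequency is |f_signal - k*f_sample| folded into [0, f_sample/2].
print(np.allclose(sampled, -alias))              # True: the 9 Hz tone aliases onto 1 Hz
```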

  2. Modeling and simulation of longwall scraper conveyor considering operational faults

    Science.gov (United States)

    Cenacewicz, Krzysztof; Katunin, Andrzej

    2016-06-01

    The paper provides a description of an analytical model of a longwall scraper conveyor, including its electrical, mechanical, measurement and control actuating systems, as well as a presentation of its implementation in the form of a computer simulator in the Matlab®/Simulink® environment. Using this simulator, eight scenarios typical of the operational conditions of an underground scraper conveyor can be generated. Moreover, the simulator provides the possibility of modeling various operational faults and of taking into consideration the measurement noise generated by transducers. An analysis of various combinations of operating scenarios and faults is presented and described. The simulator developed may find application in benchmarking diagnostic systems and testing operational control algorithms, or can be used to support the modeling of real processes occurring in similar systems.

  3. Adaptive partitioning PCA model for improving fault detection and isolation

    Institute of Scientific and Technical Information of China (English)

    Kangling Liu; Xin Jin; Zhengshun Fei; Jun Liang

    2015-01-01

    In chemical processes, a large number of measured and manipulated variables are highly correlated. Principal component analysis (PCA) is widely applied as a dimension reduction technique for capturing the strong correlation underlying the process measurements. However, it is difficult for PCA-based fault detection results to be interpreted physically and to provide support for isolation. Some approaches incorporating process knowledge have been developed, but such information is often scarce and deficient in practice. Therefore, this work proposes an adaptive partitioning PCA algorithm based entirely on operation data. The process feature space is partitioned into several sub-feature spaces. The constructed sub-block models can not only reflect the local behavior of process change, namely grasp the intrinsic local information underlying the process changes, but also improve fault detection and isolation through the combination of local fault detection results and the reduction of the smearing effect. The method is demonstrated on the TE process, and the results show that the new method is much better in fault detection and isolation compared to the conventional PCA method.

  4. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.; Haves, Philip; Sohn, Michael D.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
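
    A minimal sketch of the Bayesian-updating idea (not the paper's actual formulation): two hypotheses, "normal" and "fault", are updated sequentially from invented air-handling-unit temperature residuals using assumed Gaussian likelihoods.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical residuals: measured minus model-predicted supply-air temperature (deg C)
residuals = np.array([0.3, 1.8, 2.1, 2.4, 1.9])

prior = {"normal": 0.9, "fault": 0.1}                     # assumed prior beliefs
likelihood = {"normal": norm(loc=0.0, scale=1.0),         # residuals scatter around zero
              "fault":  norm(loc=2.0, scale=1.0)}         # a bias appears when faulty

posterior = dict(prior)
for r in residuals:                                       # sequential Bayesian update
    for h in posterior:
        posterior[h] *= likelihood[h].pdf(r)
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total                             # renormalize after each observation

print(posterior)   # probability mass shifts toward "fault" as biased residuals accumulate
```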

  5. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin

    2016-04-06

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  6. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when the data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing the performance prediction model is incorporating the advantages and disadvantages of different models to obtain better accuracy.
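
    A minimal sketch of the MC idea with invented condition states and a made-up one-year transition matrix (not calibrated to any survey data): the current condition distribution is simply propagated forward by repeated matrix multiplication.

```python
import numpy as np

states = ["good", "fair", "poor"]                 # illustrative faulting severity states
P = np.array([[0.85, 0.12, 0.03],                 # made-up one-year transition probabilities
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])                # condition only worsens without maintenance

x = np.array([1.0, 0.0, 0.0])                     # all sections start in "good" condition
for year in range(10):
    x = x @ P                                     # propagate the distribution one year ahead

print(dict(zip(states, np.round(x, 3))))          # expected condition shares after 10 years
```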

  7. Seismicity and fluid injections: numerical modelling of fault activation

    Science.gov (United States)

    Murphy, S.; O'Brien, G.; Bean, C.; McCloskey, J.; Nalbant, S.

    2012-04-01

    Injection of fluid into the subsurface is a common technique and is used to optimise returns from hydrocarbon plays (e.g. enhanced oil recovery, hydrofracturing of shales) and geothermal sites, as well as for the sequestering of carbon dioxide. While it is well understood that stress perturbations caused by fluid injections can induce or trigger earthquakes, the modelling of such hazard is still in its infancy. By combining fluid flow and seismicity simulations we have created a numerical model for investigating induced seismicity over large time periods, so that we might examine the role of operational and geological factors in seismogenesis around a sub-surface fluid injection. In our model, fluid injection is simulated using pore fluid movement throughout a permeable layer from a high-pressure point source using a lattice Boltzmann scheme. We can accommodate complicated geological structures in our simulations. Seismicity is modelled using a quasi-dynamic relationship between stress and slip coupled with a rate-and-state friction law. By spatially varying the frictional parameters, the model can reproduce both seismic and aseismic slip. Static stress perturbations (due either to fault cells slipping or to fluid injection) are calculated using analytical solutions for slip dislocations/pressure changes in an elastic half space. An adaptive time step is used in order to increase computational efficiency and thus allow us to model hundreds of years of seismicity. As a case study, we investigate the role that the relative fault - injection location plays in seismic activity. To do this we created three synthetic catalogues, with only the relative location of the fault from the point of injection varying between the models. In our control model there is no injection, meaning it contains only tectonically triggered events. In the other two catalogues, the injection site is placed below and adjacent to the fault, respectively. The injection itself is into a permeable thin planar layer
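
    The rate-and-state friction law referred to in this record is commonly written in the Dieterich (aging-law) form below, where V is slip velocity, θ a state variable, D_c the characteristic slip distance, and μ_0, V_0, a, b reference constants; the parameterization used in the study itself may differ.

```latex
\[
  \mu(V,\theta) \;=\; \mu_0 + a\,\ln\!\left(\frac{V}{V_0}\right) + b\,\ln\!\left(\frac{V_0\,\theta}{D_c}\right),
  \qquad
  \dot{\theta} \;=\; 1 - \frac{V\,\theta}{D_c}
\]
```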

  8. A Test Model of Water Pressures within a Fault in Rock Slope

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces model test results of the water pressure in a fault located in a slope, under 16 different conditions. The results show that the water pressure in the fault can be expressed by a linear function, which is similar to the theoretical model suggested by Hoek. The factors affecting the water pressure are, in order, the water level in the tension crack, the dip angle of the fault, the height of the filling materials and the thickness of the fault zone.

  9. The RR interval spectrum, the ECG signal and aliasing

    CERN Document Server

    Gersten, A; Ronen, A; Cassuto, Y

    1999-01-01

    A reliable spectral analysis requires a sampling rate at least twice as large as the frequency bound, otherwise the analysis will be unreliable and plagued by aliasing distortions. The RR samplings do not satisfy this requirement and therefore their spectral analysis might be unreliable. In order to demonstrate the feasibility of aliasing in RR spectral analysis, we have performed an experiment which clearly shows how the aliasing developed. In the experiment, one of us (A.G) kept his high breathing rate constant with the aid of a metronome for more than 5 minutes. The breathing rate was larger than one-half the heart rate. Very accurate results were obtained and the resulting aliasing is well understood. To our best knowledge this is the first controlled experiment of this kind conducted on humans. We compared the RR spectral analysis with the spectrum of the ECG signals from which the RR intervals were extracted. In the frequencies significant for RR analysis (below one-half hertz), significant diff...

  10. Anti-aliasing weighting functions for multislice helical CT

    Science.gov (United States)

    La Riviere, Patrick J.; Pan, Xiaochuan

    2002-05-01

    We develop a new projection weighting function for interpolation and reconstruction of multi-slice helical computed tomography data, with the hope of reducing longitudinal aliasing in reconstructed volumes. The weighting function is based on the application of the Papoulis generalized sampling theorem to the interlaced longitudinal samples acquired by the multi-slice scanner. We call the approach 180MAA, for multi-slice anti-aliasing. For pitch 3, the 180MAA approach yields high-quality images of the 3D Shepp-Logan phantom as well as a longitudinal MTF superior to that of the 180MLI approach, which is based on the use of linear interpolation. However, it is not as successful at mitigating aliasing as had been hoped, due to the presence of a significant and unexpected aliasing component that can be attributed to the small cone angle in multi-slice helical CT. The presence of this effect is interesting and significant in its own right, however.

  11. An Anti-aliasing Algorithm Suitable to Map Publishing Symbol

    Institute of Scientific and Technical Information of China (English)

    DENG Shujun

    2006-01-01

    On the basis of an analysis of various algorithms, an anti-aliasing algorithm called the brush method is presented, which is suitable for map publishing symbols. After introducing the basic principle and implementation of the brush method in detail, its results and efficiency are evaluated through experiments.

  12. Sandbox Modeling of the Fault-increment Pattern in Extensional Basins

    Institute of Scientific and Technical Information of China (English)

    Geng Changbo; Tong Hengmao; He Yudan; Wei Chunguang

    2007-01-01

    Three series of sandbox modeling experiments were performed to study the fault-increment pattern in extensional basins. Experimental results showed that the tectonic action mode of boundaries and the shape of major boundary faults control the formation and evolution of faults in extensional basins. In the process of extensional deformation, the increase in the number and length of faults was episodic, and every 'episode' experienced three periods: a strain-accumulation period, a quick fault-increment period and a strain-adjustment period. The more complex the shape of the boundary fault, the higher the strain increment each 'episode' experienced. Different extensional modes resulted in different fault-increment patterns. The horizontal detachment extensional mode has the 'linear' style of fault-increment pattern, while the extensional mode controlled by a listric fault has the 'stepwise' style of fault-increment pattern, and the extensional mode controlled by a ramp-flat boundary fault has the 'stepwise-linear' style of fault-increment pattern. These fault-increment patterns could provide a theoretical method for fault interpretation and fracture prediction in extensional basins.

  13. A new conceptual model for damage zone evolution with fault growth

    Science.gov (United States)

    de Joussineau, G.; Aydin, A.

    2006-12-01

    Faults may either impede or enhance fluid flow in the subsurface, which is relevant to a number of economic issues (hydrocarbon migration and entrapment, formation and distribution of mineral deposits) and environmental problems (movement of contaminants). Fault zones typically comprise a low-permeability core made up of intensely deformed fault rock and a high-permeability damage zone defined by fault-related fractures. The geometry, petrophysical properties and continuity of both the fault core and the damage zone have an important influence on the mechanical properties of the fault systems and on subsurface fluid flow. Information about fault components from remote seismic methods is limited and is available only for large faults (slip larger than 20-100m). It is therefore essential to characterize faults and associated damage zones in field analogues, and to develop conceptual models of how faults and related structures form and evolve. Here we present such an attempt to better understand the evolution of fault damage zones in the Jurassic Aztec Sandstone of the Valley of Fire State Park (SE Nevada). We document the formation and evolution of the damage zone associated with strike-slip faults through detailed field studies of faults of increasing slip magnitudes. The faults initiate as sheared joints with discontinuous pockets of damage zone located at fault tips and fault surface irregularities. With increasing slip (slip >5m), the damage zone becomes longer and wider by progressive fracture infilling, and is organized into two distinct components with different geometrical and statistical characteristics. The first component of the damage zone is the inner damage zone, directly flanking the fault core, with a relatively high fracture frequency and a thickness that scales with the amount of fault slip. Parts of this inner zone are integrated into the fault core by the development of the fault rock, contributing to the core's progressive widening. The second

  14. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case of study: the strike-slip Gole Larghe Fault Zone (GLFZ). It consists of a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First of all, a digital model for the outcrop is reconstructed with photogrammetric techniques, using a large number of high resolution digital photographs, processed with VisualSFM software. By using high resolution photographs the DOM can have a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be quite easily separated from the host rock (tonalite) using spectral analysis. Particularly, band ratio and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is more tricky because the differences between the pseudotachylyte and biotite spectral signature are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. After

  15. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  16. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
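
    As a hedged illustration of how an areal fault density can be turned into an encounter probability, the Monte Carlo sketch below scatters fault traces at a given density and tests whether any lies close to a circular plume footprint. Every number (density, trace length, plume radius, domain size) is invented, and the geometric test is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(0)

fault_density = 0.005        # faults per km^2 (invented)
fault_length = 4.0           # km, assumed uniform trace length
plume_radius = 2.0           # km, idealized circular plume footprint
domain = 100.0               # km, side length of the square study area
n_trials = 5000

hits = 0
for _ in range(n_trials):
    n_faults = rng.poisson(fault_density * domain**2)       # faults in this realization
    centers = rng.uniform(0.0, domain, size=(n_faults, 2))  # random fault-trace centres
    # crude encounter test: any trace centre within plume radius plus half a trace length
    d = np.linalg.norm(centers - domain / 2.0, axis=1)
    if np.any(d < plume_radius + fault_length / 2.0):
        hits += 1

print(hits / n_trials)       # Monte Carlo estimate of the plume-fault encounter probability
```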

  17. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    Science.gov (United States)

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a double-redundancy technique, which cannot resolve some cases for lack of a deciding judgment, while adding hardware redundancy would increase the structural complexity and weight. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
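
    The triplex-voting idea described here (two hardware channels checked against the analytical third channel from the on-board model) can be sketched as follows; the function name, units and tolerance are hypothetical and the engine model itself is not reproduced.

```python
def isolate_sensor_fault(ch_a, ch_b, model_estimate, tol=5.0):
    """Return the channel that disagrees with the other two by more than `tol`, if any."""
    readings = {"A": ch_a, "B": ch_b, "model": model_estimate}
    for name, value in readings.items():
        others = [v for k, v in readings.items() if k != name]
        # a channel is declared faulty only if it deviates from both others,
        # while the remaining two still agree with each other
        if all(abs(value - v) > tol for v in others) and abs(others[0] - others[1]) <= tol:
            return f"fault on channel {name}"
    return "no fault isolated"

# A step fault on channel B: channel A and the analytical third channel agree, B is voted out
print(isolate_sensor_fault(ch_a=101.0, ch_b=140.0, model_estimate=99.5))
```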

  19. Faults simulations for three-dimensional reservoir-geomechanical models with the extended finite element method

    Science.gov (United States)

    Prévost, Jean H.; Sukumar, N.

    2016-01-01

    Faults are geological entities with thicknesses several orders of magnitude smaller than the grid blocks typically used to discretize reservoir and/or over-under-burden geological formations. Introducing faults in a complex reservoir and/or geomechanical mesh therefore poses significant meshing difficulties. In this paper, we consider the strong-coupling of solid displacement and fluid pressure in a three-dimensional poro-mechanical (reservoir-geomechanical) model. We introduce faults in the mesh without meshing them explicitly, by using the extended finite element method (X-FEM) in which the nodes whose basis function support intersects the fault are enriched within the framework of partition of unity. For the geomechanics, the fault is treated as an internal displacement discontinuity that allows slipping to occur using a Mohr-Coulomb type criterion. For the reservoir, the fault is either an internal fluid flow conduit that allows fluid flow in the fault as well as to enter/leave the fault or is a barrier to flow (sealing fault). For internal fluid flow conduits, the continuous fluid pressure approximation admits a discontinuity in its normal derivative across the fault, whereas for an impermeable fault, the pressure approximation is discontinuous across the fault. Equal-order displacement and pressure approximations are used. Two- and three-dimensional benchmark computations are presented to verify the accuracy of the approach, and simulations are presented that reveal the influence of the rate of loading on the activation of faults.

  20. A Fault Evolution Model Including the Rupture Dynamic Simulation

    Science.gov (United States)

    Wu, Y.; Chen, X.

    2011-12-01

    tip keeps the rupture continuing easily. Therefore, compared with the current simulation, we expect a different stress evolution after a large earthquake on a short time scale, which is essential for short-term prediction. Once the model is successfully constructed, we intend to apply it to the Parkfield segment of the San Andreas Fault. We try to simulate the seismicity evolution and the distribution of coseismic and postseismic slip and interseismic creep over the past decades. We expect to reproduce some specific events and slip distributions.

  1. Mechanical Modeling of Near-Fault Deformation Within the Dragon's Back Pressure Ridge, San Andreas Fault, Carrizo Plain, California

    Science.gov (United States)

    Hilley, G. E.; Arrowsmith, R.

    2011-12-01

    This contribution uses field observations and numerical modeling to understand how slip along the variably oriented fault surfaces in the upper few km of the San Andreas Fault (SAF) zone produces near-fault deformation observed within a 4.5-km-long Dragon's Back Pressure Ridge (DBPR) in the Carrizo Plain, central California. Geologic and geomorphic mapping of this feature indicates that the amplitude of monoclinal warping of Quaternary sediments increases from southeast to northwest along the southwestern third of the DBPR, and remains approximately constant throughout the remaining two thirds of the landform. When viewed with other structural observations and limited near-surface magnetotelluric imaging, these geologic observations are most compatible with a scenario in which shallow offset of the SAF to the northeast creates a structural knuckle that is anchored to the North American plate. Thus, deformation accrues as right-lateral strike-slip motion along the SAF moves this obstruction along the fault plane through the DBPR block. We have used the Gale numerical model to simulate deformation expected for geometries similar to those inferred within the vicinity of the DBPR. This is accomplished by relating stresses and strains in the upper crust according to a Drucker-Prager (plastic yielding) constitutive rule. Deformation in the model is driven by applying 35 mm/yr of right-lateral strike-slip motion to the model boundary; this displacement rate is likewise applied to the base of the model. The model geometry of the SAF at the beginning of the loading was fashioned to produce the discontinuity in the geometry of the fault plane that is inferred from field observations. The friction and cohesion of crust on each side of the fault were changed between models to determine the parameter values that preserve the structural discontinuity along the SAF as finite deformation accrued. The structural discontinuity over the ~4.5 km of model displacement is maintained in

  2. Anti-aliasing filters for deriving high-accuracy DEMs from TLS data: A case study from Freeport, Texas

    Science.gov (United States)

    Xiong, Lin; Wang, Guoquan; Wessel, Paul

    2017-03-01

    Terrestrial laser scanning (TLS), also known as ground-based Light Detection and Ranging (LiDAR), has been frequently applied to build bare-earth digital elevation models (DEMs) for high-accuracy geomorphology studies. The point clouds acquired from TLS often achieve a spatial resolution at fingerprint (e.g., 3 cm×3 cm) to handprint (e.g., 10 cm×10 cm) level. A downsampling process has to be applied to decimate the massive point clouds and obtain manageable DEMs. It is well known that downsampling can result in aliasing that causes different signal components to become indistinguishable when the signal is reconstructed from the datasets with a lower sampling rate. Conventional DEMs are mainly the results of upsampling of sparse elevation measurements from land surveying, satellite remote sensing, and aerial photography. As a consequence, the effects of aliasing caused by downsampling have not been fully investigated in the open literature of DEMs. This study aims to investigate the spatial aliasing problem of regridding dense TLS data. The TLS data collected from the beach and dune area near Freeport, Texas in the summer of 2015 are used for this study. The core idea of the anti-aliasing procedure is to apply a low-pass spatial filter prior to conducting downsampling. This article describes the successful use of a fourth-order Butterworth low-pass spatial filter employed in the Generic Mapping Tools (GMT) software package as an anti-aliasing filter. The filter can be applied as an isotropic filter with a single cutoff wavelength or as an anisotropic filter with two different cutoff wavelengths in the X and Y directions. The cutoff wavelength for the isotropic filter is recommended to be three times the grid size of the target DEM.
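
    A generic sketch of the anti-aliasing step (not the GMT implementation): a fourth-order Butterworth low-pass is applied in the frequency domain with a cutoff wavelength of roughly three times the target grid size, after which the grid is decimated. The grid spacing, cutoff and random test array below are all invented.

```python
import numpy as np

def butterworth_lowpass_2d(grid, dx, cutoff_wavelength, order=4):
    """Isotropic Butterworth low-pass applied to a 2-D grid in the frequency domain."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx)                             # spatial frequencies (cycles/unit)
    ky = np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[np.newaxis, :]**2 + ky[:, np.newaxis]**2)  # radial wavenumber grid
    kc = 1.0 / cutoff_wavelength                              # cutoff frequency
    H = 1.0 / (1.0 + (k / kc)**(2 * order))                   # Butterworth amplitude response
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * H))

dense = np.random.default_rng(1).normal(size=(512, 512))      # stand-in for a 3 cm TLS grid
smooth = butterworth_lowpass_2d(dense, dx=0.03, cutoff_wavelength=0.9)  # ~3x the 0.3 m target
dem = smooth[::10, ::10]                                       # decimate to the coarser DEM grid
```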

  3. Dynamic X-Y Crosstalk / Aliasing Errors of Multiplexing BPMs

    Energy Technology Data Exchange (ETDEWEB)

    Straumann, T.; /SLAC

    2005-08-09

    Multiplexing Beam Position Monitors are widely used for their simplicity and inherent drift cancellation property. These systems successively feed the signals of (typically four) RF pickups through one single detector channel. The beam position is calculated from the demultiplexed base band signal. However, as shown below, transverse beam motion results in positional aliasing errors due to the finite multiplexing frequency. Fast vertical motion, for example, can alias into an apparent, slow horizontal position change.

  4. Fault detection and identification based on combining logic and model in a wall-climbing robot

    Institute of Scientific and Technical Information of China (English)

    Yong JIANG; Hongguang WANG; Lijin FANG; Mingyang ZHAO

    2009-01-01

    A combined logic- and model-based approach to fault detection and identification (FDI) in a suction foot control system of a wall-climbing robot is presented in this paper. For the control system, some fault models are derived by kinematics analysis. Moreover, the logic relations of the system states are known in advance. First, a fault tree is used to analyze the system by evaluating the basic events (elementary causes), which can lead to a root event (a particular fault). Then, a multiple-model adaptive estimation algorithm is used to detect and identify the model-known faults. Finally, based on the system states of the robot and the results of the estimation, the model-unknown faults are also identified using logical reasoning. Experiments show that the proposed approach based on the combination of logical reasoning and model estimating is efficient in the FDI of the robot.

  5. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults...

  6. Out-of-Bounds Array Access Fault Model and Automatic Testing Method Study

    Institute of Scientific and Technical Information of China (English)

    GAO Chuanping; DUAN Miyi; TAN Liqun; GONG Yunzhan

    2007-01-01

    Out-of-bounds array access (OOB) is one of the fault models commonly employed for object-oriented programming languages. At present, the technique of code insertion and optimization is widely used to detect and fix this kind of fault. Although this method can examine some of the faults in OOB programs, it can neither test programs thoroughly nor find the faults reliably. The code insertion makes the test procedures so inefficient that testing becomes costly and time-consuming. This paper uses a special static test technique to detect faults in OOB programs. We first establish the fault models of OOB programs, and then develop an automatic test tool to detect the faults. Experiments have been carried out, and the results show that the method proposed in the paper is efficient and feasible in practical applications.
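
    As an illustration of static (no code insertion) detection of the OOB fault model, the toy Python checker below walks a syntax tree and flags constant indices that exceed the length of a literal list. It is only a hedged stand-in for the tool described in the record, which targets other languages and far more general fault patterns; it assumes Python 3.9+ for the AST layout used.

```python
import ast

SOURCE = """
arr = [1, 2, 3]
print(arr[5])   # out-of-bounds constant index
print(arr[2])   # in bounds
"""

class OOBChecker(ast.NodeVisitor):
    """Flag constant subscripts that exceed the length of a literal list (illustrative only)."""
    def __init__(self):
        self.sizes = {}
    def visit_Assign(self, node):
        if isinstance(node.value, ast.List) and isinstance(node.targets[0], ast.Name):
            self.sizes[node.targets[0].id] = len(node.value.elts)   # remember the list length
        self.generic_visit(node)
    def visit_Subscript(self, node):
        if (isinstance(node.value, ast.Name) and node.value.id in self.sizes
                and isinstance(node.slice, ast.Constant) and isinstance(node.slice.value, int)
                and node.slice.value >= self.sizes[node.value.id]):
            print(f"line {node.lineno}: index {node.slice.value} out of bounds "
                  f"for '{node.value.id}' (size {self.sizes[node.value.id]})")
        self.generic_visit(node)

OOBChecker().visit(ast.parse(SOURCE))
```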

  7. Altimetric sampling and mapping procedures induce spatial and temporal aliasing of the signal – characteristics of these aliasing effects in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    F. Briol

    2007-07-01

    Full Text Available This study deals with spatial and temporal aliasing of the sea surface signal and its restitution with altimetric maps of Sea Level Anomalies (SLA in the Mediterranean Sea. Spatial and temporal altimetry sampling, combined with a mapping process, are unable to restore high-frequency (HF surface variability. In the Mediterranean Sea, it has been shown that signals whose intervals are less than 30–40 days are largely underestimated, and the residual HF restitution signal contains characteristic errors which make it possible to identify the spatial and temporal sampling of each satellite. The origin of these errors is relatively complex. Three main effects are involved: the sampling of the HF long-wavelength (LW signal, the correction of this signal's aliasing and the mapping procedure. – The sampling depends on the characteristics of the satellites considered, but generally induces inter-track bias that needs to be corrected before the mapping procedure is applied. – Correcting the aliasing of the HF LW signal, carried out using a barotropic model output and/or an empirical method, is not perfect. In fact, the baroclinic part of the HF LW signal is neglected and the numerical model's capabilities are limited by the spatial resolution of the model and the forcing. The empirical method cannot precisely control the corrected signal. – The mapping process, which is optimised to improve restitution of mesoscale activity, does not propagate the LW signal far from the satellite tracks. Even though these residual errors are very low with respect to the total signal, their signature may be visible on maps of SLAs. However, these errors can be corrected by more careful consideration of their characteristics in terms of spatial distribution induced by altimetric along-track sampling. They can also be attenuated by increasing the altimetric spatial coverage through the merging of different satellites. Ultimately, the HF signal, which is missing in

  8. Fault Estimation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2002-01-01

    This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  9. Testing fault growth models with low-temperature thermochronology in the northwest Basin and Range, USA

    Science.gov (United States)

    Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.

    2016-10-01

    Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2-0.4 km/Myr, ultimately exhuming approximately 1.5-5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3-4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.

  10. Integrating fault and seismological data into a probabilistic seismic hazard model for Italy.

    Science.gov (United States)

    Valentini, Alessandro; Visini, Francesco; Pace, Bruno

    2017-04-01

    We present the results of a new probabilistic seismic hazard analysis (PSHA) for Italy based on active fault and seismological data. Combining seismic hazard from active faults with distributed seismic sources (where there are no data on active faults) is the backbone of this work. Far from identifying a best procedure, currently adopted approaches combine active faults and background sources by applying a threshold magnitude, generally between 5.5 and 7, above which seismicity is modelled by faults and below which it is modelled by distributed or area sources. In our PSHA we (i) apply a new method for the treatment of geologic data on major active faults and (ii) propose a new approach to combine these data with historical seismicity to evaluate the PSHA for Italy. Assuming that deformation is concentrated along faults, we combine the earthquake occurrences derived from the geometry and slip rates of the active faults with the earthquakes from the spatially smoothed earthquake sources. In the vicinity of an active fault, the smoothed seismic activity is gradually reduced by a fault-size-driven factor. Even if the range and gross spatial distribution of expected accelerations obtained in our work are comparable to the ones obtained through methods applying seismic catalogues and classical zonation models, the main difference lies in the detailed spatial pattern of our PSHA model: our model is characterized by spots of more hazardous areas in correspondence with mapped active faults, while the previous models give expected accelerations almost uniformly distributed over large regions. Finally, we investigate the impact on the hazard results of the earthquake rates derived from two magnitude-frequency distribution (MFD) models for faults, and the contribution of faults versus distributed seismic activity.

  11. Markov Modeling of Component Fault Growth Over A Derived Domain of Feasible Output Control Effort Modifications

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of...

  12. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  15. An approach to 3D NURBS modeling of complex fault network considering its historic tectonics

    Institute of Scientific and Technical Information of China (English)

    ZHONG Denghua; LIU Jie; LI Mingchao

    2006-01-01

    Fault disposal is a research area that presents difficulties in 3D geological modeling and visualization. In this paper, we propose an integrated approach to reconstructing a complex fault network (CFN). Based on non-uniform rational B-spline (NURBS) techniques, fault surfaces were constructed to reflect their spatial trend, and correlative surfaces were enclosed to form fault body models. Based on these models and considering their tectonic history, a method was put forward to solve the 3D modeling problem that arises when the intersection of two faults in the CFN changes their relative positions. First, according to the intersection relationships obtained from geological interpretation, we introduced a topological sort to determine the order of fault body construction and rebuilt the fault bodies in that order; then, with the disposal method for two intersecting faults in 3D modeling and applying Boolean operations, we investigated the characteristics of the faults at the intersecting parts. An example of its application in a hydropower engineering project is presented. The results show that this modeling approach increases computing efficiency while requiring less computer memory, and that it can faithfully and objectively reproduce the CFN in the engineering region, which establishes a theoretical basis for 3D modeling and analysis of complex engineering geology.

  16. Surveillance system and method having an operating mode partitioned fault classification model

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  17. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is on the analytical model-based fault-detection and fault diagnosis methods, often viewed as the classical or deterministic ones. Emphasis is placed on algorithms suitable for ship automation, unmanned underwater vehicles, and other automatic control systems.

  18. De-correlated combination of two low-low Satellite-to-Satellite tracking pairs according to temporal aliasing

    Science.gov (United States)

    Murböck, Michael; Pail, Roland

    2014-05-01

    The monitoring of the temporal changes in the Earth's gravity field is of great scientific and societal importance. Within several days a homogeneous global coverage of gravity observations can be obtained with satellite missions. Temporal aliasing of background model errors into global gravity field models will be one of the largest restrictions in future satellite temporal gravity recovery. The largest errors are due to high-frequent tidal and non-tidal atmospheric and oceanic mass variations. Having a double pair low-low Satellite-to-Satellite tracking (SST) scenario on different inclined orbits reduces temporal aliasing errors significantly. In general temporal aliasing effects for a single (-pair) mission strongly depend on the basic orbital rates (Murböck et al. 2013). These are the rates of the argument of the latitude and of the longitude of the ascending node. This means that the revolution time and the length of one nodal day determine how large the temporal aliasing error effects are for each SH order. The combination of two low-low SST missions based on normal equations requires an adequate weighting of the two components. This weighting shall ensure the full de-correlation of each of the two parts. Therefore it is necessary to take the temporal aliasing errors into account. In this study it is analyzed how this can be done based on the resonance orders of the two orbits. Different levels of approximation are applied to the de-correlation approach. The results of several numerical closed-loop simulations are shown including stochastic modeling of realistic future instrument noise. It is shown that this de-correlation approach is important for maximizing the benefit of a double-pair low-low SST mission for temporal gravity recovery. Murböck M, Pail R, Daras I and Gruber T (2013) Optimal orbits for temporal gravity recovery regarding temporal aliasing. Journal of Geodesy, Springer Berlin Heidelberg, ISSN 0949-7714, DOI 10.1007/s00190-013-0671-y

  19. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    A quantitative analysis of fault-rock distribution in outcrops of exhumed fault zones is of fundamental importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation. We present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM), developed on the Gole Larghe Fault Zone (GLFZ), a well exposed strike-slip fault in the Adamello batholith (Italian Southern Alps). The GLFZ has been exhumed from ca. 8-10 km depth, and consists of hundreds of individual seismogenic slip surfaces lined by green cataclasites (crushed wall rocks cemented by hydrothermal epidote and K-feldspar) and black pseudotachylytes (solidified frictional melts, considered a marker for seismic slip). A digital model of selected outcrop exposures was reconstructed with photogrammetric techniques, using a large number of high resolution digital photographs processed with the VisualSFM software. The resulting DOM has a resolution up to 0.2 mm/pixel. Most of the outcrop was imaged with photographs covering a 1 x 1 m2 area each, while selected structural features, such as sidewall ripouts or stepovers, were covered with higher-resolution images covering 30 x 40 cm2 areas. Image processing algorithms were preliminarily tested using the ImageJ-Fiji package, then a workflow in Matlab was developed to process a large collection of images sequentially. Particularly in the detailed 30 x 40 cm images, cataclasites and hydrothermal veins were successfully identified using spectral analysis in the RGB and HSV color spaces. This allows mapping the network of cataclasites and veins which provided the pathway for hydrothermal fluid circulation, and also the volume of mineralization, since we are able to measure the thickness of cataclasites and veins on the outcrop surface. The spectral signature of pseudotachylyte veins is indistinguishable from that of biotite grains in the wall rock (tonalite), so we tested morphological analysis tools to discriminate

  20. Modeling of flow in faulted and fractured media

    Energy Technology Data Exchange (ETDEWEB)

    Oeian, Erlend

    2004-03-01

    The work on this thesis has been done as part of a collaborative and interdisciplinary effort to improve the understanding of oil recovery mechanisms in fractured reservoirs. This project has been organized as a Strategic University Program (SUP) at the University of Bergen, Norway. The complex geometries of fractured reservoirs combined with the flow of several fluid phases lead to difficult mathematical and numerical problems. In an effort to decrease the gap between the geological description and numerical modeling capabilities, new techniques are required. Thus, the main objective has been to improve the ATHENA flow simulator and utilize it within a fault modeling context. Specifically, an implicit treatment of the advection-dominated mass transport equations within a domain decomposition based local grid refinement framework has been implemented. Since large computational tasks may arise, the implicit formulation has also been included in a parallel version of the code. Within the current limits of the simulator, appropriate upscaling techniques have also been considered. Part I of this thesis includes background material covering the basic geology of fractured porous media, the mathematical model behind the in-house flow simulator ATHENA, and the additions implemented to approach simulation of flow through fractured and faulted porous media. In Part II, a set of research papers stemming from Part I is presented. A brief outline of the thesis follows below. In Chapt. 1, important aspects of the geological description and physical parameters of fractured and faulted porous media are presented. Based on this, the scope of the thesis is specified with numerical issues and consequences in mind. Then, in Chapt. 2, the mathematical model and discretizations in the flow simulator are given, followed by the derivation of the implicit mass transport formulation. In order to be fairly self-contained, most of the papers in Part II also include the mathematical model

  1. Nearly frictionless faulting by unclamping in long-term interaction models

    Science.gov (United States)

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  2. A parametric study of aliasing error for a narrow field of view scanning radiometer. [for the Earth Radiation Budget experiment

    Science.gov (United States)

    Halyo, N.; Stallman, S. T.

    1980-01-01

    Starting from the general measurement equation, it is shown that an NFOV scanner can be approximated by a spatially invariant system whose point spread function depends on the detector shape, angular characteristics, and electrical filter transfer function for given patches at the top of the atmosphere. The radiometer is modeled as a detector, electrical filter, and analog-to-digital converter followed by a reconstruction filter. The errors introduced by aliasing and blurring into a reconstruction of the input radiant exitance are modeled and analyzed for various detector shapes, sampling intervals, electrical filters and scan types. Quantitative results on the errors introduced are presented, showing the various tradeoffs between design parameters. The results indicate that proper selection of detector shape coupled with the electrical filter can reduce aliasing errors significantly.
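
    The aliasing-versus-blurring trade-off described above can be illustrated with a minimal 1-D sketch (not the paper's radiometer model); the scene, Gaussian point spread function and sampling intervals are invented for illustration.

```python
import numpy as np

# Dense "truth": a scene radiance profile with fine-scale structure.
x = np.linspace(0.0, 1.0, 4000, endpoint=False)
scene = np.sin(2 * np.pi * 3 * x) + 0.4 * np.sin(2 * np.pi * 47 * x)

def gaussian_psf_blur(signal, sigma, dx):
    """Blur with a Gaussian point spread function (detector plus filter)."""
    k = np.arange(-5 * sigma, 5 * sigma + dx, dx)
    psf = np.exp(-0.5 * (k / sigma) ** 2)
    return np.convolve(signal, psf / psf.sum(), mode="same")

dx = x[1] - x[0]
blurred = gaussian_psf_blur(scene, sigma=0.005, dx=dx)

for n_samples in (32, 64, 128, 256):
    xs = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    samples = np.interp(xs, x, blurred)              # sample the blurred scene
    recon = np.interp(x, xs, samples, period=1.0)    # simple reconstruction
    blur_rms = np.sqrt(np.mean((blurred - scene) ** 2))
    alias_rms = np.sqrt(np.mean((recon - blurred) ** 2))
    print(f"{n_samples:4d} samples: blurring RMS {blur_rms:.3f}, "
          f"aliasing + reconstruction RMS {alias_rms:.3f}")
```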

  3. 3D Spontaneous Rupture Models of Large Earthquakes on the Hayward Fault, California

    Science.gov (United States)

    Barall, M.; Harris, R. A.; Simpson, R. W.

    2008-12-01

    We are constructing 3D spontaneous rupture computer simulations of large earthquakes on the Hayward and central Calaveras faults. The Hayward fault has a geologic history of producing many large earthquakes (Lienkaemper and Williams, 2007), with its most recent large event a M6.8 earthquake in 1868. Future large earthquakes on the Hayward fault are not only possible, but probable (WGCEP, 2008). Our numerical simulation efforts use information about the complex 3D fault geometry of the Hayward and Calaveras faults and information about the geology and physical properties of the rocks that surround the Hayward and Calaveras faults (Graymer et al., 2005). Initial stresses on the fault surface are inferred from geodetic observations (Schmidt et al., 2005), seismological studies (Hardebeck and Aron, 2008), and from rate-and- state simulations of the interseismic interval (Stuart et al., 2008). In addition, friction properties on the fault surface are inferred from laboratory measurements of adjacent rock types (Morrow et al., 2008). We incorporate these details into forward 3D computer simulations of dynamic rupture propagation, using the FaultMod finite-element code (Barall, 2008). The 3D fault geometry is constructed using a mesh-morphing technique, which starts with a vertical planar fault and then distorts the entire mesh to produce the desired fault geometry. We also employ a grid-doubling technique to create a variable-resolution mesh, with the smallest elements located in a thin layer surrounding the fault surface, which provides the higher resolution needed to model the frictional behavior of the fault. Our goals are to constrain estimates of the lateral and depth extent of future large Hayward earthquakes, and to explore how the behavior of large earthquakes may be affected by interseismic stress accumulation and aseismic slip.

  4. A fault tolerant model for multi-sensor measurement

    Directory of Open Access Journals (Sweden)

    Li Liang

    2015-06-01

    Multi-sensor systems are very powerful in complex environments. Cointegration theory and the vector error correction model, statistical methods widely applied in economic analysis, are utilized to create a fitting model for the measurements of homogeneous sensors. An algorithm is applied to implement the model for error correction, in which the signal of any sensor can be estimated from those of the others. The model divides a signal series into two parts, a training part and an estimated part. By comparing the estimated part with the actual one, the proposed method can identify a sensor with possible faults and repair its signal. With a small amount of training data, the algorithm can find the right model parameters in real time. When applied to data analysis for aero-engine testing, the model works well. It is therefore not only an effective method to detect sensor failures or abnormalities, but also a useful approach to correct possible errors.
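
    A hedged sketch of the underlying idea, simplified to a plain cointegrating regression rather than a full vector error correction model: each sensor is estimated from the others on a training segment, and the sensor whose test-segment residual grows most is flagged. The data, thresholds and injected fault are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three homogeneous sensors tracking the same drifting quantity (a shared
# stochastic trend) plus independent measurement noise; sensor 2 develops
# a bias fault partway through the test segment.
n = 600
trend = np.cumsum(rng.normal(0.0, 0.1, n))
sensors = np.column_stack([trend + rng.normal(0.0, 0.05, n) for _ in range(3)])
sensors[500:, 2] += 1.0

train, test = sensors[:400], sensors[400:]

def estimate_from_others(data, i, coef=None):
    """Estimate sensor i from the remaining sensors (cointegrating regression)."""
    others = np.delete(data, i, axis=1)
    X = np.column_stack([others, np.ones(len(others))])
    if coef is None:
        coef, *_ = np.linalg.lstsq(X, data[:, i], rcond=None)
    return X @ coef, coef

# Score each sensor: deviation of its test residual, normalized by the
# training residual scatter. The largest score points to the faulty sensor,
# whose signal could then be repaired from the estimate.
scores = []
for i in range(sensors.shape[1]):
    est_train, coef = estimate_from_others(train, i)
    est_test, _ = estimate_from_others(test, i, coef)
    resid_test = test[:, i] - est_test
    scores.append(np.abs(resid_test[-50:]).mean() / (train[:, i] - est_train).std())

print("fault scores per sensor:", np.round(scores, 1))
print("suspected faulty sensor:", int(np.argmax(scores)))
```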

  5. Th gme 05: Modeling of fault reactivation and fault slip in producing gas fields

    NARCIS (Netherlands)

    Wassing, B.B.T.

    2015-01-01

    Current methods which are used for seismic hazard analyses of production induced seismicity in The Netherlands are generally based on either empirical relations which link compaction strain and seismic release or simple relations between available fault area and seismic moment release. Physics based

  6. Model-Based Fault Management Engineering Tool Suite Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's successful development of next generation space vehicles, habitats, and robotic systems will rely on effective Fault Management Engineering. Our proposed...

  7. Certain Type Turbofan Engine Whole Vibration Model with Support Looseness Fault and Casing Response Characteristics

    Directory of Open Access Journals (Sweden)

    H. F. Wang

    2014-01-01

    Support looseness is a common type of fault in aero-engines. Serious looseness faults emerge under larger unbalanced forces, which cause excessive vibration and may even lead to rubbing faults, so it is important to analyze and recognize looseness faults effectively. In this paper, based on the structural features of a certain type of turbofan engine, a whole rotor-support-casing model of the engine is established. The rotor and casing systems are modeled by the finite element beam method; the support systems are modeled by a lumped-mass model; a support looseness fault model is also introduced. The coupled system response is obtained by a numerical integration method. Based on the casing acceleration signals, the impact characteristics of the symmetrical-stiffness and asymmetric-stiffness models are analyzed, showing that the looseness fault leads to longitudinally asymmetrical characteristics of the acceleration time-domain waveform and to multiple-frequency characteristics, which is consistent with vibration signals from real trial runs. The asymmetric-stiffness looseness model is thus verified to be suitable as an aero-engine looseness fault model.
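
    The asymmetry induced by looseness can be reproduced with a much simpler toy than the paper's rotor-support-casing model: a single-degree-of-freedom oscillator whose support stiffness drops once a clearance is exceeded. All parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-DOF rotor-on-support toy with "looseness": the support stiffness drops
# once the displacement exceeds a clearance, giving a piecewise-linear
# (asymmetric) restoring force.
m, c = 1.0, 5.0
k_contact, k_loose, clearance = 1.0e5, 1.0e4, 1.0e-4
unbalance, omega = 20.0, 120.0          # forcing amplitude (N) and speed (rad/s)

def restoring(x):
    if x <= clearance:
        return k_contact * x
    return k_contact * clearance + k_loose * (x - clearance)

def rhs(t, y):
    x, v = y
    return [v, (unbalance * np.cos(omega * t) - c * v - restoring(x)) / m]

t_eval = np.linspace(0.0, 1.0, 20000)
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], t_eval=t_eval, max_step=1e-4)

x = sol.y[0][10000:]                     # roughly steady-state part
print(f"peak displacement: +{x.max():.2e} / -{-x.min():.2e} (asymmetric)")

# Harmonics of the running speed appear once the looseness engages.
spec = np.abs(np.fft.rfft(x - x.mean()))
freqs = 2 * np.pi * np.fft.rfftfreq(x.size, d=t_eval[1] - t_eval[0])
print("strongest spectral lines (rad/s):", np.sort(freqs[np.argsort(spec)[-3:]]))
```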

  8. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual fault or fault network it simulates...

  9. Fault detection and diagnosis in a food pasteurization process with Hidden Markov Models

    OpenAIRE

    Tokatlı, Figen; Cinar, Ali

    2004-01-01

    Hidden Markov Models (HMM) are used to detect abnormal operation of dynamic processes and diagnose sensor and actuator faults. The method is illustrated by monitoring the operation of a pasteurization plant and diagnosing causes of abnormal operation. Process data collected under the influence of faults of different magnitude and duration in sensors and actuators are used to illustrate the use of HMM in the detection and diagnosis of process faults. Case studies with experimental data from a ...

  10. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

    Subject of Research. Software reliability and test planning models are studied, taking into account the probabilistic nature of error detection and discovery. Modeling of software testing enables planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested, using an error detection probability for each software module. The Erlang distribution is used to approximate arbitrary distributions of fault resolution duration, and the exponential distribution is used to approximate fault detection. For each strategy, modified labeled graphs are built, along with systems of differential equations and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes, and the debugging time required to achieve the specified quality goals was calculated. The calculation results are used for time and resource planning of new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of arbitrary time distributions for fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With these models we can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
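
    As a toy illustration of the modeling style described above (a labeled state graph whose Kolmogorov forward equations are solved numerically, with Erlang-distributed resolution represented by exponential stages), one might write something like the following; the states and rates are invented and far simpler than the paper's models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy continuous-time Markov model for one module: a fault is detected at
# rate lam, and its resolution time is Erlang(k, mu), represented as k
# exponential stages. States: 0 = testing, 1..k = resolution stage,
# k+1 = fault detected and resolved (absorbing).
lam, mu, k = 0.8, 1.5, 3

def generator(lam, mu, k):
    n = k + 2
    Q = np.zeros((n, n))
    Q[0, 0], Q[0, 1] = -lam, lam               # detection
    for s in range(1, k + 1):                   # Erlang resolution stages
        Q[s, s], Q[s, s + 1] = -mu, mu
    return Q                                    # last state absorbing

Q = generator(lam, mu, k)
p0 = np.zeros(k + 2)
p0[0] = 1.0

# Kolmogorov forward equations dp/dt = p Q, solved numerically.
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 10.0), p0, dense_output=True)

for t in (1.0, 2.0, 5.0, 10.0):
    p = sol.sol(t)
    print(f"t = {t:4.1f}  P(resolved) = {p[-1]:.3f}  P(still testing) = {p[0]:.3f}")
```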

  11. A distributed model predictive control (MPC) fault reconfiguration strategy for formation flying satellites

    Science.gov (United States)

    Esfahani, N. R.; Khorasani, K.

    2016-05-01

    In this paper, an active distributed (also referred to as semi-decentralised) fault recovery control scheme is proposed that incorporates inaccurate and unreliable fault information into a model-predictive-control-based design. The objective is to compensate for identified actuator faults that are subject to uncertainties and detection time delays in the attitude control subsystems of formation flying satellites. The proposed distributed fault recovery scheme is developed through a two-level hierarchical framework. In the first level, or the agent level, the fault is recovered locally to maintain as much as possible the design specifications, feasibility, and tracking performance of all the agents. In the second level, or the formation level, the recovery is carried out by enhancing the performance of the entire team. The fault recovery performance of our proposed distributed (semi-decentralised) scheme is compared with two other alternative schemes, namely the centralised and the decentralised fault recovery schemes. It is shown that the distributed (semi-decentralised) fault recovery scheme satisfies the recovery design specifications and also imposes lower fault compensation control effort cost and communication bandwidth requirements as compared to the centralised scheme. Our proposed distributed (semi-decentralised) scheme also outperforms the achievable performance capabilities of the decentralised scheme. Simulation results corresponding to a network of four precision formation flight satellites are also provided to demonstrate and illustrate the advantages of our proposed distributed (semi-decentralised) fault recovery strategy.

  12. Rheology and friction along the Vema transform fault (Central Atlantic) inferred by thermal modeling

    Science.gov (United States)

    Cuffaro, Marco; Ligi, Marco

    2016-04-01

    We investigate with 3-D finite element simulations the temperature distribution beneath the Vema transform, which offsets the Mid-Atlantic Ridge by ~300 km in the Central Atlantic. The developed thermal model includes the effects of mantle flow beneath a ridge-transform-ridge geometry, of lateral heat conduction across the transform fault, and of the shear heating generated along the fault. Numerical solutions are presented for a 3-D domain, discretized with a non-uniform tetrahedral mesh, where relative plate kinematics is used as boundary condition, providing passive mantle upwelling. The mantle is modelled as a temperature-dependent viscous fluid, and its dynamics can be described by the Stokes and advection-conduction heat equations. The results show that shear heating raises the temperature along the transform fault significantly. In order to test the model results, we calculated the thermal structure simulating the mantle dynamics beneath an accretionary plate boundary geometry that duplicates the Vema transform fault, assuming the present-day spreading rate and direction of the Mid-Atlantic Ridge at 11 °N. The modelled heat flow at the surface was then compared with 23 heat flow measurements carried out along the Vema transform valley. Laboratory studies on the frictional stability of olivine aggregates show that the depth extent of oceanic faulting is thermally controlled and limited by the 600 °C isotherm. The depths of the isotherms of the thermal model were compared to the depths of earthquakes along transform faults. Slip on oceanic transform faults is primarily aseismic; only 15% of the tectonic offset is accommodated by earthquakes. Despite extensive fault areas, few large earthquakes occur on the fault and few aftershocks follow large events. The rheology constrained by the thermal model, combined with the geology and seismicity of the Vema transform fault, allows a better understanding of friction and of the spatial distribution of strength along the fault, and provides

  13. A way to synchronize models with seismic faults for earthquake forecasting: Insights from a simple stochastic model

    CERN Document Server

    González, Álvaro; Vázquez-Prada, Miguel; Gómez, Javier B.; Pacheco, Amalio F.

    2005-01-01

    Numerical models of seismic faults are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual fault or fault network it simulates (just as, for example, meteorologists synchronize their models with the atmosphere by incorporating current atmospheric data in them). However, lithospheric dynamics is largely unobservable: important parameters cannot (or can rarely) be measured in Nature. Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the accurate synchronization of the models. The rupture area is one of the measurable parameters of actual earthquakes. Here we explore how this can be used to at least synchronize fault models between themselves and forecast synthetic earthquakes. Our purpose here is to forec...

  14. Diesel Engine Actuator Fault Isolation using Multiple Models Hypothesis Tests

    DEFF Research Database (Denmark)

    Bøgh, S.A.

    1994-01-01

    Detection of current faults in a D.C. motor with unknown load torques is not feasible with linear methods and threshold logic...

  15. UML Statechart Fault Tree Generation By Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Herbert-Hansen, Zaza Nadja Lee

    Creating fault tolerant and efficient process work-flows poses a significant challenge. Individual faults, defined as abnormal conditions or defects in a component, equipment, or sub-process, must be handled so that the system may continue to operate, and are typically addressed by implementin...

  16. A Complete Analytic Model for Fault Diagnosis of Power Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Daobing; GU Xueping; LI Haipeng

    2011-01-01

    Interconnections of modern bulk electric power systems, while contributing to operating economy and reliability by means of mutual assistance between the subsystems, result in an increased complexity of fault diagnosis and more serious consequences of misdiagnosis. Online fault diagnosis has become a more challenging problem for dispatchers who must operate a power system securely,

  17. Fault detection in processes represented by PLS models using an EWMA control scheme

    KAUST Repository

    Harrou, Fouzi

    2016-10-20

    Fault detection is important for effective and safe process operation. Partial least squares (PLS) has been used successfully in fault detection for multivariate processes with highly correlated variables. However, the conventional PLS-based detection metrics, such as the Hotelling's T and the Q statistics, are not well suited to detect small faults because they only use information about the process in the most recent observation. The exponentially weighted moving average (EWMA), however, has been shown to be more sensitive to small shifts in the mean of process variables. In this paper, a PLS-based EWMA fault detection method is proposed for monitoring processes represented by PLS models. The performance of the proposed method is compared with that of the traditional PLS-based fault detection method through a simulated example involving various fault scenarios that could be encountered in real processes. The simulation results clearly show the effectiveness of the proposed method over the conventional PLS method.
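
    A rough sketch of a PLS-plus-EWMA monitoring scheme in the spirit of the abstract (not the paper's exact statistics or data), using scikit-learn's PLSRegression, a Q-type residual statistic and an EWMA recursion with an approximate control limit.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Simulated correlated process: 6 measured variables driven by 2 latent
# factors, plus a quality variable y (all values are illustrative).
P = rng.normal(size=(2, 6))                 # fixed latent-to-measurement loadings

def simulate(n, fault=0.0):
    t = rng.normal(size=(n, 2))
    X = t @ P + 0.1 * rng.normal(size=(n, 6))
    X[:, 3] += fault                        # small additive fault on one variable
    y = t @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=n)
    return X, y

X_train, y_train = simulate(500)
pls = PLSRegression(n_components=2).fit(X_train, y_train)

def q_statistic(model, X):
    """Squared prediction error (Q/SPE): distance of X from the PLS subspace."""
    X_hat = model.inverse_transform(model.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

# EWMA of the Q statistic: more sensitive to small, persistent shifts than
# the raw statistic, which only uses the most recent observation.
lam = 0.2
q_train = q_statistic(pls, X_train)
limit = q_train.mean() + 3.0 * q_train.std() * np.sqrt(lam / (2.0 - lam))

X_test, _ = simulate(200, fault=0.4)
z = q_train.mean()
for i, q in enumerate(q_statistic(pls, X_test)):
    z = lam * q + (1.0 - lam) * z
    if z > limit:
        print(f"sample {i}: EWMA of Q = {z:.3f} exceeds limit {limit:.3f}")
        break
```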

  18. Application of H-Infinity Fault Detection to Model-Scale Autonomous Aircraft

    Science.gov (United States)

    Vasconcelos, J. F.; Rosa, P.; Kerr, Murray; Latorre Sierra, Antonio; Recupero, Cristina; Hernandez, Lucia

    2015-09-01

    This paper describes the development of a fault detection system for a model-scale autonomous aircraft. The considered fault scenario is defined by malfunctions of the elevator, namely bias and stuck-in-place faults of the surface. The H∞ design methodology is adopted, with an LFT description of the aircraft longitudinal dynamics, which allows the fault detection filter to be synthesized explicitly for a wide range of operating airspeeds. The obtained filter is validated in two stages: in a Functional Engineering Simulator (FES), providing preliminary results of the filter performance; and with experimental data, collected in field tests with actual injection of faults in the elevator surface.

  19. Improved wavelet analysis for induction motors mixed-fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hanlei; ZHOU Jiemin; LI Gang

    2007-01-01

    Eccentricity is one of the frequent faults of induction motors, and it may cause rub between the rotor and the stator. Early detection of significant rub, as distinct from pure eccentricity, can prolong the lifespan of induction motors. This paper is devoted to such mixed-fault diagnosis: eccentricity plus rub fault. The continuous wavelet transform (CWT) is employed to analyze vibration signals obtained from the motor body. An improved continuous wavelet transform is proposed to alleviate frequency aliasing. Experimental results show that the proposed method can effectively distinguish the two types of faults, a single fault of eccentricity and a mixed fault of eccentricity plus rub.

  20. Modeling and Fault Monitoring of Bioprocess Using Generalized Additive Models (GAMs) and Bootstrap

    Institute of Scientific and Technical Information of China (English)

    郑蓉建; 周林成; 潘丰

    2012-01-01

    Fault monitoring of a bioprocess is important to ensure the safety of the reactor and maintain high product quality. It is difficult to build an accurate mechanistic model for a bioprocess, so fault monitoring based on rich historical or online databases is an effective alternative. Stochastic resampling of groups of data with the bootstrap method improves the generalization capability of a model. In this paper, online fault monitoring using generalized additive models (GAMs) combined with the bootstrap is proposed for a glutamate fermentation process. GAMs and the bootstrap are first used to determine confidence intervals based on online and off-line normal sampled data from glutamate fermentation experiments. GAMs are then used for online fault monitoring of time, dissolved oxygen, oxygen uptake rate, and carbon dioxide evolution rate. The method can provide accurate fault alarms online and is helpful in providing useful information for removing faults and abnormal phenomena in the fermentation.

  1. Combining field observations and numerical modeling to better understand fault opening and hydromechanics at depth

    Science.gov (United States)

    Ritz, E.; Pollard, D. D.

    2012-12-01

    This study adds field observations and numerical modeling results to the mounting evidence that fault surface irregularities cause local variations in slip, opening, and stress distributions along faults. A two-dimensional displacement discontinuity boundary element model (DDM) in conjunction with a complementarity algorithm is used to model both idealized and natural fault geometries in order to predict the locations and magnitudes of fault opening, and the style and spatial distribution of off-fault damage, both of which influence local fluid flow. Field observations of exhumed small faults in granodiorite from the central Sierra Nevada, California, help to constrain the numerical models. The Sierran faults exhibit sections of opening that became conduits for fluid flow and void spaces for precipitation of hydrothermal minerals; these sections are often surrounded by fractured and altered wall rock, presumably due to local stress concentrations and the influx of chemically reactive fluids. We are further developing the DDM with complementarity to add internal fluid pressure or normal cohesion along the fault surfaces, which are assigned independently of other contact properties, such as the frictional strength and coefficient of friction. While variable frictional strength or internal normal stress along a planar fault may produce opening or perturb the local stress state, these boundary conditions do not accurately mimic the mechanical behavior of faults with non-planar geometries. We advocate using the nomenclature 'lee' and 'stoss' to describe curved faults rather than 'releasing' and 'restraining bends', because the implied mechanical conditions are not necessarily met. Numerical experiments for idealized curved model faults demonstrate that fault opening can occur along lee sides of the curves, with the enhancing effects of fluid pressure and despite the countervailing effects of increased confining pressure with depth. Ambient effective compressive stresses

  2. Degradation Assessment and Fault Diagnosis for Roller Bearing Based on AR Model and Fuzzy Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2011-01-01

    This paper proposes a new approach combining an autoregressive (AR) model and fuzzy cluster analysis for bearing fault diagnosis and degradation assessment. The AR model is an effective approach to extract fault features and is generally applied to stationary signals. However, the fault vibration signals of a roller bearing are non-stationary and non-Gaussian. Aiming at this problem, the set of parameters of the AR model is estimated based on higher-order cumulants. Consequently, the AR parameters are taken as the feature vectors, and fuzzy cluster analysis is applied to perform classification and pattern recognition. Experimental results show that the proposed method can be used to identify various types and severities of bearing faults. This study is significant for non-stationary and non-Gaussian signal analysis, fault diagnosis and degradation assessment.
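
    A compact numpy-only sketch of the same pipeline, with two simplifications relative to the paper: AR coefficients are estimated by ordinary least squares rather than higher-order cumulants, and a small hand-rolled fuzzy c-means loop is used for clustering. The signals and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar_features(x, order=4):
    """Least-squares AR(order) coefficients used as a feature vector."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def fuzzy_cmeans(F, c=2, m=2.0, iters=100):
    """Tiny fuzzy c-means: returns the membership matrix U (n x c) and centers."""
    n = len(F)
    U = rng.dirichlet(np.ones(c), size=n)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ F) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(F[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Synthetic "healthy" and "faulty" bearing-like signals with different dynamics.
def signal(freq, n=2000):
    t = np.arange(n) / 1000.0
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

features = np.array([ar_features(signal(f)) for f in [50] * 10 + [120] * 10])
U, _ = fuzzy_cmeans(features, c=2)
print("cluster assignments:", U.argmax(axis=1))
```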

  3. Time variable gravity retrieval and treatment of temporal aliasing using optical two-way links between GALILEO and LEO satellites

    Science.gov (United States)

    Hauk, Markus; Pail, Roland; Murböck, Michael; Schlicht, Anja

    2016-04-01

    For the determination of temporal gravity fields satellite missions such as GRACE (Gravity Recovery and Climate Experiment) or CHAMP (Challenging Minisatellite Payload) were used in the last decade. These missions improved the knowledge of atmospheric, oceanic and tidal mass variations. The most limiting factor of temporal gravity retrieval quality is temporal aliasing due to the undersampling of high frequency signals, especially in the atmosphere and oceans. This kind of error causes the typical stripes in spatial representations of global gravity fields such as from GRACE. As part of the GETRIS (Geodesy and Time Reference in Space) mission, that aims to establish a geodetic reference station and precise time- and frequency reference in space by using optical two-way communication links between geostationary (GEO) and low Earth orbiting (LEO) satellites, a possible future gravity field mission can be set up. By expanding the GETRIS space segment to the global satellite navigation systems (GNSS) the optical two-way links also connect the GALILEO satellites among themselves and to LEO satellites. From these links between GALILEO and LEO satellites gravitational information can be extracted. In our simulations inter-satellite links between GALILEO and LEO satellites are used to determine temporal changes in the Earth's gravitational field. One of the main goals of this work is to find a suitable constellation together with the best analysis method to reduce temporal aliasing errors. Concerning non-tidal aliasing, it could be shown that the co-estimation of short-period long-wavelength gravity field signals, the so-called Wiese approach, is a powerful method for aliasing reduction (Wiese et al. 2013). By means of a closed loop mission simulator using inter-satellite observations as acceleration differences along the line-of-sight, different mission scenarios for GALILEO-LEO inter-satellite links and different functional models like the Wiese approach are analysed.

  4. Analytical Model and Algorithm of Fuzzy Fault Tree

    Institute of Scientific and Technical Information of China (English)

    杨艺; 何学秋; 王恩元; 刘贞堂

    2002-01-01

    In the past, the probabilities of basic events were described as triangular or trapezoidal fuzzy numbers, which cannot characterize the common distributions of the primary events in engineering, and fault trees analyzed by fuzzy set theory did not include repeated basic events. This paper presents a new method to analyze the fault tree by using normal fuzzy numbers to describe the fuzzy probability of each basic event, which is more suitable for analyzing the reliability of safety systems, and then derives the formulae for computing the fuzzy probability of the top event of a fault tree that includes repeated events. Finally, an example is given.

  5. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    Efficient fault detection in generators often requires prior knowledge of fault behavior, which can be obtained from theoretical analysis, often carried out using discrete models of a given generator. Mathematical models are commonly represented in the DQ0 reference frame, which is convenient in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab ... as undesired spectral components, which can be detected by applying frequency spectrum analysis.

  6. Fault creep and strain partitioning in Trinidad-Tobago: Geodetic measurements, models, and origin of creep

    Science.gov (United States)

    Geirsson, Halldór; Weber, John; La Femina, Peter; Latchman, Joan L.; Robertson, Richard; Higgins, Machel; Miller, Keith; Churches, Chris; Shaw, Kenton

    2017-04-01

    We studied active faults in Trinidad and Tobago in the Caribbean-South American (CA-SA) transform plate boundary zone using episodic GPS (eGPS) data from 19 sites and continuous GPS (cGPS) data from 8 sites, and modeled these data using a series of simple screw dislocation models. Our best-fit model for interseismic fault slip requires: 12-15 mm/yr of right-lateral movement and very shallow locking (0.2 ± 0.2 km; essentially creep) across the Central Range Fault (CRF); 3.4 +0.3/-0.2 mm/yr across the Soldado Fault in south Trinidad; and 3.5 +0.3/-0.2 mm/yr of dextral shear on fault(s) between Trinidad and Tobago. The upper-crustal faults in Trinidad show very little seismicity (1954 to present, from the local network) and do not appear to have generated significant historic earthquakes. However, paleoseismic studies indicate that the CRF ruptured between 2710 and 500 yr B.P. and thus was recently capable of storing elastic strain. Together, these data suggest spatial and/or temporal fault segmentation on the CRF. The CRF marks a physical boundary between rocks associated with thermogenically generated petroleum and overpressured fluids in south and central Trinidad and rocks containing only biogenic gas to the north, and a long string of active mud volcanoes aligns with the trace of the Soldado Fault along Trinidad's south coast. Fluid (oil and gas) overpressure may thus cause the CRF fault creep that we observe and the lack of seismicity, as an alternative or addition to weak mineral phases on the fault.
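
    The kind of simple screw dislocation model referred to above can be sketched as an arctangent velocity profile fitted to fault-perpendicular GPS velocities; the sites, velocities and noise below are made up, and this is not the authors' modeling code.

```python
import numpy as np
from scipy.optimize import curve_fit

def screw_dislocation(x, slip_rate, locking_depth):
    """Interseismic fault-parallel velocity across a strike-slip fault:
    v(x) = (s / pi) * arctan(x / D)."""
    return (slip_rate / np.pi) * np.arctan(x / locking_depth)

# Made-up GPS site distances from the fault trace (km) and fault-parallel
# velocities (mm/yr) with observation noise.
rng = np.random.default_rng(3)
x_sites = np.array([-60, -35, -20, -8, -3, 2, 6, 15, 30, 55], dtype=float)
true_rate, true_depth = 13.0, 0.5           # nearly creeping: shallow locking
v_obs = screw_dislocation(x_sites, true_rate, true_depth) + rng.normal(0, 0.7, x_sites.size)

# Least-squares estimate of slip rate and locking depth.
popt, pcov = curve_fit(screw_dislocation, x_sites, v_obs,
                       p0=[10.0, 5.0], bounds=([0.0, 0.05], [30.0, 30.0]))
perr = np.sqrt(np.diag(pcov))
print(f"slip rate     = {popt[0]:.1f} ± {perr[0]:.1f} mm/yr")
print(f"locking depth = {popt[1]:.2f} ± {perr[1]:.2f} km (shallow => creep)")
```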

  7. Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants

    Science.gov (United States)

    Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

    1998-01-01

    Control of air contaminants is a crucial factor in the safety considerations of crewed space flight. Indoor air quality needs to be closely monitored during long range missions such as a Mars mission, and also on large complex space structures such as the International Space Station. This work mainly pertains to the detection and simulation of air contaminants in the space station, though much of the work is easily extended to buildings, and issues of ventilation systems. Here we propose a method with which to track the presence of contaminants using an accurate physical model, and also develop a robust procedure that would raise alarms when certain tolerance levels are exceeded. A part of this research concerns the modeling of air flow inside a spacecraft, and the consequent dispersal pattern of contaminants. Our objective is to also monitor the contaminants on-line, so we develop a state estimation procedure that makes use of the measurements from a sensor system and determines an optimal estimate of the contamination in the system as a function of time and space. The real-time optimal estimates in turn are used to detect faults in the system and also offer diagnoses as to their sources. This work is concerned with the monitoring of air contaminants aboard future generation spacecraft and seeks to satisfy NASA's requirements as outlined in their Strategic Plan document (Technology Development Requirements, 1996).

  8. Modeling and Fault Diagnosis of Interturn Short Circuit for Five-Phase Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jian-wei Yang

    2015-01-01

    Owing to their high reliability, multiphase permanent magnet synchronous motors (PMSMs), such as five-phase and six-phase PMSMs, are widely used in fault-tolerant control applications, and one of the important fault-tolerant control problems is fault diagnosis. Most existing literature focuses on the fault diagnosis problem for three-phase PMSMs. In this paper, in contrast to most existing fault diagnosis approaches, a fault diagnosis method for the interturn short circuit (ITSC) fault of a five-phase PMSM based on a trust region algorithm is presented. This paper makes two contributions. (1) By analyzing the physical parameters of the motor, such as resistances and inductances, a novel mathematical model for the ITSC fault of a five-phase PMSM is established. (2) By introducing an objective function related to the interturn short circuit ratio, the fault parameter identification problem is reformulated as an extremum seeking problem. A trust region algorithm based parameter estimation method is proposed for tracking the actual interturn short circuit ratio. Simulation and experimental results have validated the effectiveness of the proposed parameter estimation method.
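
    A generic sketch of trust-region parameter identification of a short-circuit ratio from residuals between measured and modelled fault signatures; the signature model below is a stand-in, not the paper's five-phase PMSM equations.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Stand-in measurement model: an observable fault signature (e.g. a harmonic
# current component) assumed to grow with the interturn short circuit ratio
# mu in [0, 1]. The real model would come from the motor's resistance and
# inductance equations.
def fault_signature(mu, speeds):
    return mu * (1.0 + 0.2 * speeds) / (1.0 + 5.0 * mu)

speeds = np.linspace(0.5, 1.5, 40)            # normalized operating points
mu_true = 0.12
measured = fault_signature(mu_true, speeds) + 0.002 * rng.normal(size=speeds.size)

# Residuals between measured and modelled signature; the trust region
# reflective ('trf') solver handles the bound 0 <= mu <= 1.
def residuals(params):
    return fault_signature(params[0], speeds) - measured

sol = least_squares(residuals, x0=[0.5], bounds=(0.0, 1.0), method="trf")
print(f"estimated interturn short circuit ratio: {sol.x[0]:.3f} (true {mu_true})")
```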

  9. Physically-based modeling of speed sensors for fault diagnosis and fault tolerant control in wind turbines

    Science.gov (United States)

    Weber, Wolfgang; Jungjohann, Jonas; Schulte, Horst

    2014-12-01

    In this paper, a generic physically-based modeling framework for encoder-type speed sensors is derived. The analysis takes into account the nominal fault-free case and the two most relevant fault cases. The advantage of this approach is the reconstruction of the output waveforms as a function of internal physical parameter changes, which enables a more accurate diagnosis and identification of faulty incremental encoders, for instance in wind turbines. The objectives are to describe the effect of tilt and eccentricity of the encoder disk on the digital output signals and their influence on the accuracy of the speed measurement in wind turbines. Simulation results show the applicability and effectiveness of the proposed approach.

  10. Robust recurrent neural network modeling for software fault detection and correction prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Q.P. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: g0305835@nus.edu.sg; Xie, M. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Ng, S.H. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: isensh@nus.edu.sg; Levitin, G. [Israel Electric Corporation, Reliability and Equipment Department, R and D Division, Haifa 31000 (Israel)]. E-mail: levitin@iec.co.il

    2007-03-15

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set.

  11. Optimization of the reconstruction and anti-aliasing filter in a Wiener filter system

    NARCIS (Netherlands)

    Wesselink, J.M.; Berkhoff, A.P.

    2006-01-01

    This paper discusses the influence of the reconstruction and anti-aliasing filters on the performance of a digital implementation of a Wiener filter for active noise control. The overall impact will be studied in combination with a multi-rate system approach. A reconstruction and anti-aliasing filter...
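
    The role of the anti-aliasing filter in such a multi-rate scheme can be illustrated with a short scipy sketch (not the authors' implementation): a component above the reduced Nyquist rate is removed before downsampling, whereas without the filter it folds back into the controller's working band. The rates and frequencies are illustrative.

```python
import numpy as np
from scipy import signal

fs = 8000            # original sampling rate (Hz), illustrative
decimation = 4       # downsampling factor of the multi-rate controller
fs_low = fs // decimation

t = np.arange(0, 1.0, 1 / fs)
# Reference containing a tone at 1650 Hz, above the reduced Nyquist (1000 Hz);
# after naive downsampling to 2000 Hz it would alias to 350 Hz.
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1650 * t)

# Anti-aliasing low-pass below the new Nyquist frequency, applied before
# downsampling so the 1650 Hz tone cannot fold back into the band.
sos = signal.butter(8, 0.8 * (fs_low / 2), btype="low", fs=fs, output="sos")
x_dec = signal.sosfilt(sos, x)[::decimation]
x_bad = x[::decimation]                  # no filter: aliased component appears

f, p_good = signal.welch(x_dec, fs_low, nperseg=512)
_, p_bad = signal.welch(x_bad, fs_low, nperseg=512)
i350 = np.argmin(np.abs(f - 350))
print("power near 350 Hz with / without anti-aliasing filter:",
      p_good[i350], p_bad[i350])
```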

  13. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik;

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used ... and the approach is validated by applying it to an industrial benchmark. The benchmark tests have shown that the algorithm is capable of detection and isolation of five different faults in the mechanical and hydraulic parts of the pump.

  15. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations, because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of 6 fully cored and logged, 30-to-50 m long and 3-to-15 m spaced boreholes crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high-strain parts, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  16. The influence of fault geometry and frictional contact properties on slip surface behavior and off-fault damage: insights from quasi-static modeling of small strike-slip faults from the Sierra Nevada, CA

    Science.gov (United States)

    Ritz, E.; Pollard, D. D.

    2011-12-01

    Geological and geophysical investigations demonstrate that faults are geometrically complex structures, and that the nature and intensity of off-fault damage is spatially correlated with geometric irregularities of the slip surfaces. Geologic observations of exhumed meter-scale strike-slip faults in the Bear Creek drainage, central Sierra Nevada, CA, provide insight into the relationship between non-planar fault geometry and frictional slip at depth. We investigate natural fault geometries in an otherwise homogeneous and isotropic elastic material with a two-dimensional displacement discontinuity method (DDM). Although the DDM is a powerful tool, frictional contact problems are beyond the scope of the elementary implementation because it allows interpenetration of the crack surfaces. By incorporating a complementarity algorithm, we are able to enforce appropriate contact boundary conditions along the model faults and include variable friction and frictional strength. This tool allows us to model quasi-static slip on non-planar faults and the resulting deformation of the surrounding rock. Both field observations and numerical investigations indicate that sliding along geometrically discontinuous or irregular faults may lead to opening of the fault and the formation of new fractures, affecting permeability in the nearby rock mass and consequently impacting pore fluid pressure. Numerical simulations of natural fault geometries provide local stress fields that are correlated to the style and spatial distribution of off-fault damage. We also show how varying the friction and frictional strength along the model faults affects slip surface behavior and consequently influences the stress distributions in the adjacent material.

  17. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
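
    A minimal sketch of a causal-directed fault graph with upstream root cause analysis and downstream impact analysis, in the spirit of the description above; the node names and edges are hypothetical.

```python
from collections import defaultdict

# Hypothetical causal-directed fault graph: an edge A -> B means "a fault in
# A can cause the observed effect B".
edges = [
    ("valve_stuck", "low_flow"),
    ("pump_degraded", "low_flow"),
    ("low_flow", "low_chamber_pressure"),
    ("sensor_drift", "low_chamber_pressure"),
    ("low_chamber_pressure", "abort_condition"),
]

parents, children = defaultdict(set), defaultdict(set)
for cause, effect in edges:
    parents[effect].add(cause)
    children[cause].add(effect)

def closure(start, relation):
    """All nodes reachable from `start` through `relation` (simple traversal)."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for nxt in relation[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Triggered by event-detection logic flagging an anomaly on one node:
observed = "low_flow"
print("possible root causes:", closure(observed, parents))    # upstream RCA
print("predicted impacts:   ", closure(observed, children))   # downstream effects
```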

  18. Observer-based and Regression Model-based Detection of Emerging Faults in Coal Mills

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Lin, Bao; Jørgensen, Sten Bay

    2006-01-01

    In order to improve the reliability of power plants it is important to detect faults as fast as possible, and it is of interest to find the most efficient method for doing so. Since modeling of large scale systems is time consuming, it is interesting to compare a model-based method with data driven ones... In this paper three different fault detection approaches are compared using the example of a coal mill, where a fault emerges. The compared methods are based on: an optimal unknown input observer, and static and dynamic regression model-based detections. The conclusion of the comparison is that the observer-based scheme...

  19. An approach to secure weather and climate models against hardware faults

    Science.gov (United States)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
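
    The backup-grid scheme described above can be illustrated with a minimal sketch: a coarse block-averaged copy of a prognostic field is kept, cells that deviate implausibly from the interpolated backup are flagged as corrupted, and their values are restored. Grid sizes and the tolerance below are assumptions of the example, not values from the paper.

```python
# Illustrative sketch (not the authors' code) of the backup-grid idea.
import numpy as np

COARSENING = 4          # one backup cell per 4x4 block of model cells
TOLERANCE = 10.0        # absolute deviation treated as a hardware fault

def make_backup(field):
    """Block-average the model field onto the coarse backup grid."""
    ny, nx = field.shape
    return field.reshape(ny // COARSENING, COARSENING,
                         nx // COARSENING, COARSENING).mean(axis=(1, 3))

def check_and_restore(field, backup):
    """Replace cells whose value deviates too much from the backup estimate."""
    estimate = np.kron(backup, np.ones((COARSENING, COARSENING)))
    corrupted = np.abs(field - estimate) > TOLERANCE
    repaired = np.where(corrupted, estimate, field)
    return repaired, int(corrupted.sum())

if __name__ == "__main__":
    h = np.full((32, 32), 100.0)          # e.g. a layer-thickness field
    backup = make_backup(h)
    h[5, 7] = 1e30                        # simulate a bit flip
    h, n_fixed = check_and_restore(h, backup)
    print("cells restored:", n_fixed)
```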

  20. Comparative analysis of zero aliasing logarithmic mapped optimal trade-off correlation filter

    Science.gov (United States)

    Tehsin, Sara; Rehman, Saad; Bilal, Ahmed; Chaudry, Qaiser; Saeed, Omer; Abbas, Muhammad; Young, Rupert

    2017-05-01

    Correlation filters are a well-established means for target recognition tasks. However, the unintentional effect of circular correlation has a negative influence on the performance of correlation filters, as they are implemented in the frequency domain. The effects of aliasing are minimized by introducing zero-aliasing constraints on the template and test image. In this paper, a comparative analysis of logarithmic zero-aliasing optimal trade-off correlation filters is carried out for different types of target distortion. Based on our research, the zero-aliasing Maximum Average Correlation Height (MACH) filter is identified as the best choice for achieving enhanced results in the presence of the types of distortion discussed in the results section. The MACH expressions are reformulated with zero aliasing to demonstrate the achievable enhancement of the logarithmic MACH filter in target detection applications.
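
    The aliasing that the zero-aliasing constraints address stems from the circular nature of FFT-based correlation. The sketch below illustrates that effect in one dimension by comparing circular correlation with zero-padded (linear) correlation; it is an illustration of the underlying issue, not a MACH filter implementation.

```python
# Circular vs. zero-padded FFT correlation: the padded version is free of the
# wrap-around (aliasing) terms that corrupt the plain circular result.
import numpy as np

def circular_correlation(a, b):
    """Plain FFT correlation: wrap-around terms alias into the result."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

def linear_correlation(a, b):
    """Zero-padded FFT correlation, free of wrap-around terms."""
    n = len(a) + len(b) - 1
    A = np.fft.fft(a, n)
    B = np.fft.fft(b, n)
    return np.real(np.fft.ifft(A * np.conj(B)))

if __name__ == "__main__":
    template = np.array([1.0, 2.0, 3.0, 0.0])
    scene = np.array([0.0, 1.0, 2.0, 3.0])
    print("circular:", np.round(circular_correlation(scene, template), 2))
    print("linear  :", np.round(linear_correlation(scene, template), 2))
```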

  1. A Frequency Domain Approach to Registration of Aliased Images with Application to Super-resolution

    Directory of Open Access Journals (Sweden)

    Vandewalle Patrick

    2006-01-01

    Full Text Available Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and demonstrate the attractiveness of our approach in the case of aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher-resolution final image.
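
    The core registration idea, estimating the shift from the phase of the cross-power spectrum restricted to a low-frequency, presumably aliasing-free band, can be sketched as follows (integer shifts only; the cut-off fraction is an arbitrary choice, not the authors' parameter).

```python
# Low-frequency phase correlation: a sketch of the idea, not the paper's algorithm.
import numpy as np

def register_low_freq(ref, img, keep_fraction=0.25):
    """Estimate the integer (row, col) shift of `img` relative to `ref`."""
    F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F_img * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12                    # normalized cross-power spectrum
    # Keep only low frequencies, where aliasing is assumed not to corrupt the phase.
    fy = np.fft.fftfreq(ref.shape[0])[:, None]
    fx = np.fft.fftfreq(ref.shape[1])[None, :]
    cross[(np.abs(fy) > keep_fraction / 2) | (np.abs(fx) > keep_fraction / 2)] = 0.0
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert the peak position to a signed shift.
    return tuple(int(p if p <= s // 2 else p - s) for p, s in zip(peak, ref.shape))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    shifted = np.roll(ref, shift=(3, -5), axis=(0, 1))
    print("estimated shift:", register_low_freq(ref, shifted))
```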

  2. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the fault-free engine performance parameters calculated by the base engine performance model. Expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often applied to engine fault diagnosis because of their good learning performance, but they suffer from low accuracy and long training times when the learning database is large, and they require a very complex structure to effectively identify single or multiple gas path component faults. This work inversely builds a base performance model of a turboprop engine intended for a high-altitude-operation UAV using measured performance data, and proposes a fault diagnostic system that uses the base engine performance model together with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulty components using Fuzzy Logic, and then quantifies the faults of the identified components using an NN trained on a fault learning database obtained from the developed base performance model. The NN is trained with the Feed Forward Back Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  3. Insights into the damage zones in fault-bend folds from geomechanical models and field data

    Science.gov (United States)

    Ju, Wei; Hou, Guiting; Zhang, Bo

    2014-01-01

    Understanding the rock mass deformation and stress states, and the fracture development and distribution, is critical to a range of endeavors including oil and gas exploration and development, and geothermal reservoir characterization and management. Geomechanical modeling can be used to simulate the forming processes of faults and folds, and to predict the onset of failure, the type and abundance of deformation features, and the orientations and magnitudes of stresses. This approach enables forward models that incorporate realistic mechanical stratigraphy (e.g., bed thickness, bedding planes and competence contrasts), include faults and bedding-slip surfaces as frictional sliding interfaces, reproduce the geometry of the fold structures, and allow strain and stress to be tracked through the whole deformation process. In the present study, we combine field observations and finite element models to calibrate the development and distribution of fractures in fault-bend folds, and discuss the mechanical controls (e.g., slip displacement, ramp cutoff angle, frictional coefficient of interlayers and faults) that influence the development and distribution of fractures during fault-bend folding. Based on the geomechanical modeling results, linear relationships were established between the fracture damage zone and, respectively, the slip displacement, the ramp cutoff angle, and the frictional coefficient of interlayers and faults. Together, these mechanical controls influence the development and distribution of fractures in fault-bend folds.

  4. Fault Tolerant Controller Design for a Faulty UAV Using Fuzzy Modeling Approach

    Directory of Open Access Journals (Sweden)

    Moshu Qian

    2016-01-01

    Full Text Available We address a fault tolerant control (FTC) issue for an unmanned aerial vehicle (UAV) under possible simultaneous actuator saturation and faults. Firstly, Takagi-Sugeno fuzzy models representing the nonlinear flight control system (FCS) of a UAV with unknown disturbances and actuator saturation are established. Then, a normal H-infinity tracking controller is presented using an online estimator, which is introduced to weaken the saturation effect. Based on the normal tracking controller, we propose an adaptive fault tolerant tracking controller (FTTC) to solve the actuator loss of effectiveness (LOE) fault problem. Compared with previous work, the approach developed in our research does not rely on any fault diagnosis unit and is easily applied in engineering. Finally, simulation results indicate the efficiency of the presented FTC scheme.

  5. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory.

    Science.gov (United States)

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-18

    Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it efficiently combines evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Both the static sensor reliability and the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion illustrates the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
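
    A minimal sketch of the general scheme, with an invented frame of discernment and invented reliability weights rather than the paper's distance/entropy-based measure: each sensor's basic probability assignment is weighted by its reliability, the weighted assignments are averaged, and the averaged evidence is fused with Dempster's rule.

```python
# Weighted-average evidence modification followed by Dempster's rule.
# Frame of discernment, reports and weights are invented for the example.
from itertools import product

FRAME = ("F1", "F2", "F3")          # hypothetical fault hypotheses

def weighted_average(bpas, weights):
    """Average basic probability assignments with normalized reliability weights."""
    total = sum(weights)
    avg = {}
    for bpa, w in zip(bpas, weights):
        for focal, mass in bpa.items():
            avg[focal] = avg.get(focal, 0.0) + (w / total) * mass
    return avg

def dempster(m1, m2):
    """Dempster's rule of combination for two BPAs over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

if __name__ == "__main__":
    # Two sensor reports (one of them conflicting), with reliability weights.
    s1 = {frozenset({"F1"}): 0.8, frozenset(FRAME): 0.2}
    s2 = {frozenset({"F2"}): 0.7, frozenset(FRAME): 0.3}
    avg = weighted_average([s1, s2], weights=[0.9, 0.4])
    fused = dempster(avg, avg)       # combine the averaged evidence with itself
    for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
        print(set(focal), round(mass, 3))
```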

  6. GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis

    Science.gov (United States)

    Ma, Jiaxin; Xu, Feiyun; Huang, Kai; Huang, Ren

    2017-09-01

    Given its simplicity of modeling and sensitivity to condition variations, the time series model is widely used in feature extraction for fault classification and diagnosis. However, the nonlinear and nonstationary characteristics common in rolling bearing fault signals bring challenges to the diagnosis. In this paper, a hybrid model combining a general expression for linear and nonlinear autoregressive (GNAR) model with a generalized autoregressive conditional heteroscedasticity (GARCH) model (i.e., GNAR-GARCH) is proposed and applied to rolling bearing fault diagnosis. An exact expression of the GNAR-GARCH model is given. The maximum likelihood method is used for parameter estimation and a modified Akaike Information Criterion is adopted for structure identification of the GNAR-GARCH model. The main advantage of this novel model over other models is that the combination makes it suitable for nonlinear and nonstationary signals. This is verified with statistical tests that compare the different time series models. Finally, the GNAR-GARCH model is applied to fault diagnosis by modeling mechanical vibration signals, including both simulated and real data. With the estimated parameters taken as feature vectors, the k-nearest neighbor algorithm is utilized to classify the fault status. The results show that the GNAR-GARCH model exhibits higher accuracy and better performance than other models.

  7. Faulting and block rotation in the Afar triangle, East Africa: The Danakil "crank-arm" model

    Science.gov (United States)

    Souriot, T.; Brun, J.-P.

    1992-10-01

    Several domains of contrasted extensional deformation have been identified in the southern Afar triangle (East Africa) from fault patterns analyzed with panchromatic stereoscopic SPOT (Système Probatoire d'Observation de la Terre) images. Stretching directions and statistical orientation and offset variations of faults fit with the Danakil "crank-arm" model of Sichler: A 10° sinistral rotation of the Danakil block explains the fault geometry and dextral block rotation in the southern part of the Afar triangle, as well as the oblique extension in the Tadjoura Gulf. Analogue modeling supports this interpretation.

  8. Spectral element modelling of fault-plane reflections arising from fluid pressure distributions

    Science.gov (United States)

    Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.

    2007-01-01

    The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective-stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model

  9. Modeling of fault reactivation and induced seismicity during hydraulic fracturing of shale-gas reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric; Moridis, George J.

    2013-07-01

    We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned towards conditions usually encountered in the Marcellus shale play in the Northeastern US at an approximate depth of 1500 m (~4,500 feet). Our modeling simulations indicate that when faults are present, micro-seismic events are possible, the magnitude of which is somewhat larger than the one associated with micro-seismic events originating from regular hydraulic fracturing because of the larger surface area that is available for rupture. The results of our simulations indicated fault rupture lengths of about 10 to 20 m, which, in rare cases can extend to over 100 m, depending on the fault permeability, the in situ stress field, and the fault strength properties. In addition to a single event rupture length of 10 to 20 m, repeated events and aseismic slip amounted to a total rupture length of 50 m, along with a shear offset displacement of less than 0.01 m. This indicates that the possibility of hydraulically induced fractures at great depth (thousands of meters) causing activation of faults and creation of a new flow path that can reach shallow groundwater resources (or even the surface) is remote. The expected low permeability of faults in producible shale is clearly a limiting factor for the possible rupture length and seismic magnitude. In fact, for a fault that is initially nearly-impermeable, the only possibility of larger fault slip event would be opening by hydraulic fracturing; this would allow pressure to penetrate the matrix along the fault and to reduce the frictional strength over a sufficiently large fault surface patch. However, our simulation results show that if the fault is initially impermeable, hydraulic fracturing along the fault results in numerous small micro-seismic events along with the propagation, effectively

  10. Fault-related fold styles and progressions in fold-thrust belts: Insights from sandbox modeling

    Science.gov (United States)

    Yan, Dan-Ping; Xu, Yan-Bo; Dong, Zhou-Bin; Qiu, Liang; Zhang, Sen; Wells, Michael

    2016-03-01

    Fault-related folds of variable structural styles and assemblages commonly coexist in orogenic belts with competent-incompetent interlayered sequences. Despite their commonality, the kinematic evolution of these structural styles and assemblages is often loosely constrained because multiple solutions exist for their structural progression during tectonic restoration. We use a sandbox modeling instrument with a particle image velocimetry monitor to test four designed sandbox models with multilayer competent-incompetent materials. Test results reveal that decollement folds initiate along selected incompetent layers with decreasing velocity difference and constant vorticity difference between the hanging wall and footwall of the initial fault tips. The decollement folds are progressively converted to fault-propagation folds and fault-bend folds through development of fault ramps breaking across competent layers, followed by propagation into fault flats within an upper incompetent layer. Thick-skinned thrust is produced by initiating a decollement fault within the metamorphic basement. Progressive thrusting and uplifting of the thick-skinned thrust trigger initiation of the uppermost incompetent decollement, with formation of a decollement fold and subsequent conversion to fault-propagation and fault-bend folds, which combine to form an imbricate thrust. Breakouts at the base of the early-formed fault ramps along the lowest incompetent layers, which may correspond to basement-cover contacts, dome the uppermost decollement and imbricate thrusts to form passive roof duplexes and constitute the thin-skinned thrust belt. Structural styles and assemblages in each tectonic stage are similar to those in representative orogenic belts such as the South China, Southern Appalachian, and Alpine orogenic belts.

  11. Experimental determination of the long-term strength and stability of laterally bounding fault zones in CO2 storage reservoirs based on kinetic modeling of fault zone evolution

    Science.gov (United States)

    Samuelson, J. E.; Koenen, M.; Tambach, T.

    2011-12-01

    Long-term sequestration of CO2, harvested from point sources such as coal burning power plants and cement manufactories, in depleted oil and gas reservoirs is considered to be one of the most attractive options for short- to medium-term mitigation of anthropogenic forcing of climate change. Many such reservoirs are laterally bounded by low-permeability fault zones which could potentially be reactivated either by changes in stress state during and after the injection process or by alterations in the frictional strength of fault gouge material. Of additional concern is how the stability of the fault zones will change as a result of the influence of supercritical CO2, specifically whether the rate and state frictional constitutive parameters (a, b, DC) of the fault zone will change in such a way as to enhance the likelihood of seismic activity on the fault zone. The short-term influence of CO2 on frictional strength and stability of simulated fault gouges prepared from mixtures of cap rock and reservoir rock has been analyzed recently [Samuelson et al., In Prep.], concluding that CO2 has little influence on frictional constitutive behavior on the timescale of a typical experiment. Given the much longer time span over which CO2 is intended to be sequestered, we have chosen to model the long-term mineralogical alteration of a fault zone with a simple starting mineralogy of 33% quartz, 33% illite, and 33% dolomite by weight using the geochemical modeling program PHREEQC and the THERMODDEM database, assuming instantaneous mixing of the CO2 with the fault gouge. The geochemical modeling predicts that equilibrium will be reached between fault gouge, reservoir brine, and CO2 in approximately 440 years, assuming an average grain-size (davg) of 20 μm, and ~90 years assuming davg = 4 μm, a reasonable range of grain-sizes for natural fault gouges. The main change to gouge mineralogy comes from the complete dissolution of illite, and the precipitation of muscovite. The final equilibrium mineralogy of the fault

  12. Numerical model of formation of a 3-D strike-slip fault system

    Science.gov (United States)

    Chemenda, Alexandre I.; Cavalié, Olivier; Vergnolle, Mathilde; Bouissou, Stéphane; Delouis, Bertrand

    2016-01-01

    The initiation and the initial evolution of a strike-slip fault are modeled within an elastoplasticity constitutive framework taking into account the evolution of the hardening modulus with inelastic straining. The initial and boundary conditions are similar to those of the Riedel shear experiment. The models first deform purely elastically. Then damage (inelastic deformation) starts at the model surface. The damage zone propagates both normal to the forming fault zone and downwards. Finally, it affects the whole layer thickness, forming a flower-like structure in cross-section. At a certain stage, a dense set of parallel Riedel shears forms at shallow depth. A few of these propagate both laterally and vertically, while others die. The faults first propagate in-plane, but then rapidly change direction to make a larger angle with the shear axis. New fault segments form as well, resulting in a complex 3-D fault zone architecture. Different fault segments accommodate strike-slip and normal displacements, which results in the formation of valleys and rotations along the fault system.

  13. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    Science.gov (United States)

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  14. Automated Generation of Fault Management Artifacts from a Simple System Model

    Science.gov (United States)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
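
    The generation step can be illustrated with a toy sketch (a plain dictionary stands in for the SysML model, and all component names are invented): each failure mode yields one FMEA row, with downstream effects collected by following the propagation relation.

```python
# Toy sketch of the idea only, not the JPL SysML tooling: walk a simple system
# description and emit one FMEA row per failure mode.
import csv
import io

SYSTEM = {
    "battery":   {"failure_modes": ["cell short", "open circuit"], "feeds": ["power_bus"]},
    "power_bus": {"failure_modes": ["undervoltage"],               "feeds": ["radio", "computer"]},
    "radio":     {"failure_modes": ["no downlink"],                "feeds": []},
    "computer":  {"failure_modes": ["reboot loop"],                "feeds": []},
}

def downstream(component):
    """Components reachable from `component` through the 'feeds' relation."""
    reached, stack = [], list(SYSTEM[component]["feeds"])
    while stack:
        c = stack.pop()
        if c not in reached:
            reached.append(c)
            stack.extend(SYSTEM[c]["feeds"])
    return reached

def build_fmea():
    """Return FMEA rows: component, failure mode, and affected components."""
    rows = []
    for component, info in SYSTEM.items():
        for mode in info["failure_modes"]:
            rows.append({"component": component,
                         "failure_mode": mode,
                         "effects_on": "; ".join(downstream(component)) or "local only"})
    return rows

if __name__ == "__main__":
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["component", "failure_mode", "effects_on"])
    writer.writeheader()
    writer.writerows(build_fmea())
    print(buffer.getvalue())
```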

  15. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

    Full Text Available An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system, so the fault features of a gear vibration signal can be extracted from it without constructing a mathematical model or studying the fault mechanism of the gear vibration system, as is required by time-frequency analysis methods. However, the AR model can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose the vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into the feature extraction of gear vibration signals as a preprocessor before the AR models are generated. On the other hand, given the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced into gear fault pattern recognition. In the method proposed in this paper, vibration signals are first decomposed into a finite number of intrinsic mode functions, then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the residual are regarded as the fault characteristic vectors and used as input parameters of the SVM classifier to classify the working condition of gears. The experimental analysis results show that the proposed approach, in which the IMF AR model and SVM are combined, can identify the working condition of gears with a success rate of 100% even with a small number of samples.
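
    A compact sketch of the pipeline follows, with two simplifications that are assumptions of the example rather than of the paper: a crude two-band split stands in for a real EMD routine, and synthetic signals stand in for gear vibration data. The features are the AR coefficients plus the residual variance, classified with an SVM.

```python
# Sketch of the AR-feature + SVM pipeline; the decomposition step is a stand-in
# for EMD and the signals are synthetic.
import numpy as np
from sklearn.svm import SVC

def pseudo_imfs(signal, window=8):
    """Stand-in for EMD: return a 'fast' and a 'slow' component."""
    slow = np.convolve(signal, np.ones(window) / window, mode="same")
    return [signal - slow, slow]

def ar_features(x, order=4):
    """Least-squares AR(order) coefficients and residual variance of x."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return np.append(coeffs, resid.var())

def signal_features(signal):
    return np.concatenate([ar_features(c) for c in pseudo_imfs(signal)])

def make_signal(faulty, rng):
    t = np.arange(512)
    base = np.sin(0.2 * t) + 0.3 * rng.standard_normal(512)
    return base + (0.8 * np.sin(1.1 * t) if faulty else 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.array([signal_features(make_signal(label, rng)) for label in (0, 1) * 20])
    y = np.array([0, 1] * 20)
    clf = SVC(kernel="rbf").fit(X[:30], y[:30])
    print("held-out accuracy:", clf.score(X[30:], y[30:]))
```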

  17. Explaining the current geodetic field with geological models: A case study of the Haiyuan fault system

    Science.gov (United States)

    Daout, S.; Jolivet, R.; Lasserre, C.; Doin, M. P.; Barbot, S.; Peltzer, G.; Tapponnier, P.

    2015-12-01

    Oblique convergence across Tibet leads to slip partitioning, with the co-existence of strike-slip, normal and thrust motion on major fault systems. While such complexity has been shown at the surface, the question is how faults interact and accumulate strain at depth. Here, we process InSAR data across the central Haiyuan restraining bend, at the north-eastern boundary of the Tibetan plateau, and show that the surface complexity can be explained by partitioning of a uniform deep-seated convergence rate. We construct a time series of ground deformation from Envisat radar data spanning the 2001-2011 period, across an area that is challenging because of the high jump in topography between the desert environment and the plateau. To improve the signal-to-noise ratio, we used the latest Synthetic Aperture Radar interferometry methodology, including Global Atmospheric Model (ERA-Interim) and Digital Elevation Model error corrections before unwrapping. We then developed a new Bayesian approach, jointly inverting our InSAR time series together with published GPS displacements. We explore the fault system geometry at depth and the associated slip rates, and determine a uniform N86±7°E convergence rate of 8.45±1.4 mm/yr across the whole fault system, with variable partitioning west and east of a major extensional fault-jog. Our 2D model gives a quantitative understanding of how crustal deformation is accumulated by the various branches of this thrust/strike-slip fault system and demonstrates the importance of the geometry of the Haiyuan Fault in controlling the partitioning or extrusion of the block motion. The approach we have developed would allow constraining the low strain accumulation along deep faults, for example for blind thrust faults or a possible detachment in the San Andreas "big bend", which are often associated with a poorly understood seismic hazard.

  18. Establishment of a Fault Prognosis Model Using Wavelet Neural Networks and Its Engineering Application

    Institute of Scientific and Technical Information of China (English)

    LIU Qi-peng; FENG Quan-ke; XIONG Wei

    2004-01-01

    Fault diagnosis is confronted with two problems: how to "measure" the growth of a fault and how to predict the remaining useful lifetime of such a failing component or machine. This paper attempts to solve these two problems by proposing a fault prognosis model based on a wavelet basis neural network. Gaussian radial basis functions and Mexican hat wavelet frames are used as scaling functions and wavelets, respectively. The centers of the basis functions are calculated using a dyadic expansion scheme and a k-means clustering algorithm.
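
    A minimal sketch of a radial-basis-function network of this kind is shown below: k-means selects the basis centers and linear least squares fits the output weights. Only Gaussian basis functions are used; the Mexican hat wavelet variant and the prognosis application itself are not reproduced.

```python
# RBF network with k-means centers and least-squares output weights (sketch only).
import numpy as np
from sklearn.cluster import KMeans

def gaussian_design(X, centers, width):
    """Design matrix of Gaussian basis functions evaluated at the inputs X."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, n_centers=8, width=0.5):
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = gaussian_design(X, centers, width)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, weights

def predict_rbf(X, centers, weights, width=0.5):
    return gaussian_design(X, centers, width) @ weights

if __name__ == "__main__":
    # Toy "degradation index" growing with time, as a stand-in for a fault trend.
    t = np.linspace(0.0, 1.0, 200)[:, None]
    y = 0.1 + t[:, 0] ** 2 + 0.02 * np.random.default_rng(2).standard_normal(200)
    centers, weights = fit_rbf(t, y)
    print("prediction at t=1.05:", float(predict_rbf(np.array([[1.05]]), centers, weights)))
```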

  19. Model-Based Fault Tolerant Control for Hybrid Dynamic Systems with Sensor Faults

    Institute of Scientific and Technical Information of China (English)

    杨浩; 冒泽慧; 姜斌

    2006-01-01

    A model-based fault tolerant control approach for hybrid linear dynamic systems is proposed in this paper. The proposed method, taking advantage of reliable control, can maintain the performance of the faulty system during the time delay of fault detection and diagnosis (FDD) and fault accommodation (FA), and can be regarded as the first line of defence against sensor faults. Simulation results for a three-tank system with a sensor fault are given to show the efficiency of the method.

  20. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities to analyze and identify signals. It would therefore be significant to build an auditory model based on the mechanism of the human auditory system, which may improve the effectiveness of mechanical signal analysis and enrich the methods of mechanical fault feature extraction. However, the existing methods are all based on explicit mathematical or physical formulations and have shortcomings in distinguishing different faults, maintaining stability, and suppressing disturbance noise. For the purpose of improving the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced here for the first time. This auditory model transforms a time-domain signal into an auditory spectrum via bandpass filtering, nonlinear compression, and lateral inhibition, simulating the principle of the human auditory system. The EA model is developed with the Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner hair cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults, including misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.

  1. A simple method to reduce aliasing artifacts in color flow mode imaging

    DEFF Research Database (Denmark)

    Udesen, Jesper; Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2005-01-01

    It is a well known limitation in conventional blood velocity estimation using a phase estimation approach that aliasing artifacts are present when the blood velocities exceed a value determined by half the pulse repetition frequency (the Nyquist frequency). This paper proposes a simple anti-aliasing discriminator (AAD) method based on using two different pulse repetition frequencies to increase the aliasing limit to twice the Nyquist frequency. The method is evaluated in simulations using the Field II program. The axial velocity in a virtual blood vessel is found along one axial line, where N=10 emissions...
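
    The dual-PRF idea can be sketched as follows (invented numbers, no Field II simulation): each pulse repetition frequency yields a velocity estimate wrapped to its own Nyquist limit, and the de-aliased velocity is the candidate on which both estimates agree.

```python
# Dual-PRF de-aliasing: reconcile two aliased velocity estimates (sketch only).
import numpy as np

def alias(v_true, v_nyq):
    """Wrap a true velocity into the interval [-v_nyq, v_nyq)."""
    return (v_true + v_nyq) % (2.0 * v_nyq) - v_nyq

def dealias(v1, v_nyq1, v2, v_nyq2, max_wraps=3):
    """Pick the candidate velocity most consistent with both aliased estimates."""
    best, best_err = None, np.inf
    for k1 in range(-max_wraps, max_wraps + 1):
        cand = v1 + 2.0 * k1 * v_nyq1
        err = np.min(np.abs(cand - (v2 + 2.0 * np.arange(-max_wraps, max_wraps + 1) * v_nyq2)))
        if err < best_err:
            best, best_err = cand, err
    return best

if __name__ == "__main__":
    v_true = 0.9                      # m/s, above both Nyquist limits
    v_nyq1, v_nyq2 = 0.40, 0.50       # from two different pulse repetition frequencies
    v1, v2 = alias(v_true, v_nyq1), alias(v_true, v_nyq2)
    print("aliased estimates:", round(v1, 3), round(v2, 3))
    print("de-aliased       :", round(dealias(v1, v_nyq1, v2, v_nyq2), 3))
```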

  2. An Improved NHPP Model with Time-Varying Fault Removal Delay

    Institute of Scientific and Technical Information of China (English)

    Xue Yang; Nan Sang; Hang Lei

    2008-01-01

    In this paper, an improved NHPP model is proposed by replacing the constant fault removal time in the NHPP model proposed by Daniel R. Jeske with a time-varying fault removal delay. In our model, a time-dependent delay function is established to fit the fault removal process. Using two sets of practical data, the descriptive and predictive abilities of the improved NHPP model are compared with those of the NHPP model, the G-O model, and the delayed S-shaped model. The results show that the improved model can fit and predict the data better.
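
    As a generic NHPP illustration (not the specific model proposed in the paper), the sketch below fits the delayed S-shaped mean value function m(t) = a*(1 - (1 + b*t)*exp(-b*t)) to an invented cumulative fault count and extrapolates it.

```python
# Delayed S-shaped NHPP fit to invented cumulative fault counts (illustration only).
import numpy as np
from scipy.optimize import curve_fit

def delayed_s_shaped(t, a, b):
    """Mean value function m(t) = a * (1 - (1 + b*t) * exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

if __name__ == "__main__":
    weeks = np.arange(1, 13, dtype=float)
    cumulative_faults = np.array([2, 6, 13, 22, 31, 39, 46, 51, 55, 58, 60, 61], dtype=float)
    (a, b), _ = curve_fit(delayed_s_shaped, weeks, cumulative_faults, p0=(70.0, 0.3))
    print(f"estimated total faults a = {a:.1f}, shape b = {b:.2f}")
    print("predicted cumulative faults at week 16:", round(delayed_s_shaped(16.0, a, b), 1))
```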

  3. Near-Surface Fault Structures of the Seulimuem Segment Based on Electrical Resistivity Model

    Science.gov (United States)

    Ismail, Nazli; Yanis, Muhammad; Idris, Syafrizal; Abdullah, Faisal; Hanafiah, Bukhari

    2017-05-01

    The Great Sumatran Fault (GSF) system is an arc-parallel strike-slip fault system along the volcanic front, related to the oblique subduction of the oceanic Indo-Australian plate. Large earthquakes along the southern GSF since 1892 have been reported, but the Seulimuem segment in northernmost Sumatra has not produced large earthquakes in the past 100 years. The 200-km-long segment is considered to be a seismic gap. A detailed geological study of the fault, and thus of its surface trace locations, late Quaternary slip rate, and rupture history, is urgently needed for earthquake disaster mitigation in the future. However, finding a suitable area for paleoseismic trenching is an obstacle when the fault traces are not clearly expressed at the surface. We have conducted geoelectrical measurements in the Lamtamot area of Aceh Besar District in order to locate the fault line for paleoseismic excavation. Apparent resistivity data were collected along a 40 m profile parallel to the planned trenching site. The 2D electrical resistivity model provided evidence of resistivity anomalies with high lateral contrast. This anomaly almost coincides with the topographic scarp, which is modified by agriculture at the surface in the northern part of Lamtamot. The steeply dipping electrical contrast may correspond to a fault. However, the model does not resolve well evidence of minor faults that may be related to the presence of surface ruptures. A near-fault paleoseismic investigation requires trenching across the fault in order to detect and analyze the geological record of past large earthquakes along the Seulimuem segment.

  4. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    Science.gov (United States)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only for a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what extent. The results of such comparison evidence the deformation model and

  5. Fault Tolerance Assistant (FTA): An Exception Handling Programming Model for MPI Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Aiman [Univ. of Chicago, IL (United States). Dept. of Computer Science; Laguna, Ignacio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sato, Kento [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Islam, Tanzima [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-23

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.

  6. Bayesian Network Based Fault Prognosis via Bond Graph Modeling of High-Speed Railway Traction Device

    Directory of Open Access Journals (Sweden)

    Yunkai Wu

    2015-01-01

    To predict component-level faults accurately for a high-speed railway traction system, a fault prognosis approach based on Bayesian network and bond graph modeling techniques is proposed. The inherent structure of a railway traction system is represented by a bond graph model, based on which a multilayer Bayesian network is developed for fault propagation analysis and fault prediction. For complete and incomplete data sets, two different parameter learning algorithms, Bayesian estimation and the expectation maximization (EM) algorithm, are adopted to determine the conditional probability tables of the Bayesian network. The proposed prognosis approach, using Pearl's polytree propagation algorithm for joint probability reasoning, can predict the failure probabilities of leaf nodes based on the current status of the root nodes. Verification results in a high-speed railway traction simulation system demonstrate the effectiveness of the proposed approach.
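
    The forward (prediction) step can be illustrated with a hand-rolled sketch: given current probabilities of root-node faults, conditional probability tables are marginalized to obtain the failure probability of a leaf component. Structure, names and numbers are invented; a real implementation would use a Bayesian network library and learn the tables from data (e.g., with EM for incomplete data sets).

```python
# Leaf failure probability from root-fault probabilities (hand-rolled sketch).
import itertools

P_ROOT = {"converter_fault": 0.10, "cooling_fault": 0.05}

# P(traction_motor_failure = True | converter_fault, cooling_fault)
CPT_LEAF = {
    (False, False): 0.01,
    (False, True):  0.30,
    (True,  False): 0.40,
    (True,  True):  0.90,
}

def leaf_failure_probability():
    """Marginalize the leaf CPT over all root-state combinations."""
    total = 0.0
    for states in itertools.product([False, True], repeat=len(P_ROOT)):
        weight = 1.0
        for (name, prob), state in zip(P_ROOT.items(), states):
            weight *= prob if state else (1.0 - prob)
        total += weight * CPT_LEAF[states]
    return total

if __name__ == "__main__":
    print("P(traction motor failure) =", round(leaf_failure_probability(), 4))
```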

  7. Takagi Sugeno fuzzy expert model based soft fault diagnosis for two tank interacting system

    Directory of Open Access Journals (Sweden)

    Manikandan Pandiyan

    2014-09-01

    Full Text Available The inherent characteristics of fuzzy logic theory make it suitable for fault detection and diagnosis (FDI). Fault detection can benefit from nonlinear fuzzy modeling, and fault diagnosis can profit from a transparent reasoning system, which can embed operator experience but also learn from experimental and/or simulation data. Thus, fuzzy logic-based diagnosis is advantageous since it allows the incorporation of a-priori knowledge and lets the user understand the inference of the system. In this paper, the successful use of a fuzzy FDI system, based on dynamic fuzzy models, for fault detection and diagnosis of an industrial two-tank system is presented. The plant data are used for the design and validation of the fuzzy FDI system. The validation results show the effectiveness of this approach.

  8. Power transformer fault diagnosis model based on rough set theory with fuzzy representation

    Institute of Scientific and Technical Information of China (English)

    Li Minghua; Dong Ming; Yan Zhang

    2007-01-01

    Objective: Due to the incompleteness and complexity of fault diagnosis for power transformers, a comprehensive rough-fuzzy scheme for solving fault diagnosis problems is presented. Fuzzy set theory is used both for representation of incipient faults' indications and for producing a fuzzy granulation of the feature space. Rough set theory is used to obtain dependency rules that model indicative regions in the granulated feature space. The fuzzy membership functions corresponding to the indicative regions, modelled by rules, are stored as cases. Results: Diagnostic conclusions are made using a similarity measure based on these membership functions. Each case involves only a reduced number of relevant features, making this scheme suitable for fault diagnosis. Conclusion: Superiority of this method in terms of classification accuracy and case generation is demonstrated.

  9. Fault identification using piezoelectric impedance measurement and model-based intelligent inference with pre-screening

    Science.gov (United States)

    Shuai, Q.; Zhou, K.; Zhou, Shiyu; Tang, J.

    2017-04-01

    While piezoelectric impedance/admittance measurements have been used for fault detection and identification, the actual identification of fault location and severity remains a challenging topic. On one hand, the approach offers high detection sensitivity owing to its high-frequency actuation/sensing nature. On the other hand, high-frequency analysis requires high dimensionality in the model, and the subsequent inverse analysis contains a very large number of unknowns, which often renders the identification problem under-determined. A new fault identification algorithm for piezoelectric impedance/admittance-based measurement is developed in this research. Taking advantage of the algebraic relation between the sensitivity matrix and the admittance change measurement, we devise a pre-screening scheme that ranks the likelihoods of fault locations with estimated fault severity levels, which drastically reduces the fault parameter space. A Bayesian inference approach is then incorporated to pinpoint the fault location and severity with high computational efficiency. The proposed approach is examined and validated through case studies.
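
    A pre-screening step in the spirit described above can be sketched as follows (synthetic sensitivity matrix and measurement, not the authors' algorithm): the measured admittance change is regressed on each column of the sensitivity matrix, and candidate fault locations are ranked by how well a single-fault model explains the data.

```python
# Pre-screening of candidate fault locations by single-column least squares.
import numpy as np

def prescreen(sensitivity, delta_y):
    """Return (location index, estimated severity, residual norm), best first."""
    ranking = []
    for j in range(sensitivity.shape[1]):
        s = sensitivity[:, j]
        severity = float(s @ delta_y / (s @ s))        # 1-D least squares
        residual = float(np.linalg.norm(delta_y - severity * s))
        ranking.append((j, severity, residual))
    return sorted(ranking, key=lambda item: item[2])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    S = rng.standard_normal((200, 12))                 # 200 frequencies x 12 candidate locations
    true_location, true_severity = 7, 0.05
    measurement = true_severity * S[:, true_location] + 0.01 * rng.standard_normal(200)
    for loc, sev, res in prescreen(S, measurement)[:3]:
        print(f"location {loc:2d}  severity {sev:+.3f}  residual {res:.3f}")
```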

  10. FE modeling of present day tectonic stress along the San Andreas Fault zone

    OpenAIRE

    Koirala, Matrika Prasad; Hauashi, Daigoro; 林, 大五郎

    2009-01-01

    FE modeling under plane stress conditions is used to analyze the state of stress in and around the San Andreas Fault (SAF) system, taking the whole area of California. In this study we mainly focus on the state of stress at the general seismogenic depth of 12 km, imposing an elastic rheology. The purpose of the present study is to simulate the regional stress field, displacement vectors and failures. Stress perturbation due to the major fault, its geometry and its major branches is analyzed. Depthwise varia...

  11. MAIN REGULARITIES OF FAULTING IN LITHOSPHERE AND THEIR APPLICATION (BASED ON PHYSICAL MODELLING RESULTS

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2015-09-01

    Full Text Available Results of long-term experimental studies and modelling of faulting are briefly reviewed, and research methods and state-of-the-art issues are described. The article presents the main results of faulting modelling with the use of non-transparent elasto-viscous plastic and optically active models. The area of active dynamic influence of a fault (AADIF) is the term introduced to characterise a fault as a 3D geological body. It is shown that the AADIF's width (M) is determined by the thickness of the layer wherein a fault occurs (H), its viscosity (η) and the strain rate (V). Multiple correlation equations are proposed to show relationships between the AADIF's width (M) and H, η and V for faults of various morphological and genetic types. The irregularity of AADIF in time and space is characterised in view of the staged formation of the internal fault structure of such areas and the geometric and dynamic parameters of AADIF, which are changeable along the fault strike. The authors pioneered the application of the open-system conception to explain regularities of structure formation in AADIFs. It is shown that faulting is a synergistic process of continuous changes of structural levels of strain, which differ in the manifestation of specific self-similar fractures of various scales. Such levels change due to self-organization processes of fracture systems. Fracture dissipative structures (FDS) is the term introduced to describe systems of fractures that are subject to self-organization. It is proposed to consider informational entropy and fractal dimensions in order to reveal FDS in AADIF. Relationships between structure formation in AADIF and accompanying processes, such as acoustic emission and terrain development above zones wherein faulting takes place, are studied. Optically active elastic models were designed to simulate the stress-and-strain state of AADIF of the main standard types of fault jointing zones and their analogues in nature, and modelling results are

  12. Application of black-box models to HVAC systems for fault detection

    NARCIS (Netherlands)

    Peitsman, H.C.; Bakker, V.E.

    1996-01-01

    This paper describes the application of black-box models for fault detection and diagnosis (FDD) in heating, ventilating, and air-conditioning (HVAC) systems. In this study, multiple-input/single-output (MISO) ARX models and artificial neural network (ANN) models are used. The ARX models are exami
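
    Residual-based detection with an ARX model, the general technique underlying this kind of study, can be sketched as follows (synthetic data, not the HVAC case study): a first-order ARX model is fitted to fault-free data by least squares, and samples whose one-step prediction error exceeds a threshold are flagged.

```python
# ARX residual-based fault detection on synthetic data (sketch only).
import numpy as np

def fit_arx(u, y):
    """Least-squares fit of y[t] = a1*y[t-1] + b1*u[t-1]."""
    X = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta                                        # [a1, b1]

def residuals(u, y, theta):
    """One-step prediction errors of the fitted ARX model."""
    return y[1:] - np.column_stack([y[:-1], u[:-1]]) @ theta

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    u = rng.standard_normal(600)                        # e.g. valve command
    y = np.zeros(600)                                   # e.g. supply-air temperature
    for t in range(1, 600):
        y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.02 * rng.standard_normal()
    y[400:] += 1.5                                      # simulated sensor offset fault
    theta = fit_arx(u[:300], y[:300])                   # train on healthy data only
    r = residuals(u, y, theta)
    threshold = 6.0 * np.std(residuals(u[:300], y[:300], theta))
    alarms = np.where(np.abs(r) > threshold)[0] + 1
    print("first alarm at sample:", alarms[0] if alarms.size else "none")
```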

  14. Investigating the possible effects of salt in the fault zones on rates of seismicity - insights from analogue and numerical modeling

    Science.gov (United States)

    Urai, Janos; Kettermann, Michael; Abe, Steffen

    2017-04-01

    The presence of salt in dilatant normal faults may have a strong influence on fault mechanics and related seismicity. However, we lack a detailed understanding of these processes. This study is based on the geological setting of the Groningen area. During tectonic faulting in the Groningen area, rock salt may have flowed downwards into dilatant faults, which thus may contain lenses of rock salt at present. Because of its viscous properties, the presence of salt lenses in a fault may introduce a strain-rate dependency to the faulting and affect the distribution of magnitudes of seismic events. We present a "proof of concept" showing that the above processes can be investigated using a combination of analogue and numerical modeling. Full scaling and discussion of the importance of these processes to induced seismicity in Groningen require further, more detailed study. The analogue experiments are based on a simplified stratigraphy of the Groningen area, where it is generally thought that most of the Rotliegend faulting has taken place in the Jurassic, after deposition of the Zechstein. This is interpreted to mean that at the time of faulting the sulphates were brittle anhydrite. If these layers were sufficiently brittle to fault in a dilatant fashion, rock salt could flow downwards into the dilatant fractures. To test this hypothesis, we use sandbox experiments where we combine cohesive powder as an analog for brittle anhydrites and carbonates with viscous salt analogs to explore the developing fault geometry and the resulting distribution of salt in the faults. In the numerical models we investigate the stick-slip behavior of fault zones containing ductile material using the Discrete Element Method (DEM). Results show that the DEM approach is in principle suitable for the modeling of the seismicity of faults containing salt: the stick-slip motion of the fault becomes dependent on shear loading rate with a modification of the frequency magnitude distribution of the

  15. Modelling roughness evolution and debris production in faults using discrete particles

    Science.gov (United States)

    Mair, Karen; Abe, Steffen

    2017-04-01

    The frictional strength and stability (hence seismic potential) of faults in the brittle part of the crust is closely linked to fault roughness evolution and debris production during accumulated slip. The relevant processes may also control the dynamics of rock-slides, avalanches and subglacial slip, and are thus of general interest in several fields. The quantitative characterisation of fault surfaces in the field (e.g. Candela et al., JGR, 2012) has helped build a picture of fault roughness across many orders of magnitude. However, since fault zones are generally not exposed during slip and gouge zones are rarely preserved, the mechanical implications of evolving roughness and the important role of debris or gouge in fault zone evolution remain elusive. Here we investigate the interplay between fault roughness evolution and gouge production using 3D Discrete Element Method (DEM) Boundary Erosion Models. Our fault walls are composed of many particles or clusters stuck together with breakable bonds. When the bond strength is exceeded, the walls fracture to produce erodible boundaries and a debris-filled fault zone that evolves with accumulated slip. We slide two initially bare surfaces past each other under a range of normal stresses, tracking the evolving topography of the eroded fault walls, the granular debris generated and the associated mechanical behaviour. The development of slip-parallel striations, reminiscent of those found in natural faults, is commonly observed, although often as transient rather than persistent features. At the higher normal stresses studied, we observe a two-stage wear-like gouge production where an initial 'running-in' high production rate saturates as debris accumulates and separates the walls. As shear, and hence granular debris, accumulates, we see evidence of grain-size-based sorting in the granular layers. Wall roughness and friction mimic this stabilisation, highlighting a direct link between gouge processes, wall roughness evolution and

  16. A Desk-top tutorial Demonstration of Model-based Fault Detection and Diagnosis

    OpenAIRE

    Shi, John Z.; Elshanti, Ali; Gu, Fengshou; Ball, Andrew

    2007-01-01

    In this paper, a demonstration of the model-based approach for fault detection is presented. The aim of this demo is to provide students with a desk-top tool to start learning the model-based approach. The demo works on a traditional three-tank system. After a short review of the model-based approach, this paper focuses on two difficulties often raised by students when they start learning the model-based approach: how to develop a system model and how to generate residuals for fault detection. The ...

  17. Anti-aliasing Wiener filtering for wave-front reconstruction in the spatial-frequency domain for high-order astronomical adaptive-optics systems.

    Science.gov (United States)

    Correia, Carlos M; Teixeira, Joel

    2014-12-01

    Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate and compare to classical estimates using least-squares filters the reconstructed wave-front, measurement noise, and aliasing propagation coefficients as a function of the system order. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of the least-squares filters, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.
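
    A schematic one-dimensional illustration of the idea (not the paper's derivation) is given below: an extra aliasing PSD term in the denominator of a frequency-domain Wiener reconstructor down-weights frequencies where folded high-order power is expected, compared with a least-squares-like inverse. All spectra are invented placeholders.

```python
# Schematic 1-D anti-aliasing Wiener reconstructor vs. a least-squares-like inverse.
import numpy as np

n = 128
k = np.fft.fftfreq(n)                                   # spatial frequencies
k_mag = np.maximum(np.abs(k), 1.0 / n)                  # avoid division by zero at k = 0

S_phi = k_mag ** (-11.0 / 3.0)                          # Kolmogorov-like turbulence PSD
S_noise = 1e-2 * np.ones(n)                             # flat measurement-noise PSD
S_alias = 3.0 * S_phi * (np.abs(k) / 0.5) ** 4          # toy model of folded high-order power
H = 2j * np.pi * k                                      # toy gradient-sensor response

W_ls = np.conj(H) / np.maximum(np.abs(H) ** 2, 1e-12)   # least-squares-like inverse
W_aa = np.conj(H) * S_phi / (np.abs(H) ** 2 * S_phi + S_alias + S_noise)

# Compare how strongly each reconstructor amplifies the highest frequencies,
# where the assumed aliasing power is largest.
band = np.abs(k) > 0.4
print("mean |W| near the Nyquist frequency:")
print("  least-squares :", float(np.mean(np.abs(W_ls[band]))))
print("  anti-aliasing :", float(np.mean(np.abs(W_aa[band]))))
```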

  18. Bond graphs for modelling, control and fault diagnosis of engineering systems

    CERN Document Server

    2017-01-01

    This book presents theory and the latest application work in Bond Graph methodology, with a focus on hybrid dynamical system models, model-based fault diagnosis, model-based fault-tolerant control and fault prognosis; it also addresses open thermodynamic systems with compressible fluid flow and distributed-parameter models of mechanical subsystems. In addition, the book covers various applications of current interest, ranging from motorised wheelchairs and in-vivo surgery robots to walking machines and wind turbines. The up-to-date presentation has been made possible by experts who are active members of the worldwide bond graph modelling community. This book is the completely revised 2nd edition of the 2011 Springer compilation text titled Bond Graph Modelling of Engineering Systems – Theory, Applications and Software Support. It extends the presentation of theory and applications of the graph methodology with new developments and the latest research results. Like the first edition, this book addresses readers in a...

  19. Analogue modelling of the effect of topographic steps in the development of strike-slip faults

    Science.gov (United States)

    Tomás, Ricardo; Duarte, João C.; Rosas, Filipe M.; Schellart, Wouter; Strak, Vincent

    2016-04-01

    Strike-slip faults often cut across regions of overthickened crust, such as oceanic plateaus or islands. These morphological steps likely cause a local variation in the stress field that controls the geometry of these systems. Such variation in the stress field will likely play a role in strain localization and associated seismicity. This is of particular importance since wrench systems can produce very high magnitude earthquakes. However, such systems have been generally overlooked and are still poorly understood. In this work we present a set of analogue models that were designed with the objective of understanding how a step in the morphology affects the development of a strike-slip fault system. The models consist of a sand-cake with two areas of different thickness connected by a gentle ramp perpendicular to a dextral strike-slip basal fault. The sand-cake lies above two basal plates on which the dextral relative motion was imposed using a stepping motor. Our results show that a Riedel fault system develops across the two flat areas. However, a very asymmetric fault pattern develops across the morphological step. A deltoid constrictional bulge develops in the thinner part of the model, which progressively acquires a sigmoidal shape with increasing offset. In the thicker part of the domain, the deformation is mostly accommodated by Riedel faults, and the one closer to the step acquires a relatively lower angle. Associated with this Riedel fault, a collapse area develops and amplifies with increasing offset. For high topographic steps, the propagation of the main fault across the step area only occurs in the final stages of the experiments, contrary to what happens when the step is small or nonexistent. These results strongly suggest a major impact of the variation of topography on the development of strike-slip fault systems. The step in the morphology causes variations in the potential energy that change the local stress field (mainly the vertical

  20. Exploring tectonomagmatic controls on mid-ocean ridge faulting and morphology with 3-D numerical models

    Science.gov (United States)

    Howell, S. M.; Ito, G.; Behn, M. D.; Olive, J. A. L.; Kaus, B.; Popov, A.; Mittelstaedt, E. L.; Morrow, T. A.

    2016-12-01

    Previous two-dimensional (2-D) modeling studies of abyssal-hill scale fault generation and evolution at mid-ocean ridges have predicted that M, the ratio of magmatic to total extension, strongly influences the total slip, spacing, and rotation of large faults, as well as the morphology of the ridge axis. Scaling relations derived from these 2-D models broadly explain the globally observed decrease in abyssal hill spacing with increasing ridge spreading rate, as well as the formation of large-offset faults close to the ends of slow-spreading ridge segments. However, these scaling relations do not explain some higher resolution observations of segment-scale variability in fault spacing along the Chile Ridge and the Mid-Atlantic Ridge, where fault spacing shows no obvious correlation with M. This discrepancy between observations and 2-D model predictions illuminates the need for three-dimensional (3-D) numerical models that incorporate the effects of along-axis variations in lithospheric structure and magmatic accretion. To this end, we use the geodynamic modeling software LaMEM to simulate 3-D tectono-magmatic interactions in a visco-elasto-plastic lithosphere under extension. We model a single ridge segment subjected to an along-axis gradient in the rate of magma injection, which is simulated by imposing a mass source in a plane of model finite volumes beneath the ridge axis. Outputs of interest include characteristic fault offset, spacing, and along-axis gradients in seafloor morphology. We also examine the effects of along-axis variations in lithospheric thickness and off-axis thickening rate. The main objectives of this study are to quantify the relative importance of the amount of magmatic extension and the local lithospheric structure at a given along-axis location, versus the importance of along-axis communication of lithospheric stresses on the 3-D fault evolution and morphology of intermediate-spreading-rate ridges.

  1. Analysis of Dynamics in Multiphysics Modelling of Active Faults

    Directory of Open Access Journals (Sweden)

    Sotiris Alevizos

    2016-09-01

    Instabilities in geomechanics appear on multiple scales and involve multiple physical processes. They often appear as planar features of localised deformation (faults), which can creep in a relatively stable manner or display rich dynamics, sometimes culminating in earthquakes. To study those features, we propose a fundamental physics-based approach that overcomes the current limitations of statistical rule-based methods and allows a physical understanding of the nucleation and temporal evolution of such faults. In particular, we formulate the coupling between temperature and pressure evolution in the faults through their multiphysics energetic process(es). We analyse their multiple steady states using numerical continuation methods and characterise their transient dynamics by studying the time-dependent problem near the critical Hopf points. We find that the global system can be characterised by a homoclinic bifurcation that depends on the two main dimensionless groups of the underlying physical system. The Gruntfest number determines the onset of the localisation phenomenon, while the dynamics are mainly controlled by the Lewis number, which is the ratio of energy diffusion over mass diffusion. Here, we show that the Lewis number is the critical parameter for the dynamics of the system, as it controls the time evolution of the system for a given energy supply (Gruntfest number).

  2. Correlation between Cu mineralization and major faults using multifractal modelling in the Tarom area (NW Iran)

    Science.gov (United States)

    Nouri, Reza; Jafari, Mohammad Reza; Arian, Mehran; Feizi, Faranak; Afzal, Peyman

    2013-10-01

    The Tarom 1:100,000 sheet is located within the Cenozoic Tarom-Hashtjin volcano-plutonic belt, NW Iran. Reconstruction of the tectonic and structural setting of the hydrothermal deposits is fundamental to predictive models of different ore deposits. Because fractal/multifractal modelling is an effective instrument for separating geological and mineralized zones from background, the Concentration-Distance to Major Fault (C-DMF) fractal model and the distribution of Cu anomalies were used to classify Cu mineralization according to its distance to major faults. Application of the C-DMF model to the classification of Cu mineralization in the Tarom 1:100,000 sheet reveals that the main copper mineralizations have a strong correlation with their distance to major faults in the area. The distances to major faults of known copper mineralization with Cu values higher than 2.2% are less than 10 km, showing a positive correlation between Cu mineralization and tectonic events. Moreover, extreme and high Cu anomalies based on stream-sediment and lithogeochemical data were identified by the Number-Size (N-S) fractal model. These anomalies lie within 10 km of major faults and validate the results derived via the C-DMF fractal model. C-DMF fractal modelling can be utilized for the reconnaissance and prospecting of magmatic and hydrothermal deposits.
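
    The Number-Size method mentioned above boils down to fitting straight-line segments to the cumulative number of samples versus concentration in log-log space, with the segment breakpoints taken as anomaly thresholds. The sketch below uses a synthetic Pareto-distributed Cu data set and a single-segment fit, purely to illustrate that mechanic; real C-DMF/N-S studies fit piecewise segments and combine the thresholds with fault-distance data.

      import numpy as np

      # Synthetic Cu concentrations (ppm) drawn from a Pareto law with exponent 1.8.
      rng = np.random.default_rng(0)
      cu = (rng.pareto(a=1.8, size=500) + 1.0) * 20.0

      # Number-Size relation N(>=s) = F * s**(-D) is a straight line in log-log space.
      s = np.sort(cu)[::-1]                 # concentrations, descending
      N = np.arange(1, s.size + 1)          # cumulative number of samples with value >= s

      slope, _ = np.polyfit(np.log10(s), np.log10(N), 1)
      print(f"fitted fractal exponent D = {-slope:.2f} (synthetic data generated with 1.8)")
      # Breakpoints between piecewise fits would mark the thresholds separating background,
      # anomalous and extremely anomalous Cu populations.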

  3. A spectral synthesis method to suppress aliasing and calibrate for delay errors in Fourier transform correlators

    CERN Document Server

    Kaneko, Tak

    2008-01-01

    Context: Fourier transform (or lag) correlators in radio interferometers can serve as an efficient means of synthesising spectral channels. However, aliasing corrupts the edge channels, so they usually have to be excluded from the data set. In systems with around 10 channels, the loss in sensitivity can be significant. In addition, the low level of residual aliasing in the remaining channels may cause systematic errors. Moreover, delay errors have been widely reported in implementations of broadband analogue correlators, and simulations have shown that delay errors exacerbate the effects of aliasing. Aims: We describe a software-based approach that suppresses aliasing by oversampling the cross-correlation function. This method can be applied to interferometers with individually tracking antennas equipped with a discrete path compensator system. It is based on the well-known property of interferometers whereby the drift-scan response is the Fourier transform of the source's band-limited spectrum. Methods: In this p...

  4. Frequency-Shift a way to Reduce Aliasing in the Complex Cepstrum

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen

    1998-01-01

    The well-known relation between a time signal and its frequency-shifted spectrum is introduced as an excellent tool for reduction of aliasing in the complex cepstrum. Using N-point DFTs, the frequency-shift property, when used in the right way, will reduce the aliasing error to a size which on average is identical to the one normally requiring 2N-point DFTs. The cost is an insignificant increase in the number of operations compared to the total number needed for the transformation to the complex cepstrum domain.

  5. Model-based fault detection of blade pitch system in floating wind turbines

    Science.gov (United States)

    Cho, S.; Gao, Z.; Moan, T.

    2016-09-01

    This paper presents a model-based scheme for fault detection of a blade pitch system in floating wind turbines. A blade pitch system is one of the most critical components due to its effect on the operational safety and the dynamics of wind turbines. Faults in this system should be detected at an early stage to prevent failures. To detect faults of blade pitch actuators and sensors, an appropriate observer should be designed to estimate the states of the system. Residuals are generated by a Kalman filter, and a threshold based on H∞ optimization and linear matrix inequalities (LMIs) is used for residual evaluation. The proposed method is demonstrated in a case study involving bias and fixed-output faults in the pitch sensors and stuck pitch actuators. The simulation results show that the proposed method detects different realistic fault scenarios of wind turbines under stochastic external winds.
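
    As a toy illustration of the residual-generation step described above, the sketch below runs a scalar Kalman filter on an assumed first-order pitch model and flags a sensor bias fault when the innovation exceeds a fixed threshold. The model, noise levels, bias size and the simple fixed threshold are all illustrative assumptions; the paper derives its threshold from H∞ optimization and LMIs rather than this heuristic.

      import numpy as np

      rng = np.random.default_rng(1)

      # Assumed first-order pitch dynamics x[k+1] = a*x[k] + b*u[k], measured by a pitch
      # sensor that develops a constant bias fault halfway through the run.
      a, b, c = 0.95, 0.05, 1.0
      q, r = 1e-4, 1e-2                 # process / measurement noise variances (assumed)
      n = 400
      u = np.ones(n)                    # constant pitch command
      bias_at, bias = n // 2, 1.0       # fault onset sample and sensor bias (deg)

      x_true, x_est, P = 0.0, 0.0, 1.0
      threshold = 5.0 * np.sqrt(r)      # simple fixed threshold on the innovation
      alarms = []

      for k in range(n):
          x_true = a * x_true + b * u[k] + rng.normal(0.0, np.sqrt(q))
          y = c * x_true + (bias if k >= bias_at else 0.0) + rng.normal(0.0, np.sqrt(r))

          x_pred = a * x_est + b * u[k]          # Kalman prediction
          P_pred = a * P * a + q
          residual = y - c * x_pred              # innovation = fault-detection residual
          K = P_pred * c / (c * P_pred * c + r)
          x_est = x_pred + K * residual          # Kalman update
          P = (1.0 - K * c) * P_pred

          alarms.append(abs(residual) > threshold)

      print("first alarm at sample", int(np.argmax(alarms)), "- fault injected at sample", bias_at)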

  6. Robust unknown input observer design for state estimation and fault detection using linear parameter varying model

    Science.gov (United States)

    Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai

    2017-01-01

    This paper proposes a robust unknown input observer for state estimation and fault detection using a linear parameter-varying model. Since the disturbance and the actuator fault are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state becomes associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed using Lyapunov theory. Finally, the proposed method is tested on a wind turbine system with disturbance and an actuator fault. The simulations demonstrate the effectiveness and performance of the proposed method.

  7. Thrust-wrench fault interference in a brittle medium: new insights from analogue modelling experiments

    Science.gov (United States)

    Rosas, Filipe; Duarte, Joao; Schellart, Wouter; Tomas, Ricardo; Grigorova, Vili; Terrinha, Pedro

    2015-04-01

    We present analogue modelling experimental results concerning thrust-wrench fault interference in a brittle medium, aiming to evaluate the influence exerted by different prescribed interference angles on the formation of morpho-structural interference fault patterns. All the experiments were conceived to simulate simultaneous reactivation of confining strike-slip and thrust faults defining a (corner) zone of interference, contrasting with previously reported discrete (time and space) superposition of alternating thrust and strike-slip events. Different interference angles of 60°, 90° and 120° were experimentally investigated by comparing the specific structural configurations obtained in each case. Results show that a deltoid-shaped morpho-structural pattern is consistently formed in the fault interference (corner) zone, exhibiting a specific geometry that is fundamentally determined by the prescribed fault interference angle. This angle determines the orientation of the shear component of the displacement vector along the main frontal thrust direction, setting different fault confinement conditions in each case and imposing a corresponding geometry and kinematics on the interference deltoid structure. Model comparison with natural examples worldwide shows good geometric and kinematic similarity, pointing to the existence of a matching underlying dynamic process. Acknowledgments: This work was sponsored by the Fundação para a Ciência e a Tecnologia (FCT) through project MODELINK EXPL/GEO-GEO/0714/2013.

  8. HTS-FCL EMTDC model considering nonlinear characteristics on fault current and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jae-Young; Lee, Seung-Ryul [Korea Electrotechnology Research Institute (Korea, Republic of)

    2010-06-01

    One of the most serious problems of the KEPCO system is a fault current higher than the CB's (circuit breaker's) SCC (short-circuit capacity). There are many alternatives for reducing the fault current, such as the isolation of bus ties, enhancement of the CB's SCC, and the application of HVDC-BTB (back-to-back) or FCL (fault current limiter) devices. However, these alternatives have drawbacks from the viewpoint of system stability and cost. As superconductivity technology has developed, the resistive-type (R-type) HTS-FCL (high-temperature superconductor fault current limiter) offers one of the important alternatives in terms of power loss and cost reduction for solving the fault current problem. To accurately evaluate the transient performance of the R-type HTS-FCL, the dynamic simulation model must consider the transient characteristics during the quenching and recovery states. Against this background, this paper presents a new HTS-FCL EMTDC (Electro-Magnetic Transients including Direct Current) model that considers the nonlinear characteristics with respect to fault current and temperature.
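
    To give a feel for the nonlinearity such an EMTDC model has to capture, the sketch below simulates a generic R-type limiter in series with a stiff source feeding a bolted fault: the element quenches once the current exceeds a threshold, its resistance then grows with Joule heating, and the fault current collapses from its prospective value. Every parameter (voltage, critical current, resistance-temperature slope, thermal constants) is an illustrative assumption, recovery back to the superconducting state is ignored, and this is not the KEPCO device model used in the paper.

      import numpy as np

      # Generic R-type superconducting fault current limiter in series with a stiff source
      # feeding a bolted fault; all parameters are illustrative, and recovery is ignored.
      V_peak, R_src = 10.0e3, 0.2       # source peak phase voltage (V) and source resistance (ohm)
      I_crit = 5.0e3                    # quench threshold current (A)
      Tc, T0 = 90.0, 77.0               # critical / coolant temperature (K)
      R_norm, alpha = 2.0, 0.01         # normal-state resistance (ohm) and its temperature slope (1/K)
      C_th, h_cool = 1.5e4, 50.0        # thermal capacitance (J/K) and cooling coefficient (W/K)

      dt = 1.0e-4
      t = np.arange(0.0, 0.2, dt)       # 0.2 s fault
      v = V_peak * np.sin(2.0 * np.pi * 50.0 * t)

      T, quenched = T0, False
      currents = []
      for vk in v:
          R_fcl = R_norm * max(1.0, 1.0 + alpha * (T - Tc)) if quenched else 0.0
          i = vk / (R_src + R_fcl)              # quasi-static circuit solution
          if abs(i) > I_crit:
              quenched = True                   # quench latches for the rest of the fault
          T += dt * (R_fcl * i**2 - h_cool * (T - T0)) / C_th   # Joule heating vs. cooling
          currents.append(i)

      print(f"prospective peak current: {V_peak / R_src / 1e3:.1f} kA")
      print(f"limited peak current:     {np.max(np.abs(currents)) / 1e3:.1f} kA")
      print(f"final element temperature: {T:.0f} K")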

  9. HTS-FCL EMTDC model considering nonlinear characteristics on fault current and temperature

    Science.gov (United States)

    Yoon, Jae-Young; Lee, Seung-Ryul

    2010-06-01

    One of the most serious problems of the KEPCO system is a fault current higher than the CB's (circuit breaker's) SCC (short-circuit capacity). There are many alternatives for reducing the fault current, such as the isolation of bus ties, enhancement of the CB's SCC, and the application of HVDC-BTB (back-to-back) or FCL (fault current limiter) devices. However, these alternatives have drawbacks from the viewpoint of system stability and cost. As superconductivity technology has developed, the resistive-type (R-type) HTS-FCL (high-temperature superconductor fault current limiter) offers one of the important alternatives in terms of power loss and cost reduction for solving the fault current problem. To accurately evaluate the transient performance of the R-type HTS-FCL, the dynamic simulation model must consider the transient characteristics during the quenching and recovery states. Against this background, this paper presents a new HTS-FCL EMTDC (Electro-Magnetic Transients including Direct Current) model that considers the nonlinear characteristics with respect to fault current and temperature.

  10. Forward and backward models for fault diagnosis based on parallel genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Yi LIU; Ying LI; Yi-jia CAO; Chuang-xin GUO

    2008-01-01

    In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms using the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications of large-scale power systems.

  11. Modeling of coupled deformation and permeability evolution during fault reactivation induced by deep underground injection of CO2

    Energy Technology Data Exchange (ETDEWEB)

    Cappa, F.; Rutqvist, J.

    2010-06-01

    The interaction between mechanical deformation and fluid flow in fault zones gives rise to a host of coupled hydromechanical processes fundamental to fault instability, induced seismicity, and associated fluid migration. In this paper, we discuss these coupled processes in general and describe three modeling approaches that have been considered to analyze fluid flow and stress coupling in fault-instability processes. First, fault hydromechanical models were tested to investigate fault behavior using different mechanical modeling approaches, including slip interface and finite-thickness elements with isotropic or anisotropic elasto-plastic constitutive models. The results of this investigation showed that fault hydromechanical behavior can be appropriately represented with the least complex alternative, using a finite-thickness element and isotropic plasticity. We utilized this pragmatic approach coupled with a strain-permeability model to study hydromechanical effects on fault instability during deep underground injection of CO2. We demonstrated how such a modeling approach can be applied to determine the likelihood of fault reactivation and to estimate the associated loss of CO2 from the injection zone. It is shown that shear-enhanced permeability initiated where the fault intersects the injection zone plays an important role in propagating fault instability and permeability enhancement through the overlying caprock.
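
    The reactivation logic below is only a schematic reading of that workflow: a Mohr-Coulomb check on a fault plane whose effective normal stress drops as injection raises pore pressure, combined with a generic exponential strain-permeability law. The stresses, friction coefficient, strain increments and the particular permeability law are assumptions for illustration, not the coupled hydromechanical model calibrated in the paper.

      import numpy as np

      # Schematic fault reactivation check during injection, with a generic
      # shear-strain-enhanced permeability law (all numbers are illustrative).
      sigma_n, tau = 30.0e6, 11.0e6     # total normal and shear stress on the fault (Pa)
      mu_f, cohesion = 0.6, 0.0         # friction coefficient and cohesion
      p0 = 10.0e6                       # initial pore pressure (Pa)
      k0, gamma_ref = 1.0e-17, 1.0e-3   # initial fault permeability (m^2) and strain scale

      def reactivated(p):
          """Mohr-Coulomb criterion on effective stress: slip when shear stress exceeds strength."""
          return tau > cohesion + mu_f * (sigma_n - p)

      plastic_strain, k = 0.0, k0
      for dp in np.linspace(0.0, 8.0e6, 9):      # stepwise pore-pressure build-up at the fault
          p = p0 + dp
          if reactivated(p):
              plastic_strain += 2.0e-4           # crude slip/strain increment per step
              k = k0 * np.exp(plastic_strain / gamma_ref)
          print(f"dp = {dp/1e6:3.0f} MPa   reactivated = {reactivated(p)!s:5}   k = {k:.2e} m^2")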

  12. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

    This paper deals with the modeling of faults in analog circuits. A two-dimensional (2D) fault model is first proposed based on collaborative analysis of the supply current and output voltage. This model is a family of circle loci on the complex plane, and it greatly simplifies the algorithms for test point selection and potential fault simulation, which are primary difficulties in the fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D) complex space is proposed, which achieves a far better fault detection ratio (FDR) against measurement error and parametric tolerance. To address the problem of fault masking in both the 2D and 3D fault models, this paper proposes an effective design-for-testability (DFT) method. By adding redundant bypassing components in the circuit under test (CUT), this method achieves an excellent fault isolation ratio (FIR) in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through experimental results provided in this paper.

  13. A Model of Intelligent Fault Diagnosis of Power Equipment Based on CBR

    Directory of Open Access Journals (Sweden)

    Gang Ma

    2015-01-01

    Nowadays, the demand for power supply reliability has increased strongly as the power industry develops rapidly. Such demand requires a substantial power grid to sustain it. The running and testing data of power equipment, which contain vast amounts of information, therefore underpin online monitoring and fault diagnosis and ultimately enable state maintenance. In this paper, an intelligent fault diagnosis model for power equipment based on case-based reasoning (IFDCBR) is proposed. The model aims to discover the potential rules of equipment faults by data mining. The intelligent model constructs a condition case base of the equipment by analyzing the following four categories of data: online recording data, history data, basic test data, and environmental data. SVM regression analysis was also applied in mining the case base so as to further establish the equipment condition fingerprint. The running data of the equipment can then be checked against this condition fingerprint to detect whether or not a fault is present. Finally, this paper verifies the intelligent model and the three-ratio method on a set of practical data. The results demonstrate that the intelligent model is more effective and accurate in fault diagnosis.

  14. Earthquake nucleation in a stochastic fault model of globally coupled units with interaction delays

    Science.gov (United States)

    Vasović, Nebojša; Kostić, Srđan; Franović, Igor; Todorović, Kristina

    2016-09-01

    In the present paper, we analyze the dynamics of fault motion by considering the delayed interaction of 100 all-to-all coupled blocks with a rate-dependent friction law in the presence of random seismic noise. Such a model describes real fault motion sufficiently well; its prevailing stochastic nature is implied by surrogate data analysis of available GPS measurements of active fault movement. The interaction of the blocks in the analyzed model is studied as a function of time delay, as observed both in the dynamics of individual faults and in phenomenological models. The model is examined as a system of all-to-all coupled blocks, in line with the typical assumption that compound faults form a complex of globally coupled segments. We apply numerical methods to show that there are local bifurcations from the equilibrium state to periodic oscillations, with irregular aperiodic behavior occurring when initial conditions are set away from the equilibrium point. Such behavior indicates the possible existence of a bi-stable dynamical regime, due to the effect of the introduced seismic noise or the existence of a global attractor. The latter assumption is additionally confirmed by analyzing the corresponding mean-field approximated model. In this bi-stable regime, the distribution of event magnitudes follows the Gutenberg-Richter power law with satisfactory statistical accuracy, including a b-value within the observed real-world range.
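
    Checking whether simulated magnitudes follow the Gutenberg-Richter law with a realistic b-value usually comes down to the standard Aki maximum-likelihood estimate, b = log10(e) / (mean(M) - Mc). A minimal sketch on a synthetic catalogue (the catalogue and completeness magnitude are assumed for illustration):

      import numpy as np

      # Synthetic Gutenberg-Richter catalogue: magnitudes above the completeness
      # magnitude Mc are exponentially distributed with rate b*ln(10).
      rng = np.random.default_rng(42)
      Mc, b_true = 3.0, 1.0
      mags = Mc + rng.exponential(scale=1.0 / (b_true * np.log(10.0)), size=5000)

      # Aki (1965) maximum-likelihood estimator.
      b_hat = np.log10(np.e) / (np.mean(mags) - Mc)
      print(f"estimated b-value: {b_hat:.2f} (catalogue generated with b = {b_true})")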

  15. A-Priori Rupture Models for Northern California Type-A Faults

    Science.gov (United States)

    Wills, Chris J.; Weldon, Ray J.; Field, Edward H.

    2008-01-01

    This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least-squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the 1996 NSHMP, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley) be modeled as Type B faults to be consistent with similarly poorly known faults statewide

  16. A coherency function model of ground motion at base rock corresponding to strike-slip fault

    Institute of Scientific and Technical Information of China (English)

    丁海平; 刘启方; 金星; 袁一凡

    2004-01-01

    At present, the standard method for studying the spatial variation of ground motions is statistical analysis based on dense array records, such as those from the SMART-1 array. Owing to the lack of such records, there is no coherency function model for base rock or for sites of different types. In this paper, the spatial variation of ground motions in elastic media is analyzed by a deterministic method. Using an elastic half-space model with a fault dislocation source, near-field ground motions are simulated. This model takes the strike-slip fault and the earth media into account. A coherency function is proposed for the base rock site.

  17. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  18. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-26

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  19. Detection and diagnosis of bearing faults using shift-invariant dictionary learning and hidden Markov model

    Science.gov (United States)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Ran

    2016-05-01

    Many existing signal processing methods select a predefined basis function in advance. This selection of basis functions relies on a priori knowledge about the target signal, which is always infeasible in engineering applications. Dictionary learning methods provide an ambitious direction: learning basis atoms from the data itself with the objective of finding the underlying structure embedded in the signal. As a special case of dictionary learning, shift-invariant dictionary learning (SIDL) reconstructs an input signal using basis atoms in all possible time shifts. The property of shift-invariance is very well suited to extracting periodic impulses, which are a typical symptom of mechanical fault signals. After learning the basis atoms, a signal can be decomposed into a collection of latent components, each reconstructed by one basis atom and its corresponding time shifts. In this paper, SIDL is introduced as an adaptive feature extraction technique. Then an effective approach based on SIDL and a hidden Markov model (HMM) is presented for machinery fault diagnosis. The SIDL-based feature extraction is applied to analyze both simulated and experimental signals with a specific notch size. This experiment shows that SIDL can successfully extract the double impulses in the bearing signal. The second experiment presents an artificial fault experiment with different bearing fault types. Feature extraction based on the SIDL method is performed on each signal, and then the HMM is used to identify its fault type. The experimental results show that the proposed SIDL-HMM approach performs well in bearing fault diagnosis.
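
    The core idea being exploited here, that one short atom plus a sparse activation sequence can explain impulses at every time shift, can be illustrated without the dictionary learning or the HMM stage. The sketch below builds a synthetic bearing-like signal from a hand-made decaying-resonance atom and then recovers the impulse locations of that single latent component by correlation; the atom, sampling rate and defect period are all assumptions for illustration, not the paper's learned dictionary.

      import numpy as np

      rng = np.random.default_rng(3)
      fs, n = 12000, 2400
      t = np.arange(64) / fs
      atom = np.exp(-800.0 * t) * np.sin(2.0 * np.pi * 3000.0 * t)   # decaying resonance "impulse"

      code = np.zeros(n)
      code[::200] = 1.0                        # one impulse every 200 samples (defect period)
      x = np.convolve(code, atom)[:n] + 0.05 * rng.normal(size=n)

      # Shift-invariant view: x ~ code * atom (convolution), so correlating with the atom
      # lights up every occurrence regardless of its time shift.
      activation = np.correlate(x, atom, mode="same")
      peaks = np.flatnonzero(activation > 0.6 * activation.max())
      clusters = 1 + int(np.count_nonzero(np.diff(peaks) > 20))
      print("impulse clusters detected:", clusters, "- the signal was built with 12 impulses")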

  20. Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2014-01-01

    This paper mainly focuses on routing in two-dimensional mesh networks. We propose a novel faulty block model, the cracky rectangular block, for fault-tolerant adaptive routing. All faulty nodes and faulty links are enclosed in this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to stay in contact with the nodes inside the block. The block construction procedure is dynamic and fully distributed, and the construction algorithm is simple and easy to implement. The block is fully adaptive and dynamically adjusts its scale according to the network situation, whether faults emerge or recover, without shutting down the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm, and we give a formal proof that this algorithm guarantees messages will always reach their destinations if and only if the destination nodes remain connected to the mesh network. Thus the new model and routing algorithm maximize the availability of the nodes in the network, a noticeable overall improvement in the fault tolerance of the system.

  1. Formal and Informal Modeling of Fault Tolerant Noc Architectures

    Directory of Open Access Journals (Sweden)

    Mostefa BELARBI

    2015-11-01

    The suggested new approach, based on B-Event formal techniques, addresses aspects and constraints related to the reliability of NoCs (Networks-on-Chip) and the extra cost of fault-tolerance solutions: a fault-tolerant NoC design for SoCs (Systems-on-Chip) containing configurable FPGA (Field Programmable Gate Array) technology, obtained by extracting the properties of the NoC architecture. We illustrate our methodology by developing several refinements that produce a QNoC (Quality-of-Service Network-on-Chip) switch architecture from specification to test. We show how the B-event formalism can follow the life cycle of NoC design and test: for example, VHDL (VHSIC Hardware Description Language) simulation of a certain kind of architecture can help us optimize it and produce a new architecture, and the new properties related to the new QNoC architecture can then be injected into the formal B-event specification. B-event is associated with the Rodin tool environment. As a case study, the last stage of refinement used a wireless network in order to generate a complete test environment for the studied application.

  2. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
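
    The reliability screening itself is a one-line ensemble statistic: compute the mean and standard deviation of ΔCFS across the candidate source models and keep only cells where the coefficient of variation is at most 0.5. A minimal sketch with a random stand-in ensemble (the real study uses 2500 maps computed from posterior fault-parameter samples, not synthetic noise):

      import numpy as np

      rng = np.random.default_rng(7)
      n_models, ny, nx = 2500, 50, 50

      # Stand-in ensemble: a fixed spatial pattern plus model-to-model scatter (MPa).
      pattern = rng.normal(size=(ny, nx))
      dcfs = pattern + 0.1 * rng.normal(size=(n_models, ny, nx))

      mean = dcfs.mean(axis=0)
      std = dcfs.std(axis=0)
      cv = std / np.abs(mean)

      reliable = cv <= 0.5          # |mean| at least twice the standard deviation
      print(f"fraction of grid cells with reliable dCFS: {reliable.mean():.2f}")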

  3. Modeling of Morelia Fault Earthquake (Mw=5.4) source fault parameters using the coseismic ground deformation and groundwater level changes data

    Science.gov (United States)

    Sarychikhina, O.; Glowacka, E.; Mellors, R. J.; Vázquez, R.

    2009-12-01

    On 24 May 2006 at 04:20 (UTC) a moderate-size (Mw=5.4) earthquake struck the Mexicali Valley, Baja California, México, roughly 30 km to the southeast of the city of Mexicali, in the vicinity of the Cerro Prieto Geothermal Field (CPGF). The earthquake occurred on the Morelia fault, one of the east-dipping normal faults in the Mexicali Valley. Locally, this earthquake was strongly felt and caused minor damage. The event created 5 km of surface rupture, and down-dip displacements of up to 25-30 cm were measured at some places along this rupture. Associated deformation was measured by a vertical crackmeter, a leveling profile, and Differential Synthetic Aperture Radar Interferometry (D-InSAR). A coseismic step-like groundwater level change was detected at 7 wells. The Mw=5.4 Morelia fault earthquake is of significant scientific interest, first because of the surprisingly strong effects for an earthquake of this size, and second because the coseismic data from different ground-based and space-based techniques allow the source fault parameters to be better constrained. Source parameters for the earthquake were estimated using forward modeling of both the surface deformation data and the static volume strain change (inferred from the coseismic changes in groundwater level). All ground deformation data were corrected for the anthropogenic component caused by geothermal fluid exploitation in the CPGF. Modeling was based on a finite rectangular fault embedded in an elastic medium. The preferred fault model has a strike, rake, and dip of (48°, -89°, 45°), a length of 5.2 km, a width of 6.7 km, and 34 cm of uniform slip. The geodetic moment, based on the modeled fault parameters, is 1.18E+17 Nm. The model matches the observed surface deformation, expected groundwater level changes, and teleseismic moment reasonably well and explains in part why the earthquake was so strongly felt in the area.
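
    The quoted geodetic moment follows directly from the modeled fault dimensions and slip via M0 = mu * L * W * s, and the standard Hanks-Kanamori relation then gives back a magnitude near Mw 5.4. The check below assumes a shear modulus of about 10 GPa, which is the value implied by the reported moment (stiffer crustal values of ~30 GPa would give a proportionally larger moment):

      import math

      mu = 1.0e10                        # shear modulus (Pa); ~10 GPa implied by the reported moment
      L, W, slip = 5.2e3, 6.7e3, 0.34    # fault length (m), width (m) and uniform slip (m)

      M0 = mu * L * W * slip                         # geodetic (seismic) moment
      Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)      # Hanks-Kanamori moment magnitude
      print(f"M0 = {M0:.2e} N m,  Mw = {Mw:.1f}")    # about 1.2e17 N m and Mw 5.3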

  4. Modeling and Simulation of Transient Fault Response at Lillgrund Wind Farm when Subjected to Faults in the Connecting 130 kV Grid

    Energy Technology Data Exchange (ETDEWEB)

    Eliasson, Anders; Isabegovic, Emir

    2009-07-01

    The purpose of this thesis was to investigate which types of faults in the connecting grid should be the dimensioning cases for future wind farms. The main goal was an investigation of over- and undervoltages at the main transformer and the turbines inside the Lillgrund wind farm. The results will be used in the planning stage of future wind farms when performing insulation coordination and determining the protection settings. A model of the Lillgrund wind farm and a part of the connecting 130 kV grid was built in PSCAD/EMTDC. The farm consists of 48 Siemens SWT-2.3-93 2.3 MW wind turbines with full power converters. The turbines were modeled as controllable current sources providing a constant active power output up to the current limit of 1.4 pu. The transmission lines and cables were modeled as frequency-dependent (phase) models. The load flows and bus voltages were verified against a PSS/E model, and the transient response was verified against measurement data from two faults: a line-to-line fault in the vicinity of Barsebaeck (BBK) and a single line-to-ground fault close to the Bunkeflo (BFO) substation. For the simulations, three-phase-to-ground, single line-to-ground and line-to-line faults were applied at different locations in the connecting grid, and the phase-to-ground voltages at different buses in the connecting grid and at the turbines were studied. These faults were applied for different configurations of the farm. For single line-to-ground faults, the highest overvoltage on a turbine was 1.22 pu (32.87 kV), due to the clearing of a fault at BFO (the PCC). For line-to-line faults, the highest overvoltage on a turbine was 1.59 pu (42.83 kV), at the beginning of a fault at KGE, one bus away from BFO. Both of these cases occurred when all radials were connected and the turbines ran at full power. The highest overvoltage observed at Lillgrund was 1.65 pu (44.45 kV). This overvoltage was caused by a three-phase-to-ground fault applied at KGE and occurred at the beginning of the fault and when

  5. Investigating the Effects of Stress Interaction Using a Cellular-automaton Based Model in Fault Networks of Varying Complexity.

    Science.gov (United States)

    Hetherington, A. P.; Steacy, S.; McCloskey, J.

    2007-12-01

    The spatial and temporal patterns of seismicity are strongly influenced by stress interaction between faults. However, the effects of such interaction on earthquake statistics are not yet well understood. Computer models provide accurate, large and complete datasets for investigating this issue and also have the benefit of allowing direct comparison of seismicity behavior in time and space in networks with and without fault interaction. We investigate the effect of such interaction on modeled real-world fault networks of varying complexity using a cellular-automaton-based model. Each 3-D fault within the fault network is modeled by a discrete cellular automaton. The cell size is 1 km square, which allows for a minimum earthquake size of approximately Mw=4. The cell strength is distributed fractally across each fault, and all cells are loaded by a remote tectonic stressing rate. When the stress on a cell exceeds its strength, the cell fails and stress is transferred to its nearest neighbors, which may in turn break, allowing the earthquake to grow. These stress transfer rules allow realistic stress concentrations to develop at the boundary of the rupture. If the extent of the rupture exceeds a user-defined minimum length, and if interaction between faults is allowed, a boundary element method is used to calculate stress transfer to neighboring faults. Here we present results from four simulated fault networks based on active faults in the San Francisco Bay Area, California, the Northern Anatolian Fault, Turkey, Southern California, and the Marlborough Fault System, South Island, New Zealand. These are chosen for their varying levels of fault complexity, and we examine both interacting and non-interacting models in terms of their b-values and recurrence intervals for each region. Results will be compared and discussed.
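
    A stripped-down, single-fault version of the load/fail/redistribute cycle described above can be written in a few lines. The sketch below uses uniform random cell strengths (rather than the fractal distribution of the paper), periodic boundaries, a fixed 20% stress dissipation per failure to keep ruptures finite, and no boundary-element interaction with other faults; it only illustrates how avalanches of broken cells arise from uniform loading.

      import numpy as np

      rng = np.random.default_rng(11)

      # One fault as a 2-D cellular automaton: every 1 km^2 cell has a strength, all cells
      # are loaded at a uniform remote rate, and a failing cell passes 20% of its stress to
      # each of its four nearest neighbours (the remaining 20% is dissipated, which keeps
      # ruptures finite). Periodic boundaries are used for brevity.
      ny, nx = 32, 64
      strength = 1.0 + rng.random((ny, nx))       # uniform random strengths (fractal in the paper)
      stress = strength * rng.random((ny, nx))    # start everywhere below failure
      alpha = 0.2
      event_sizes = []

      for _ in range(4000):                       # remote tectonic loading steps
          stress += 0.005
          failed = stress >= strength
          size = 0
          while failed.any():                     # let the rupture grow until no cell exceeds strength
              size += int(failed.sum())
              release = np.where(failed, stress, 0.0)
              stress[failed] = 0.0
              stress += alpha * (np.roll(release, 1, 0) + np.roll(release, -1, 0)
                                 + np.roll(release, 1, 1) + np.roll(release, -1, 1))
              failed = stress >= strength
          if size:
              event_sizes.append(size)

      sizes = np.array(event_sizes)
      print(f"{sizes.size} events; largest rupture {sizes.max()} cells "
            f"(1 cell = 1 km^2, roughly the Mw 4 minimum event of the full model)")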

  6. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  7. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  8. Geomechanical modeling of stress and strain evolution during contractional fault-related folding

    Science.gov (United States)

    Smart, Kevin J.; Ferrill, David A.; Morris, Alan P.; McGinnis, Ronald N.

    2012-11-01

    Understanding stress states and rock mass deformation deep underground is critical to a range of endeavors including oil and gas exploration and production, geothermal reservoir characterization and management, and subsurface disposal of CO2. Geomechanical modeling can predict the onset of failure and the type and abundance of deformation features along with the orientations and magnitudes of stresses. This approach enables development of forward models that incorporate realistic mechanical stratigraphy (e.g., including competence contrasts, bed thicknesses, and bedding planes), include faults and bedding-slip surfaces as frictional sliding interfaces, reproduce the overall geometry of the fold structures of interest, and allow tracking of stress and strain through the deformation history. Use of inelastic constitutive relationships (e.g., elastic-plastic behavior) allows permanent strains to develop in response to the applied loads. This ability to capture permanent deformation is superior to linear elastic models, which are often used for numerical convenience, but are incapable of modeling permanent deformation or predicting permanent deformation processes such as faulting, fracturing, and pore collapse. Finite element modeling results compared with field examples of a natural contractional fault-related fold show that well-designed geomechanical modeling can match overall fold geometries and be applied to stress, fracture, and subseismic fault prediction in geologic structures. Geomechanical modeling of this type allows stress and strain histories to be obtained throughout the model domain.

  9. Modeling of a Switched Reluctance Motor under Stator Winding Fault Condition

    DEFF Research Database (Denmark)

    Chen, Hao; Han, G.; Yan, Wei

    2016-01-01

    A new method for modeling a stator winding fault with one shorted coil in a switched reluctance motor (SRM) is presented in this paper. The method is based on an artificial neural network (ANN), incorporated with a simple analytical model in the electromagnetic analysis to estimate the flux...

  10. Model-based fault detection for generator cooling system in wind turbines using SCADA data

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Kinnaert, Michel

    2016-01-01

    In this work, an early fault detection system for the generator cooling of wind turbines is presented and tested. It relies on a hybrid model of the cooling system. The parameters of the generator model are estimated by an extended Kalman filter. The estimated parameters are then processed by an ...

  11. Three-dimensional numerical modelling of static and transient Coulomb stress changes on intra-continental dip-slip faults

    OpenAIRE

    Meike Bagge

    2017-01-01

    Earthquakes on intra-continental faults pose substantial seismic hazard to populated areas. The interaction of faults is an important mechanism of earthquake triggering and can be investigated by the calculation of Coulomb stress changes. Using three-dimensional finite-element models, co- and postseismic stress changes and the effect of viscoelastic relaxation on dip-slip faults are investigated. The models include elastic and viscoelastic layers, gravity, ongoing regional deformation as well...

  12. How realistic are flat-ramp-flat fault kinematic models? Comparing mechanical and kinematic models

    Science.gov (United States)

    Cruz, L.; Nevitt, J. M.; Hilley, G. E.; Seixas, G.

    2015-12-01

    Rock within the upper crust appears to deform according to elasto-plastic constitutive rules, but structural geologists often employ kinematic descriptions that prescribe particle motions irrespective of these physical properties. In this contribution, we examine the range of constitutive properties that are approximately implied by kinematic models by comparing predicted deformations between mechanical and kinematic models for identical fault geometric configurations. Specifically, we use the ABAQUS finite-element package to model a fault-bend-fold geometry using an elasto-plastic constitutive rule (the elastic component is linear and the plastic failure occurs according to a Mohr-Coulomb failure criterion). We varied physical properties in the mechanical model (i.e., Young's modulus, Poisson ratio, cohesion yield strength, internal friction angle, sliding friction angle) to determine the impact of each on the observed deformations, which were then compared to predictions of kinematic models parameterized with identical geometries. We found that a limited sub-set of physical properties were required to produce deformations that were similar to those predicted by the kinematic models. Specifically, mechanical models with low cohesion are required to allow the kink at the bottom of the flat-ramp geometry to remain stationary over time. Additionally, deformations produced by steep ramp geometries (30 degrees) are difficult to reconcile between the two types of models, while lower slope gradients better conform to the geometric assumptions. These physical properties may fall within the range of those observed in laboratory experiments, suggesting that particle motions predicted by kinematic models may provide an approximate representation of those produced by a physically consistent model under some circumstances.

  13. Sensor Network Data Fault Detection using Hierarchical Bayesian Space-Time Modeling

    OpenAIRE

    Ni, Kevin; Pottie, G J

    2009-01-01

    We present a new application of hierarchical Bayesian space-time (HBST) modeling: data fault detection in sensor networks primarily used in environmental monitoring situations. To show the effectiveness of HBST modeling, we develop a rudimentary tagging system to mark data that does not fit with given models. Using this, we compare HBST modeling against first order linear autoregressive (AR) modeling, which is a commonly used alternative due to its simplicity. We show that while HBST is mo...

  14. Modeling crustal deformation near active faults and volcanic centers: a catalog of deformation models and modeling approaches

    Science.gov (United States)

    Battaglia, Maurizio; ,; Peter, F.; Murray, Jessica R.

    2013-01-01

    This manual provides the physical and mathematical concepts for selected models used to interpret deformation measurements near active faults and volcanic centers. The emphasis is on analytical models of deformation that can be compared with data from Global Positioning System (GPS) receivers, interferometric synthetic aperture radar (InSAR), leveling surveys, tiltmeters and strainmeters. Source models include pressurized spherical, ellipsoidal, and horizontal penny-shaped geometries in an elastic, homogeneous, flat half-space. Vertical dikes and faults are described following the mathematical notation for rectangular dislocations in an elastic, homogeneous, flat half-space. All the analytical expressions were verified against numerical models developed using COMSOL Multiphysics, a finite element analysis software package (http://www.comsol.com). In this way, typographical errors that were present were identified and corrected. Matlab scripts are also provided to facilitate the application of these models.
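
    As an example of the kind of analytical source covered by such a catalog, the sketch below evaluates the classic Mogi point-source approximation for the surface displacement above a small pressurized spherical cavity in an elastic half-space (valid for a source radius much smaller than its depth). The parameter values are purely illustrative, and this Python snippet is not one of the manual's Matlab scripts.

      import numpy as np

      nu, G = 0.25, 30.0e9        # Poisson's ratio and shear modulus (Pa)
      a, d = 500.0, 3000.0        # source radius and depth (m); the approximation needs a << d
      dP = 10.0e6                 # pressure change in the source (Pa)

      r = np.linspace(0.0, 10.0e3, 6)               # radial distance from the source axis (m)
      R3 = (r**2 + d**2) ** 1.5
      uz = (1.0 - nu) * (a**3 * dP / G) * d / R3    # vertical surface displacement (uplift)
      ur = (1.0 - nu) * (a**3 * dP / G) * r / R3    # radial surface displacement

      for ri, uzi, uri in zip(r, uz, ur):
          print(f"r = {ri/1e3:4.1f} km   uz = {uzi*1e3:5.2f} mm   ur = {uri*1e3:5.2f} mm")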

  15. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    Science.gov (United States)

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of applications in electric vehicles. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor.

  16. Robust fault diagnosis for non-Gaussian stochastic systems based on the rational square-root approximation model

    Institute of Scientific and Technical Information of China (English)

    YAO LiNa; WANG Hong

    2008-01-01

    The task of robust fault detection and diagnosis of stochastic distribution control (SDC) systems with uncertainties is to use the measured input and the system output PDFs to obtain information about possible faults in the system. Using the rational square-root B-spline model to represent the dynamics between the output PDF and the input, a robust nonlinear adaptive observer-based fault diagnosis algorithm is presented in this paper to diagnose faults in the dynamic part of such systems with model uncertainties. When certain conditions are satisfied, the weight vector of the rational square-root B-spline model proves to be bounded. Convergence analysis is performed for the error dynamics arising in the robust fault detection and fault diagnosis phases. Computer simulations are given to demonstrate the effectiveness of the proposed algorithm.

  17. Effective confidence interval estimation of fault-detection process of software reliability growth models

    Science.gov (United States)

    Fang, Chih-Chiang; Yeh, Chun-Wu

    2016-09-01

    The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. It provides helpful information to software developers and testers when undertaking software development and software quality control. However, the variance estimation of software fault detection is not explained transparently in previous studies, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. In such a case, software engineers cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data-sets to show its flexibility.

  18. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    Science.gov (United States)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  19. Modeling Surface Subsidence from Hydrocarbon Production and Induced Fault Slip in the Louisiana Coastal Zone

    Science.gov (United States)

    Mallman, E. P.; Zoback, M. D.

    2005-12-01

    Coastal wetland loss in southern Louisiana poses a great threat to the ecological and economic stability of the region. In the region of interest, wetland loss is a combination of land subsidence along with eustatic sea level rise, sediment accumulation, erosion, filling and drainage. More than half of the land loss in coastal Louisiana between 1932 and 1990 was related to subsidence due to the complicated interaction of multiple natural and anthropogenic processes, including compaction of Holocene sediments in the Mississippi River delta, lithospheric flexure as a response to sediment loading, and natural episodic movement along regional growth faults. In addition to these mechanisms, it has recently been suggested that subsurface oil and gas production may be a large contributing factor to surface subsidence in the Louisiana Coastal Zone. We model the effect of fluid withdrawal from oil and gas fields in the Barataria Bay region of the Louisiana Coastal Zone on surface subsidence and its potential role in inducing fault slip on the region's growth faults. A first-order leveling line along the western edge of Barataria Basin constrains our model of land subsidence. The rates along this leveling line show numerous locations of increased subsidence rate relative to the surrounding area, which tend to be located over the large oil and gas fields in the region. However, the regional normal faults are also located in the regions of high subsidence rate and oil and gas fields. Slip on these growth faults is important in two contexts: regional subsidence would be expected along these faults as a natural consequence of naturally occurring slip over time, and slip along the faults can be exacerbated by production such that surface subsidence would be localized near the oil and gas fields. Using pressure data from wells in the Valentine, Golden Meadow, and Leeville oil and gas fields we estimate the amount of compaction of the various reservoirs, the resulting surface

  20. A Fault-based Crustal Deformation Model for UCERF3 and Its Implication to Seismic Hazard Analysis

    Science.gov (United States)

    Zeng, Y.; Shen, Z.

    2012-12-01

    We invert GPS data to determine slip rates on major California faults using a fault-based crustal deformation model with geological slip rate constraints. The model assumes buried elastic dislocations across the region using fault geometries defined by the Uniform California Earthquake Rupture Forecast version 3 (UCERF3) project, with fault segments slipping beneath their locking depths. GPS observations across California and neighboring states were obtained from the UNAVCO western US GPS velocity model and edited by the SCEC UCERF3 geodetic deformation working group. The geologic slip rates and fault style constraints were compiled by the SCEC UCERF3 geologic deformation working group. Continuity constraints are imposed on slip among adjacent fault segments to regulate slip variability and to simulate block-like motion. Our least-squares inversion shows that slip rates along the northern San Andreas fault system agree well with the geologic estimates provided by UCERF3, and slip rates for the Calaveras-Hayward-Maacama fault branch and the Greenville-Great Valley fault branch are slightly higher than those of the UCERF3 geologic model. The total slip rates across transects of the three fault branches in Northern California amount to 39 mm/yr. Slip rates determined for the Garlock fault closely match geologic rates. Slip rates for the Coachella Valley and Brawley segments of the San Andreas are nearly twice those of the San Jacinto fault branch. For the offshore faults along the San Gregorio, Hosgri, Catalina, and San Clemente faults, slip rates are near their geologic lower bounds. Compared with the regional geologic slip rate estimates, the GPS-based model shows a significant decrease of 6-14 mm/yr in slip rates along the San Andreas fault system from the central California creeping section through the Mojave to the San Bernardino Mountain segments, whereas the model indicates a significant increase of 1-3 mm/yr in slip rates for faults along the east California
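
    The sketch below shows, on synthetic numbers, the structure of the kind of inversion described above: surface velocities d are related to fault slip rates s through elastic Green's functions G, and the slip rates are estimated by damped least squares with geologic rates as a prior. The Green's functions, rates and damping weight are invented for illustration and bear no relation to the UCERF3 deformation model itself.

        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_faults = 40, 3

        G = rng.normal(size=(n_obs, n_faults))       # assumed elastic Green's functions
        s_true = np.array([34.0, 9.0, 5.0])          # synthetic "true" slip rates (mm/yr)
        d = G @ s_true + rng.normal(scale=0.5, size=n_obs)

        s0 = np.array([30.0, 10.0, 6.0])             # geologic slip-rate constraints (mm/yr)
        lam = 2.0                                    # damping toward the geologic rates

        # Minimize ||G s - d||^2 + lam^2 * ||s - s0||^2
        A = np.vstack([G, lam * np.eye(n_faults)])
        b = np.concatenate([d, lam * s0])
        s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("estimated slip rates (mm/yr):", np.round(s_hat, 1))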

  1. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed from those requirements takes abnormal sensor readings, as identified by a fault monitor, as input. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  2. Fault and dyke detectability in high resolution seismic surveys for coal: a view from numerical modelling*

    Science.gov (United States)

    Zhou, Binzhong; Hatherly, Peter

    2014-10-01

    Modern underground coal mining requires certainty about geological faults, dykes and other structural features. Faults with throws of even just a few metres can create safety issues and lead to costly delays in mine production. In this paper, we use numerical modelling in an ideal, noise-free environment with homogeneous layering to investigate the detectability of small faults by seismic reflection surveying. If the layering is horizontal, faults with throws of 1/8 of the wavelength should be detectable in a 2D survey. In a coal mining setting where the seismic velocity of the overburden ranges from 3000 m/s to 4000 m/s and the dominant seismic frequency is ~100 Hz, this corresponds to a fault with a throw of 4-5 m. However, if the layers are dipping or folded, the faults may be more difficult to detect, especially when their throws oppose the trend of the background structure. In the case of 3D seismic surveying, we suggest that faults with throws as small as 1/16 of the wavelength (2-2.5 m) can be detected because of the benefits offered by computer-aided horizon identification and the improved spatial coherence of 3D seismic surveys. With dykes, we find that Berkhout's definition of the Fresnel zone is more consistent with actual experience. At a depth of 500 m, which is typically encountered in coal mining, and a 100 Hz dominant seismic frequency, dykes less than 8 m in width are undetectable, even after migration.
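
    The detectability thresholds quoted above follow directly from the dominant wavelength, lambda = v / f; the short calculation below roughly reproduces the 4-5 m (2D, lambda/8) and 2-2.5 m (3D, lambda/16) figures for the stated velocity range and 100 Hz dominant frequency.

        # Minimum detectable fault throw from the dominant wavelength (lambda = v / f)
        f_dominant = 100.0                        # dominant seismic frequency, Hz
        for v in (3000.0, 4000.0):                # overburden velocity, m/s
            wavelength = v / f_dominant
            print(f"v = {v:.0f} m/s -> lambda = {wavelength:.0f} m, "
                  f"2D limit (lambda/8) = {wavelength / 8:.2f} m, "
                  f"3D limit (lambda/16) = {wavelength / 16:.2f} m")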

  3. Rapid Assessment of Earthquakes with Radar and Optical Geodetic Imaging and Finite Fault Models (Invited)

    Science.gov (United States)

    Fielding, E. J.; Sladen, A.; Simons, M.; Rosen, P. A.; Yun, S.; Li, Z.; Avouac, J.; Leprince, S.

    2010-12-01

    Earthquake responders need to know where the earthquake has caused damage and what is the likely intensity of damage. The earliest information comes from global and regional seismic networks, which provide the magnitude and locations of the main earthquake hypocenter and moment tensor centroid and also the locations of aftershocks. Location accuracy depends on the availability of seismic data close to the earthquake source. Finite fault models of the earthquake slip can be derived from analysis of seismic waveforms alone, but the results can have large errors in the location of the fault ruptures and spatial distribution of slip, which are critical for estimating the distribution of shaking and damage. Geodetic measurements of ground displacements with GPS, LiDAR, or radar and optical imagery provide key spatial constraints on the location of the fault ruptures and distribution of slip. Here we describe the analysis of interferometric synthetic aperture radar (InSAR) and sub-pixel correlation (or pixel offset tracking) of radar and optical imagery to measure ground coseismic displacements for recent large earthquakes, and lessons learned for rapid assessment of future events. These geodetic imaging techniques have been applied to the 2010 Leogane, Haiti; 2010 Maule, Chile; 2010 Baja California, Mexico; 2008 Wenchuan, China; 2007 Tocopilla, Chile; 2007 Pisco, Peru; 2005 Kashmir; and 2003 Bam, Iran earthquakes, using data from ESA Envisat ASAR, JAXA ALOS PALSAR, NASA Terra ASTER and CNES SPOT5 satellite instruments and the NASA/JPL UAVSAR airborne system. For these events, the geodetic data provided unique information on the location of the fault or faults that ruptured and the distribution of slip that was not available from the seismic data and allowed the creation of accurate finite fault source models. In many of these cases, the fault ruptures were on previously unknown faults or faults not believed to be at high risk of earthquakes, so the area and degree of

  4. Characterized Fault Model of Scenario Earthquake Caused by the Itoigawa-Shizuoka Tectonic Line Fault Zone in Central Japan and Strong Ground Motion Prediction

    Science.gov (United States)

    Sato, T.; Dan, K.; Irikura, K.; Furumura, M.

    2001-12-01

    Based on existing ideas on characterizing complex fault rupture processes, we constructed four different characterized fault models for predicting strong motions from the most likely scenario earthquake along the active fault zone of the Itoigawa-Shizuoka Tectonic Line in central Japan. The Headquarters for Earthquake Research Promotion of the Japanese government (2001) estimated that the earthquake (magnitude 8 +/- 0.5) has a total fault length of 112 km with four segments. We assumed that the characterized fault model consisted of two regions: asperity and background (Somerville et al., 1999; Irikura, 2000; Dan et al., 2000). The main differences among the four fault models were 1) how to determine the seismic moment Mo from the fault rupture area S, 2) the number of asperities N, 3) how to determine the stress parameter σ, and 4) fmax. We calculated broadband strong motions at three stations near the fault by a hybrid method combining semi-empirical and theoretical approaches. A comparison between the results from the hybrid method and those from empirical attenuation relations showed that the hybrid method using the characterized fault model could evaluate near-fault rupture directivity effects more reliably than the empirical attenuation relations. We also discuss the characterized fault models and the strong motion characteristics. The Mo extrapolated from the empirical Mo-S relation by Somerville et al. (1999) was half of that determined from the mean value of the Wells and Coppersmith (1994) data. The latter Mo was consistent with that of the 1891 Nobi, Japan, earthquake, whose fault length was almost the same as the length of the target earthquake. In addition, the fault model using the latter Mo produced a slip of about 6 m on the largest asperity, which was consistent with the displacement of 6 m to 9 m per event obtained from a trench survey. High-frequency strong motions were greatly influenced by the σ for the asperities (188 bars, 246 bars, 108 bars, and 134
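
    To give a feel for the Mo-S scaling discussed above, the sketch below estimates a moment magnitude and seismic moment from a rupture area using the widely quoted all-slip-type Wells and Coppersmith (1994) regression and the standard moment-magnitude relation. The regression coefficients are quoted from memory and should be checked against the original paper, and the seismogenic width is an assumed value; the result is only indicative and is not the moment used in the study.

        import numpy as np

        def moment_from_area(area_km2):
            """Mw and M0 (N*m) from rupture area via the Wells & Coppersmith (1994)
            all-slip-type regression M = 4.07 + 0.98*log10(A); coefficients assumed,
            check against the original regression tables before use."""
            mw = 4.07 + 0.98 * np.log10(area_km2)
            m0 = 10.0 ** (1.5 * mw + 9.1)        # standard Mw-M0 relation, N*m
            return mw, m0

        length_km = 112.0        # total fault length from the abstract
        width_km = 18.0          # assumed seismogenic width (km)
        mw, m0 = moment_from_area(length_km * width_km)
        print(f"Mw = {mw:.2f},  M0 = {m0:.2e} N*m")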

  5. Discovery of previously unrecognised local faults in London, UK, using detailed 3D geological modelling

    Science.gov (United States)

    Aldiss, Don; Haslam, Richard

    2013-04-01

    In parts of London, faulting introduces lateral heterogeneity to the local ground conditions, especially where construction works intercept the Palaeogene Lambeth Group. This brings difficulties to the compilation of a ground model that is fully consistent with the ground investigation data, and so to the design and construction of engineering works. However, because bedrock in the London area is rather uniform at outcrop, and is widely covered by Quaternary deposits, few faults are shown on the geological maps of the area. This paper discusses a successful resolution of this problem at a site in east central London, where tunnels for a new underground railway station are planned. A 3D geological model was used to provide an understanding of the local geological structure, in faulted Lambeth Group strata, that had not been possible by other commonly-used methods. This model includes seven previously unrecognised faults, with downthrows ranging from about 1 m to about 12 m. The model was constructed in the GSI3D geological modelling software using about 145 borehole records, including many legacy records, in an area of 850 m by 500 m. The basis of a GSI3D 3D geological model is a network of 2D cross-sections drawn by a geologist, generally connecting borehole positions (where the borehole records define the level of the geological units that are present), and outcrop and subcrop lines for those units (where shown by a geological map). When the lines tracing the base of each geological unit within the intersecting cross-sections are complete and mutually consistent, the software is used to generate TIN surfaces between those lines, so creating a 3D geological model. Even where a geological model is constructed as if no faults were present, changes in apparent dip between two data points within a single cross-section can indicate that a fault is present in that segment of the cross-section. If displacements of similar size with the same polarity are found in a series

  6. Model-based robust estimation and fault detection for MEMS-INS/GPS integrated navigation systems

    Directory of Open Access Journals (Sweden)

    Miao Lingjuan

    2014-08-01

    Full Text Available In micro-electro-mechanical system based inertial navigation system (MEMS-INS)/global positioning system (GPS) integrated navigation systems, there exist unknown disturbances and abnormal measurements. In order to obtain high estimation accuracy and enhance detection sensitivity to faults in measurements, this paper deals with the problem of model-based robust estimation (RE) and fault detection (FD). A filter gain matrix and a post-filter are designed to obtain an RE and FD algorithm using current measurements, which differs from most existing a priori filters that use measurements with a one-step delay. With the designed filter gain matrix, the H-infinity norm of the transfer function from noise inputs to estimation error outputs is limited within a certain range; with the designed post-filter, the residual signal is robust to disturbances but sensitive to faults. Therefore, the algorithm can guarantee small estimation errors in the presence of disturbances and have high sensitivity to faults. The proposed method is evaluated in an integrated navigation system, and the simulation results show that it is more effective in position estimation and fault signal detection than a priori RE and FD algorithms.
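
    The sketch below illustrates the general residual-based detection logic this kind of filter relies on, using a plain fixed-gain filter and a constant threshold on the innovation: an injected measurement fault drives the residual above the threshold. The system matrices, gain, noise level, fault size and threshold are all assumed; the H-infinity gain and post-filter design of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])      # assumed constant-velocity model
        H = np.array([[1.0, 0.0]])                 # position measurement
        K = np.array([[0.3], [0.5]])               # fixed (assumed) filter gain
        threshold = 1.0                            # residual threshold (assumed)

        x_true = np.zeros((2, 1))
        x_hat = np.zeros((2, 1))
        flagged = []

        for k in range(200):
            x_true = A @ x_true
            z = H @ x_true + rng.normal(scale=0.1)
            if 100 <= k < 120:                     # injected measurement fault
                z = z + 3.0
            r = z - H @ (A @ x_hat)                # innovation / residual
            x_hat = A @ x_hat + K * r              # filter update with the current measurement
            if abs(r.item()) > threshold:
                flagged.append(k)

        print("fault flagged at steps:", flagged[:5], "...")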

  7. Model-based robust estimation and fault detection for MEMS-INS/GPS integrated navigation systems

    Institute of Scientific and Technical Information of China (English)

    Miao Lingjuan; Shi Jing

    2014-01-01

    In micro-electro-mechanical system based inertial navigation system (MEMS-INS)/global positioning system (GPS) integrated navigation systems, there exist unknown disturbances and abnormal measurements. In order to obtain high estimation accuracy and enhance detection sensitivity to faults in measurements, this paper deals with the problem of model-based robust estimation (RE) and fault detection (FD). A filter gain matrix and a post-filter are designed to obtain a RE and FD algorithm with current measurements, which is different from most of the existing priori filters using measurements in one-step delay. With the designed filter gain matrix, the H-infinity norm of the transfer function from noise inputs to estimation error outputs is limited within a certain range; with the designed post-filter, the residual signal is robust to disturbances but sensitive to faults. Therefore, the algorithm can guarantee small estimation errors in the presence of disturbances and have high sensitivity to faults. The proposed method is evaluated in an integrated navigation system, and the simulation results show that it is more effective in position estimation and fault signal detection than priori RE and FD algorithms.

  8. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model

    Science.gov (United States)

    Daout, S.; Barbot, S.; Peltzer, G.; Doin, M.-P.; Liu, Z.; Jolivet, R.

    2016-11-01

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology for estimating the tectonic loading and seismic potential of buried fault networks.

  9. Accelerating k-t sparse using k-space aliasing for dynamic MRI imaging.

    Science.gov (United States)

    Pawar, Kamlesh; Egan, Gary F; Zhang, Jingxin

    2013-01-01

    Dynamic imaging is challenging in MRI, and acceleration techniques are usually needed to acquire a dynamic scene. K-t sparse is an acceleration technique based on compressed sensing; it acquires a smaller amount of data in k-t space by pseudo-random ordering of phase encodes and reconstructs the dynamic scene by exploiting the sparsity of k-t space in a transform domain. Another recently introduced technique accelerates dynamic MRI scans by acquiring k-space data in aliased form. The k-space aliasing technique uses multiple RF excitation pulses to deliberately acquire aliased k-space data. During reconstruction, a simple Fourier transformation along the time frames can unalias the acquired data. This paper presents a novel method combining k-t sparse and k-space aliasing to achieve higher acceleration than either individual technique alone. In this particular combination, a very critical factor of compressed sensing, the ratio of the number of acquired phase encodes to the total number of phase encodes (n/N), increases, so the compressed sensing component of the reconstruction performs exceptionally well. A comparison of k-t sparse and the proposed technique for acceleration factors of 4, 6 and 8 is demonstrated in simulation on cardiac data.
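
    The sketch below only illustrates the k-t undersampling idea that both techniques build on: a pseudo-random phase-encode mask is drawn for each time frame, a fully sampled low-frequency core is kept, and the net acceleration is reported. Matrix sizes, acceleration factor and the size of the fully sampled core are assumed values; the reconstruction itself is not shown.

        import numpy as np

        rng = np.random.default_rng(42)

        n_pe, n_frames, accel = 128, 32, 4                 # phase encodes, frames, target acceleration
        n_keep = n_pe // accel
        center = np.arange(n_pe // 2 - 4, n_pe // 2 + 4)   # always-sampled low frequencies

        mask = np.zeros((n_pe, n_frames), dtype=bool)
        for t in range(n_frames):
            mask[center, t] = True
            outer = rng.choice(np.setdiff1d(np.arange(n_pe), center),
                               size=n_keep - center.size, replace=False)
            mask[outer, t] = True                          # pseudo-random ordering of phase encodes

        print("effective acceleration:", n_pe * n_frames / mask.sum())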

  10. Modeling the Non Linear Behavior of a Magnetic Fault Current Limiter

    Directory of Open Access Journals (Sweden)

    P. R. Wilson

    2015-11-01

    Full Text Available Fault current limiters are used in a wide array of applications, from small circuit protection at low power levels to large scale high power applications which require superconductors and complex control circuitry. One advantage of passive fault current limiters (FCLs) is their automatic behavior, which depends on the intrinsic properties of the circuit elements rather than on a complex feedback control scheme, making this approach attractive for low cost applications and also where reliability is critical. This paper describes the behavioral modeling of a passive magnetic FCL and its potential application in practical circuits.

  11. Prognosticating fault development rate in wind turbine generator bearings using local trend models

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Palou, Jonel; Sweeney, Christian Walsted;

    2016-01-01

    Generator bearing defects, e.g. ball, inner and outer race defects, are ranked among the most frequent mechanical failures encountered in wind turbines. Diagnosis and prognosis of bearing faults can be successfully implemented using vibration based condition monitoring systems, where tracking the signal energy between 10 Hz and 1000 Hz is utilized as a feature to characterize the severity of developing bearing faults. Furthermore, local trend models are employed to predict the progression of bearing defects from a vibration standpoint in accordance with the limits suggested in ISO 10816. Predictions of vibration trends from multi-megawatt wind turbine generators are presented, showing the effectiveness of the suggested approach for the calculation of the RUL and fault progression rate.
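
    A minimal version of the trend-based prognosis step reads as follows: fit a local linear trend to the most recent values of a vibration feature and extrapolate it to a fixed alarm limit to obtain a remaining-useful-life estimate. The data, window length and alarm limit are assumed for illustration and are not the ISO 10816 zone boundaries or the local trend models used in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        days = np.arange(120.0)
        # Assumed daily vibration energy feature with a slow upward trend plus noise
        feature = 1.0 + 0.015 * days + rng.normal(scale=0.05, size=days.size)

        alarm_limit = 4.5          # assumed alarm threshold on the feature
        window = 30                # fit the local trend on the last 30 days

        slope, intercept = np.polyfit(days[-window:], feature[-window:], deg=1)
        if slope > 0:
            t_cross = (alarm_limit - intercept) / slope
            rul_days = max(t_cross - days[-1], 0.0)
            print(f"estimated RUL: {rul_days:.0f} days (trend {slope:.3f} per day)")
        else:
            print("no increasing trend detected; RUL not estimated")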

  12. Probabilistic SDG model description and fault inference for large-scale complex systems

    Institute of Scientific and Technical Information of China (English)

    Yang Fan; Xiao Deyun

    2006-01-01

    Large-scale complex systems include large numbers of variables with complex relationships, for which the signed directed graph (SDG) model serves as a significant tool by describing the causal relationships among variables. Although a qualitative SDG expresses the causal effects between variables easily and clearly, it has many disadvantages or limitations. The probabilistic SDG proposed in this article describes the relationships among faults and variables by conditional probabilities, which carries more information and has wider applicability. The article introduces the concepts and construction approaches of the probabilistic SDG, and presents inference approaches aimed at fault diagnosis in this framework, i.e. Bayesian inference with graph elimination or junction tree algorithms to compute fault probabilities. Finally, the probabilistic SDG of a typical example, a 65 t/h boiler system, is given.
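
    The Bayesian-inference step mentioned above can be illustrated with a toy two-fault, two-symptom example in which fault posteriors are computed by brute-force enumeration of the joint distribution; graph elimination and junction tree algorithms perform the same computation efficiently on large graphs. All prior and conditional probabilities below are invented for the example.

        from itertools import product

        # Assumed prior fault probabilities and symptom conditionals
        p_fault = {"F1": 0.02, "F2": 0.05}
        p_s1 = {(0, 0): 0.01, (1, 0): 0.90, (0, 1): 0.20, (1, 1): 0.95}  # P(s1 abnormal | F1, F2)
        p_s2 = {(0, 0): 0.02, (1, 0): 0.10, (0, 1): 0.85, (1, 1): 0.90}  # P(s2 abnormal | F1, F2)

        evidence = {"s1": 1, "s2": 0}                 # s1 abnormal, s2 normal

        def joint(f1, f2):
            """Joint probability of a fault configuration and the observed evidence."""
            p = (p_fault["F1"] if f1 else 1 - p_fault["F1"]) * \
                (p_fault["F2"] if f2 else 1 - p_fault["F2"])
            p *= p_s1[(f1, f2)] if evidence["s1"] else 1 - p_s1[(f1, f2)]
            p *= p_s2[(f1, f2)] if evidence["s2"] else 1 - p_s2[(f1, f2)]
            return p

        z = sum(joint(f1, f2) for f1, f2 in product((0, 1), repeat=2))
        post_f1 = sum(joint(1, f2) for f2 in (0, 1)) / z
        post_f2 = sum(joint(f1, 1) for f1 in (0, 1)) / z
        print(f"P(F1 | evidence) = {post_f1:.3f},  P(F2 | evidence) = {post_f2:.3f}")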

  13. Numerical model of the glacially-induced intraplate earthquakes and faults formation

    Science.gov (United States)

    Petrunin, Alexey; Schmeling, Harro

    2016-04-01

    According to plate tectonics, most earthquakes are caused by moving lithospheric plates and are located mainly at plate boundaries. However, some significant seismic events may be located far away from these active areas. The nature of these intraplate earthquakes remains unclear. It is assumed that the triggering of seismicity in eastern Canada and northern Europe might be a result of glacier retreat during a glacial-interglacial cycle (GIC). Previous numerical models show that the impact of glacial loading and the following isostatic adjustment is able to trigger seismicity on pre-existing faults, especially during the deglaciation stage. However, these models do not explain strong glaciation-induced historical earthquakes (M5-M7). Moreover, numerous studies report a connection between the location and age of major faults in regions that underwent glaciation during the last glacial maximum and the glacier dynamics. This probably implies that the GIC might be a reason for fault system formation. Our numerical model provides an analysis of the stress-strain evolution during the GIC using the finite volume approach realised in the numerical code Lapex 2.5D, which is able to operate with large strains and visco-elasto-plastic rheology. To simulate self-organizing faults, a damage rheology model is implemented within the code, which makes it possible not only to visualize faulting but also to estimate the energy release during the seismic cycle. The modeling domain includes a two-layered crust, the lithospheric mantle and the asthenosphere, which makes it possible to simulate the elasto-plastic response of the lithosphere to glaciation-induced loading (unloading) and viscous isostatic adjustment. We have considered three scenarios for the model: horizontal extension, compression and fixed boundary conditions. Modeling results generally confirm suppressed seismic activity during glaciation phases, whereas the retreat of a glacier triggers earthquakes for several thousand years. Tip of the glacier

  14. Fault Detection for Shipboard Monitoring – Volterra Kernel and Hammerstein Model Approaches

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2009-01-01

    In this paper nonlinear fault detection for in-service monitoring and decision support systems for ships will be presented. The ship is described as a nonlinear system, and the stochastic wave elevation and the associated ship responses are conveniently modelled in frequency domain...

  15. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Full Text Available Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine, they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of the broken rotor bar fault using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing the anomalies of local machine variables such as torque, magnetic flux, stator current and neutral voltage signature analysis. The aim of this research is to summarize the existing models and to develop new models of squirrel cage induction motors with consideration of the neutral voltage, and to study the effect of broken rotor bars on different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing the simulation and experimental results. The obtained results show the effectiveness of the model, and allow detection and diagnosis of these defects.
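
    The stator current signature that broken rotor bar detection usually looks for consists of sidebands around the supply frequency at (1 +/- 2ks) times the supply frequency, where s is the slip. The sketch below evaluates these standard motor-current-signature-analysis frequencies for an assumed supply frequency and slip; it is generic textbook material, not the multi-indicator model of the paper.

        def broken_rotor_bar_sidebands(f_supply=50.0, slip=0.03, orders=(1, 2, 3)):
            """Characteristic sideband frequencies f = (1 +/- 2*k*s) * f_supply (Hz)."""
            return {k: ((1 - 2 * k * slip) * f_supply,
                        (1 + 2 * k * slip) * f_supply) for k in orders}

        # Assumed 50 Hz supply and 3 % slip
        for k, (lower, upper) in broken_rotor_bar_sidebands().items():
            print(f"k={k}: {lower:.2f} Hz / {upper:.2f} Hz")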

  16. Natural Environment Modeling and Fault-Diagnosis for Automated Agricultural Vehicle

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2008-01-01

    This paper presents results for an automatic navigation system for agricultural vehicles. The system uses stereo-vision, inertial sensors and GPS. Special emphasis has been placed on modeling the natural environment in conjunction with a fault-tolerant navigation system. The results are exemplified...

  17. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    A numerical model based virtual prototype of a system can serve as a tool to generate huge amounts of data that can replace the dependence on expensive and often difficult-to-conduct experiments. However, the model must be accurate enough to substitute for the experiments. The abstraction level and the details considered during model development depend on the purpose for which the simulated data are generated. This article concerns the development of simulation models for deep groove ball bearings, which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three different models with increasing levels of complexity are considered: a bearing kinematics based planar motion block diagram model developed in MATLAB Simulink which does not explicitly consider cage and traction dynamics, a planar motion model with cage, traction and contact dynamics developed using the multi-energy domain bond graph formalism in the SYMBOLS software, and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using the ADAMS software. Experiments are conducted using a Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency domain characteristics of the simulated and experimental vibration signals for different bearing faults are compared and conclusions are drawn regarding the usefulness of the developed models.
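
    The fault signatures compared in such studies are normally concentrated at the bearing characteristic defect frequencies, which follow from the bearing kinematics. The sketch below evaluates the classical formulas for an assumed geometry; the numbers are not those of the bearing used in the article.

        import math

        def bearing_defect_frequencies(shaft_hz, n_balls, d_ball, d_pitch, contact_deg=0.0):
            """Classical kinematic defect frequencies of a rolling element bearing (Hz)."""
            r = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
            return {
                "FTF":  0.5 * shaft_hz * (1 - r),                            # cage (fundamental train)
                "BPFO": 0.5 * n_balls * shaft_hz * (1 - r),                  # outer race defect
                "BPFI": 0.5 * n_balls * shaft_hz * (1 + r),                  # inner race defect
                "BSF":  0.5 * (d_pitch / d_ball) * shaft_hz * (1 - r ** 2),  # ball spin
            }

        # Assumed geometry: 9 balls, 7.94 mm ball and 39 mm pitch diameter, 29.2 Hz shaft speed
        print({k: round(v, 1) for k, v in
               bearing_defect_frequencies(29.2, 9, 7.94, 39.0).items()})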

  18. Two-Dimensional Boundary Element Method Application for Surface Deformation Modeling around Lembang and Cimandiri Fault, West Java

    Science.gov (United States)

    Mahya, M. J.; Sanny, T. A.

    2017-04-01

    Lembang and Cimandiri are active faults in West Java that threaten people living near them with earthquake and surface deformation hazards. To determine the deformation, GPS measurements around the Lembang and Cimandiri faults were conducted, and the data were processed to obtain the horizontal velocity at each GPS station by the Graduate Research of Earthquake and Active Tectonics (GREAT), Department of Geodesy and Geomatics Engineering Study Program, ITB. The purpose of this study is to model the displacement distribution as a deformation parameter in the area along the Lembang and Cimandiri faults using the 2-dimensional boundary element method (BEM), with the horizontal velocities corrected for the effect of Sunda plate motion as input. The assumptions used at the modeling stage are that deformation occurs in a homogeneous and isotropic medium, and that the stresses acting on the faults are in an elastostatic condition. The results of the modeling show that the Lembang fault has a left-lateral slip component and is divided into two segments. A lineament oriented in the southwest-northeast direction is observed near Tangkuban Perahu Mountain, separating the eastern and western segments of the Lembang fault. The displacement pattern of the Cimandiri fault shows that it is divided into an eastern segment with a right-lateral slip component and a western segment with a left-lateral slip component, separated by a northwest-southeast oriented lineament at the western part of Gede Pangrango Mountain. The displacement value between the Lembang and Cimandiri faults is nearly zero, indicating that the two faults are not connected to each other and that this area is relatively safe for infrastructure development.

  19. Smac–Fdi: A Single Model Active Fault Detection and Isolation System for Unmanned Aircraft

    Directory of Open Access Journals (Sweden)

    Ducard Guillaume J.J.

    2015-03-01

    Full Text Available This article presents a single model active fault detection and isolation system (SMAC-FDI) which is designed to efficiently detect and isolate a faulty actuator in a system such as a small (unmanned) aircraft. This FDI system is based on a single, simple aerodynamic model of an aircraft in order to generate residuals as soon as an actuator fault occurs. These residuals are used to trigger an active strategy based on artificial exciting signals that searches within the residuals for the signature of an actuator fault. Fault isolation is carried out through an innovative mechanism that does not use the previous residuals but the actuator control signals directly. In addition, the paper presents a complete parameter-tuning strategy for this FDI system. The novel concepts are backed up by simulations of a small unmanned aircraft experiencing successive actuator failures. The robustness of the SMAC-FDI method is tested in the presence of model uncertainties, realistic sensor noise and wind gusts. Finally, the paper concludes with a discussion on the computational efficiency of the method and its ability to run on small microcontrollers.

  20. Simulation of Fault Arc Based on Different Radiation Models in a Closed Tank

    Science.gov (United States)

    Li, Mei; Zhang, Junpeng; Hu, Yang; Zhang, Hantian; Wu, Yifei

    2016-05-01

    This paper focuses on the simulation of a fault arc in a closed tank based on the magneto-hydrodynamic (MHD) method, in which a comparative study of three radiation models, including net emission coefficients (NEC), a semi-empirical model based on NEC, and the P1 model, is developed. The pressure rises calculated by the three radiation models are compared to the measured results. In particular, when the semi-empirical model is used, the effect of different boundary temperatures of the re-absorption layer in the semi-empirical model on the pressure rise is examined. The results show that the re-absorption effect in the low-temperature region evidently affects the radiation transfer of fault arcs, and thus the internal pressure rise. Compared with the NEC model, the P1 model and the semi-empirical model with 0.7 < α < 0.83 are more suitable for calculating the pressure rise of the fault arc, where α is an adjusted parameter involving the boundary temperature of the re-absorption region in the semi-empirical model. Supported by the National Key Basic Research Program of China (973 Program) (No. 2015CB251002), the National Natural Science Foundation of China (Nos. 51221005, 51177124), the Fundamental Research Funds for the Central Universities, the Program for New Century Excellent Talents in University, and the Shaanxi Province Natural Science Foundation of China (No. 2013JM-7010).

  1. Simulation of Fault Arc Based on Different Radiation Models in a Closed Tank

    Institute of Scientific and Technical Information of China (English)

    LI Mei; ZHANG Junpeng; HU Yang; ZHANG Hantian; WU Yifei

    2016-01-01

    This paper focuses on the simulation of a fault arc in a closed tank based on the magneto-hydrodynamic (MHD) method, in which a comparative study of three radiation models, including net emission coefficients (NEC), semi-empirical model based on NEC as well as the P1 model, is developed. The pressure rise calculated by the three radiation models are compared to the measured results. Particularly when the semi-empirical model is used, the effect of different boundary temperatures of the re-absorption layer in the semi-empirical model on pressure rise is concentrated on. The results show that the re-absorption effect in the low-temperature region affects radiation transfer of fault arcs evidently, and thus the internal pressure rise. Compared with the NEC model, P1 and the semi-empirical model with 0.7 < α < 0.83 are more suitable to calculate the pressure rise of the fault arc, where α is an adjusted parameter involving the boundary temperature of the re-absorption region in the semi-empirical model.

  2. Three-dimensional numerical modeling of the influence of faults on groundwater flow at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Andrew J.B. [Univ. of California, Berkeley, CA (United States)

    1999-06-01

    Numerical simulations of groundwater flow at Yucca Mountain, Nevada are used to investigate how the faulted hydrogeologic structure influences groundwater flow from a proposed high-level nuclear waste repository. Simulations are performed using a 3-D model that has a unique grid block discretization to accurately represent the faulted geologic units, which have variable thicknesses and orientations. Irregular grid blocks enable explicit representation of these features. Each hydrogeologic layer is discretized into a single layer of irregular and dipping grid blocks, and faults are discretized such that they are laterally continuous and displacement varies along strike. In addition, the presence of altered fault zones is explicitly modeled, as appropriate. The model has 23 layers and 11 faults, and approximately 57,000 grid blocks and 200,000 grid block connections. In the past, field measurements of upward vertical head gradients and high water table temperatures near faults were interpreted as indicators of upwelling from a deep carbonate aquifer. Simulations show, however, that these features can be readily explained by the geometry of hydrogeologic layers, the variability of layer permeabilities and thermal conductivities, and the presence of permeable fault zones or faults with displacement only. In addition, a moderate water table gradient can result from fault displacement or a laterally continuous low permeability fault zone, but not from a high permeability fault zone, as others postulated earlier. Large-scale macrodispersion results from the vertical and lateral diversion of flow near the contact of high and low permeability layers at faults, and from upward flow within high permeability fault zones. Conversely, large-scale channeling can occur due to groundwater flow into areas with minimal fault displacement. Contaminants originating at the water table can flow in a direction significantly different than that of the water table gradient, and isolated

  3. Impact of aliasing frequency on multiscale feature extraction for fMRI data analysis

    Institute of Scientific and Technical Information of China (English)

    支联合; 支羽光; 谭永杰

    2011-01-01

    Objective To discuss the impact of aliasing frequencies on the performance of multiscale feature extraction (MFE) for fMRI data analysis. Methods Under the conditions of removing and not removing aliasing frequencies, MFE was employed to analyze simulated and auditory fMRI data. In addition, the results revealed by MFE were compared with those of the general linear model (GLM) implemented with the SPM8 software. Results Whether removing aliasing frequencies or not, MFE showed the same specificity as that of GLM. However, in terms of sensitivity, the performance of MFE without removing aliasing frequencies was better than that of MFE with aliasing frequencies removed, and the latter was better than that of GLM. Conclusion When correlation analysis is employed to detect activation, aliasing frequencies do not influence the specificity of MFE, while removing these frequencies decreases its sensitivity.
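
    With typical fMRI repetition times, cardiac and respiratory fluctuations lie above the Nyquist frequency of the scan and fold back into the low-frequency band, which is the aliasing the study refers to. The short sketch below shows where an assumed physiological frequency lands after sampling at an assumed TR; it is illustrative only and does not reproduce the MFE procedure.

        def aliased_frequency(f_signal, tr):
            """Apparent frequency (Hz) of f_signal after sampling every tr seconds."""
            fs = 1.0 / tr                        # sampling rate
            f_fold = f_signal % fs               # fold into [0, fs)
            return min(f_fold, fs - f_fold)      # reflect into [0, fs/2]

        tr = 2.0                                 # assumed repetition time (s)
        for name, f in (("respiration", 0.3), ("cardiac", 1.1)):
            print(f"{name}: {f:.2f} Hz appears at {aliased_frequency(f, tr):.2f} Hz "
                  f"(Nyquist = {0.5 / tr:.2f} Hz)")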

  4. Physical Models of a Locked-to-Creeping Transition Along a Strike-Slip Fault: Comparison with the San Andreas Fault System in Central California

    Science.gov (United States)

    Ross, E. O.; Titus, S.; Reber, J. E.

    2016-12-01

    In central California, the plate boundary geometry of the San Andreas is relatively simple with several sub-parallel faults; however, slip behavior along the San Andreas fault changes from locked to creeping. In the SE, the fault is locked along the Carrizo segment, which last ruptured in the 1857 Fort Tejon earthquake. Towards the NW, the slip rates increase from 0 to 28 mm/yr along the creeping segment, before decreasing towards the locked segment that last ruptured in the 1906 San Francisco earthquake. Near the southern transition from locked behavior to creeping behavior, the GPS velocity field and simple elastic models predict a region of contraction NE of the fault. This region coincides with numerous well-developed folds in the borderlands as well as a series of off-fault earthquakes in the 1980s. Similarly, a region of extension is predicted SW of the transition. This area coincides with a large basin near the town of Paso Robles. In order to understand the development of these regions of contraction and extension and characterize the orientation of vectors in the velocity field, we model the transition from locked to creeping behavior using physical experiments. The model consists of a layer of silicone (PDMS SGM-36) and a layer of wet kaolin, mimicking the ductile lower crust and brittle upper crust. We cut and lubricate the silicone along one section of the basement fault, simulating creeping behavior, while leaving the rest of the silicone intact across the fault to represent the locked portion. With this simple alteration to experimental conditions, we are consistently able to produce a mountain-and-basin pair that forms on either side of the transition at a deformation speed of 0.22 mm/s. To compare the physical model's results to the observed velocity field, we use particle image velocimetry software in conjunction with strain computation software (SSPX). PIV analysis shows highly reproducible vectors, allowing us to examine off-fault deformation

  5. Modified Quasi-Steady State Model of DC System for Transient Stability Simulation under Asymmetric Faults

    Directory of Open Access Journals (Sweden)

    Jun Liu

    2015-01-01

    Full Text Available Because the classical quasi-steady state (QSS) model is not able to accurately simulate the dynamic characteristics of DC transmission and its control systems in electromechanical transient stability simulation when an asymmetric fault occurs in the AC system, a modified quasi-steady state model (MQSS) is proposed. The model first analyzes the calculation error induced by the classical QSS model under asymmetric commutation voltage, which is mainly caused by the commutation voltage zero offset and leads to inaccurate calculation of the average DC voltage and the inverter extinction advance angle. The new MQSS model calculates the average DC voltage according to the actual half-cycle voltage waveform at the DC terminal after fault occurrence, and the extinction advance angle is derived accordingly, so as to avoid the negative effect of the asymmetric commutation voltage. Simulation experiments show that the new MQSS model proposed in this paper has higher simulation precision than the classical QSS model when an asymmetric fault occurs in the AC system, as demonstrated by comparing both models with the results of a detailed electromagnetic transient (EMT) model of the DC transmission and its control system.

  6. Empirical Verification of Fault Models for FPGAs Operating in the Subcritical Voltage Region

    DEFF Research Database (Denmark)

    Birklykke, Alex Aaen; Koch, Peter; Prasad, Ramjee

    2013-01-01

    fault models might provide insight that would allow subcritical scaling by changing digital design practices or by simply accepting errors if possible. To facilitate further work in this direction, we present probabilistic error models that allow us to link error behavior with statistical properties of the binary signals, and based on a two-FPGA setup we experimentally verify the correctness of candidate models. For all experiments, the observed error rates exhibit a polynomial dependency on the outcome probability of the binary inputs, which corresponds to the behavior predicted by the proposed timing error model. Furthermore, our results show that the fault mechanism is fully deterministic - mimicking temporary stuck-at errors. As a result, given knowledge about a given signal, errors are fully predictable in the subcritical voltage region.

  7. Early FDI Based on Residuals Design According to the Analysis of Models of Faults: Application to DAMADICS

    Directory of Open Access Journals (Sweden)

    Yahia Kourd

    2011-01-01

    Full Text Available The increased complexity of plants and the development of sophisticated control systems have encouraged the parallel development of efficient and rapid fault detection and isolation (FDI) systems. FDI in industrial systems has lately become of great significance. This paper proposes a new technique for short-time fault detection and diagnosis in nonlinear dynamic systems with multiple inputs and multiple outputs. The main contribution of this paper is to develop an FDI scheme based on reference models of fault-free and faulty behaviors designed with neural networks. Fault detection is obtained from residuals that result from the comparison of measured signals with the outputs of the fault-free reference model. Then, the Euclidean distance from the outputs of the fault models to the measurements leads to fault isolation. The advantage of this method is to provide not only early detection but also early diagnosis, thanks to the parallel computation of the fault models and to the proposed decision algorithm. The effectiveness of this approach is illustrated with simulations on the DAMADICS benchmark.

  8. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships that have great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  9. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    Science.gov (United States)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.

  10. Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis

    Science.gov (United States)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang

    2016-12-01

    The bearing failure, generating harmful vibrations, is one of the most frequent reasons for machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most of the previous studies focused on rolling bearing fault diagnosis can be categorized into two main families, the kurtosis-based filter method and the wavelet-based shrinkage method. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands, resulting in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates fault information in the information band, especially under heavily noisy circumstances; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints and most of the feature information energy is thus eliminated unreasonably. Therefore, exploiting two pieces of prior information (namely, that the coefficient sequence of fault information in the wavelet basis is sparse, and that the kurtosis of the envelope spectrum can accurately evaluate the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis, coined KurWSD, is proposed in this paper. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is sequentially employed to solve this problem and the fault information is extracted through the estimated wavelet coefficients. Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and utilizes the advantages of both families.

  11. A fault diagnosis methodology for rolling element bearings based on advanced signal pretreatment and autoregressive modelling

    Science.gov (United States)

    Al-Bugharbee, Hussein; Trendafilova, Irina

    2016-05-01

    This study proposes a methodology for rolling element bearing fault diagnosis which gives a complete and highly accurate identification of the faults present. It has two main stages: signal pretreatment, which is based on several signal analysis procedures, and diagnosis, which uses a pattern-recognition process. The first stage is principally based on linear time-invariant autoregressive modelling. One of the main contributions of this investigation is the development of a pretreatment signal analysis procedure which subjects the signal to noise cleaning by singular spectrum analysis and then stationarisation by differencing. The signal is thus transformed to bring it close to a stationary one, rather than complicating the model to bring it closer to the signal. This type of pretreatment allows the use of a linear time-invariant autoregressive model and improves its performance when the original signals are non-stationary. This contribution is at the heart of the proposed method, and the high accuracy of the diagnosis is a result of this procedure. The methodology emphasises the importance of preliminary noise cleaning and stationarisation, and it demonstrates that the information needed for fault identification is contained in the stationary part of the measured signal. The methodology is further validated using three different experimental setups, demonstrating very high accuracy for all of the applications. It is able to correctly classify nearly 100 percent of the faults with regard to their type and size. This high accuracy is the other important contribution of this methodology. Thus, this research suggests a highly accurate methodology for rolling element bearing fault diagnosis which is based on relatively simple procedures. This is also an advantage, as the simplicity of the individual processes ensures easy application and the possibility of automating the entire process.
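
    A stripped-down version of the pretreatment-plus-modelling idea is sketched below: the signal is differenced to bring it closer to stationarity and a least-squares autoregressive model is fitted, with the AR coefficients serving as the feature vector for the subsequent pattern-recognition stage. The singular spectrum analysis cleaning step and the classifier are omitted, and the signals, sampling rate and AR order are assumed values.

        import numpy as np

        def ar_features(signal, order=8):
            """Least-squares AR(order) coefficients of a differenced 1-D signal."""
            x = np.diff(signal)                  # differencing for stationarisation
            X = np.column_stack([x[order - i - 1: len(x) - i - 1] for i in range(order)])
            y = x[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs

        rng = np.random.default_rng(7)
        t = np.arange(0.0, 1.0, 1.0 / 12000.0)   # assumed 12 kHz sampling, 1 s record
        healthy = np.sin(2 * np.pi * 29.2 * t) + 0.1 * rng.normal(size=t.size)
        faulty = healthy + 0.5 * (np.sin(2 * np.pi * 105.0 * t) > 0.99)  # crude impulsive fault

        print("healthy AR coefficients:", np.round(ar_features(healthy), 3))
        print("faulty  AR coefficients:", np.round(ar_features(faulty), 3))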

  12. Fuzzy inferencing to identify degree of interaction in the development of fault prediction models

    Directory of Open Access Journals (Sweden)

    Rinkaj Goyal

    2017-01-01

    One related objective is the identification of influential metrics in the development of fault prediction models. A fuzzy rule intrinsically represents a form of interaction between fuzzified inputs. Analysis of these rules establishes that Low and NOT(High) levels of inheritance-based metrics contribute significantly to the F-measure estimate of the model. Further, the Lack of Cohesion of Methods (LCOM) metric was found to be insignificant in this empirical study.

  13. Nonlinear dynamic modeling of a helicopter planetary gear train for carrier plate crack fault diagnosis

    OpenAIRE

    Fan Lei; Wang Shaoping; Wang Xingjian; Han Feng; Lyu Huawei

    2016-01-01

    Planetary gear train plays a significant role in a helicopter operation and its health is of great importance for the flight safety of the helicopter. This paper investigates the effects of a planet carrier plate crack on the dynamic characteristics of a planetary gear train, and thus finds an effective method to diagnose crack fault. A dynamic model is developed to analyze the torsional vibration of a planetary gear train with a cracked planet carrier plate. The model takes into consideratio...

  14. Dynamic system identification and model-based fault diagnosis of an industrial gas turbine prototype

    Energy Technology Data Exchange (ETDEWEB)

    Simani, S. [Universita di Ferrara (Italy). Dipartimento di Ingegneria; Fantuzzi, C. [Universita di Modena e Reggio Emilia (Italy). Dipartimento di Scienze e Metodi per l' Ingegneria

    2006-07-15

    In this paper, a model-based procedure exploiting analytical redundancy for the detection and isolation of faults in a gas turbine process is presented. The main point of the present work is the exploitation of system identification schemes in connection with observer and filter design procedures for diagnostic purposes. Integrated approaches to fault diagnosis combining linear model identification (black-box modelling) and output estimation (dynamic observers and Kalman filters) are particularly advantageous in terms of solution complexity and performance. This scheme is especially useful when robust solutions are considered for minimising the effects of modelling errors and noise, while maximising fault sensitivity. A model of the process under investigation is obtained by identification procedures, whilst the residual generation task is achieved by means of output observers and Kalman filters designed under both noise-free and noisy assumptions. The proposed tools have been tested on a single-shaft industrial gas turbine prototype model and they have been evaluated using non-linear simulations based on the gas turbine data. (author)

  15. Phenomenological models of vibration signals for condition monitoring and fault diagnosis of epicyclic gearboxes

    Science.gov (United States)

    Lei, Yaguo; Liu, Zongyao; Lin, Jing; Lu, Fanbo

    2016-05-01

    Condition monitoring and fault diagnosis of epicyclic gearboxes using vibration signals are not as straightforward as that of fixed-axis gearboxes since epicyclic gearboxes behave quite differently from fixed-axis gearboxes in many aspects, like spectral structures. Aiming to present the spectral structures of vibration signals of epicyclic gearboxes, phenomenological models of vibration signals of epicyclic gearboxes are developed by algebraic equations and spectral structures of these models are deduced using Fourier series analysis. In the phenomenological models, all the possible vibration transfer paths from gear meshing points to a fixed transducer and the effects of angular shifts of planet gears on the spectral structures are considered. Accordingly, time-varying vibration transfer paths from sun-planet/ring-planet gear meshing points to the fixed transducer due to carrier rotation are given by window functions with different amplitudes. And an angular shift in one planet gear position is introduced in the process of modeling. After the theoretical derivations, three experiments are conducted on an epicyclic gearbox test rig and the spectral structures of collected vibration signals are analyzed. As a result, the effects of angular shifts of planet gears are verified, and the phenomenological models of vibration signals when a local fault occurs on the sun gear and the planet gear are validated, respectively. The experiment results demonstrate that the established phenomenological models in this paper are helpful to the condition monitoring and fault diagnosis of epicyclic gearboxes.

  16. Predictive Modeling of a Two-Stage Gearbox towards Fault Detection

    Directory of Open Access Journals (Sweden)

    Edward J. Diehl

    2016-01-01

    Full Text Available This paper presents a systematic approach to the modeling and analysis of a benchmark two-stage gearbox test bed to characterize gear fault signatures when processed with harmonic wavelet transform (HWT analysis. The eventual goal of condition monitoring is to be able to interpret vibration signals from nonstationary machinery in order to identify the type and severity of gear damage. To advance towards this goal, a lumped-parameter model that can be analyzed efficiently is developed which characterizes the gearbox vibratory response at the system level. The model parameters are identified through correlated numerical and experimental investigations. The model fidelity is validated first by spectrum analysis, using constant speed experimental data, and secondly by HWT analysis, using nonstationary experimental data. Model prediction and experimental data are compared for healthy gear operation and a seeded fault gear with a missing tooth. The comparison confirms that both the frequency content and the predicted, relative response magnitudes match with physical measurements. The research demonstrates that the modeling method in combination with the HWT data analysis has the potential for facilitating successful fault detection and diagnosis for gearbox systems.

  17. FUNCTIONAL MODELLING FOR FAULT DIAGNOSIS AND ITS APPLICATION FOR NPP

    Directory of Open Access Journals (Sweden)

    MORTEN LIND

    2014-12-01

    Full Text Available The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  18. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG) has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show the good capability of the HOG feature extraction technique for non-destructive quality inspection, with an appreciably low false alarm rate compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring supporting financially and commercially competitive industrial growth.
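
    As a rough illustration of the winning HOG-plus-ANN pipeline, the sketch below uses scikit-image and scikit-learn as stand-ins. The image size, HOG parameters, network size and the random placeholder data are all assumptions made here; real MMW images and labels would replace the synthetic arrays, so the printed accuracy of this toy run is meaningless in itself.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Placeholder data set: 64x64 "MMW images" in 5 classes (vertical, horizontal,
    # random, diagonal crack, non-faulty).  Real imagery would be loaded here.
    rng = np.random.default_rng(1)
    n_per_class, classes = 40, 5
    images = rng.random((n_per_class * classes, 64, 64))
    labels = np.repeat(np.arange(classes), n_per_class)

    # One HOG feature vector per image (cell/block sizes are assumed values).
    features = np.array([
        hog(im, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for im in images
    ])

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                              stratify=labels, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))
    ```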

  19. 3D Fault modeling of the active Chittagong-Myanmar fold belt, Bangladesh

    Science.gov (United States)

    Peterson, D. E.; Hubbard, J.; Akhter, S. H.; Shamim, N.

    2013-12-01

    The Chittagong-Myanmar fold belt (CMFB), located in eastern Bangladesh, eastern India and western Myanmar, accommodates east-west shortening at the India-Burma plate boundary. Oblique subduction of the Indian Plate beneath the Burma Plate since the Eocene has led to the development of a large accretionary prism complex, creating a series of north-south trending folds. A continuous sediment record from ~55 Ma to the present has been deposited in the Bengal Basin by the Ganges-Brahmaputra-Meghna rivers, providing an opportunity to learn about the history of tectonic deformation and activity in this fold-and-thrust belt. Surface mapping indicates that the fold-and-thrust belt is characterized by extensive N-S-trending anticlines and synclines in a belt ~150-200 km wide. Seismic reflection profiles from the Chittagong and Chittagong Hill Tracts, Bangladesh, indicate that the anticlines mapped at the surface narrow with depth and extend to ~3.0 seconds TWTT (two-way travel time), or ~6.0 km. The folds of Chittagong and Chittagong Hill Tracts are characterized by doubly plunging box-shaped en-echelon anticlines separated by wide synclines. The seismic data suggest that some of these anticlines are cored by thrust fault ramps that extend to a large-scale décollement that dips gently to the east. Other anticlines may be the result of detachment folding from the same décollement. The décollement likely deepens to the east and intersects with the northerly-trending, oblique-slip Kaladan fault. The CMFB region is bounded to the north by the north-dipping Dauki fault and the Shillong Plateau. The tectonic transition from a wide band of E-W shortening in the south to a narrow zone of N-S shortening along the Dauki fault is poorly understood. We integrate surface and subsurface datasets, including topography, geological maps, seismicity, and industry seismic reflection profiles, into a 3D modeling environment and construct initial 3D surfaces of the major faults in this

  20. Laboratory measurements of the relative permeability of cataclastic fault rocks: An important consideration for production simulation modelling

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hinai, Suleiman; Fisher, Quentin J. [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Al-Busafi, Bader [Petroleum Development of Oman, MAF, Sultanate of Oman, Muscat (Oman); Guise, Phillip; Grattoni, Carlos A. [Rock Deformation Research Limited, School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2008-06-15

    It is becoming increasingly common practice to model the impact of faults on fluid flow within petroleum reservoirs by applying transmissibility multipliers, calculated from the single-phase permeability of fault rocks, to the grid-blocks adjacent to faults in production simulations. The multi-phase flow properties (e.g. relative permeability and capillary pressure) of fault rocks are not considered because special core analysis has never previously been conducted on fault rock samples. Here, we partially fill this knowledge gap by presenting data from the first experiments that have measured the gas relative permeability (k_rg) of cataclastic fault rocks. The cataclastic faults were collected from an outcrop of Permo-Triassic sandstone in the Moray Firth, Scotland; the fault rocks are similar to those found within Rotliegend gas reservoirs in the UK southern North Sea. The relative permeability measurements were made using a gas pulse-decay technique on samples whose water saturation was varied using vapour chambers. The measurements indicate that if the same fault rocks were present in gas reservoirs from the southern Permian Basin they would have k_rg values of <0.02. Failure to take into account relative permeability effects could therefore lead to an overestimation of the transmissibility of faults within gas reservoirs by several orders of magnitude. Incorporation of these new results into a simplified production simulation model can explain the pressure evolution from a compartmentalised Rotliegend gas reservoir from the southern North Sea, offshore Netherlands, which could not easily be explained using only single-phase permeability data from fault rocks. (author)

  1. Synthetic modeling of a fluid injection-induced fault rupture with slip-rate dependent friction coefficient

    Science.gov (United States)

    Urpi, Luca; Rinaldi, Antonio Pio; Rutqvist, Jonny; Cappa, Frédéric; Spiers, Christopher J.

    2016-04-01

    Poro-elastic stress and effective stress reduction associated with deep underground fluid injection can potentially trigger shear rupture along pre-existing faults. We modeled an idealized CO2 injection scenario, to assess the effects on faults of the first phase of a generic CO2 aquifer storage operation. We used coupled multiphase fluid flow and geomechanical numerical modeling to evaluate the stress and pressure perturbations induced by fluid injection and the response of a nearby normal fault. Slip-rate dependent friction and inertial effects have been taken into account during rupture. Contact elements have been used to take into account the frictional behavior of the rupture plane. We investigated different scenarios of injection rate to induce rupture on the fault, employing various fault rheologies. Published laboratory data on CO2-saturated intact and crushed rock samples, representative of a potential target aquifer, sealing formation and fault gouge, have been used to define a scenario where different fault rheologies apply at different depths. Nucleation of fault rupture takes place at the bottom of the reservoir, in agreement with analytical poro-elastic stress calculations, considering injection-induced reservoir inflation and the tectonic scenario. For the stress state here considered, the first triggered rupture always produces the largest rupture length and slip magnitude, correlated with the fault rheology. Velocity weakening produces larger ruptures and generates larger magnitude seismic events. Heterogeneous faults have been considered including velocity-weakening or velocity-strengthening sections inside and below the aquifer, while the upper sections are velocity-neutral. Nucleation of rupture in a velocity-strengthening section results in a limited rupture extension, both in terms of maximum slip and rupture length. For a heterogeneous fault with nucleation in a velocity-weakening section, the rupture may propagate into the overlying velocity

  2. Bounding Ground Motions for Hayward Fault Scenario Earthquakes Using Suites of Stochastic Rupture Models

    Science.gov (United States)

    Rodgers, A. J.; Xie, X.; Petersson, A.

    2007-12-01

    The next major earthquake in the San Francisco Bay area is likely to occur on the Hayward-Rodgers Creek Fault system. Attention on the southern Hayward section is appropriate given the upcoming 140th anniversary of the 1868 M 7 rupture coinciding with the estimated recurrence interval. This presentation will describe ground motion simulations for large (M > 6.5) earthquakes on the Hayward Fault using a recently developed elastic finite difference code and high-performance computers at Lawrence Livermore National Laboratory. Our code easily reads the recent USGS 3D seismic velocity model of the Bay Area developed in 2005 and used for simulations of the 1906 San Francisco and 1989 Loma Prieta earthquakes. Previous work has shown that the USGS model performs very well when used to model intermediate period (4-33 seconds) ground motions from moderate (M ~ 4-5) earthquakes (Rodgers et al., 2008). Ground motions for large earthquakes are strongly controlled by the hypocenter location, spatial distribution of slip, rise time and directivity effects. These are factors that are impossible to predict in advance of a large earthquake and lead to large epistemic uncertainties in ground motion estimates for scenario earthquakes. To bound this uncertainty, we are performing suites of simulations of scenario events on the Hayward Fault using stochastic rupture models following the method of Liu et al. (Bull. Seism. Soc. Am., 96, 2118-2130, 2006). These rupture models have spatially variable slip, rupture velocity, rise time and rake constrained by characterization of inferred finite fault ruptures and expert opinion. Computed ground motions show variability due to the variability in rupture models and can be used to estimate the average and spread of ground motion measures at any particular site. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No.W-7405-Eng-48. This is

  3. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed.... The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFM Suite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about...

  4. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    Science.gov (United States)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion were reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike the conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational codes implementation.
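
    The abstract's instability criterion can be illustrated, in a much-simplified form, with textbook plane-strain fracture mechanics: a Coulomb excess shear stress on the fault drives a mode-II stress intensity factor K_II ~ dtau*sqrt(pi*a) at the tips of a fault of half-length a, and reactivation is flagged when K_II exceeds an assumed toughness. The numbers, the toughness and the simplified K_II expression below are generic placeholders; they do not reproduce the paper's fault reactivation factor η or its Eshelby-based stress analysis.

    ```python
    import numpy as np

    def excess_shear(tau, sigma_n, p, mu=0.6, cohesion=0.0):
        """Coulomb excess shear stress on the fault plane (compression positive, Pa)."""
        return abs(tau) - cohesion - mu * (sigma_n - p)

    def mode2_sif(delta_tau_excess, half_length):
        """Textbook mode-II stress intensity factor K_II ~ dtau * sqrt(pi * a)."""
        return delta_tau_excess * np.sqrt(np.pi * half_length)

    # Illustrative numbers (all assumed): a 500 m half-length fault near a reservoir.
    tau, sigma_n = 17e6, 30e6          # resolved shear / normal stress [Pa]
    a, K_crit = 500.0, 10e6            # fault half-length [m], assumed toughness [Pa*sqrt(m)]

    for dp in (0e6, 2e6, 5e6, 8e6):    # injection-induced pressure changes [Pa]
        ex = excess_shear(tau, sigma_n, p=dp)
        K = mode2_sif(ex, a) if ex > 0 else 0.0
        print(f"dp = {dp/1e6:4.1f} MPa  excess shear = {ex/1e6:6.2f} MPa  "
              f"K_II = {K/1e6:6.2f} MPa*sqrt(m)  reactivated: {K > K_crit}")
    ```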

  5. Model-Based Fault Detection and Isolation of a Liquid-Cooled Frequency Converter on a Wind Turbine

    Directory of Open Access Journals (Sweden)

    Peng Li

    2012-01-01

    Full Text Available With the rapid development of wind energy technologies and growth of installed wind turbine capacity in the world, the reliability of the wind turbine becomes an important issue for wind turbine manufactures, owners, and operators. The reliability of the wind turbine can be improved by implementing advanced fault detection and isolation schemes. In this paper, an observer-based fault detection and isolation method for the cooling system in a liquid-cooled frequency converter on a wind turbine which is built up in a scalar version in the laboratory is presented. A dynamic model of the scale cooling system is derived based on energy balance equation. A fault analysis is conducted to determine the severity and occurrence rate of possible component faults and their end effects in the cooling system. A method using unknown input observer is developed in order to detect and isolate the faults based on the developed dynamical model. The designed fault detection and isolation algorithm is applied on a set of measured experiment data in which different faults are artificially introduced to the scaled cooling system. The experimental results conclude that the different faults are successfully detected and isolated.

  6. Modeling of stress-triggered faulting and displacement magnitude along Agenor Linea, Europa

    Science.gov (United States)

    Nahm, A.; Cameron, M. E.; Smith-Konter, B. R.; Pappalardo, R. T.

    2012-12-01

    We investigate the relationship between shear and normal stresses at Agenor Linea (AL) to better understand the role of tidal stress sources and implications for faulting on Europa. AL is a ~1500 km long, E-W trending, 20-30 km wide zone of geologically young deformation located in the southern hemisphere, and it forks into two branches at its eastern end. Based on photogeological evidence and stress orientation predictions, AL is primarily a right-lateral strike slip fault and may have accommodated up to 20 km of right-lateral slip. We compute tidal shear and normal stresses along present-day AL using SatStress, a numerical code that calculates tidal stresses at any point on the surface of a satellite for both diurnal and non-synchronous rotation (NSR) stresses. We adopt model parameters appropriate for Europa with a spherically symmetric, 20 km thick ice shell underlain by a global subsurface ocean and assume a coefficient of friction μ = 0.6. Along AL, shear stresses are primarily right-lateral (~1.8 MPa), while normal stresses are predominantly compressive along the west side of the structure (~0.7 MPa) and tensile along the east side (~2.9 MPa). Failure along AL is assessed using the Coulomb failure criterion, which states that shear failure occurs when the shear stress exceeds the frictional resistance of the fault. Where fault segments meet these conditions for shear failure, coseismic displacements are determined (assuming complete stress drop). We calculate shallow displacements as large as ~50 m at 1 km depth and ~10 m at 3 km depth. Triggered stresses from coseismic fault slip may also contribute to the total slip. We investigate the role of stress triggering by computing the change in Coulomb failure stress (ΔCFS) along AL. Where slip has occurred, negative ΔCFS is calculated; positive ΔCFS values indicate segments where failure is promoted. Positive ΔCFS is calculated at the western tip and the intersection of the branches with the main fault at a
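
    The Coulomb failure check quoted above reduces to a one-line calculation. The snippet below uses only the stress magnitudes stated in the abstract (1.8 MPa right-lateral shear, ~0.7 MPa compression on the west side, ~2.9 MPa tension on the east side, mu = 0.6) and a simple sign convention in which tensile normal stress provides no frictional resistance; it is a back-of-the-envelope illustration, not a replacement for the SatStress-based analysis.

    ```python
    # Minimal Coulomb failure check using the stresses quoted above (mu = 0.6).
    # Sign convention: compressive normal stress positive; tension gives no
    # frictional resistance.  Purely illustrative.
    mu = 0.6
    cases = {
        "west side (compressive)": {"shear": 1.8e6, "normal": +0.7e6},
        "east side (tensile)":     {"shear": 1.8e6, "normal": -2.9e6},
    }
    for name, s in cases.items():
        resistance = mu * max(s["normal"], 0.0)          # no friction under tension
        cfs = abs(s["shear"]) - resistance               # Coulomb failure stress
        print(f"{name}: CFS = {cfs/1e6:+.2f} MPa -> "
              f"{'shear failure' if cfs > 0 else 'stable'}")
    ```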

  7. A Ship Propulsion System Model for Fault-tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    The propulsion system model is presented in two versions: the first one consists of one engine and one propeller, and the other one consists of two engines and their corresponding propellers placed in parallel in the ship. The corresponding programs are developed and are available....

  8. Implementation of a Fractional Model-Based Fault Detection Algorithm into a PLC Controller

    Science.gov (United States)

    Kopka, Ryszard

    2014-12-01

    This paper presents results related to the implementation of model-based fault detection and diagnosis procedures into a typical PLC controller. To construct the mathematical model and to implement the PID regulator, non-integer order differential/integral calculus was used. Such an approach allows for more exact control of the process and more precise modelling. This is crucial in model-based diagnostic methods. The theoretical results were verified on a real object in the form of a supercapacitor connected to a PLC controller by a dedicated electronic circuit controlled directly from the PLC outputs.
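
    The discrete form usually coded for such non-integer order terms is the Grünwald-Letnikov approximation. The sketch below is a generic illustration with arbitrary order, step size and test signal (it is not the paper's supercapacitor model); the recursive binomial weights are what a PLC or simulation implementation would typically precompute.

    ```python
    import numpy as np

    def gl_fractional_derivative(f, alpha, h):
        """Grunwald-Letnikov approximation of the order-alpha derivative of samples f."""
        n = len(f)
        w = np.empty(n)
        w[0] = 1.0
        for j in range(1, n):                      # recursive binomial weights
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
        d = np.array([np.dot(w[: k + 1], f[k::-1]) for k in range(n)])
        return d / h**alpha

    # Sanity check on f(t) = t: the 0.5-order derivative should be 2*sqrt(t/pi).
    h = 1e-3
    t = np.arange(0.0, 1.0, h)
    num = gl_fractional_derivative(t, 0.5, h)
    ana = 2.0 * np.sqrt(t / np.pi)
    print("max abs error:", np.max(np.abs(num - ana)))
    ```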

  9. The study of hybrid model identification, computation analysis and fault location for nonlinear dynamic circuits and systems

    Institute of Scientific and Technical Information of China (English)

    XIE Hong; HE Yi-gang; ZENG Guan-da

    2006-01-01

    This paper presents the hybrid model identification for a class of nonlinear circuits and systems via a combination of the block-pulse function transform with the Volterra series. After discussing the method to establish the hybrid model and introducing the hybrid model identification, a set of related formulas is derived for calculating the hybrid model and computing the Volterra series solution of nonlinear dynamic circuits and systems. In order to significantly reduce the computation cost of fault location, the paper presents a new fault diagnosis method based on multiple preset models that can be realized online. An example of identification simulation and fault diagnosis is given. Results show that the method has high accuracy and efficiency for fault location in nonlinear dynamic circuits and systems.

  10. Model-Based Fault Detection and Isolation of a Liquid-Cooled Frequency Converter on a Wind Turbine

    DEFF Research Database (Denmark)

    Li, Peng; Odgaard, Peter Fogh; Stoustrup, Jakob

    2012-01-01

    system is derived based on energy balance equation. A fault analysis is conducted to determine the severity and occurrence rate of possible component faults and their end effects in the cooling system. A method using unknown input observer is developed in order to detect and isolate the faults based......With the rapid development of wind energy technologies and growth of installed wind turbine capacity in the world, the reliability of the wind turbine becomes an important issue for wind turbine manufactures, owners, and operators. The reliability of the wind turbine can be improved by implementing...... advanced fault detection and isolation schemes. In this paper, an observer-based fault detection and isolation method for the cooling system in a liquid-cooled frequency converter on a wind turbine which is built up in a scalar version in the laboratory is presented. A dynamic model of the scale cooling...

  11. Fault diagnostics in power transformer model winding for different alpha values

    Directory of Open Access Journals (Sweden)

    G.H. Kusumadevi

    2015-09-01

    Full Text Available Transient overvoltages appearing at the line terminal of power transformer HV windings can cause failure of the winding insulation. The failure can be from winding to ground or between turns or sections of the winding. In most cases, failure from winding to ground can be detected by changes in the wave shape of the surge voltage appearing at the line terminal. However, detection of insulation failure between turns may be difficult due to the intricacies involved in the identification of faults. In this paper, simulation investigations carried out on a power transformer model winding for identifying faults between turns of the winding are reported. The power transformer HV winding has been represented by 8 sections, 16 sections and 24 sections. The neutral current waveform has been analyzed for the same model winding represented by different numbers of sections. The values of α (where α is the square root of the ratio of total ground capacitance to total series capacitance of the winding) considered for the windings are 5, 10 and 20. A standard lightning impulse voltage (1.2/50 μs) wave shape has been considered for analysis. Computer simulations have been carried out using the software PSPICE version 10.0. Neutral current and frequency response analysis methods have been used for identification of faults within sections of the transformer model winding.
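
    The practical meaning of α can be illustrated with the classical initial (capacitive) impulse-voltage distribution along a uniform winding with a solidly grounded neutral, u(x)/U = sinh(α(1 − x))/sinh(α): larger α concentrates the initial stress near the line terminal, which is why fault signatures change with α. The short sketch below simply evaluates this textbook expression for the three α values used in the paper; it is not the sectional PSPICE model itself.

    ```python
    import numpy as np

    def initial_distribution(x, alpha):
        """Classical initial impulse-voltage distribution along a uniform winding
        with a solidly grounded neutral: u(x)/U = sinh(alpha*(1 - x)) / sinh(alpha),
        where x is the per-unit distance from the line terminal."""
        return np.sinh(alpha * (1.0 - x)) / np.sinh(alpha)

    x = np.linspace(0.0, 1.0, 6)             # line end (0) to neutral (1)
    for alpha in (5, 10, 20):                # the alpha values used in the paper
        u = initial_distribution(x, alpha)
        print(f"alpha = {alpha:2d}:", np.array2string(u, precision=3))
    ```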

  12. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

    Digitalization of instruments and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. It does, however, make the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several drawbacks as described in the literature. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computational tree logic (TCTL) to capture absolute time behaviors of the system and to give concrete semantics to fault tree gates to reduce errors during the analysis, and use model checking techniques to automate the reasoning process of FTA.
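
    For readers unfamiliar with timed computational tree logic, the following is the kind of absolute-time property that such a temporal gate can be mapped to and then checked automatically; it is a generic illustration invented here, not one of the paper's gate definitions.

    ```latex
    % Illustrative TCTL property (invented example): whenever both redundant
    % trains have failed, the protective trip must follow within 30 time units.
    \[
      \mathrm{AG}\bigl( (\mathit{fail}_A \land \mathit{fail}_B)
          \rightarrow \mathrm{AF}_{\le 30}\, \mathit{trip} \bigr)
    \]
    ```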

  13. A Hamiltonian Approach to Fault Isolation in a Planar Vertical Take–Off and Landing Aircraft Model

    Directory of Open Access Journals (Sweden)

    Rodriguez-Alfaro Luis H.

    2015-03-01

    Full Text Available The problem of fault detection and isolation in a class of nonlinear systems having a Hamiltonian representation is considered. In particular, a model of a planar vertical take-off and landing aircraft with sensor and actuator faults is studied. A Hamiltonian representation is derived from an Euler-Lagrange representation of the system model considered. In this form, nonlinear decoupling is applied in order to obtain subsystems with (as much as possible) specific fault sensitivity properties. The resulting decoupled subsystem is represented as a Hamiltonian system and observer-based residual generators are designed. The results are presented through simulations to show the effectiveness of the proposed approach.

  14. Using the 3D active fault model to estimate the surface deformation, a study on HsinChu area, Taiwan.

    Science.gov (United States)

    Lin, Y. K.; Ke, M. C.; Ke, S. S.

    2016-12-01

    A fault is commonly considered to be active if it has moved one or more times in the last 10,000 years and is likely to produce another earthquake sometime in the future. The relationship between fault reactivation and surface deformation has been of concern since the Chi-Chi earthquake (M=7.2) in 1999. Investigations of well-known disastrous earthquakes in recent years indicate that surface deformation is controlled by the 3D geometric shape of the fault. Because surface deformation may cause dangerous damage to critical infrastructure, buildings, roads, and power, water and gas lines, it is very important to make pre-disaster risk assessments via the 3D active fault model to decrease the serious economic losses, injuries and deaths caused by large earthquakes. The approaches to building up the 3D active fault model can be categorized as (1) field investigation, (2) digitized profile data and (3) 3D model construction. In this research, we first tracked the location of the fault scarp in the field, then combined the seismic profiles (which had been balanced) and historical earthquake data to build the underground fault plane model using the SKUA-GOCAD program. Finally, we compared the results from the trishear model (written by Richard W. Allmendinger, 2012) and the PFC-3D program (Itasca) and obtained the calculated range of the deformation area. From the analysis of the surface deformation area produced by the Hsin-Chu Fault, we conclude that the damage zone approaches 68 286 m, the magnitude is 6.43 and the offset is 0.6 m; based on these values we estimate the population casualties and building damage for an M=6.43 earthquake in the Hsin-Chu area, Taiwan. In the future, in order to apply the model accurately to earthquake disaster prevention, we need to further consider the groundwater effect and the soil-structure interaction induced by faulting.

  15. The Derivation of Fault Volumetric Properties from 3D Trace Maps Using Outcrop Constrained Discrete Fracture Network Models

    Science.gov (United States)

    Hodgetts, David; Seers, Thomas

    2015-04-01

    Fault systems are important structural elements within many petroleum reservoirs, acting as potential conduits, baffles or barriers to hydrocarbon migration. Large, seismic-scale faults often serve as reservoir bounding seals, forming structural traps which have proved to be prolific plays in many petroleum provinces. Though inconspicuous within most seismic datasets, smaller subsidiary faults, commonly within the damage zones of parent structures, may also play an important role. These smaller faults typically form narrow, tabular low permeability zones which serve to compartmentalize the reservoir, negatively impacting upon hydrocarbon recovery. Though considerable improvements have been made in the visualization of field- to reservoir-scale fault systems with the advent of 3D seismic surveys, the occlusion of smaller scale faults in such datasets is a source of significant uncertainty during prospect evaluation. The limited capacity of conventional subsurface datasets to probe the spatial distribution of these smaller scale faults has given rise to a large number of outcrop based studies, allowing their intensity, connectivity and size distributions to be explored in detail. Whilst these studies have yielded an improved theoretical understanding of the style and distribution of sub-seismic scale faults, the ability to transform observations from outcrop to quantities that are relatable to reservoir volumes remains elusive. These issues arise from the fact that outcrops essentially offer a pseudo-3D window into the rock volume, making the extrapolation of surficial fault properties such as areal density (fracture length per unit area: P21) to equivalent volumetric measures (i.e. fracture area per unit volume: P32) applicable to fracture modelling extremely challenging. Here, we demonstrate an approach which harnesses advances in the extraction of 3D trace maps from surface reconstructions using calibrated image sequences, in combination with a novel semi

  16. Modeling the evolution of the lower crust with laboratory derived rheological laws under an intraplate strike slip fault

    Science.gov (United States)

    Zhang, X.; Sagiya, T.

    2015-12-01

    The earth's crust can be divided into the brittle upper crust and the ductile lower crust based on the deformation mechanism. Observations show that heterogeneities in the lower crust are associated with fault zones. One of the candidate mechanisms of strain concentration is shear heating in the lower crust, which has been considered by theoretical studies for interplate faults [e.g. Thatcher & England 1998, Takeuchi & Fialko 2012]. On the other hand, almost no studies have been done for intraplate faults, which are generally much less mature than interplate faults and are characterized by their finite lengths and slow displacement rates. To understand the structural characteristics of the lower crust and its temporal evolution on a geological time scale, we conduct a 2-D numerical experiment on an intraplate strike slip fault. The lower crust is modeled as a 20 km thick viscous layer overlain by rigid upper crust that has a steady relative motion across a vertical strike slip fault. The strain rate in the lower crust is assumed to be a sum of dislocation creep and diffusion creep components, each of which follows the experimental flow laws. The geothermal gradient is assumed to be 25 K/km. We have tested different total velocities in the model. For intraplate faults, the total velocity is less than 1 mm/yr, and for comparison, we use 30 mm/yr for interplate faults. Results show that at a low slip rate condition, dislocation creep dominates in the shear zone near the intraplate fault's deeper extension while diffusion creep dominates outside the shear zone. This result is different from the case of interplate faults, where dislocation creep dominates the whole region. Because of the power law effect of dislocation creep, the effective viscosity in the shear zone under intraplate faults is much higher than that under the interplate fault; therefore, the shear zone under intraplate faults will have a much higher viscosity and lower shear stress than that under the interplate fault. Viscosity contrast between
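
    The competition between the two creep mechanisms described above can be sketched with generic flow laws: a power-law dislocation-creep term and a grain-size-sensitive linear diffusion-creep term added in parallel, with an effective viscosity taken as stress over twice the total strain rate. All pre-factors, activation energies and the grain size below are placeholder values chosen only to show the crossover with stress; they are not the laboratory-derived laws used in the study.

    ```python
    import numpy as np

    R = 8.314                                   # gas constant [J/mol/K]
    T = 273.15 + 25e-3 * 30e3                   # 25 K/km geotherm at 30 km depth [K]

    def dislocation_creep(sigma, A=1.1e-16, n=3.0, Q=3.5e5):
        """Power-law creep strain rate [1/s]; A, n, Q are placeholder constants."""
        return A * sigma**n * np.exp(-Q / (R * T))

    def diffusion_creep(sigma, B=1.0e-16, d=1e-3, m=3.0, Q=2.5e5):
        """Grain-size-sensitive linear creep strain rate [1/s]; constants assumed."""
        return B * sigma / d**m * np.exp(-Q / (R * T))

    # Higher stress inside the shear zone, lower stress outside (values assumed).
    for sigma in (1e6, 3e6, 10e6, 30e6):        # deviatoric stress [Pa]
        e1, e2 = dislocation_creep(sigma), diffusion_creep(sigma)
        eta = sigma / (2.0 * (e1 + e2))         # effective viscosity [Pa s]
        dominant = "dislocation" if e1 > e2 else "diffusion"
        print(f"sigma = {sigma/1e6:5.1f} MPa  strain rate = {e1 + e2:8.2e} 1/s  "
              f"eta = {eta:8.2e} Pa s  dominant: {dominant}")
    ```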

  17. Nucleation process of magnitude 2 repeating earthquakes on the San Andreas Fault predicted by rate-and-state fault models with SAFOD drill core data

    Science.gov (United States)

    Kaneko, Yoshihiro; Carpenter, Brett M.; Nielsen, Stefan B.

    2017-01-01

    Recent laboratory shear-slip experiments conducted on a nominally flat frictional interface reported the intriguing details of a two-phase nucleation of stick-slip motion that precedes the dynamic rupture propagation. This behavior was subsequently reproduced by a physics-based model incorporating laboratory-derived rate-and-state friction laws. However, applying the laboratory and theoretical results to the nucleation of crustal earthquakes remains challenging due to poorly constrained physical and friction properties of fault zone rocks at seismogenic depths. Here we apply the same physics-based model to simulate the nucleation process of crustal earthquakes using unique data acquired during the San Andreas Fault Observatory at Depth (SAFOD) experiment and new and existing measurements of friction properties of SAFOD drill core samples. Using this well-constrained model, we predict what the nucleation phase will look like for magnitude ˜2 repeating earthquakes on segments of the San Andreas Fault at a 2.8 km depth. We find that despite up to 3 orders of magnitude difference in the physical and friction parameters and stress conditions, the behavior of the modeled nucleation is qualitatively similar to that of laboratory earthquakes, with the nucleation consisting of two distinct phases. Our results further suggest that precursory slow slip associated with the earthquake nucleation phase may be observable in the hours before the occurrence of the magnitude ˜2 earthquakes by strain measurements close (a few hundred meters) to the hypocenter, in a position reached by the existing borehole.
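
    The laboratory-derived rate-and-state behaviour underlying such nucleation models can be illustrated with the classical velocity-step response, which has a closed-form state evolution under the aging law at constant slip rate. The parameter values below are generic placeholders, not the SAFOD drill-core measurements; the sketch only shows the direct effect (+a ln 10) followed by evolution to the new steady state ((a − b) ln 10, velocity-weakening here because b > a).

    ```python
    import numpy as np

    # Rate-and-state friction (aging law) response to a velocity step, the basic
    # laboratory observation behind the nucleation models discussed above.
    # Parameter values are generic placeholders, not the SAFOD core measurements.
    a, b, Dc = 0.010, 0.012, 1e-5          # direct effect, evolution effect, D_c [m]
    mu0, V0 = 0.60, 1e-6                   # reference friction coefficient and slip rate

    def friction(V, theta):
        return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

    # Step the slip rate from V0 to 10*V0 at t = 0 and evolve the state variable
    # analytically (aging law at constant V): theta(t) = Dc/V + (theta0 - Dc/V)e^{-Vt/Dc}
    V1 = 10 * V0
    t = np.linspace(0.0, 10 * Dc / V1, 6)          # a few characteristic times Dc/V
    theta = Dc / V1 + (Dc / V0 - Dc / V1) * np.exp(-V1 * t / Dc)
    mu = friction(V1, theta)

    print("peak (direct) jump:     ", friction(V1, Dc / V0) - mu0)   # +a*ln(10)
    print("new steady-state change:", mu[-1] - mu0)                  # ~(a-b)*ln(10)
    print("friction evolution:", np.round(mu, 4))
    ```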

  18. Undecimated Lifting Wavelet Packet Transform with Boundary Treatment for Machinery Incipient Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Lixiang Duan

    2016-01-01

    Full Text Available Effective signal processing in fault detection and diagnosis (FDD is an important measure to prevent failure and accidents of machinery. To address the end distortion and frequency aliasing issues in conventional lifting wavelet transform, a Volterra series assisted undecimated lifting wavelet packet transform (ULWPT is investigated for machinery incipient fault diagnosis. Undecimated lifting wavelet packet transform is firstly formulated to eliminate the frequency aliasing issue in traditional lifting wavelet packet transform. Next, Volterra series, as a boundary treatment method, is used to preprocess the signal to suppress the end distortion in undecimated lifting wavelet packet transform. Finally, the decomposed wavelet coefficients are trimmed to the original length as the signal of interest for machinery incipient fault detection. Experimental study on a reciprocating compressor is performed to demonstrate the effectiveness of the presented method. The results show that the presented method outperforms the conventional approach by dramatically enhancing the weak defect feature extraction for reciprocating compressor valve fault diagnosis.

  19. Toward a Model-Based Approach for Flight System Fault Protection

    Science.gov (United States)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).

  1. Two methods for modeling vibrations of planetary gearboxes including faults: Comparison and validation

    Science.gov (United States)

    Parra, J.; Vicuña, Cristián Molina

    2017-08-01

    Planetary gearboxes are important components of many industrial applications. Vibration analysis can increase their lifetime and prevent expensive repair and safety concerns. However, an effective analysis is only possible if the vibration features of planetary gearboxes are properly understood. In this paper, models are used to study the frequency content of planetary gearbox vibrations under non-fault and different fault conditions. Two different models are considered: phenomenological model, which is an analytical-mathematical formulation based on observation, and lumped-parameter model, which is based on the solution of the equations of motion of the system. Results of both models are not directly comparable, because the phenomenological model provides the vibration on a fixed radial direction, such as the measurements of the vibration sensor mounted on the outer part of the ring gear. On the other hand, the lumped-parameter model provides the vibrations on the basis of a rotating reference frame fixed to the carrier. To overcome this situation, a function to decompose the lumped-parameter model solutions to a fixed reference frame is presented. Finally, comparisons of results from both model perspectives and experimental measurements are presented.

  2. A Parallel Decision Model Based on Support Vector Machines and Its Application to Fault Diagnosis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu(阎威武); Shao Huihe

    2004-01-01

    Many industrial process systems are becoming more and more complex and are characterized by distributed features. To ensure that such a system operates in working order, distributed parameter values are often inspected from subsystems or different points in order to judge the working conditions of the system and make global decisions. In this paper, a parallel decision model based on the Support Vector Machine (PDMSVM) is introduced and applied to distributed fault diagnosis in industrial processes. PDMSVM is convenient for information fusion in distributed systems and performs well in fault diagnosis with distributed features. PDMSVM makes decisions based on the synthesized information of subsystems and takes advantage of the Support Vector Machine. Therefore, decisions made by PDMSVM are highly reliable and accurate.

  3. FAULT DIAGNOSIS APPROACH BASED ON HIDDEN MARKOV MODEL AND SUPPORT VECTOR MACHINE

    Institute of Scientific and Technical Information of China (English)

    LIU Guanjun; LIU Xinmin; QIU Jing; HU Niaoqing

    2007-01-01

    Aiming to solve the problems of machine learning in fault diagnosis, a diagnosis approach based on the hidden Markov model (HMM) and the support vector machine (SVM) is proposed. The HMM usually describes intra-class measures well and is good at dealing with continuous dynamic signals. The SVM expresses inter-class differences effectively and has strong classification ability. This approach is built on the merits of HMM and SVM. An experiment is then conducted on the transmission system of a helicopter. With the features extracted from the gearbox vibration signals, this HMM-SVM based diagnostic approach is trained and used to monitor and diagnose the gearbox's faults. The results show that, with small training samples, this method achieves higher diagnostic accuracy than HMM-based and SVM-based diagnostic methods.

  4. Model based, detailed fault analysis in the CERN PS complex equipment

    CERN Document Server

    Beharrell, M; Bouché, J M; Cupérus, J; Lelaizant, M; Mérard, L

    1995-01-01

    In the CERN PS Complex of accelerators, about a thousand pieces of equipment of various types (power converters, RF cavities, beam measurement devices, vacuum systems, etc.) are controlled using the so-called Control Protocol, already described at previous Conferences. This Protocol, a model-based equipment access standard, provides, amongst other facilities, a uniform and structured fault description and report feature. The faults are organized in categories according to their severity and are presented at two levels: the first level is global and identical for all devices; the second level is very detailed and adapted to the peculiarities of each single device. All the relevant information is provided by the equipment specialists and is appropriately stored in static and real-time databases; in this way a single set of data-driven application programs can always cope with existing and newly added equipment. Two classes of applications have been implemented; the first one is intended for control room alarm purposes,...

  5. Feature Extraction Method of Rolling Bearing Fault Signal Based on EEMD and Cloud Model Characteristic Entropy

    Directory of Open Access Journals (Sweden)

    Long Han

    2015-09-01

    Full Text Available The randomness and fuzziness that exist in rolling bearings when faults occur result in uncertainty in the acquired signals and reduce the accuracy of signal feature extraction. To solve this problem, this study proposes a new method in which cloud model characteristic entropy (CMCE) is set as the signal characteristic eigenvalue. This approach can overcome the disadvantage of traditional entropy measures, namely the complexity of their parameter selection, when solving uncertainty problems. First, the acoustic emission signals under normal and damaged rolling bearing states collected from the experiments are decomposed via ensemble empirical mode decomposition. The mutual information method is then used to select the sensitive intrinsic mode functions that can reflect the signal characteristics, in order to reconstruct the signal and eliminate noise interference. Subsequently, CMCE is set as the eigenvalue of the reconstructed signal. Finally, through the experimental comparison of sample entropy, root mean square and CMCE, the results show that CMCE can better represent the characteristic information of the fault signal.

  6. A physical model for aftershocks triggered by dislocation on a rectangular fault

    CERN Document Server

    Console, R

    2005-01-01

    We find the static displacement, stress, strain and the modified Coulomb failure stress produced in an elastic medium by a finite-size rectangular fault after its dislocation with uniform stress drop but a non-uniform dislocation on the source. The time-dependent rate of triggered earthquakes is estimated by a rate-state model applied to a uniformly distributed population of faults whose equilibrium is perturbed by a stress change caused only by the first dislocation. The rate of triggered events in our simulations is exponentially proportional to the stress change, but the time at which the maximum rate begins to decrease varies from fractions of an hour for positive stress changes of the order of some MPa, up to more than a year for smaller stress changes. As a consequence, the final number of triggered events is proportional to the stress change. The model predicts that the total number of events triggered on a plane containing the fault is proportional to the 2/3 power of the seismic moment. Indeed, th...
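
    The rate-and-state seismicity response to a static stress step of the kind used above is commonly written in Dieterich's (1994) form, R/r = 1 / [(e^(-dtau/A*sigma) - 1) e^(-t/t_a) + 1], with t_a equal to A*sigma divided by the background stressing rate. The sketch below evaluates this expression for a few stress steps with placeholder parameters (not values fitted in the paper), reproducing the qualitative behaviour described: the triggered rate scales exponentially with the stress change and relaxes back over t_a.

    ```python
    import numpy as np

    # Dieterich-type rate-and-state seismicity rate following a static stress step
    # (illustrative parameter values; not those used in the paper above).
    A_sigma = 0.04e6                  # constitutive parameter A times normal stress [Pa]
    tau_rate = 0.01e6 / 3.15e7        # background stressing rate [Pa/s] (~0.01 MPa/yr)
    t_a = A_sigma / tau_rate          # aftershock relaxation time [s]

    def rate_ratio(t, dtau):
        """Triggered / background seismicity rate after a Coulomb stress step dtau."""
        return 1.0 / ((np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

    t = np.array([3.6e3, 8.64e4, 2.6e6, 3.15e7])       # 1 h, 1 day, 1 month, 1 yr
    for dtau in (0.1e6, 0.5e6, 1.0e6):                 # stress steps [Pa]
        print(f"dtau = {dtau/1e6:3.1f} MPa  R/r =",
              np.array2string(rate_ratio(t, dtau), precision=1))
    ```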

  7. Fault Tree Model for Failure Path Prediction of Bolted Steel Tension Member in a Structural System

    Directory of Open Access Journals (Sweden)

    Biswajit Som

    2015-06-01

    Full Text Available A fault tree is a graphical representation of the various sequential combinations of events which lead to the failure of a system, such as a structural system. In this paper it is shown that a fault tree model is also applicable to a critical element of a complex structural system. This helps to identify the different failure modes of a particular structural element which might eventually trigger a progressive collapse of the whole structural system. A non-redundant tension member is generally regarded as a Fracture Critical Member (FCM) in a complex structural system, especially in a bridge, whose failure may lead to immediate collapse of the structure. Limit state design is governed by the failure behavior of a structural element at its ultimate state. Globally, in the condition assessment of existing structural systems, particularly bridges, Fracture Critical Inspection has become very effective and is mandatory in some countries. The fault tree model of a tension member presented in this paper can be conveniently used to identify flaws, if any, in an FCM of an existing structural system, and also as a checklist for the new design of tension members.
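
    The quantitative side of such a tree reduces to combining basic-event probabilities through AND/OR gates. The sketch below is a generic illustration under an independence assumption; the failure modes listed and their probabilities are invented placeholders, not the member-specific tree developed in the paper.

    ```python
    # Minimal fault-tree evaluation with AND/OR gates, assuming independent basic
    # events.  Event names and probabilities are invented for illustration only.
    def or_gate(*p):       # P(at least one input event occurs)
        q = 1.0
        for pi in p:
            q *= (1.0 - pi)
        return 1.0 - q

    def and_gate(*p):      # P(all input events occur)
        q = 1.0
        for pi in p:
            q *= pi
        return q

    # Hypothetical failure modes of a bolted tension member (placeholder values).
    p_net_section_fracture = 1e-4
    p_bolt_shear           = 5e-5
    p_bearing_failure      = 2e-5
    p_block_shear          = or_gate(and_gate(3e-4, 0.5), 1e-5)   # nested example

    p_top = or_gate(p_net_section_fracture, p_bolt_shear,
                    p_bearing_failure, p_block_shear)
    print(f"top event (member failure) probability ~ {p_top:.2e}")
    ```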

  8. Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection

    Science.gov (United States)

    Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang

    2017-07-01

    It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, due to the periodic repetition mechanism of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix. The highlight of the partition window is that it accumulates all local feature information and aligns it. Then, all columns of the data matrix share similar waveforms and a core physical phenomenon arises, i.e., the singular values of the data matrix demonstrate a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence with adaptively tuned weights inversely proportional to singular amplitude is adopted to guarantee the distribution consistency of large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to search for a satisfactory stationary solution by alternately applying a proximal operator step and least-squares fitting. Moreover, the sensitivity analysis and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which show that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to the
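
    The core numerical operation behind such a weighted nuclear-norm penalty is a weighted singular-value soft-thresholding (proximal) step. The sketch below applies one such step to a toy data matrix standing in for the adaptive partition window; the weight form, threshold and signal are assumptions made here, not the paper's full alternating algorithm.

    ```python
    import numpy as np

    def weighted_svt(Y, lam, eps=1e-6):
        """Weighted singular-value soft-thresholding: the proximal step behind a
        (re)weighted nuclear-norm penalty.  Weights ~ 1/(singular value + eps), so
        the large singular values carrying the impulsive feature are shrunk least."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_shrunk = np.maximum(s - lam / (s + eps), 0.0)
        return U @ np.diag(s_shrunk) @ Vt

    # Toy data matrix standing in for the adaptive partition window: each column is
    # one aligned period containing the fault impulse, corrupted by noise.
    rng = np.random.default_rng(0)
    period, n_periods = 128, 40
    k = np.arange(period)
    impulse = np.exp(-0.2 * k) * np.sin(0.6 * k)
    Y = impulse[:, None] + 0.2 * rng.standard_normal((period, n_periods))

    X = weighted_svt(Y, lam=15.0)              # lam is an assumed tuning value
    err_before = np.linalg.norm(Y - impulse[:, None])
    err_after = np.linalg.norm(X - impulse[:, None])
    print(f"Frobenius error: raw = {err_before:.1f}, after weighted SVT = {err_after:.1f}")
    ```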

  9. The 2016 central Italy earthquake sequence: surface effects, fault model and triggering scenarios

    Science.gov (United States)

    Chatzipetros, Alexandros; Pavlides, Spyros; Papathanassiou, George; Sboras, Sotiris; Valkaniotis, Sotiris; Georgiadis, George

    2017-04-01

    The results of fieldwork performed during the 2016 earthquake sequence around the karstic basins of Norcia and La Piana di Castelluccio, at an altitude of 1400 m, on the Monte Vettore (altitude 2476 m) and Vettoretto, as well as the three mapped seismogenic faults, striking NNW-SSW, are presented in this paper. Surface co-seismic ruptures were observed in the Vettore and Vettoretto segment of the fault for several kilometres (~7 km) in the August earthquakes at high altitudes, and were re-activated and expanded northwards during the October earthquakes. Coseismic ruptures and the neotectonic Mt. Vettore fault zone were modelled in detail using images acquired from specifically planned UAV (drone) flights. Ruptures, typically with displacements of up to 20 cm, were observed after the August event both in the scree and weathered mantle (eluvium), as well as the bedrock, consisting mainly of fragmented carbonate rocks with small tectonic surfaces. These fractures expanded and new ones formed during the October events, typically with displacements of up to 50 cm, although locally higher displacements of up to almost 2 m were observed. Hundreds of rock falls and landslides were mapped through satellite imagery, using pre- and post-earthquake Sentinel 2A images. Several of them were also verified in the field. Based on field mapping results and seismological information, the causative faults were modelled. The model consists of five seismogenic sources, each one associated with a strong event in the sequence. The visualisation of the seismogenic sources follows INGV's DISS standards for the Individual Seismogenic Sources (ISS) layer, while strike, dip and rake of the seismic sources are obtained from selected focal mechanisms. Based on this model, the ground deformation pattern was inferred, using Okada's dislocation solution formulae, which shows that the maximum calculated vertical displacement is 0.53 m. This is in good agreement with the statistical analysis of the

  10. Using a coupled hydro-mechanical fault model to better understand the risk of induced seismicity in deep geothermal projects

    Science.gov (United States)

    Abe, Steffen; Krieger, Lars; Deckert, Hagen

    2017-04-01

    The changes of fluid pressures related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. Therefore, an important aspect in the planning and operation of such projects, in particular in densely populated regions such as the Upper Rhine Graben in Germany, is the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of hydraulic properties of the underground, mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we are therefore employing a numerical model to investigate the impact of fluid pressure changes on the dynamics of the faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well based on a 3D finite difference discretisation of the Darcy-equation with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two subsequent modelling steps. Initially, the fault model is run by applying a fixed deformation rate for a given duration and without the influence of the hydraulic model in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress- and displacement state are generated from the fault model. In a second step, these snapshots are then used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of
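
    A heavily stripped-down, one-dimensional stand-in for this coupling loop is sketched below: an explicit finite-difference solution of the pressure-diffusion (Darcy) equation around an injection point, with the evolving pore pressure at a fault location fed into a Coulomb strength check at every step. The study itself couples a 3D Darcy model to a 2D block-slider fault; all parameters below are invented for illustration only.

    ```python
    import numpy as np

    # 1D pressure diffusion around an injection well, explicit finite differences.
    L, nx = 2000.0, 201                 # domain length [m], grid points
    dx = L / (nx - 1)
    D = 1.0                             # hydraulic diffusivity [m^2/s] (assumed)
    dt = 0.4 * dx * dx / D              # explicit stability limit
    q = 2.0e3                           # injection-induced pressure source [Pa/s]
    p = np.zeros(nx)
    i_well, i_fault = nx // 2, int(0.75 * nx)   # well at centre, fault ~500 m away

    # Fault strength check (Coulomb, compression positive; illustrative stresses).
    tau, sigma_n, mu = 9.0e6, 16.0e6, 0.6

    t, reactivation_time = 0.0, None
    for step in range(200_000):
        lap = np.zeros_like(p)
        lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
        p += dt * D * lap
        p[i_well] += dt * q             # injection source term
        p[0] = p[-1] = 0.0              # far-field boundaries
        t += dt
        if tau > mu * (sigma_n - p[i_fault]):      # slip condition reached at the fault
            reactivation_time = t
            break

    print("pore pressure at fault [MPa]:", p[i_fault] / 1e6)
    print("fault reactivated after [days]:",
          None if reactivation_time is None else round(reactivation_time / 86400, 1))
    ```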

  11. Interaction of small repeating earthquakes in a rate and state fault model

    Science.gov (United States)

    Lapusta, N.; Chen, T.

    2010-12-01

    Small repeating earthquake sequences can be located very close, for example, the San Andreas Fault Observatory at Depth (SAFOD) target cluster repeaters "San Francisco" and "Los Angeles" are separated by only about 50 m. These two repeating sequences also show closeness in occurrence time, indicating substantial interaction. Modeling of the interaction of repeating sequences and comparing the modeling results with observations would help us understand the physics of fault slip. Here we conduct numerical simulations of two asperities in a rate and state fault model (Chen and Lapusta, JGR, 2009), with asperities being rate weakening and the rest of the fault being rate-strengthening. One of our goals is to create a model for the observed interaction between "San Francisco" and "Los Angeles" clusters. The study of Chen and Lapusta (JGR, 2009) and Chen et al (accepted by EPSL, 2010) showed that this approach can reproduce behavior of isolated repeating earthquake sequences, in particular, the scaling of their moment versus recurrence time and the response to accelerated postseismic creep. In this work, we investigate the effect of distance between asperities and asperity size on the interaction, in terms of occurrence time, seismic moment and rupture pattern. The fault is governed by the aging version of rate-and-state friction. To account for relatively high stress drops inferred seismically for Parkfield SAFOD target earthquakes (Dreger et al, 2007), we also conduct simulations that include enhanced dynamic weakening during seismic events. As expected based on prior studies (e.g., Kato, JGR, 2004; Kaneko et al., Nature Geoscience, 2010), the two asperities act like one asperity if they are close enough, and they behave like isolated asperities when they are sufficiently separated. Motivated by the SAFOD target repeaters that rupture separately but show evidence of interaction, we concentrate on the intermediate distance between asperities. In that regime, the

  12. Numerical modeling of fracking fluid and methane migration through fault zones in shale gas reservoirs

    Science.gov (United States)

    Taherdangkoo, Reza; Tatomir, Alexandru; Sauter, Martin

    2017-04-01

    Hydraulic fracturing operations in shale gas reservoirs have gained growing interest over the last few years. Groundwater contamination is one of the most important environmental concerns that have emerged surrounding shale gas development (Reagan et al., 2015). The potential impacts of hydraulic fracturing could be studied through the possible pathways for subsurface migration of contaminants towards overlying aquifers (Kissinger et al., 2013; Myers, 2012). The intent of this study is to investigate, by means of numerical simulation, two failure scenarios which are based on the presence of a fault zone that penetrates the full thickness of the overburden and connects the shale gas reservoir to an aquifer. Scenario 1 addresses the potential transport of fracturing fluid from the shale into the subsurface. This scenario was modeled with the COMSOL Multiphysics software. Scenario 2 deals with the leakage of methane from the reservoir into the overburden. The numerical modeling of this scenario was implemented in DuMux (free and open-source software), a discrete fracture model (DFM) simulator (Tatomir, 2012). The modeling results are used to evaluate the influence of several important parameters (reservoir pressure, aquifer-reservoir separation thickness, fault zone inclination, porosity, permeability, etc.) that could affect the fluid transport through the fault zone. Furthermore, we determined the main transport mechanisms and the circumstances that would allow fracking fluid or methane to migrate through the fault zone into geological layers. The results show that the presence of a conductive fault could reduce the contaminant travel time and that, under certain hydraulic conditions, significant contaminant leakage is likely to occur. Bibliography Kissinger, A., Helmig, R., Ebigbo, A., Class, H., Lange, T., Sauter, M., Heitfeld, M., Klünker, J., Jahnke, W., 2013. Hydraulic fracturing in unconventional gas reservoirs: risks in the geological system, part 2. Environ Earth Sci 70, 3855

  13. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    Science.gov (United States)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power-circuit protection strategies, which have inherent limitations in fault ride-through capability, a control algorithm is proposed that limits the DC-link voltage rise, whose dynamics in turn directly influence the characteristics of the rotor voltage, especially during grid faults. The model results obtained compare favorably with simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with reasonable accuracy, the nature of the DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current-control bandwidth and the filter inductance.
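
    For background, the DC-link voltage rise described here follows from the energy balance of the DC-link capacitor in the back-to-back converter; this is a textbook relation, not the specific control model developed in the paper.

```latex
C_{dc}\,V_{dc}\,\frac{dV_{dc}}{dt} \;=\; P_r - P_g
```

    Here P_r is the power delivered by the rotor-side converter and P_g the power exported by the grid-side converter. During a grid fault P_g collapses while P_r persists, so V_dc rises unless the controller (or a damping resistor) absorbs the surplus energy.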

  14. Considering the Fault Dependency Concept with Debugging Time Lag in Software Reliability Growth Modeling Using a Power Function of Testing Time

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon, and new models continue to be proposed that can fit a greater number of reliability growth curves. Often, it is assumed in these mathematical models that detected faults are immediately corrected. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique used, and so on. Thus, a detected fault need not be immediately removed; its removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and show how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the power function of testing time concept, we propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed upon a failure being observed. Dependent faults, however, are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity and applicability.
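
    As a concrete illustration of the debugging-time-lag idea (the classic delayed S-shaped SRGM, given here as a generic example rather than one of the four models proposed in the paper), the expected cumulative number of removed faults can be written as m(t) = a[1 - (1 + bt)e^(-bt)], where the (1 + bt) factor delays removal relative to detection. A minimal sketch:

```python
import numpy as np

def delayed_s_shaped_mvf(t, a, b):
    """Mean value function of the classic delayed S-shaped SRGM.

    a : expected total number of faults in the software
    b : fault detection/removal rate per fault
    The (1 + b*t) factor models the lag between detecting a fault
    and actually removing it.
    """
    t = np.asarray(t, dtype=float)
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# Example: expected cumulative faults removed after 0, 10, ..., 50 test-weeks
weeks = np.arange(0, 60, 10)
print(delayed_s_shaped_mvf(weeks, a=120.0, b=0.15))
```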

  15. Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).

    Science.gov (United States)

    Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.

    2017-04-01

    The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired on Majella Mountain, in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive fluid migration it records in the form of tar. Faults are mechanical features that cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is treated as a porous medium. Our model uses the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), that generates a three-dimensional discrete fracture network (DFN) of the damage zones of the fault and characterizes its hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum domain of the core. The field investigations and the basic computational workflow are described, along with preliminary results of fluid flow simulation at the scale of the fault.

  16. Heterogeneous slip and rupture models of the San Andreas fault zone based upon three-dimensional earthquake tomography

    Energy Technology Data Exchange (ETDEWEB)

    Foxall, William [Univ. of California, Berkeley, CA (United States)

    1992-11-01

    Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the central creeping section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data, to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.

  17. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    Science.gov (United States)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied both to past earthquake locations and to fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity, based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and of SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
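
    For reference, the doubly truncated Gutenberg-Richter distribution whose a- and b-values are fitted here has the standard cumulative form (notation assumed, not taken from the paper):

```latex
N(m \ge M) \;=\; N_{\min}\,
\frac{10^{-b\,(M - M_{\min})} - 10^{-b\,(M_{\max} - M_{\min})}}
     {1 - 10^{-b\,(M_{\max} - M_{\min})}},
\qquad M_{\min} \le M \le M_{\max}
```

    Here N_min is the annual rate of events with magnitude at least M_min (related to the a-value by N_min = 10^(a − b·M_min)) and M_max is the upper truncation magnitude.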

  18. Thrust fault modeling and Late-Noachian lithospheric structure of the circum-Hellas region, Mars

    Science.gov (United States)

    Egea-Gonzalez, Isabel; Jiménez-Díaz, Alberto; Parro, Laura M.; López, Valle; Williams, Jean-Pierre; Ruiz, Javier

    2017-05-01

    The circum-Hellas area of Mars borders Hellas Planitia, a giant impact basin, ∼4.0-4.2 Ga old, that forms the deepest and broadest depression on Mars, and is characterized by a complex pattern of fracture sets, lobate scarps, grabens, and volcanic plains. The numerous lobate scarps in the circum-Hellas region formed mainly in the Late Noachian and, with the exception of Amenthes Rupes, have been scarcely studied. In this work, we study the mechanical behavior and thermal structure of the crust in the circum-Hellas region at the time of lobate scarp formation, through modeling of the depth of faulting beneath several prominent lobate scarps. We obtain faulting depths between ∼13 and 38 km, depending on the lobate scarp and accounting for uncertainty. These results indicate low surface and mantle heat flows in Noachian to Early Hesperian times, in agreement with heat flow estimates derived from lithospheric strength for several regions of similar age on Mars. Furthermore, the faulting depths and associated heat flows do not depend on the local crustal thickness, which supports a stratified crust in the circum-Hellas region, with heat-producing elements concentrated in an upper layer that is thinner than the whole crust.

  19. Numerical modeling of fracking fluid migration through fault zones and fractures in the North German Basin

    Science.gov (United States)

    Pfunt, Helena; Houben, Georg; Himmelsbach, Thomas

    2016-09-01

    Gas production from shale formations by hydraulic fracturing has raised concerns about the effects on the quality of fresh groundwater. The migration of injected fracking fluids towards the surface was investigated in the North German Basin, based on the known standard lithology. This included cases with natural preferential pathways such as permeable fault zones and fracture networks. Conservative assumptions were applied in the simulation of flow and mass transport triggered by a high pressure boundary of up to 50 MPa excess pressure. The results show no significant fluid migration for a case with undisturbed cap rocks and a maximum of 41 m vertical transport within a permeable fault zone during the pressurization. Open fractures, if present, strongly control the flow field and migration; here vertical transport of fracking fluids reaches up to 200 m during hydraulic fracturing simulation. Long-term transport of the injected water was simulated for 300 years. The fracking fluid rises vertically within the fault zone up to 485 m due to buoyancy. Progressively, it is transported horizontally into sandstone layers, following the natural groundwater flow direction. In the long-term, the injected fluids are diluted to minor concentrations. Despite the presence of permeable pathways, the injected fracking fluids in the reported model did not reach near-surface aquifers, either during the hydraulic fracturing or in the long term. Therefore, the probability of impacts on shallow groundwater by the rise of fracking fluids from a deep shale-gas formation through the geological underground to the surface is small.

  20. Late Quaternary sinistral slip rate along the Altyn Tagh fault and its structural transformation model

    Institute of Scientific and Technical Information of China (English)

    XU; Xiwei; P.; Tapponnier; J.; Van; Der; Woerd; F.; J.; Ryer

    2005-01-01

    Based on technical processing of high-resolution SPOT images and aerial photographs, detailed mapping of offset landforms in combination with field examination and displacement measurement, and dating of offset geomorphic surfaces using radiocarbon (14C), cosmogenic nuclide (10Be, 26Al) and thermoluminescence (TL) methods, the Holocene sinistral slip rates on different segments of the Altyn Tagh Fault (ATF) are obtained. The slip rates reach 17.5±2 mm/a on the central and western segments west of Aksay Town, 11±3.5 mm/a on the Subei-Shibaocheng segment, 4.8±1.0 mm/a on the Sulehe segment and only 2.2±0.2 mm/a on the Kuantanshan segment, the easternmost segment of the ATF. The points of abrupt decrease in sinistral slip rate are located at the Subei, Shibaocheng and Shulehe triple junctions, where NW-trending active thrust faults splay from the ATF and propagate southeastward. Slip vector analyses indicate that the loss of sinistral slip rate from west to east across a triple junction has been structurally transformed into local crustal shortening perpendicular to the active thrust faults and strong uplift of the thrust sheets to form the NW-trending Danghe Nanshan, Daxueshan and Qilianshan Ranges. Therefore, the eastward extrusion of the northern Qinghai-Tibetan Plateau is limited, in accord with the "imbricated thrusting transformation-limited extrusion model".

  1. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    Science.gov (United States)

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms, based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
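
    As a rough illustration of the uniformity measure described above (a sketch under assumed notation, not the authors' implementation), the exhaust gas temperatures can be normalized into a probability-like vector whose Shannon entropy is maximal when the temperatures are uniform and decreases when one gas path runs hot or cold:

```python
import numpy as np

def exhaust_temperature_entropy(temps):
    """Normalized Shannon entropy of a set of exhaust gas temperatures.

    temps : array of exhaust gas temperatures, one per thermocouple.
    Returns a value in (0, 1]; 1 means perfectly uniform temperatures,
    lower values indicate a non-uniform (possibly faulty) gas path.
    """
    t = np.asarray(temps, dtype=float)
    p = t / t.sum()                      # normalize to a probability-like vector
    h = -np.sum(p * np.log(p))           # Shannon entropy
    return h / np.log(len(t))            # scale by the maximum possible entropy

print(exhaust_temperature_entropy([610, 612, 608, 611, 609, 610]))  # ~1.0 for a uniform pattern
print(exhaust_temperature_entropy([610, 612, 540, 611, 609, 610]))  # slightly lower: one cold path
```

    Because absolute temperatures differ only slightly between thermocouples, the drop in this normalized entropy for a faulty path is small in absolute terms; in practice a threshold (or the kernelized extension described in the paper) is applied to the change rather than to the raw value.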

  2. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    Directory of Open Access Journals (Sweden)

    Weiying Wang

    2014-01-01

    Full Text Available Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms, based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.

  3. Blind identification of threshold auto-regressive model for machine fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Zhinong; HE Yongyong; CHU Fulei; WU Zhaotong

    2007-01-01

    A blind identification method was developed for the threshold auto-regressive (TAR) model. The method had good identification accuracy and rapid convergence, especially for higher order systems. The proposed method was then combined with the hidden Markov model (HMM) to determine the auto-regressive (AR) coefficients for each interval used for feature extraction, with the HMM as a classifier. The fault diagnoses during the speed-up and speed-down processes for rotating machinery have been successfully completed. The result of the experiment shows that the proposed method is practical and effective.

  4. Examining the Evolution of the Peninsula Segment of the San Andreas Fault, Northern California, Using a 4-D Geologic Model

    Science.gov (United States)

    Horsman, E.; Graymer, R. W.; McLaughlin, R. J.; Jachens, R. C.; Scheirer, D. S.

    2008-12-01

    Retrodeformation of a three-dimensional geologic model allows us to explore the tectonic evolution of the Peninsula segment of the San Andreas Fault and adjacent rock bodies in the San Francisco Bay area. By using geological constraints to quantitatively retrodeform specific surfaces (e.g. unfolding paleohorizontal horizons, removing fault slip), we evaluate the geometric evolution of rock bodies and faults in the study volume and effectively create a four-dimensional model of the geology. The three-dimensional map is divided into fault-bounded blocks and subdivided into lithologic units. Surface geologic mapping provides the foundation for the model. Structural analysis and well data allow extrapolation to a few kilometers depth. Geometries of active faults are inferred from double-difference relocated earthquake hypocenters. Gravity and magnetic data provide constraints on the geometries of low density Cenozoic deposits on denser basement, highly magnetic marker units, and adjacent faults. Existing seismic refraction profiles constrain the geometries of rock bodies with different seismic velocities. Together these datasets and others allow us to construct a model of first-order geologic features in the upper ~15 km of the crust. Major features in the model include the active San Andreas Fault surface; the Pilarcitos Fault, an abandoned strand of the San Andreas; an active NE-vergent fold and thrust belt located E of the San Andreas Fault; regional relief on the basement surface; and several Cenozoic syntectonic basins. Retrodeformation of these features requires constraints from all available datasets (structure, geochronology, paleontology, etc.). Construction of the three-dimensional model and retrodeformation scenarios are non-unique, but significant insights follow from restricting the range of possible geologic histories. For example, we use the model to investigate how the crust responded to migration of the principal slip surface from the Pilarcitos Fault

  5. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  6. A fault runs through it: Modeling the influence of rock strength and grain-size distribution in a fault-damaged landscape

    Science.gov (United States)

    Roy, S. G.; Tucker, G. E.; Koons, P. O.; Smith, S. M.; Upton, P.

    2016-10-01

    We explore two ways in which the mechanical properties of rock potentially influence fluvial incision and sediment transport within a watershed: rock erodibility is inversely proportional to rock cohesion, and fracture spacing influences the initial grain sizes produced upon erosion. Fault-weakened zones show these effects well because of the sharp strength gradients associated with localized shear abrasion. A natural example of fault erosion is used to motivate our calibration of a generalized landscape evolution model. Numerical experiments are used to study the sensitivity of river erosion and transport processes to variable degrees of rock weakening. In the experiments, rapid erosion and transport of fault gouge steers surface runoff, causing high-order channels to become confined within the structure of weak zones when the relative degree of rock weakening exceeds 1 order of magnitude. Erosion of adjacent, intact bedrock produces relatively coarser grained gravels that accumulate in the low relief of the eroded weak zone. The thickness and residence time of sediments stored there depends on the relief of the valley, which in these models depends on the degree of rock weakening. The frequency with which the weak zone is armored by bed load increases with greater weakening, causing the bed load to control local channel slope. Conversely, small tributaries feeding into the weak zone are predominantly detachment limited. Our results indicate that mechanical heterogeneity can exert strong controls on rates and patterns of erosion and should be considered in future landscape evolution studies to better understand the role of heterogeneity in structuring landscapes.
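
    A common way to encode the first of these controls, assuming the usual detachment-limited stream-power incision rule (the model's exact formulation may differ), is to let the erodibility coefficient scale inversely with rock cohesion:

```latex
E = K\,A^{m}\,S^{n}, \qquad K \propto \frac{1}{C}
```

    Here E is the fluvial incision rate, A the upstream drainage area, S the local channel slope and C the rock cohesion, so fault-weakened rock (low C, high K) erodes faster for the same discharge and slope.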

  7. Sensor and Actuator Fault Detection and Isolation in Nonlinear System using Multi Model Adaptive Linear Kalman Filter

    Directory of Open Access Journals (Sweden)

    M. Manimozhi

    2014-05-01

    Full Text Available Fault detection and isolation (FDI) using a linear Kalman filter (LKF) is not sufficient for effective monitoring of nonlinear processes. Most chemical plants are nonlinear in nature when operated over a wide range of process variables. In this study we present an approach for designing a Multi Model Adaptive Linear Kalman Filter (MMALKF) for fault detection and isolation of a nonlinear system. The MMALKF uses a bank of adaptive Kalman filters, with each model based on a different fault hypothesis. The effectiveness of the MMALKF is demonstrated on a spherical tank system. The proposed method detects and isolates sensor and actuator soft faults that occur sequentially or simultaneously.
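
    The following toy sketch illustrates the bank-of-filters idea in its simplest scalar form: each Kalman filter assumes a different additive sensor-bias hypothesis, and the hypothesis whose innovations are most consistent with the data accumulates the highest likelihood. The plant model, noise levels and bias values are illustrative assumptions, not the spherical-tank system or the adaptive filters of the paper.

```python
import numpy as np

def kalman_bank_fdi(u, z, biases, a=0.9, q=1e-4, r=0.01):
    """Toy multi-model FDI: a bank of scalar Kalman filters, one per
    hypothesized additive sensor bias (0.0 = healthy sensor).

    u, z   : known input sequence and measured output sequence
    biases : list of hypothesized sensor biases
    Returns the index of the hypothesis with the largest innovation
    log-likelihood, i.e. the diagnosed fault, plus all log-likelihoods.
    """
    n = len(biases)
    x_hat = np.zeros(n)          # state estimate per hypothesis
    p = np.ones(n)               # estimate variance per hypothesis
    loglik = np.zeros(n)
    for uk, zk in zip(u, z):
        for i, b in enumerate(biases):
            # time update with the known plant model x+ = a*x + u + w
            x_hat[i] = a * x_hat[i] + uk
            p[i] = a * a * p[i] + q
            # measurement update under hypothesis i: z = x + b + v
            y = zk - (x_hat[i] + b)                  # innovation
            s = p[i] + r                             # innovation variance
            loglik[i] += -0.5 * (np.log(2 * np.pi * s) + y * y / s)
            k = p[i] / s
            x_hat[i] += k * y
            p[i] *= 1.0 - k
    return int(np.argmax(loglik)), loglik

# Simulate a first-order plant whose sensor carries a +0.5 bias fault
rng = np.random.default_rng(1)
u = np.ones(300)
x, z = 0.0, []
for uk in u:
    x = 0.9 * x + uk + 0.01 * rng.standard_normal()
    z.append(x + 0.5 + 0.1 * rng.standard_normal())  # biased, noisy sensor
best, ll = kalman_bank_fdi(u, z, biases=[0.0, +0.5, -0.5])
print("diagnosed bias hypothesis:", best)            # expected: 1 (the +0.5 bias)
```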

  8. Micromechanics and statistics of slipping events in a granular seismic fault model

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, L de [Department of Information Engineering and CNISM, Second University of Naples, Aversa (Italy); Ciamarra, M Pica [CNR-SPIN, Dipartimento di Scienze Fisiche, Universita di Napoli Federico II (Italy); Lippiello, E; Godano, C, E-mail: dearcangelis@na.infn.it [Department of Environmental Sciences and CNISM, Second University of Naples, Caserta (Italy)

    2011-09-15

    Stick-slip is investigated in a seismic fault model made of a confined granular system under shear stress, via three-dimensional molecular dynamics simulations. We study the statistics of slipping events and, in particular, the dependence of the distribution on model parameters. The distribution consistently exhibits two regimes: an initial power law and a bump at large slips. The initial power-law decay is in agreement with the Gutenberg-Richter law characterizing real seismic occurrence. The exponent of the initial regime is largely independent of model parameters and its value is in agreement with experimental results. Conversely, the position of the bump is controlled solely by the ratio of the drive elastic constant to the system size. Large slips also become less probable in the absence of fault gouge and tend to disappear for stiff drives. A two-time force-force correlation function, and a susceptibility related to the system response to pressure changes, characterize the micromechanics of slipping events. The correlation function unveils the micromechanical changes occurring both during microslips and slips. The mechanical susceptibility encodes the magnitude of the incoming microslip. Numerical results for the cellular-automaton version of the spring-block model confirm the parameter dependence observed for the size distribution in the granular model.

  9. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    Science.gov (United States)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with spatial aliasing at hand is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC.
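
    For context, the baseline estimator that the proposed technique accelerates is the standard full-search MUSIC pseudo-spectrum for a ULA; the sketch below implements only that baseline (array size, spacing and angular grid are illustrative assumptions), not the Kronecker-product reconstruction or the compressed sector search of the paper.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """Standard (full-search) MUSIC pseudo-spectrum for a uniform linear array.

    X         : m x N matrix of array snapshots (m sensors, N snapshots)
    n_sources : assumed number of incident signals
    d         : inter-element spacing in wavelengths
    Returns (grid, spectrum); DOAs correspond to peaks of the spectrum.
    """
    m, n = X.shape
    R = X @ X.conj().T / n                            # sample covariance matrix
    _, eigvec = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]                   # noise-subspace eigenvectors
    p = np.empty(grid.size)
    for i, theta in enumerate(np.deg2rad(grid)):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))   # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return grid, p

# Two sources at -20 and 35 degrees, 8-element half-wavelength ULA, 200 snapshots
rng = np.random.default_rng(0)
m, n, doas = 8, 200, np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
grid, p = music_spectrum(X, n_sources=2)
loc_max = [i for i in range(1, grid.size - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
top2 = sorted(loc_max, key=lambda i: p[i])[-2:]
print(sorted(grid[i] for i in top2))                  # expected: close to [-20.0, 35.0]
```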

  10. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    Science.gov (United States)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary also to consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact will also be addressed.

  11. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    Science.gov (United States)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty of both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel-shaft gearboxes. In this paper we further develop the approach so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox is introduced which incorporates the possibility of simulating tooth faults, as well as any subsequent loss of tooth contact due to these faults. By combining this model with a simple space-phasor model of an induction motor, it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and to angular displacements recorded from a shaft-mounted encoder validates this finding. Comparison between experiments and theory highlights the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
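
    The core of the synchronous signal averaging referred to here can be sketched as follows: after resampling the current (or vibration) signal to a fixed integer number of samples per cycle, complete cycles are stacked and averaged, so that components locked to the cycle are reinforced while asynchronous content and noise cancel. The cycle length and fault signature below are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

def synchronous_average(signal, samples_per_cycle):
    """Time-synchronous average of a uniformly resampled signal.

    signal            : 1-D array, already resampled (e.g. via an encoder) so
                        that each cycle spans an integer number of samples
    samples_per_cycle : number of samples in one cycle
    """
    x = np.asarray(signal, dtype=float)
    n_cycles = x.size // samples_per_cycle
    cycles = x[: n_cycles * samples_per_cycle].reshape(n_cycles, samples_per_cycle)
    return cycles.mean(axis=0)

# Example: a once-per-cycle tooth-fault transient buried in broadband noise
rng = np.random.default_rng(0)
spc, n_cycles = 256, 400
fault = np.zeros(spc)
fault[100:110] = 1.0                                  # short transient at a fixed angle
raw = np.tile(fault, n_cycles) + rng.standard_normal(spc * n_cycles)
avg = synchronous_average(raw, spc)
# raw noise std ~1, averaged fault band ~1.0, residual noise std ~1/sqrt(400)
print(raw.std(), avg[100:110].mean(), avg[np.r_[0:100, 110:256]].std())
```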

  12. Fault diagnosis of locomotive electro-pneumatic brake through uncertain bond graph modeling and robust online monitoring

    Science.gov (United States)

    Niu, Gang; Zhao, Yajun; Defoort, Michael; Pecht, Michael

    2015-01-01

    To improve reliability, safety and efficiency, advanced methods of fault detection and diagnosis have become increasingly important in many technical fields, especially for safety-related complex systems such as aircraft, trains, automobiles, power plants and chemical plants. This paper presents a robust fault detection and diagnostic scheme for a multi-energy-domain system that integrates a model-based strategy for system fault modeling with a data-driven approach for online anomaly monitoring. The developed scheme uses an LFT (linear fractional transformation)-based bond graph for modeling physical parameter uncertainty and simulating faults, and employs AAKR (auto-associative kernel regression)-based empirical estimation followed by SPRT (sequential probability ratio test)-based threshold monitoring to improve the accuracy of fault detection. Moreover, pre- and post-denoising processes are applied to eliminate the cumulative influence of parameter uncertainty and measurement uncertainty. The scheme is demonstrated on the main unit of a locomotive electro-pneumatic brake in a simulated experiment. The results show robust fault detection and diagnostic performance.
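
    A minimal sketch of the AAKR estimation step mentioned above (the kernel bandwidth and example signals are assumptions; the paper combines this estimator with bond-graph simulation and SPRT-based threshold monitoring):

```python
import numpy as np

def aakr_estimate(memory, query, h=1.0):
    """Auto-associative kernel regression (AAKR) reconstruction.

    memory : (n_obs, n_signals) matrix of fault-free historical observations
    query  : (n_signals,) current observation vector
    h      : Gaussian kernel bandwidth (assumed tuning parameter)
    Returns the kernel-weighted reconstruction of the query; the residual
    (query - estimate) is what a detector such as an SPRT would monitor.
    """
    memory = np.asarray(memory, dtype=float)
    query = np.asarray(query, dtype=float)
    d2 = np.sum((memory - query) ** 2, axis=1)         # squared distances to memory vectors
    w = np.exp(-d2 / (2.0 * h ** 2))                   # Gaussian kernel weights
    w /= w.sum()
    return w @ memory                                  # weighted average of memory vectors

# Healthy training data: two correlated pressure/temperature-like signals
rng = np.random.default_rng(0)
s1 = rng.uniform(0, 10, 500)
healthy = np.column_stack([s1, 2.0 * s1 + rng.normal(0, 0.1, 500)])
query = np.array([5.0, 13.0])                          # second signal drifted high
estimate = aakr_estimate(healthy, query, h=1.0)
print(estimate, query - estimate)                      # nonzero residual flags the drift
```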

  13. From detachment to transtensional faulting: A model for the Lake Mead extensional domain based on new ages and correlation of subbasins

    Science.gov (United States)

    Beard, L.; Umhoefer, P. J.; Martin, K. L.; Blythe, N.

    2007-12-01

    New studies of selected basins in the Miocene extensional belt of the northern Lake Mead domain suggest a new model for the early extensional history of the region (lower Horse Spring Formation and correlative strata). Critical data are from (i) the Longwell Ridges area west of Overton Arm and within the Lake Mead fault system, (ii) the Salt Spring Wash basin in the hanging wall of the South Virgin-White Hills detachment (SVWHD) fault, and (iii) previously studied subbasins of the south Virgin Mountains in the Gold Butte step-over region. The basins and faulting patterns suggest two stages of basin development related to two distinct faulting episodes: an early period of detachment faulting followed by a switch to faulting mainly along the Lake Mead transtensional fault system while detachment faulting waned. Apatite fission track ages suggest the footwall block of the SVWHD was cooling at 18-17 Ma, but the only evidence for basin deposition at that time is in the Gold Butte step-over, where slow rates of sedimentation and facies patterns make faulting on the north side of the Gold Butte block ambiguous. The first basin stage was ca. 16.5 to 15.5 Ma, during which there was slow to moderate faulting and subsidence in a basin along the SVWHD and north of the Gold Butte block in the Gold Butte step-over basin; the step-over basin had complex fluvial and lacustrine facies and was synchronous with landslides and debris flows in front of the SVWHD. At ca. 15.5-14.5 Ma, there was a dramatic increase in sedimentation rate related to formation of the Gold Butte fault, a change from lacustrine to widespread fluvial, playa, and local landslide facies in the step-over basin, and the peak of exhumation and faulting rates on the SVWHD. The simple step-over basin broke up into numerous subbasins as initial faults of the Lake Mead fault system formed. From 14.5 to 14.0 Ma, there was completion of a major change from dominantly detachment faulting to dominantly transtensional faulting

  14. Tsunamigenic earthquakes in the Gulf of Cadiz: fault model and recurrence

    Directory of Open Access Journals (Sweden)

    L. M. Matias

    2013-01-01

    Full Text Available The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of large earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat posed by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose a generation model of destructive tsunamis to be applied in tsunami warnings. The derived fault model is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012) from thin-sheet neotectonic modelling. Finally, we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.

  15. Nonlinear dynamic modeling of a helicopter planetary gear train for carrier plate crack fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    Fan Lei; Wang Shaoping; Wang Xingjian; Han Feng; Lyu Huawei

    2016-01-01

    Planetary gear train plays a significant role in a helicopter operation and its health is of great importance for the flight safety of the helicopter. This paper investigates the effects of a planet carrier plate crack on the dynamic characteristics of a planetary gear train, and thus finds an effective method to diagnose crack fault. A dynamic model is developed to analyze the torsional vibration of a planetary gear train with a cracked planet carrier plate. The model takes into consideration nonlinear factors such as the time-varying meshing stiffness, gear backlash and viscous damping. Investigation of the deformation of the cracked carrier plate under static stress is performed in order to simulate the dynamic effects of the planet carrier crack on the angular displacement of carrier posts. Validation shows good accuracy of the developed dynamic model in predicting dynamic characteristics of a planetary gear train. Fault features extracted from predictions of the model reveal the correspondence between vibration characteristic and the conditions (length and position) of a planet carrier crack clearly.

  16. Nonlinear dynamic modeling of a helicopter planetary gear train for carrier plate crack fault diagnosis

    Directory of Open Access Journals (Sweden)

    Fan Lei

    2016-06-01

    Full Text Available Planetary gear train plays a significant role in a helicopter operation and its health is of great importance for the flight safety of the helicopter. This paper investigates the effects of a planet carrier plate crack on the dynamic characteristics of a planetary gear train, and thus finds an effective method to diagnose crack fault. A dynamic model is developed to analyze the torsional vibration of a planetary gear train with a cracked planet carrier plate. The model takes into consideration nonlinear factors such as the time-varying meshing stiffness, gear backlash and viscous damping. Investigation of the deformation of the cracked carrier plate under static stress is performed in order to simulate the dynamic effects of the planet carrier crack on the angular displacement of carrier posts. Validation shows good accuracy of the developed dynamic model in predicting dynamic characteristics of a planetary gear train. Fault features extracted from predictions of the model reveal the correspondence between vibration characteristic and the conditions (length and position) of a planet carrier crack clearly.

  17. Geomechanical Modeling of Fault Responses and the Potential for Notable Seismic Events during Underground CO2 Injection

    Science.gov (United States)

    Rutqvist, J.; Cappa, F.; Mazzoldi, A.; Rinaldi, A.

    2012-12-01

    The importance of geomechanics associated with large-scale geologic carbon storage (GCS) operations is now widely recognized. There are concerns related to the potential for triggering notable (felt) seismic events and to how such events could impact the long-term integrity of a CO2 repository (as well as the public perception of GCS). In this context, we review a number of modeling studies and field observations related to the potential for injection-induced fault reactivation and seismic events. We present recent model simulations of CO2 injection and fault reactivation, including both aseismic and seismic fault responses. The model simulations were conducted using a slip-weakening fault model enabling sudden (seismic) fault rupture, and some of the numerical analyses were extended to fully dynamic modeling of the seismic source, wave propagation, and ground motion. The model simulations illustrate what it would take to create a magnitude 3 or 4 earthquake that would not result in any significant damage at the ground surface, but could raise concerns in the local community and could also affect the deep containment of the stored CO2. The analyses show that the local in situ stress field, fault orientation, fault strength, and injection-induced overpressure are critical factors in determining the likelihood and magnitude of such an event. We would like to clarify, though, that in our modeling we had to apply very high injection pressures to intentionally induce any fault reactivation. Consequently, our model simulations represent extreme cases, which in a real GCS operation could be avoided by estimating the maximum sustainable injection pressure and carefully controlling the injection pressure. In fact, no notable seismic event has been reported from any of the current CO2 storage projects, although some unfelt microseismic activity has been detected by geophones. On the other hand, potential future commercial GCS operations from large power plants

  18. Imprecise Computation Based Real-time Fault Tolerant Implementation for Model Predictive Control

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Model predictive control (MPC) cannot readily be deployed in real-time control systems because its computation time is not well defined. A real-time fault-tolerant implementation algorithm based on imprecise computation is proposed for MPC, following the solution process of the quadratic programming (QP) problem. In this algorithm, system stability is guaranteed even when the available computation resource is not enough to finish the optimization completely. Through this kind of graceful degradation, the behavior of the real-time control system remains predictable and determinate. The algorithm is demonstrated by experiments on a servomotor, and the simulation results show its effectiveness.

  19. Fault tree modeling of AAC power source in multi-unit nuclear power plants PSA

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Lim, Ho-Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Dependencies between units are important when estimating the risk of a multi-unit site. One such dependency is a shared system such as an alternating AC (AAC) power source. Because one AAC can support only a single unit, it is necessary to treat this behavior of the AAC appropriately in multi-unit probabilistic safety assessment (PSA). The behavior of the AAC in a multi-unit site shows dynamic characteristics. For example, several units may require the AAC at the same time, and it is hard to decide which unit the AAC is connected to. This can vary depending on the timing of station blackout (SBO), with a time delay when emergency diesel generators fail while running. It is not easy to handle such dynamic behavior using the static fault tree methodology. A typical way of estimating the risk for multiple units with regard to the AAC is to assume that only one unit has the AAC and the others do not. Kim calculates the risk for each unit and uses the average value from the results. Jung derives an equation to calculate the SBO frequency by considering all combinations of loss of offsite power and failure of emergency diesel generators in a multi-unit site; there, too, it is assumed that the AAC is connected to a pre-decided unit. We are developing a PSA model for a multi-unit site for internal and external events. An extreme external hazard may result in loss of all offsite power in a site, where the appropriate modeling of an AAC becomes important. The static fault tree methodology is not well suited to dynamic situations, but the problem becomes simple if one assumption is made: the connecting order of the AAC is pre-decided. This study provides an idea of how to model the AAC for each unit in the form of a fault tree, assuming the connecting order of the AAC is given. It illustrates how to model a fault tree for the AAC in a multi-unit site and provides an idea of how to handle a shared system in multi-unit PSA, for such a case as loss of all offsite power in a site due to an extreme external hazard.

  20. Effect of Pore Pressure on Slip Failure of an Impermeable Fault: A Coupled Micro Hydro-Geomechanical Model

    Science.gov (United States)

    Yang, Z.; Juanes, R.

    2015-12-01

    The geomechanical processes associated with subsurface fluid injection/extraction are of central importance for many industrial operations related to energy and water resources. However, the mechanisms controlling the stability and slip motion of a preexisting geologic fault remain poorly understood and are critical for the assessment of seismic risk. In this work, we develop a coupled hydro-geomechanical model to investigate the effect of a fluid-injection-induced pressure perturbation on the slip behavior of a sealing fault. The model couples single-phase flow in the pores with the mechanics of the solid phase. Granular packs (see example in Fig. 1a) are numerically generated in which the grains can be either bonded or not, depending on the degree of cementation. A pore network is extracted for each granular pack, with pore body volumes and pore throat conductivities calculated rigorously based on the geometry of the local pore space. The pore fluid pressure is solved via an explicit scheme, taking into account the deformation of the solid matrix. The mechanics part of the model is solved using the discrete element method (DEM). We first test the validity of the model against the classical one-dimensional consolidation problem, for which an analytical solution exists. We then demonstrate the ability of the coupled model to reproduce rock deformation behavior measured in triaxial laboratory tests under the influence of pore pressure. We proceed to study fault stability in the presence of a pressure discontinuity across the impermeable fault, which is implemented as a plane whose intersected pore throats are deactivated, thus obstructing fluid flow (Fig. 1b, c). We focus on the onset of shear failure along preexisting faults and discuss the fault stability criterion in light of the numerical results obtained from the DEM simulations coupled with pore fluid flow. The implications for how faults should be treated in a large-scale continuum model are also presented.
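
    The slip-failure condition examined here is usually expressed, in continuum terms, through the Coulomb criterion with the effective normal stress reduced by pore pressure; this is the standard statement of the criterion, not the DEM-specific contact law used in the simulations.

```latex
\tau \;\ge\; c + \mu\,(\sigma_n - p)
```

    Here τ and σ_n are the shear and normal stresses resolved on the fault plane, p the pore fluid pressure on the pressurized side, μ the friction coefficient and c the cohesion; injection raises p, lowers the effective normal stress and can bring a critically stressed fault to failure.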

  1. Kinematic source model for simulation of near-fault ground motion field using explicit finite element method

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiaozhi; Hu Jinjun; Xie Lili; Wang Haiyun

    2006-01-01

    This paper briefly reviews the characteristics and major processes of the explicit finite element method in modeling the near-fault ground motion field. The emphasis is on the finite element-related problems in the finite fault source modeling. A modified kinematic source model is presented, in which vibration with some high frequency components is introduced into the traditional slip time function to ensure that the source and ground motion include sufficient high frequency components. The model presented is verified through a simple modeling example. It is shown that the predicted near-fault ground motion field exhibits similar characteristics to those observed in strong motion records, such as the hanging wall effect, vertical effect, fling step effect and velocity pulse effect, etc.

  2. Hierarchical Fault Diagnosis for a Hybrid System Based on a Multidomain Model

    Directory of Open Access Journals (Sweden)

    Jiming Ma

    2015-01-01

    Full Text Available The diagnosis procedure is performed by integrating three steps: multidomain modeling, event identification, and failure event classification. A multidomain model can describe the normal and fault behaviors of hybrid systems efficiently and can meet the diagnosis requirements of hybrid systems. The multidomain model is then used in simulation to obtain responses under different failure events; the responses are further utilized as a priori information when training the event identification library. Finally, a brushless DC motor is selected as the study case. The experimental results indicate that the proposed method can identify both known and unknown failure events of the studied system. In particular, for a system with less response information under a failure event, the accuracy of diagnosis tends to be higher. The presented method integrates the advantages of current quantitative and qualitative diagnostic procedures and can distinguish between failures caused by parametric faults and those caused by abrupt structural faults. Another advantage of our method is that it can remember unknown failure types and automatically extend the adaptive resonance theory neural network library, which is extremely useful for complex hybrid systems.

  3. Estimation of fault parameters using GRACE observations and analytical model. Case study: The 2010 Chile earthquake

    Science.gov (United States)

    Fatolazadeh, Farzam; Naeeni, Mehdi Raoofian; Voosoghi, Behzad; Rahimi, Armin

    2017-07-01

    In this study, an inversion method is used to constrain the fault parameters of the 2010 Chile earthquake using gravimetric observations. The formulation uses monthly geopotential coefficients from GRACE observations in conjunction with the analytical model of Okubo (1992), which accounts for the gravity changes resulting from an earthquake. First, it is necessary to eliminate the hydrological and oceanic effects from the GRACE monthly coefficients; then a spatio-spectral localization analysis, based on local wavelet analysis, is used to filter the GRACE observations and better isolate the tectonic signal. Finally, the corrected GRACE observations are compared with the analytical model using a nonlinear inversion algorithm. Our results show discernible differences between the average slip computed from gravity observations and that predicted by other co-seismic models. In this study, fault parameters such as length, width, depth, dip, strike and slip are computed using the changes in gravity and gravity gradient components. Using the variations of the gravity gradient components, the above-mentioned parameters are determined as 428 ± 6 km, 203 ± 5 km, 5 km, 10°, 13° and 8 ± 1.2 m, respectively. Moreover, the values of the seismic moment and moment magnitude are 2.09 × 10²² N m and Mw 8.88, respectively, which show small differences from the values reported by the USGS (1.8 × 10²² N m and Mw 8.83).
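
    For reference, seismic moment and moment magnitude of the kind quoted above are conventionally related by the Hanks-Kanamori formula (the paper's exact conversion is not stated):

```latex
M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right), \qquad M_0\ \text{in N m}
```

    For M_0 ≈ 2 × 10²² N m this gives M_w ≈ 8.8, the order of the values reported here.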

  4. Electrical and thermal finite element modeling of arc faults in photovoltaic bypass diodes.

    Energy Technology Data Exchange (ETDEWEB)

    Bower, Ward Isaac; Quintana, Michael A.; Johnson, Jay

    2012-01-01

    Arc faults in photovoltaic (PV) modules have caused multiple rooftop fires. The arc generates a high-temperature plasma that ignites surrounding materials and subsequently spreads the fire to the building structure. While there are many possible locations in PV systems and PV modules where arcs could initiate, bypass diodes have been suspected of triggering arc faults in some modules. In order to understand the electrical and thermal phenomena associated with these events, a finite element model of a busbar and diode was created. Thermoelectrical simulations found that Joule and internal diode heating from normal operation would not normally cause bypass diode or solder failures. However, if corrosion increased the contact resistance in the solder connection between the busbar and the diode leads, enough voltage could be established to arc across micron-scale electrode gaps. Lastly, an analytical arc radiation model based on observed data was employed to predict polymer ignition times. The model predicted that polymer materials in the area adjacent to the diode and junction box ignite in less than 0.1 seconds.

  5. Fault detection and identification in dynamic systems with noisy data and parameter/modeling uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Dinca, Laurian; Aldemir, Tunc; Rizzoni, Giorgio

    1999-06-01

    A probabilistic approach is presented which can be used for the estimation of system parameters and unmonitored state variables towards model-based fault diagnosis in dynamic systems. The method can be used with any type of input-output model and can accommodate noisy data and/or parameter/modeling uncertainties. The methodology is based on Markovian representation of system dynamics in discretized state space. The example system used for the illustration of the methodology focuses on the intake, fueling, combustion and exhaust components of internal combustion engines. The results show that the methodology is capable of estimating the system parameters and tracking the unmonitored dynamic variables within user-specified magnitude intervals (which may reflect noise in the monitored data, random changes in the parameters or modeling uncertainties in general) within data collection time and hence has potential for on-line implementation.

  6. A numerical modelling approach to investigate the surface processes response to normal fault growth in multi-rift settings

    Science.gov (United States)

    Pechlivanidou, Sofia; Cowie, Patience; Finch, Emma; Gawthorpe, Robert; Attal, Mikael

    2016-04-01

    This study uses a numerical modelling approach to explore structural controls on erosional/depositional systems within rifts that are characterized by complex multiphase extensional histories. Multiphase rift-related topography is generated by a 3D discrete element model (Finch et al., Basin Res., 2004) of normal fault growth and is used to drive the landscape evolution model CHILD (Tucker et al., Comput. Geosci., 2001). Fault populations develop spontaneously in the discrete element model and grow by both tip propagation and segment linkage. We conduct a series of experiments to simulate the evolution of the landscape (55x40 km) produced by two extensional phases that differ in the direction and amount of extension. In order to isolate the effects of fault propagation on drainage network development, we conduct experiments where uplift/subsidence rates vary both in space and time as the fault array evolves and compare these results with experiments using a fixed fault array geometry with uplift/subsidence rates that vary only spatially. In many cases, areas of sediment deposition become uplifted, and vice versa, due to complex elevation changes with respect to sea level as the fault array develops. These changes from subaerial (erosional) to submarine (depositional) processes have implications for sediment volumes and sediment caliber as well as for the sediment routing systems across the rift. We also explore the consequences of changing the angle between the two phases of extension on the depositional systems and make a comparison with single-phase rift systems. Finally, we discuss the controls of different erodibilities on sediment supply and detachment-limited versus transport-limited end-member models for river erosion. Our results provide insights into the nature and distribution of sediment source areas and the sediment routing in rift systems where pre-existing rift topography and normal fault growth exert a fundamental control on

  7. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing the varied seismogenic sources found therein into OpenQuake's own definition. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of the software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake, we illustrate how seismic hazard calculations can, and do, take into account complexities such as the geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: which elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  8. Modeling the Fault Tolerant Capability of a Flight Control System: An Exercise in SCR Specification

    Science.gov (United States)

    Alexander, Chris; Cortellessa, Vittorio; DelGobbo, Diego; Mili, Ali; Napolitano, Marcello

    2000-01-01

    In life-critical and mission-critical applications, it is important to make provisions for a wide range of contingencies, by providing means for fault tolerance. In this paper, we discuss the specification of a flight control system that is fault tolerant with respect to sensor faults. Redundancy is provided by analytical relations that hold between sensor readings; depending on the conditions, this redundancy can be used to detect, identify and accommodate sensor faults.

  9. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors that only provide sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than
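
    The key consequence of sampling below the Nyquist rate is that true structural frequencies fold into the camera's measurable band. As a minimal illustration (not part of the paper's algorithm), the sketch below computes where a few hypothetical mode frequencies would appear when sampled by a 30 Hz camera; the folding relation is standard, the numbers are made up.

```python
# Where a true vibration frequency appears after uniform sub-Nyquist sampling.
# The folding relation is the standard aliasing rule; frequencies below are illustrative.

def aliased_frequency(f_true, fs):
    """Apparent frequency (in 0..fs/2) of a tone at f_true sampled uniformly at fs."""
    f_folded = f_true % fs                 # wrap into one sampling band
    return min(f_folded, fs - f_folded)    # fold into the Nyquist interval [0, fs/2]

fs = 30.0                                  # Hz, a typical affordable camera frame rate
for f in (4.0, 22.0, 47.5, 95.0):          # hypothetical structural mode frequencies
    print("true %6.1f Hz -> apparent %5.2f Hz at fs = %.0f Hz" % (f, aliased_frequency(f, fs), fs))
```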

  10. An objective mechanical modelling approach for estimating the distribution of fault creep and locking from geodetic data

    Science.gov (United States)

    Funning, Gareth; Burgmann, Roland

    2017-04-01

    Knowledge of the extents of locked areas on faults is a critical input to seismic hazard assessments, defining possible asperities for future earthquakes. On partially creeping faults, such as those found in California, Turkey and in several major subduction zones, these locked zones can be identified by studying the distribution and extent of creep on those faults. Such creep produces surface deformation that can be measured geodetically (e.g. by InSAR and GPS), and used as a constraint on geophysical models. We present a Markov Chain Monte Carlo method, based on mechanical boundary element modelling of geodetic data, for finding the extents of creeping fault areas. In our scheme, the surface of a partially-creeping fault is represented as a mesh of triangular elements, each of which is modelled as either locked or creeping (freely-slipping) using the boundary element code poly3d. Slip on the creeping elements of our fault mesh, and therefore elastic deformation of the surface, is driven by stresses imparted by semi-infinite faults beneath the base of the mesh (and any other faults in the region of interest) that slip at their geodetic interseismic slip rates. Starting from a random distribution of locked and unlocked patches, a modified Metropolis algorithm is used to propose changes to the locking state (i.e., from locked to creeping, or vice-versa) of randomly selected elements, retaining or discarding these based on a geodetic data misfit criterion; the succession of accepted models forms a Markov chain of model states. After a 'burn-in' period of a few hundred samples, these Markov chains sample a region of parameter space close to the minimum misfit configuration. By computing Markov chains of a million samples, we can realise multiple such well-fitting models, and look for robustly resolved features (i.e., features common to a majority of the models, and/or present in the mean of those models). We apply this method to a combination of persistent scatterer
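
    A heavily simplified sketch of the sampling idea described above is given below: binary locked/creeping states on fault elements are flipped one at a time and accepted or rejected with a Metropolis-style, misfit-based rule. The elastic forward model (poly3d in the study) is replaced here by a random linear operator, and the acceptance criterion is a generic chi-square likelihood, so this is an illustration of the algorithmic skeleton only, not the authors' implementation.

```python
# Metropolis-style sketch of sampling locked/creeping states on fault elements. The
# elastic forward model (poly3d in the study) is replaced by a random linear operator,
# and the acceptance rule is a generic chi-square criterion, not the authors' exact one.
import numpy as np

rng = np.random.default_rng(0)
n_elem, n_obs = 50, 120

G = rng.normal(size=(n_obs, n_elem))        # hypothetical Green's functions (obs x elements)
d_obs = rng.normal(size=n_obs)              # hypothetical geodetic observations
sigma = 1.0                                 # assumed data uncertainty

def misfit(locked):
    """Weighted least-squares misfit; creeping (unlocked) elements slip at unit rate."""
    slip = (~locked).astype(float)          # 1 = creeping, 0 = locked
    r = (G @ slip - d_obs) / sigma
    return float(r @ r)

locked = rng.random(n_elem) < 0.5           # random initial locking distribution
current = misfit(locked)
chain = []

for step in range(20000):
    k = rng.integers(n_elem)                # propose flipping one element's state
    locked[k] = ~locked[k]
    proposed = misfit(locked)
    # Metropolis rule: always accept improvements, sometimes accept worse models
    if proposed <= current or rng.random() < np.exp(-(proposed - current) / 2.0):
        current = proposed
    else:
        locked[k] = ~locked[k]              # reject: undo the flip
    chain.append(locked.copy())

samples = np.array(chain[500:])             # discard a 'burn-in' period
print("posterior locking probability, first elements:", samples.mean(axis=0)[:5])
```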

  11. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    Science.gov (United States)

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
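
    For a fixed fault geometry and a Gaussian prior, the distribution of slip given static offsets has a closed-form Gaussian posterior, which is the kind of analytical solution that makes such an inversion fast enough for real time. The sketch below illustrates that conjugate update with hypothetical Green's functions and synthetic data; it is not the authors' Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip code.

```python
# Closed-form Gaussian posterior for distributed slip given a fixed fault geometry,
# in the spirit of the analytical Bayesian approach described above. The Green's
# functions, prior, and data below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_patch, n_sta = 40, 60

G = rng.normal(size=(n_sta, n_patch))               # elastostatic Green's functions (assumed)
true_slip = np.maximum(rng.normal(1.0, 0.5, n_patch), 0.0)
d = G @ true_slip + 0.05 * rng.normal(size=n_sta)   # synthetic static offsets

Cd_inv = np.eye(n_sta) / 0.05 ** 2                  # data precision (assumed diagonal)
Cm_inv = np.eye(n_patch) / 1.0 ** 2                 # Gaussian prior precision on slip

# Conjugate Gaussian update: posterior covariance and mean in closed form
post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
post_mean = post_cov @ (G.T @ Cd_inv @ d)

print("max posterior slip: %.2f" % post_mean.max())
print("typical 1-sigma slip uncertainty: %.3f" % np.sqrt(np.diag(post_cov)).mean())
```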

  12. Tectonic role of margin-parallel and margin-transverse faults during oblique subduction in the Southern Volcanic Zone of the Andes: Insights from Boundary Element Modeling

    Science.gov (United States)

    Stanton-Yonge, A.; Griffith, W. A.; Cembrano, J.; St. Julien, R.; Iturrieta, P.

    2016-09-01

    Obliquely convergent subduction margins develop trench-parallel faults shaping the regional architecture of orogenic belts and partitioning intraplate deformation. However, transverse faults are also common along most orogenic belts and have been largely neglected in slip partitioning analysis. Here we constrain the sense of slip and slip rates of differently oriented faults to assess whether and how transverse faults accommodate plate-margin slip arising from oblique subduction. We implement a forward 3-D boundary element method model of subduction at the Chilean margin, evaluating the elastic response of intra-arc faults during different stages of the Andean subduction seismic cycle (SSC). Our model results show that the margin-parallel, NNE striking Liquiñe-Ofqui Fault System accommodates dextral-reverse slip during the interseismic period of the SSC, with oblique slip rates ranging between 1 and 7 mm/yr. NW striking faults exhibit sinistral-reverse slip during the interseismic phase of the SSC, displaying a maximum oblique slip of 1.4 mm/yr. ENE striking faults display dextral strike slip, with a slip rate of 0.85 mm/yr. During the SSC coseismic phase, all modeled faults switch their kinematics: NE striking faults become sinistral, whereas NW striking faults become normal-dextral. Because coseismic tensile stress changes on NW faults reach 0.6 MPa at 10-15 km depth, it is likely that they can serve as transient magma pathways during this phase of the SSC. Our model challenges the existing paradigm wherein only margin-parallel faults account for slip partitioning: transverse faults are also capable of accommodating a significant amount of plate-boundary slip arising from oblique convergence.

  13. 3D Modelling of Seismically Active Parts of Underground Faults via Seismic Data Mining

    Science.gov (United States)

    Frantzeskakis, Theofanis; Konstantaras, Anthony

    2015-04-01

    During the last few years, rapid steps have been taken towards drilling for oil in the western Mediterranean Sea. Since most of the countries in the region benefit mainly from tourism, and considering that the Mediterranean is a closed sea that replenishes its water only once every ninety years, careful measures are being taken to ensure safe drilling. In that context, this research work attempts to derive a three-dimensional model of the seismically active parts of the underlying underground faults in areas of petroleum interest. For that purpose, seismic spatio-temporal clustering has been applied to seismic data to identify potentially distinct seismic regions in the area of interest. The results have been coalesced with two-dimensional maps of underground faults from past surveys, and seismic epicentres, after careful relocation processing, have been used to provide information regarding the vertical extent of multiple underground faults in the region of interest. The end product is a three-dimensional map of the possible underground location and extent of the seismically active parts of underground faults. Indexing terms: underground faults modelling, seismic data mining, 3D visualisation, active seismic source mapping, seismic hazard evaluation, dangerous phenomena modelling. Acknowledgment: This research work is supported by the ESPA Operational Programme, Education and Life Long Learning, Students Practical Placement Initiative. References: [1] Alves, T.M., Kokinou, E. and Zodiatis, G.: 'A three-step model to assess shoreline and offshore susceptibility to oil spills: The South Aegean (Crete) as an analogue for confined marine basins', Marine Pollution Bulletin, In Press, 2014. [2] Ciappa, A., Costabile, S.: 'Oil spill hazard assessment using a reverse trajectory method for the Egadi marine protected area (Central Mediterranean Sea)', Marine Pollution Bulletin, vol. 84 (1-2), pp. 44-55, 2014. [3] Ganas, A., Karastathis, V., Moshou, A., Valkaniotis, S., Mouzakiotis

  14. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has remained challenging despite the huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is recognized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of software would provide a better basis for reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I and C systems have many technical advantages over analog I and C systems, such as easier configurability and maintainability. Digital I and C systems are also drift-free, and the incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs have been established and are practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, the digitalization of I and C systems in NPPs introduces uncertainty into the reliability analysis methods for digital systems/components because software failure mechanisms are still unclear.

  15. Performance analysis of a dependable scheduling strategy based on a fault-tolerant grid model

    Institute of Scientific and Technical Information of China (English)

    WANG Yuanzhuo; LIN Chuang; YANG Yang; SHAN Zhiguang

    2007-01-01

    The grid provides an integrated computing platform composed of differentiated and distributed systems. These resources are dynamic and heterogeneous. In this paper, a novel fault-tolerant grid-scheduling model based on Stochastic Petri Nets (SPN) is presented to account for the heterogeneity and dynamism of the grid system. Also, a new grid-scheduling strategy, the dependable strategy for the shortest expected accomplishing time (DSEAT), is put forward, in which a dependability factor is introduced into the task-dispatching strategy. Finally, the performance of the scheduling strategy based on the fault-tolerant grid-scheduling model is analyzed with a software package named SPNP. The numerical results show that dynamic resources increase the response time for all classes of tasks to differing degrees. Compared with the shortest expected accomplishing time (SEAT) strategy, the DSEAT strategy can reduce the negative effects of dynamic and autonomic resources to some extent so as to guarantee a high quality of service (QoS).

  16. Numerical Modeling of Exploitation Relics and Faults Influence on Rock Mass Deformations

    Science.gov (United States)

    Wesołowski, Marek

    2016-12-01

    This article presents the results of numerical modeling of the influence of fault planes and exploitation relics on the size and distribution of rock mass and ground surface deformations. Numerical calculations were performed using the finite difference program FLAC. To assess the changes taking place in a rock mass, an anisotropic elasto-plastic ubiquitous joint model was used, into which the Coulomb-Mohr strength (plasticity) condition was implemented. The article takes as an example the actual exploitation of the longwall 225 area in the seam 502wg of the "Pokój" coal mine. Computer simulations have shown that it is possible to determine the influence of fault planes and exploitation relics on the size and distribution of rock mass and its surface deformation. The main factor causing additional deformation of the ground surface is the abandoned workings in the seam 502wd. These abandoned workings are the activation factor that caused additional subsidence and, due to their significant dip, they also form a layer on which the rock mass slides down towards the extracted space. These factors are not taken into account by the geometrical and integral theories.

  17. Backpropagation Neural Network Modeling for Fault Location in Transmission Line 150 kV

    Directory of Open Access Journals (Sweden)

    Azriyenni Narwan

    2014-03-01

    Full Text Available This research applies a backpropagation neural network to detect the fault location on a 150 kV transmission line between substations. The distance relay is one of the protective and safety devices most often used on 150 kV transmission lines, and distance relay protection equipment is used to handle disturbances in the power system on the transmission line. However, loads keep increasing and network systems are becoming more complex. The protection system uses digital control in order to avoid errors in calculating the distance relay impedance settings and to use time more efficiently. A backpropagation neural network is a computational model trained so that it can overcome the working limitations of distance protection relays; it is not restricted by the impedance range settings. If the output gives a wrong result, the weights are corrected to minimize the error so that the response of the backpropagation neural network is expected to move closer to the correct value. Ultimately, the backpropagation neural network model is expected to detect the fault location and to identify, from the operational output current, whether the circuit breaker has tripped. The tests are performed on the 150 kV interconnected system of the Riau region.
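
    As a toy illustration of the idea, and not the network described in the paper, the sketch below trains a one-hidden-layer backpropagation network to map a measured apparent impedance (R, X) to a fault distance; the synthetic data, network size, and learning rate are assumptions.

```python
# Toy backpropagation sketch: a one-hidden-layer network trained to map a measured
# apparent impedance (R, X) to fault distance. Synthetic data, network size, and
# learning rate are illustrative assumptions, not the configuration used in the paper.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training set: fault distance 0-100 km, impedance roughly proportional to distance
dist = rng.uniform(0.0, 100.0, size=(500, 1))
X = np.hstack([0.02 * dist, 0.3 * dist]) + rng.normal(0, 0.5, (500, 2))  # noisy [R, X]
y = dist / 100.0                                   # normalise target to 0-1
X = (X - X.mean(0)) / X.std(0)                     # normalise inputs

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)  # hidden layer weights/bias
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)  # output layer weights/bias
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                       # forward pass
    y_hat = h @ W2 + b2
    err = y_hat - y                                # output error to be minimised
    dW2 = h.T @ err / len(X); db2 = err.mean(0)    # backward pass: MSE gradients
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1                 # weight correction (gradient descent)
    W2 -= lr * dW2; b2 -= lr * db2

pred_km = 100.0 * (np.tanh(X @ W1 + b1) @ W2 + b2)
print("mean absolute location error: %.2f km" % np.abs(pred_km - dist).mean())
```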

  18. Design of sensor and actuator multi model fault detection and isolation system using state space neural networks

    Science.gov (United States)

    Czajkowski, Andrzej

    2015-11-01

    This paper deals with the application of a state space neural network model to the design of a Fault Detection and Isolation diagnostic system. The work describes an approach based on a multi-model solution in which the SIMO process is decomposed into simple models (SISO and MISO). With such models it is possible to generate different residual signals which can later be evaluated, using a simple thresholding method, into diagnostic signals. Further, with the application of a Binary Diagnostic Table (BDT), such diagnostic signals can be used for fault isolation. All data used in the experiments are obtained from the simulator of the real-time laboratory stand of the Modular Servo in the Matlab/Simulink environment.
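
    The residual-evaluation step can be illustrated in a few lines of code: residuals are thresholded into binary diagnostic signals and the resulting pattern is matched against a Binary Diagnostic Table. The residuals, thresholds, and fault signatures below are hypothetical placeholders, not values from the Modular Servo experiments.

```python
# Residual evaluation by thresholding plus fault isolation via a Binary Diagnostic
# Table (BDT). Residual values, thresholds, and fault signatures are hypothetical
# placeholders, not values from the Modular Servo experiments.

THRESHOLDS = (0.10, 0.05, 0.08)          # one threshold per residual generator

# Hypothetical BDT: expected binary pattern of diagnostic signals for each fault
BDT = {
    "sensor fault":   (1, 0, 1),
    "actuator fault": (1, 1, 0),
    "no fault":       (0, 0, 0),
}

def diagnose(residuals):
    """Threshold residuals into diagnostic signals and look the pattern up in the BDT."""
    signals = tuple(int(abs(r) > t) for r, t in zip(residuals, THRESHOLDS))
    for fault, signature in BDT.items():
        if signals == signature:
            return fault, signals
    return "unknown pattern", signals

print(diagnose([0.02, 0.01, 0.03]))      # -> ('no fault', (0, 0, 0))
print(diagnose([0.25, 0.02, 0.20]))      # -> ('sensor fault', (1, 0, 1))
```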

  19. A Poisson-Fault Model for Testing Power Transformers in Service

    Directory of Open Access Journals (Sweden)

    Dengfu Zhao

    2014-01-01

    Full Text Available This paper presents a method for assessing the instant failure rate of a power transformer under different working conditions. The method can be applied to a dataset of a power transformer under periodic inspections and maintenance. We use a Poisson-fault model to describe failures of a power transformer. When investigating a Bayes estimate of the instant failure rate under the model, we find that the complexities of a classical method and of a Monte Carlo simulation are unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a quick algorithm for the Bayes estimate of the instant failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators, and the simulation results reveal that the suggested algorithm is the quickest among the three candidates.
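
    The paper's filtered estimator targets a time-varying instant failure rate; as a much simpler point of reference, the sketch below shows the textbook conjugate Gamma-Poisson Bayes update for a constant failure rate from inspection counts, with made-up inspection data.

```python
# Textbook conjugate Gamma-Poisson Bayes update for a constant failure rate from
# inspection counts; the inspection record below is made up. The paper's filtered
# estimator for a time-varying instant failure rate is more elaborate than this.

def gamma_poisson_update(alpha0, beta0, fault_counts, exposure_years):
    """Gamma(alpha, beta) prior on the rate; Poisson counts over known exposures."""
    alpha = alpha0 + sum(fault_counts)
    beta = beta0 + sum(exposure_years)
    return alpha, beta                      # posterior is Gamma(alpha, beta)

# Hypothetical inspection record: faults found in each one-year inspection interval
counts = [0, 1, 0, 0, 2]
exposures = [1.0, 1.0, 1.0, 1.0, 1.0]

a, b = gamma_poisson_update(0.5, 1.0, counts, exposures)
print("posterior mean failure rate: %.3f per year" % (a / b))
```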

  20. Takagi-Sugeno fuzzy-model-based fault detection for networked control systems with Markov delays.

    Science.gov (United States)

    Zheng, Ying; Fang, Huajing; Wang, Hua O

    2006-08-01

    A Takagi-Sugeno (T-S) model is employed to represent a networked control system (NCS) with different network-induced delays. Compared with existing NCS modeling methods, this approach does not require knowledge of the exact values of the network-induced delays. Instead, it addresses situations involving all possible network-induced delays. Moreover, this approach also handles data-packet loss. As an application of the T-S-based modeling method, a parity-equation approach and a fuzzy-observer-based approach for fault detection of an NCS were developed. An example of a two-link inverted pendulum is used to illustrate the utility and viability of the proposed approaches.

  1. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    Science.gov (United States)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast medium to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  2. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Directory of Open Access Journals (Sweden)

    Fang Liu

    2014-05-01

    Full Text Available A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects.

  3. Wayside bearing fault diagnosis based on a data-driven Doppler effect eliminator and transient model analysis.

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-05-05

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects.

  4. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  5. Progressive Development of Riedel-Shear on Overburden Soil by Strike-Slip Faulting: Insights from Analogue Model

    Science.gov (United States)

    Chan, Pei-Chen; Wong, Pei-Syuan; Lin, Ming-Lang

    2015-04-01

    According to investigations of well-known disastrous earthquakes in recent years, ground deformation (ground strain and surface rupture) induced by faulting is, in addition to strong ground motion, one of the causes of damage to engineering structures. The development and propagation of the shear zone are controlled by the increasing amount of basal slip on the fault. Therefore, the mechanisms of near-surface deformation due to faulting, and its effect on engineering structures within the influenced zone, are worthy of further study. In strike-slip fault models, the type of rupture propagation and the width of the shear zone (W) are primarily affected by the material properties (M) and depth (H) of the overburden layer and the amount of fault slip (Sy) (Lin, A., and Nishikawa, M., 2011; Narges K. et al., 2014). There is little research on the development and propagation of the trace tip, trace length, and rupture spacing. In this research, we used a sandbox model to study the progressive development of Riedel shears in overburden soil caused by strike-slip faulting. The model can be used to investigate the factors controlling the deformation characteristics (such as the evolution of surface rupture), including the development and propagation of the trace tip (Tt), trace length (Tl), and rupture spacing (Ts), during the early stages of deformation by faulting. We found that an increase in fault slip Sy results in a greater W, trace length, and rupture density, and we propose a Tl/H versus Sy/H relationship. The progressive development of the Riedel shears showed a trend similar to that in the literature, in which increasing fault slip reduces Ts; however, the trend reversed after a peak value of W was reached. These approaches enhance our understanding of how fault-tip propagation affects the width of the near-surface deformation zone in the soil/rock mass, the spatial distribution of strain and stress within the influenced zone, and the

  6. A renormalization group model for the stick-slip behavior of faults

    Science.gov (United States)

    Smalley, R. F., Jr.; Turcotte, D. L.; Solla, S. A.

    1983-01-01

    A fault which is treated as an array of asperities with a prescribed statistical distribution of strengths is described. For a linear array the stress is transferred to a single adjacent asperity and for a two dimensional array to three adjacent asperities. It is shown that the solutions bifurcate at a critical applied stress. At stresses less than the critical stress virtually no asperities fail on a large scale and the fault is locked. At the critical stress the solution bifurcates and asperity failure cascades away from the nucleus of failure. It is found that the stick-slip behavior of most faults can be attributed to the distribution of asperities on the fault. Also outlined are the observation of stick-slip behavior on faults rather than stable sliding, why the observed level of seismicity on a locked fault is very small, and why the stress on a fault is less than that predicted by a standard value of the coefficient of friction.

  7. Quantizing the Complexity of the Western United States Fault System with Geodetically and Geologically Constrained Block Models

    Science.gov (United States)

    Evans, E. L.; Meade, B. J.

    2014-12-01

    Geodetic observations of interseismic deformation provide constraints on microplate rotations, earthquake cycle processes, slip partitioning, and the geometric complexity of the Pacific-North America plate boundary. Paleoseismological observations in the western United States provide a complementary dataset of Quaternary fault slip rate estimates. These measurements may be integrated and interpreted using block models, in which the upper crust is divided into microplates bounded by mapped faults, with slip rates defined by the differential relative motions of adjacent microplates. The number and geometry of microplates are typically defined with boundaries representing a limited sub-set of the large number of potentially seismogenic faults. An alternative approach is to include a large number of potentially active faults in a dense array of microplates, and then deterministically estimate the boundaries at which strain is localized, while simultaneously satisfying interseismic geodetic and geologic observations. This approach is possible through the application of total variation regularization (TVR), which simultaneously minimizes the L2 norm of data residuals and the L1 norm of the variation in the estimated state vector. Applied to three-dimensional spherical block models, TVR reduces the total variation between estimated rotation vectors, creating groups of microplates that rotate together as larger blocks, and localizing fault slip on the boundaries of these larger blocks. Here we consider a suite of block models containing 3-137 microplates, where active block boundaries have been determined by TVR optimization constrained by both interseismic GPS velocities and geologic slip rate estimates.
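
    A schematic version of the TVR optimization is easy to state: minimize the squared data misfit plus an L1 penalty on differences between adjacent microplate parameters, so that many differences are driven toward zero and microplates merge into larger effective blocks. The sketch below uses cvxpy with random placeholder matrices and a simple chain adjacency rather than real GPS Green's functions or fault-network geometry.

```python
# Schematic total variation regularization (TVR): minimise the squared data misfit plus
# an L1 penalty on differences between adjacent microplate parameters. G, the noise, and
# the simple chain adjacency are placeholders, not GPS Green's functions or real geometry.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n_blocks, n_obs = 30, 200

m_true = np.repeat([2.0, -1.0, 0.5], 10)            # three "true" groups of microplates
G = rng.normal(size=(n_obs, n_blocks))              # stand-in design matrix
d = G @ m_true + 0.1 * rng.normal(size=n_obs)       # synthetic observations

# Difference operator between "adjacent" microplates (a simple chain for brevity)
D = np.zeros((n_blocks - 1, n_blocks))
for i in range(n_blocks - 1):
    D[i, i], D[i, i + 1] = 1.0, -1.0

lam = 2.0                                           # regularization strength (to be tuned)
m = cp.Variable(n_blocks)
cp.Problem(cp.Minimize(cp.sum_squares(G @ m - d) + lam * cp.norm1(D @ m))).solve()

# The L1 term tends to drive adjacent differences toward zero, grouping microplates
# into larger effective blocks.
m_hat = m.value
for lo, hi in ((0, 10), (10, 20), (20, 30)):
    spread = m_hat[lo:hi].max() - m_hat[lo:hi].min()
    print("microplates %2d-%2d: mean %.2f, within-group spread %.3f"
          % (lo, hi - 1, m_hat[lo:hi].mean(), spread))
```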

  8. A Doppler Transient Model Based on the Laplace Wavelet and Spectrum Correlation Assessment for Locomotive Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Changqing Shen

    2013-11-01

    Full Text Available The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for the simulated transients (the periods of the simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on the identified parameters, particularly the period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. In addition, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully.
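
    A time-domain stand-in for the correlation assessment is sketched below: a single-sided damped sinusoid (one common parameterisation of the Laplace wavelet) is correlated against a synthetic impact train to pick the best-matching centre frequency. The paper's assessment is done on envelope spectra with a full Doppler-distorted model, so this is only a schematic of the matching idea; the frequencies, damping, and defect period are invented.

```python
# Time-domain stand-in for the correlation assessment: correlate a single-sided damped
# sinusoid (one common parameterisation of the Laplace wavelet) against a synthetic
# impact train. Frequencies, damping, and the defect period are invented for illustration.
import numpy as np

def laplace_wavelet(t, f, zeta, tau):
    """Damped sinusoid starting at time tau, zero before it."""
    s = np.zeros_like(t)
    mask = t >= tau
    arg = 2 * np.pi * f * (t[mask] - tau)
    s[mask] = np.exp(-zeta / np.sqrt(1 - zeta ** 2) * arg) * np.sin(arg)
    return s

fs = 20000.0
t = np.arange(0.0, 0.05, 1.0 / fs)

# Synthetic "fault" signal: repeated impacts at a hypothetical defect period of 8 ms
x = sum(laplace_wavelet(t, f=3000.0, zeta=0.05, tau=k * 0.008) for k in range(6))
x = x + 0.2 * np.random.default_rng(4).normal(size=t.size)

# Keep the candidate centre frequency whose wavelet correlates best with the signal
best = max((abs(np.corrcoef(laplace_wavelet(t, fc, 0.05, 0.0), x)[0, 1]), fc)
           for fc in (1000.0, 2000.0, 3000.0, 4000.0))
print("best correlation %.2f at candidate frequency %.0f Hz" % best)
```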

  9. Modeling of fluid injection and withdrawal induced fault activation using discrete element based hydro-mechanical and dynamic coupled simulator

    Science.gov (United States)

    Yoon, Jeoung Seok; Zang, Arno; Zimmermann, Günter; Stephansson, Ove

    2016-04-01

    Operation of fluid injection into and withdrawal from the subsurface for various purposes has been known to induce earthquakes. Such operations include hydraulic fracturing for shale gas extraction, hydraulic stimulation for Enhanced Geothermal System development, and waste water disposal. Among these, several damaging earthquakes have been reported in the USA, in particular in areas of high-rate, massive wastewater injection [1], mostly involving natural fault systems. Oil and gas production has also been known to induce earthquakes where pore fluid pressure decreases, in some cases by several tens of megapascals. One recent seismic event occurred in November 2013 near Azle, Texas, where a series of earthquakes began along a mapped ancient fault system [2]. It was found that a combination of brine production and waste water injection near the fault generated subsurface pressures sufficient to induce earthquakes on near-critically stressed faults. This numerical study aims at investigating the occurrence mechanisms of such earthquakes induced by fluid injection [3] and withdrawal by using a coupled hydro-geomechanical dynamic simulator (Itasca's Particle Flow Code 2D). Generic models are set up to investigate the sensitivity of the response of the fault systems and the activation magnitude to several parameters, including fault orientation, frictional properties, distance from the injection well to the fault, and the amount of fluid withdrawal around the injection well. Fault slip movement over time in relation to the diffusion of pore pressure is analyzed in detail. Moreover, correlations between the spatial distribution of pore pressure change and the locations of induced seismic events and fault slip rate are investigated. References [1] Keranen KM, Weingarten M, Albers GA, Bekins BA, Ge S, 2014. Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection, Science 345, 448, DOI: 10.1126/science.1255802. [2] Hornbach MJ, DeShon HR

  10. Safety in the event of an Internal fault: modelling or tests?

    Energy Technology Data Exchange (ETDEWEB)

    Duquerroy, P. [Electricite de France (France); Friberg, G.; Pietsch, G. [Aachen Univ. of Tech. (Germany); Herault, C.; Chevrier, P. [Schneider Electric (France)

    1997-12-31

    To facilitate the integration of the public distribution MV/LV substations installed by EDF into their environment, their size has been divided by 5 since 1950. In addition to this trend, there is a notable increase in short-circuit power, intended to improve the quality of electricity, and a desire for constant improvement in the safety of persons and property. In this context, the development of new MV/LV substations and MV switchgear cubicles, together with EDF's aim to bring internal fault protections into general use and to harmonise them, led us to carry out an investigation into internal arcing phenomena and their consequences. In collaboration with RWTH Aachen University and Schneider Electric, we created different forms of modelling which were validated by a series of power tests. This enabled us to gain a better understanding of the physical phenomena associated with internal faults, in particular the increase in pressure, in order to specify electrical devices and their type tests with maximum efficiency. (UK)

  11. Fault-tree Models of Accident Scenarios of RoPax Vessels

    Institute of Scientific and Technical Information of China (English)

    Pedro Antão; C. Guedes Soares

    2006-01-01

    Ro-Ro vessels for cargo and passengers (RoPax) are a relatively new concept that has proven popular in the Mediterranean region and is becoming more widespread in Northern Europe. Due to their design characteristics and number of passengers, which is nevertheless lower than that of a regular passenger liner, accidents involving RoPax vessels have far-reaching consequences both economically and for human life. The objective of this paper is to identify hazards related to casualties of RoPax vessels. The terminal casualty events chosen are related to accident and incident statistics for this type of vessel. The paper focuses on the identification of the basic events that can lead to an accident and on the performance requirements. The hazard identification is carried out as the first step of a Formal Safety Assessment (FSA), and the relations between the relevant events are modelled using Fault Tree Analysis (FTA). The conclusions of this study are recommendations for the later steps of the FSA rather than for decision making (Step 5 of the FSA). These recommendations focus on the possible design shortcomings identified during the fault tree analysis through its cut sets. The role of human factors is also analysed through a sensitivity analysis, which shows that their influence is highest for groundings and collisions, where an increase in the initial probability changes the accident occurrence by almost 90%.

  12. Major Fault Patterns in Zanjan State of Iran Based of GECO Global Geoid Model

    Science.gov (United States)

    Beheshty, Sayyed Amir Hossein; Abrari Vajari, Mohammad; Raoufikelachayeh, SeyedehSusan

    2016-04-01

    A new Earth Gravitational Model (GECO) to degree 2190 has been developed that incorporates EGM2008 and the latest GOCE-based satellite solutions. Satellite gradiometry data are more sensitive to the long and medium wavelengths of the gravity field than conventional satellite tracking data. Hence, by utilizing this new technique, more accurate, reliable and higher degrees/orders of the spherical harmonic expansion of the gravity field can be achieved. Gravity gradients can also be useful in geophysical interpretation and prospecting. We present the concept of gravity gradients with some simple interpretations. MATLAB-based computer programs were developed and utilized for determining the gravity and gradient components of the gravity field using the GGMs, followed by a case study in Zanjan State of Iran. Our numerical studies show strong (more than 72%) correlations between gravity anomalies and the diagonal elements of the gradient tensor. Strong correlations were also revealed between the components of the deflection of the vertical and the off-diagonal elements, as well as between the horizontal gradient and the magnitude of the deflection of the vertical. Based on this information, we clearly distinguished two large faults north and south of Zanjan city, and several minor faults were also detected in the study area. Therefore, the same geophysical interpretation can be applied to the gravity gradient components as well. Our mathematical derivations support some of these correlations.

  13. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    Full Text Available We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, camera sensor, and wireless communication have been implemented. Progressively increasing error levels for an application program allows for testing and improving its robustness and fault-tolerance.

  14. Comparative modeling of fault reactivation and seismicity in geologic carbon storage and shale-gas reservoir stimulation

    Science.gov (United States)

    Rutqvist, Jonny; Rinaldi, Antonio; Cappa, Frederic

    2016-04-01

    The potential for fault reactivation and induced seismicity are issues of concern related to both geologic CO2 sequestration and stimulation of shale-gas reservoirs. It is well known that underground injection may cause induced seismicity depending on site-specific conditions, such as stress and rock properties and injection parameters. To date no sizeable seismic event that could be felt by the local population has been documented in association with CO2 sequestration activities. In the case of shale-gas fracturing, only a few cases of felt seismicity have been documented out of hundreds of thousands of hydraulic fracturing stimulation stages. In this paper we summarize and review numerical simulations of injection-induced fault reactivation and induced seismicity associated with both underground CO2 injection and hydraulic fracturing of shale-gas reservoirs. The simulations were conducted with TOUGH-FLAC, a simulator for coupled multiphase flow and geomechanical modeling. In this case we employed both 2D and 3D models with an explicit representation of a fault. A strain softening Mohr-Coulomb model was used to model a slip-weakening fault slip behavior, enabling modeling of sudden slip that was interpreted as a seismic event, with a moment magnitude evaluated using formulas from seismology. In the case of CO2 sequestration, injection rates corresponding to expected industrial scale CO2 storage operations were used, raising the reservoir pressure until the fault was reactivated. For the assumed model settings, it took a few months of continuous injection to increase the reservoir pressure sufficiently to cause the fault to reactivate. In the case of shale-gas fracturing we considered that the injection fluid during one typical 3-hour fracturing stage was channelized into a fault along with the hydraulic fracturing process. Overall, the analysis shows that while the CO2 geologic sequestration in deep sedimentary formations is capable of producing notable events (e
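
    The conversion from a modeled sudden-slip event to a moment magnitude uses the standard seismic moment and moment magnitude relations; a minimal sketch is given below with an illustrative rupture patch and shear modulus (the abstract does not specify these exact values).

```python
# Standard seismic moment / moment magnitude relations for a modeled sudden-slip event;
# rupture area, average slip, and shear modulus below are illustrative values only.
import math

def moment_magnitude(rupture_area_m2, avg_slip_m, shear_modulus_pa=3.0e10):
    m0 = shear_modulus_pa * rupture_area_m2 * avg_slip_m   # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)            # Mw for moment in N*m

# Hypothetical reactivated patch: 1 km x 1 km with 5 cm of sudden slip
print("Mw = %.2f" % moment_magnitude(1.0e6, 0.05))
```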

  15. A Kinematic Fault Network Model of Crustal Deformation for California and Its Application to the Seismic Hazard Analysis

    Science.gov (United States)

    Zeng, Y.; Shen, Z.; Harmsen, S.; Petersen, M. D.

    2010-12-01

    We invert GPS observations to determine the slip rates on major faults in California based on a kinematic fault model of crustal deformation with geological slip rate constraints. Assuming an elastic half-space, we interpret secular surface deformation using a kinematic fault network model with each fault segment slipping beneath a locking depth. This model simulates both block-like deformation and elastic strain accumulation within each bounding block. Each fault segment is linked to its adjacent elements with slip continuity imposed at fault nodes or intersections. The GPS observations across California and its neighbors are obtained from the SCEC WGCEP project of California Crustal Motion Map version 1.0 and SCEC Crustal Motion Map 4.0. Our fault models are based on the SCEC UCERF 2.0 fault database, a previous southern California block model by Shen and Jackson, and the San Francisco Bay area block model by d’Alessio et al. Our inversion shows a slip rate ranging from 20 to 26 mm/yr for the northern San Andreas from the Santa Cruz Mountain to the Peninsula segment. Slip rates vary from 8 to 14 mm/yr along the Hayward to the Maacama segment, and from 17 to 6 mm/yr along the central Calaveras to West Napa. For the central California creeping section, we find a depth dependent slip rate with an average slip rate of 23 mm/yr across the upper 5 km and 30 mm/yr underneath. Slip rates range from 30 mm/yr along the Parkfield and central California creeping section of the San Andreas to an average of 6 mm/yr on the San Bernardino Mountain segment. On the southern San Andreas, slip rates vary from 21 to 30 mm/yr from the Coachella Valley to the Imperial Valley, and from 7 to 16 mm/yr along the San Jacinto segments. The shortening rate across the greater Los Angeles region is consistent with the regional tectonics and crustal thickening in the area. We are now in the process of applying the result to seismic hazard evaluation. Overall the geodetic and geological derived

  16. Clustering diagnosis of rolling element bearing fault based on integrated Autoregressive/Autoregressive Conditional Heteroscedasticity model

    Science.gov (United States)

    Wang, Guofeng; Liu, Chang; Cui, Yinhu

    2012-09-01

    Feature extraction plays an important role in clustering analysis. In this paper an integrated Autoregressive (AR)/Autoregressive Conditional Heteroscedasticity (ARCH) model is proposed to characterize the vibration signal, and the model coefficients are adopted as feature vectors to realize clustering diagnosis of rolling element bearings. The main characteristic is that the AR item and the ARCH item are interrelated with each other so that the model can depict the excess kurtosis and volatility clustering information in the vibration signal more accurately than the two-stage AR/ARCH model. To verify the correctness, four kinds of bearing signals are adopted for parametric modeling using the integrated and the two-stage AR/ARCH models. Variance analysis of the model coefficients shows that the integrated AR/ARCH model yields a more concentrated distribution. Taking these coefficients as feature vectors, K-means-based clustering is utilized to realize the automatic classification of bearing fault status. The results show that the proposed method gives more accurate results than the two-stage model and discrete wavelet decomposition.
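
    As a simplified, two-stage stand-in for the feature-extraction idea (plain AR coefficients rather than the integrated AR/ARCH model of the paper), the sketch below fits an AR model to each synthetic vibration segment and clusters the coefficient vectors with K-means.

```python
# Two-stage stand-in for the feature idea above: fit a plain AR model to each vibration
# segment and cluster the coefficient vectors with K-means. The paper's integrated
# AR/ARCH model is richer; the synthetic signals below are illustrative only.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

def synthetic_segment(kind, n=1024):
    """Healthy: near-white noise. Faulty: resonant (AR-like) response, a crude defect stand-in."""
    e = rng.normal(size=n)
    if kind == "healthy":
        return e
    x = np.zeros(n)
    for i in range(2, n):
        x[i] = 1.3 * x[i - 1] - 0.75 * x[i - 2] + e[i]
    return x

segments = [synthetic_segment("healthy") for _ in range(20)] + \
           [synthetic_segment("faulty") for _ in range(20)]

# Feature vectors: AR(8) coefficients of each segment (intercept dropped)
features = np.array([AutoReg(s, lags=8).fit().params[1:] for s in segments])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster labels (first 20 healthy, last 20 faulty):")
print(labels)
```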

  17. New insights on stress rotations from a forward regional model of the San Andreas fault system near its Big Bend in southern California

    Science.gov (United States)

    Fitzenz, D.D.; Miller, S.A.

    2004-01-01

    Understanding the stress field surrounding and driving active fault systems is an important component of mechanistic seismic hazard assessment. We develop and present results from a time-forward three-dimensional (3-D) model of the San Andreas fault system near its Big Bend in southern California. The model boundary conditions are assessed by comparing model and observed tectonic regimes. The model of earthquake generation along two fault segments is used to target measurable properties (e.g., stress orientations, heat flow) that may allow inferences on the stress state on the faults. It is a quasi-static model, where GPS-constrained tectonic loading drives faults modeled as mostly sealed viscoelastic bodies embedded in an elastic half-space subjected to compaction and shear creep. A transpressive tectonic regime develops southwest of the model bend as a result of the tectonic loading and migrates toward the bend because of fault slip. The strength of the model faults is assessed on the basis of stress orientations, stress drop, and overpressures, showing a departure in the behavior of 3-D finite faults compared to models of 1-D or homogeneous infinite faults. At a smaller scale, stress transfers from fault slip transiently induce significant perturbations in the local stress tensors (where the slip profile is very heterogeneous). These stress rotations disappear when subsequent model earthquakes smooth the slip profile. Maps of maximum absolute shear stress emphasize both that (1) future models should include a more continuous representation of the faults and (2) that hydrostatically pressured intact rock is very difficult to break when no material weakness is considered. Copyright 2004 by the American Geophysical Union.

  18. Discrete element modeling of Martian pit crater formation in response to extensional fracturing and dilational normal faulting

    Science.gov (United States)

    Smart, Kevin J.; Wyrick, Danielle Y.; Ferrill, David A.

    2011-04-01

    Pit craters, circular to elliptical depressions that lack a raised rim or ejecta deposits, are common on the surface of Mars. Similar structures are also found on Earth, Venus, the Moon, and smaller planetary bodies, including some asteroids. While it is generally accepted that these pits form in response to material drainage into a subsurface void space, the primary mechanism(s) responsible for creating the void is a subject of debate. Previously proposed mechanisms include collapse into lava tubes, dike injection, extensional fracturing, and dilational normal faulting. In this study, we employ two-dimensional discrete element models to assess both extensional fracturing and dilational normal faulting as mechanisms for forming pit craters. We also examine the effect of mechanical stratigraphy (alternating strong and weak layers) and variation in regolith thickness on pit morphology. Our simulations indicate that both extensional fracturing and dilational normal faulting are viable mechanisms. Both mechanisms lead to generally convex (steepening downward) slope profiles; extensional fracturing results in generally symmetric pits, whereas dilational normal faulting produces strongly asymmetric geometries. Pit width is established early, whereas pit depth increases later in the deformation history. Inclusion of mechanical stratigraphy results in wider and deeper pits, particularly for dilational normal faulting, and the presence of strong near-surface layers leads to pits with distinct edges as observed on Mars. The modeling results suggest that a thicker regolith leads to wider but shallower pits that are less distinct and may be more difficult to detect in areas of thick regolith.

  19. Understanding interaction of small repeating earthquakes through models of rate-and-state faults

    Science.gov (United States)

    Chen, T.; Lui, K.; Lapusta, N.

    2012-12-01

    Due to their short recurrence times and known locations, small repeating earthquakes are widely used to study earthquake physics. Some of the repeating sequences are located close to each other and appear to interact. For example, the "San Francisco" (SF) and "Los Angeles" (LA) repeating sequences, which are targets of the San Andreas Fault Observatory at Depth (SAFOD), have a lateral separation of less than 70 m. The LA events tend to occur within 24 hours after the SF events, suggesting a triggering effect. Our goal is to study the interaction of repeating earthquakes in the framework of rate-and-state fault models, in which repeating earthquakes occur on velocity-weakening patches embedded into a larger velocity-strengthening fault area. Such models can reproduce the behavior of isolated repeating earthquake sequences, in particular, the scaling of their moment versus recurrence time and the response to accelerated postseismic creep (Chen and Lapusta, 2009; Chen et al., 2010). Our studies of the interaction of seismic events on two patches show a variety of interesting behaviors. As expected based on intuition and prior studies (e.g., Kato, JGR, 2004; Kaneko et al., Nature Geoscience, 2010), the two patches behave independently when they are far apart and rupture together if they are next to each other. In the intermediate range of distances, we observe triggering effects, with ruptures on the two patches clustering in time, but also other patterns, including supercycles that alternate between events that rupture a single asperity and events that rupture both asperities at the same time. When triggering occurs, smaller events tend to trigger larger events, since the nucleation of smaller events tends to be more frequent. To overcome such a pattern, and have larger events trigger smaller events as observed for the SF-LA interaction, the patch for the smaller event needs to be of the order of the nucleation size, so that the smaller event has difficulty nucleating by

  20. Predictive permeability model of faults in crystalline rocks; verification by joint hydraulic factor (JH) obtained from water pressure tests

    Indian Academy of Sciences (India)

    Hamidreza Rostami Barani; Gholamreza Lashkaripour; Mohammad Ghafoori

    2014-08-01

    In the present study, a new model is proposed to predict the permeability per fracture in fault zones by a new parameter named the joint hydraulic factor (JH). JH is obtained from Water Pressure Tests (WPT) and modified by the degree of fracturing. The results of JH correspond with quantitative fault zone descriptions, qualitative fracture, and fault rock properties. In this respect, a case study was done based on the data collected from the Seyahoo dam site located in the east of Iran to provide the permeability prediction model of fault zone structures. Datasets including scan-lines, drill cores, and water pressure tests in the terrain of Andesite and Basalt rocks were used to analyse the variability of in-situ relative permeability across a range from fault zones to host rocks. The rock mass joint permeability quality, therefore, is defined by the JH. JH data analysis showed that the background sub-zone commonly had > 3 Lu (less than 5 × 10−5 m3/s) per fracture, whereas the fault core had permeability characteristics nearly as low as the outer damage zone, represented by 8 Lu (1.3 × 10−4 m3/s) per fracture, with occasional peaks towards 12 Lu (2 × 10−4 m3/s) per fracture. The maximum JH value belongs to the inner damage zone, marginal to the fault core, with 14–22 Lu (2.3 × 10−4 –3.6 × 10−4 m3/s) per fracture, locally exceeding 25 Lu (4.1 × 10−4 m3/s) per fracture. This gives a proportional relationship for JH of approximately 1:4:2 between the fault core, inner damage zone, and outer damage zone of extensional fault zones in crystalline rocks. The results of the verification exercise revealed that the new approach is efficient and that the JH parameter is a reliable scale for the fracture permeability change. It can be concluded that using short duration hydraulic tests (WPTs) and fracture frequency (FF) to calculate the JH parameter provides a possibility to describe a complex situation and compare, discuss, and weigh the hydraulic quality to make

  1. Sampling theorem of Hermite type and aliasing error on the Sobolev class of functions

    Institute of Scientific and Technical Information of China (English)

    LI Hu-an; FANG Gen-sun

    2006-01-01

    Denote by $B^2_{\sigma,p}$ ($1 < p < \infty$) the class of bandlimited $p$-integrable functions whose Fourier transform is supported in the interval $[-\sigma,\sigma]$. It is shown that a function in $B^2_{\sigma,p}$ can be reconstructed in $L_p(\mathbb{R})$ from its sampling sequences $\{f(k\pi/\sigma)\}_{k\in\mathbb{Z}}$ and $\{f'(k\pi/\sigma)\}_{k\in\mathbb{Z}}$ using Hermite cardinal interpolation. Moreover, it is shown that if $f$ belongs to $L^r_p(\mathbb{R})$, $1 < p < \infty$, then the exact order of its aliasing error can be determined.

  2. Fault-Related CO2 Degassing, Geothermics, and Fluid Flow in Southern California Basins--Physiochemical Evidence and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Garven, Grant [Tufts University

    2015-08-11

    Our studies have had an important impact on societal issues. Experimental and field observations show that CO2 degassing, such as might occur from stored CO2 reservoir gas, can result in significant stable isotopic disequilibrium. In the offshore South Ellwood field of the Santa Barbara channel, we show how oil production has reduced natural seep rates in the area, thereby reducing greenhouse gases. Permeability is calculated to be ~20-30 millidarcys for km-scale fault-focused fluid flow, using changes in natural gas seepage rates from well production, and poroelastic changes in formation pore-water pressure. In the Los Angeles (LA) basin, our characterization of formation water chemistry, including stable isotopic studies, allows the distinction between deep and shallow formation waters. Our multiphase computational modeling of petroleum migration demonstrates the important role of major faults in geological-scale fluid migration in the LA basin and shows how petroleum was dammed up against the Newport-Inglewood fault zone in a “geologically fast” interval of time (less than 0.5 million years). Furthermore, these fluid studies will also allow evaluation of potential cross-formational mixing of formation fluids. Lastly, our new study of helium isotopes in the LA basin shows a significant leakage of mantle helium along the Newport-Inglewood fault zone (NIFZ), at flow rates up to 2 cm/yr. Crustal-scale fault permeability (~60 microdarcys) and advective versus conductive heat transport rates have been estimated using the observed helium isotopic data. The NIFZ is an important deep-seated fault that may crosscut a proposed basin decollement fault in this heavily populated area, and appears to allow seepage of helium from the mantle sources about 30 km beneath Los Angeles. The helium study has been widely cited in recent weeks by the news media, both on radio and on numerous websites.

  3. Radial velocity planets de-aliased. A new, short period for Super-Earth 55 Cnc e

    CERN Document Server

    Dawson, Rebekah I

    2010-01-01

    Radial velocity measurements of stellar reflex motion have revealed many extra-solar planets, but gaps in the observations produce aliases, spurious frequencies that are frequently confused with the planets' orbital frequencies. In the case of Gl 581d, the distinction between an alias and the true frequency was the distinction between a dead, frozen planet and a planet likely hospitable to life (Udry et al. 2007; Mayor et al. 2009). To improve the characterization of planetary systems, we describe how aliases originate and present a new approach for distinguishing between orbital frequencies and their aliases. Our approach harnesses features in the spectral window function to compare the amplitude and phase of predicted aliases with peaks present in the data. We apply it to confirm prior alias distinctions for the planets GJ 876d and HD 75898b. We find that the true periods of Gl 581c and HD 73526b/c remain ambiguous. We revise the periods of HD 156668b and 55 Cnc e, which were afflicted by daily aliases. For...
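
    As a minimal numerical illustration of the alias bookkeeping described above (not the authors' full amplitude-and-phase test), the sketch below computes the spectral window function of an irregularly sampled time series and lists the alias frequencies expected at |f_true ± m·f_sampling|; the observing times, frequency grid and sampling frequency are invented for the example.

```python
import numpy as np

def spectral_window(times, freqs):
    """Spectral window W(nu) = (1/N) * sum_j exp(-2*pi*i*nu*t_j) on a frequency grid."""
    t = np.asarray(times, dtype=float)
    return np.array([np.exp(-2j * np.pi * nu * t).mean() for nu in freqs])

def alias_frequencies(f_true, f_sampling, orders=(1, 2)):
    """Candidate aliases |f_true +/- m*f_sampling| for a dominant sampling frequency."""
    return sorted({abs(f_true + s * m * f_sampling)
                   for m in orders for s in (-1, +1)})

# Hypothetical roughly nightly sampling over one observing season (times in days).
rng = np.random.default_rng(0)
times = np.sort(rng.choice(np.arange(0.0, 180.0, 1.0), size=60, replace=False))
times = times + rng.normal(0.0, 0.02, size=times.size)    # small jitter around each night

freqs = np.linspace(0.001, 1.2, 4000)                      # cycles/day
W = np.abs(spectral_window(times, freqs))
print("strongest window-function peaks near (cycles/day):", freqs[np.argsort(W)[-3:]])

# A hypothetical planet at a 2.8-day period sampled ~once per sidereal day (1.0027 c/d):
print("expected aliases (cycles/day):", alias_frequencies(1.0 / 2.8, 1.0027))
```

    Comparing the amplitude and phase of periodogram peaks at such candidate frequencies against the prediction of the window function is the essence of the test the authors describe.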

  4. Evaluation of chiller modeling approaches and their usability for fault detection

    Energy Technology Data Exchange (ETDEWEB)

    Sreedharan, Priya [Univ. of California, Berkeley, CA (United States)

    2001-05-01

    Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two are based on first principles and the third is empirical in nature. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model as implemented in CoolTools was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The CoolTools package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to
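
    The remark that the Gordon-Ng model is linear in the parameters is what allows calibration by ordinary least squares. The sketch below shows only that generic step: the regressors x1, x2 and response y are synthetic placeholders, not the actual Gordon-Ng terms, which are not reproduced here.

```python
import numpy as np

# Synthetic "measured" chiller data (placeholders for the temperature/load terms
# that would enter the real Gordon-Ng regressors).
rng = np.random.default_rng(1)
n = 200
x1 = rng.uniform(0.0, 1.0, n)            # assumed regressor 1
x2 = rng.uniform(0.0, 1.0, n)            # assumed regressor 2
true_beta = np.array([0.8, 1.5, -0.6])   # intercept and coefficients (made up)
y = true_beta[0] + true_beta[1] * x1 + true_beta[2] * x2 + rng.normal(0, 0.02, n)

# Linear-in-parameters calibration by ordinary least squares.
A = np.column_stack([np.ones(n), x1, x2])
beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)

# Parameter uncertainty from the residual variance and (A^T A)^-1,
# one of the practical benefits of a linear-in-parameters model.
dof = n - A.shape[1]
sigma2 = np.sum((y - A @ beta) ** 2) / dof
cov = sigma2 * np.linalg.inv(A.T @ A)
print("estimated parameters:", beta)
print("standard errors     :", np.sqrt(np.diag(cov)))
```

    A fault-detection scheme of the kind discussed in the thesis would then flag operating points whose residual y − Aβ exceeds a threshold derived from this parameter uncertainty.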

  5. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    Science.gov (United States)

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
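
    To make the fault-tree calculation concrete, the sketch below evaluates the top-event probability of a toy two-level tree with AND/OR gates under the usual independence assumption; the gate structure and basic-event probabilities are invented for illustration and are not the shipwreck model of the paper.

```python
from typing import Sequence

def p_or(probs: Sequence[float]) -> float:
    """OR gate: at least one of the independent basic events occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(probs: Sequence[float]) -> float:
    """AND gate: all of the independent basic events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical annual probabilities of basic hazardous events.
hull_breach   = p_or([0.02, 0.005])                 # corrosion penetration OR trawling damage
valve_release = p_and([0.10, 0.30])                 # valve failure AND tank still pressurized
top_event     = p_or([hull_breach, valve_release])  # any discharge path

print(f"annual probability of discharge (toy tree): {top_event:.4f}")
```

    Propagating probability distributions instead of point values through such gates is what provides the transparent uncertainty and sensitivity analyses mentioned above.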

  6. Numerical Simulation of a TSP Fault Model

    Institute of Scientific and Technical Information of China (English)

    林义; 刘争平; 王朝令; 肖缔

    2015-01-01

    During tunnel excavation, a variety of geological problems may be encountered, most commonly faults and weak zones. Tunnel geological prediction is currently carried out mainly with the TSP (tunnel seismic prediction) system. Although TSP technology is widely applied, research on it has so far concentrated on engineering application cases, and few studies have used forward modelling. We use the finite element method to simulate the tunnel seismic wavefield, combine wavefield snapshots with time records to study how a fault affects the propagation of the tunnel seismic wavefield, and invert the time records of the fault-bearing model to obtain the velocity image and the reflection-horizon map of the numerical model. The processing results show that the fault position in the velocity image obtained with the default settings of the TSPwin software agrees with the fault position prescribed in the model, and that, according to the reflection-horizon map, the P-wave prediction is the more accurate for a layered model with an abnormal velocity zone. The results also indicate that the TSP system has good noise immunity. Finally, the conclusions drawn from the numerical simulation were verified by processing an engineering case.

  7. On-line Fault Diagnosis in Industrial Processes Using Variable Moving Window and Hidden Markov Model

    Institute of Scientific and Technical Information of China (English)

    周韶园; 谢磊; 王树青

    2005-01-01

    An integrated framework is presented to represent and classify process data for on-line identifying abnormal operating conditions. It is based on pattern recognition principles and consists of a feature extraction step, by which wavelet transform and principal component analysis are used to capture the inherent characteristics from process measurements, followed by a similarity assessment step using hidden Markov model (HMM) for pattern comparison. In most previous cases, a fixed-length moving window was employed to track dynamic data, and often failed to capture enough information for each fault and sometimes even deteriorated the diagnostic performance. A variable moving window, the length of which is modified with time, is introduced in this paper and case studies on the Tennessee Eastman process illustrate the potential of the proposed method.

  8. Fault-Tolerant Technique in the Cluster Computation of the Digital Watershed Model

    Institute of Scientific and Technical Information of China (English)

    SHANG Yizi; WU Baosheng; LI Tiejian; FANG Shenguang

    2007-01-01

    This paper describes a parallel computing platform that uses existing facilities for the digital watershed model. A distributed multi-layered structure is applied to the computer cluster system, and MPI-2 is adopted as a mature parallel programming standard. An agent is introduced, which makes multi-level fault tolerance possible in software development. The communication protocol, based on a checkpointing and rollback-recovery mechanism, can realize transaction reprocessing. Compared with the conventional platform, the new system is able to make better use of the computing resources. Experimental results show that the speedup ratio of the platform is almost 4 times that of the conventional one, which demonstrates the high efficiency and good performance of the new approach.

  9. Analogue Modeling of Oblique Convergent Strike-Slip Faulting and Application to The Seram Island, Eastern Indonesia

    Directory of Open Access Journals (Sweden)

    Benyamin Sapiie

    2014-12-01

    Full Text Available DOI: 10.17014/ijog.v1i3.189. The sandbox experiment is a type of analogue modeling in the geological sciences whose main purpose is to simulate the deformation style and structural evolution of a sedimentary basin. Sandbox modeling is one of the effective ways of conducting physical modeling and evaluating the complex deformation of sedimentary rocks. The main purpose of this paper is to evaluate the structural geometry and deformation history of oblique convergent deformation using an integrated technique of analogue sandbox modeling applied to the deformation of the Seram Fold-Thrust-Belt (SFTB) in the Seram Island, Eastern Indonesia. Oblique convergent strike-slip deformation has notoriously generated areas with structurally complex geometry and patterns resulting from the role of various local parameters that control stress distributions. Therefore, a special technique is needed for understanding and solving such problems, in particular to relate 3D fault geometry and its evolution. The results of the four modeling settings (Cases 1 to 4) indicated that two of the modeling variables clearly affected our sandbox modeling results: lithological variation (mainly the stratigraphy of Seram Island) and pre-existing basement fault geometry (basement configuration). Lithological variation mainly affected the total number of faults developed. On the other hand, pre-existing basement fault geometry strongly influenced the end results, particularly the fault style and pattern, as demonstrated in the Case 4 modeling. In addition, this study concluded that deformation in the Seram Island is best described using an oblique convergent strike-slip (transpression) stress system.

  11. The seismogenic Gole Larghe Fault Zone (Italian Southern Alps): quantitative 3D characterization of the fault/fracture network, mapping of evidences of fluid-rock interaction, and modelling of the hydraulic structure through the seismic cycle

    Science.gov (United States)

    Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.

    2016-12-01

    The Gole Larghe Fault Zone (GLFZ) was exhumed from 8 km depth, where it was characterized by seismic activity (pseudotachylytes) and hydrous fluid flow (alteration halos and precipitation of hydrothermal minerals in veins and cataclasites). Thanks to glacier-polished outcrops exposing the 400 m-thick fault zone over a continuous area > 1.5 km2, the fault zone architecture has been quantitatively described with an unprecedented detail, providing a rich dataset to generate 3D Discrete Fracture Network (DFN) models and simulate the fault zone hydraulic properties. The fault and fracture network has been characterized combining > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed obtaining robust probability density functions for parameters of fault and fracture sets: orientation, fracture intensity and density, spacing, persistency, length, thickness/aperture, termination. The spatial distribution of fractures (random, clustered, anticlustered…) has been characterized with geostatistics. Evidences of fluid/rock interaction (alteration halos, hydrothermal veins, etc.) have been mapped on the same outcrops, revealing sectors of the fault zone strongly impacted, vs. completely unaffected, by fluid/rock interaction, separated by convolute infiltration fronts. Field and microstructural evidence revealed that higher permeability was obtained in the syn- to early post-seismic period, when fractures were (re)opened by off-fault deformation. We have developed a parametric hydraulic model of the GLFZ and calibrated it, varying the fraction of faults/fractures that were open in the post-seismic, with the goal of obtaining realistic fluid flow and permeability values, and a flow pattern consistent with the observed alteration/mineralization pattern. The fraction of open fractures is very close to the percolation threshold of the DFN, and the permeability tensor is strongly anisotropic

  12. Reliability Growth Modeling and Optimal Release Policy Under Fuzzy Environment of an N-version Programming System Incorporating the Effect of Fault Removal Efficiency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Failure of a safety-critical system can lead to big losses. Very high software reliability is required for automating the working of systems such as aircraft controllers and nuclear reactor controller software systems. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using fault-tolerant schemes such as fault recovery (recovery block scheme), fault masking (N-version programming (NVP)), or a combination of both (hybrid scheme). Such software incorporates the ability of the system to survive even on a failure. Many researchers in the field of software engineering have done excellent work to study the reliability of fault-tolerant systems. Most of them consider the stable system reliability. Few attempts have been made in reliability modeling to study the reliability growth of an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In that model, a proportion of the number of failures is assumed to be a measure of fault generation, whereas an appropriate measure of fault generation should be the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effect of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide measures of debugging effectiveness and the additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of the software to improve its performance in terms of competition and cost. In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss a fuzzy optimization technique for solving the problem with a numerical illustration.
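
    As a generic, minimal illustration of software reliability growth modelling (not the paper's NVP model with fault removal efficiency), the sketch below fits the classical Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) to synthetic cumulative failure counts; the data and starting values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP SRGM: expected cumulative failures."""
    return a * (1.0 - np.exp(-b * t))

# Synthetic testing data: week index and cumulative failures observed (made up).
weeks = np.arange(1, 21, dtype=float)
cum_failures = np.array([ 5,  9, 14, 17, 21, 24, 26, 29, 30, 32,
                         33, 35, 36, 37, 38, 38, 39, 40, 40, 41], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(50.0, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}/week")

# Expected residual faults after week 20 -- the kind of quantity that feeds
# an optimal release-time decision of the sort formulated in the paper.
print("expected remaining faults:", a_hat - goel_okumoto(20.0, a_hat, b_hat))
```

    The release-time problem then balances the cost of further testing against the expected cost of the residual faults, which the paper treats under fuzzy parameters.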

  13. Fault detection and diagnosis of induction motors using motor current signature analysis and a hybrid FMM-CART model.

    Science.gov (United States)

    Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan

    2012-01-01

    In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
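
    A stripped-down version of the MCSA-plus-tree idea is sketched below: spectral amplitudes around assumed fault sideband frequencies are used as features for a CART-style decision tree (scikit-learn's DecisionTreeClassifier). The FMM stage, the real current signatures, the sampling rate and the sideband locations are all placeholders, not values from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FS = 2000.0          # sampling rate (Hz), assumed
F_LINE = 50.0        # supply frequency (Hz), assumed
SIDEBANDS = [F_LINE - 4.0, F_LINE + 4.0]   # hypothetical fault sideband locations

def band_amplitude(signal, freq, width=1.0):
    """Peak power-spectrum amplitude in a narrow band around `freq`."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    f = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    mask = (f > freq - width) & (f < freq + width)
    return spec[mask].max()

def synth_current(faulty, rng):
    """Toy stator current: mains component plus optional fault sidebands and noise."""
    t = np.arange(0.0, 1.0, 1.0 / FS)
    x = np.sin(2 * np.pi * F_LINE * t) + 0.02 * rng.standard_normal(t.size)
    if faulty:
        for fb in SIDEBANDS:
            x += 0.05 * np.sin(2 * np.pi * fb * t)
    return x

rng = np.random.default_rng(2)
X, y = [], []
for label in (0, 1):                 # 0 = healthy, 1 = faulty
    for _ in range(50):
        s = synth_current(bool(label), rng)
        X.append([band_amplitude(s, fb) for fb in SIDEBANDS])
        y.append(label)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

    The tree structure learned here is also what yields the explanatory decision rules that the authors highlight as a benefit of the CART half of the hybrid.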

  14. Rough Faults, Distributed Weakening, and Off-Fault Deformation

    Science.gov (United States)

    Griffith, W. A.; Nielsen, S. B.; di Toro, G.; Smith, S. A.; Niemeijer, A. R.

    2009-12-01

    We report systematic spatial variations of fault rocks along non-planar strike-slip faults cross-cutting the Lake Edison Granodiorite, Sierra Nevada, California (Sierran Wavy Fault) and the Lobbia outcrops of the Adamello Batholith in the Italian Alps (Lobbia Wavy Fault). In the case of the Sierran fault, pseudotachylyte formed at contractional fault bends, where it is found as thin (1-2 mm) fault-parallel veins. Epidote and chlorite developed in the same seismic context as the pseudotachylyte and are especially abundant in extensional fault bends. We argue that the presence of fluids, as illustrated by this example, does not necessarily preclude the development of frictional melt. In the case of the Lobbia fault, pseudotachylyte is present in variable thickness along the length of the fault, but the pseudotachylyte veins thicken and pool in extensional bends. The Lobbia fault surface is self-affine, and we conduct a quantitative analysis of microcrack distribution, stress, and friction along the fault. Numerical modeling results show that opening in extensional bends and localized thermal weakening in contractional bends counteract resistance encountered by fault waviness, resulting in an overall weaker fault than suggested by the corresponding static friction coefficient. Models also predict stress redistribution around bends in the faults which mirror microcrack distributions, indicating significant elastic and anelastic strain energy is dissipated into the wall rocks due to non-planar fault geometry. Together these observations suggest that, along non-planar faults, damage and energy dissipation occurs along the entire fault during slip, rather than being confined to the region close to the crack tip as predicted by classical fracture mechanics.

  15. Aero-engine fault diagnosis applying new fast support vector algorithm

    Institute of Scientific and Technical Information of China (English)

    XU Qi-hua; GENG Shuai; SHI Jun

    2012-01-01

    A new fast learning algorithm is presented to solve the large-scale support vector machine (SVM) training problem of aero-engine fault diagnosis. The relative boundary vectors (RBVs), instead of all the original training samples, are used for the training of the binary SVM fault classifiers. This pruning strategy decreases the number of final training samples significantly while keeping the classification accuracy almost invariable. Accordingly, the training time is shortened to 1/20 of that of the basic SVM classifier. Meanwhile, owing to the reduction in the number of support vectors, the classification time is also reduced. When sample aliasing exists, the aliasing sample points that are not of the same class are eliminated before the relative boundary vectors are computed. Besides, the samples near the relative boundary vectors are selected for SVM training in order to prevent the loss of key sample points caused by aliasing. This improves classification accuracy effectively. A simulation example to classify 5 classes of combination faults of aero-engine gas path components was carried out, and the total fault classification accuracy reached 96.1%. Simulation results show that this fast learning algorithm is effective, reliable, and easy to implement for engineering application.
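
    The pruning idea — train the SVM only on samples near the class boundary — can be imitated with a much cruder heuristic than the paper's relative boundary vectors: keep the points of each class that lie closest to the opposite class. The sketch below uses that stand-in selection with scikit-learn's SVC; the synthetic two-dimensional data and the retention fraction are arbitrary.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Two synthetic classes standing in for spectral features of gas-path faults.
X0 = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(500, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(500, 2))

def boundary_subset(A, B, keep_fraction=0.2):
    """Keep the samples of A closest to class B (crude stand-in for RBV selection)."""
    d = cdist(A, B).min(axis=1)
    keep = np.argsort(d)[: int(keep_fraction * len(A))]
    return A[keep]

Xr = np.vstack([boundary_subset(X0, X1), boundary_subset(X1, X0)])
yr = np.array([0] * (len(Xr) // 2) + [1] * (len(Xr) // 2))

X_full = np.vstack([X0, X1])
y_full = np.array([0] * len(X0) + [1] * len(X1))

svm_small = SVC(kernel="rbf", gamma="scale").fit(Xr, yr)   # trained on 20% of the samples
print("accuracy of reduced-set SVM on all data:", svm_small.score(X_full, y_full))
```

    Because only boundary-adjacent points can become support vectors, discarding the interior points barely changes the decision surface, which is the intuition behind the speed-up reported above.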

  16. Fault Tree Analysis: A survey of the state-of-the-art in modeling, analysis and tools

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Stoelinga, Mariëlle Ida Antoinette

    2014-01-01

    Fault tree analysis (FTA) is a very prominent method to analyze the risks related to safety and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software

  17. Fault tree analysis: A survey of the state-of-the-art in modeling, analysis and tools

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Stoelinga, Mariëlle Ida Antoinette

    2015-01-01

    Fault tree analysis (FTA) is a very prominent method to analyze the risks related to safety and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software

  18. Kinematic analysis and analogue modelling of the Passeier- and Jaufen faults: Implications for crustal indentation in the Eastern Alps

    NARCIS (Netherlands)

    Luth, S.; Willingshofer, E.; ter Borgh, M.; Sokoutis, D.; van Otterloo, J.; Versteeg, A.

    2013-01-01

    Crustal deformation in front of an indenter is often affected by the indenter's geometry, rheology, and motion path. In this context, the kinematics of the Jaufen- and Passeier faults have been studied by carrying out paleostress analysis in combination with crustal-scale analogue modelling to infer

  19. Identifying technical aliases in SELDI mass spectra of complex mixtures of proteins

    Science.gov (United States)

    2013-01-01

    Background Biomarker discovery datasets created using mass spectrum protein profiling of complex mixtures of proteins contain many peaks that represent the same protein with different charge states. Correlated variables such as these can confound the statistical analyses of proteomic data. Previously we developed an algorithm that clustered mass spectrum peaks that were biologically or technically correlated. Here we demonstrate an algorithm that clusters correlated technical aliases only. Results In this paper, we propose a preprocessing algorithm that can be used for grouping technical aliases in mass spectrometry protein profiling data. The stringency of the variance allowed for clustering is customizable, thereby affecting the number of peaks that are clustered. Subsequent analysis of the clusters, instead of individual peaks, helps reduce difficulties associated with technically-correlated data, and can aid more efficient biomarker identification. Conclusions This software can be used to pre-process and thereby decrease the complexity of protein profiling proteomics data, thus simplifying the subsequent analysis of biomarkers by decreasing the number of tests. The software is also a practical tool for identifying which features to investigate further by purification, identification and confirmation. PMID:24010718
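
    The grouping of technically correlated peaks can be sketched as a greedy clustering on the peak-intensity correlation matrix, with the correlation threshold playing the role of the customizable stringency mentioned above; the peak matrix below is synthetic rather than real SELDI data, and the algorithm is a simplified stand-in for the published one.

```python
import numpy as np

def cluster_correlated_peaks(intensities, threshold=0.9):
    """Greedy grouping: a peak joins a cluster if it correlates strongly with the seed peak.

    `intensities` is (n_samples, n_peaks); returns a list of peak-index clusters.
    """
    corr = np.corrcoef(intensities, rowvar=False)
    unassigned = list(range(corr.shape[0]))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [j for j in unassigned if corr[seed, j] >= threshold]
        for j in members[1:]:
            unassigned.remove(j)
        clusters.append(members)
    return clusters

# Synthetic example: peaks 0-2 mimic charge-state aliases of one protein.
rng = np.random.default_rng(4)
base = rng.lognormal(mean=2.0, sigma=0.5, size=100)
peaks = np.column_stack([
    base * 1.0 + rng.normal(0, 0.05, 100),   # alias 1
    base * 0.5 + rng.normal(0, 0.05, 100),   # alias 2 (different charge state)
    base * 0.3 + rng.normal(0, 0.05, 100),   # alias 3
    rng.lognormal(1.5, 0.5, 100),            # unrelated peak
])
print(cluster_correlated_peaks(peaks, threshold=0.9))
```

    Testing one representative per cluster instead of every peak is what reduces the number of statistical tests in the downstream biomarker analysis.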

  20. Extended reach OFDM-PON using super-Nyquist image induced aliasing.

    Science.gov (United States)

    Guo, Changjian; Liang, Jiawei; Liu, Jie; Liu, Liu

    2015-08-24

    We investigate a novel dispersion compensating technique in double sideband (DSB) modulated and directed-detected (DD) passive optical network (PON) systems using super-Nyquist image induced aliasing. We show that diversity is introduced to the higher frequency components by deliberate aliasing using the super-Nyquist images. We then propose to use fractional sampling and per-subcarrier maximum ratio combining (MRC) to harvest this diversity. We evaluate the performance of conventional orthogonal frequency division multiplexing (OFDM) signals along with discrete Fourier transform spread (DFT-S) OFDM and code-division multiplexing OFDM (CDM-OFDM) signals using the proposed scheme. The results show that the DFT-S OFDM signal has the best performance due to spectrum spreading and its superior peak-to-average power ratio (PAPR). By using the proposed scheme, the reach of a 10-GHz bandwidth QPSK modulated OFDM-PON can be extended to around 90 km. We also experimentally show that the achievable data rate of the OFDM signals can be effectively increased using the proposed scheme when adaptive bit loading is applied, depending on the transmission distance. A 10.5% and 5.2% increase in the achievable bit rate can be obtained for DSB modulated OFDM-PONs in 48.3-km and 83.2-km standard single mode fiber (SSMF) transmission cases, respectively, without any modification on the transmitter. A 40-Gb/s OFDM transmission over 83.2-km SSMF is successfully demonstrated.

  1. Geothermal modelling of faulted metamorphic crystalline crust: a new model of the Continental Deep Drilling Site KTB (Germany)

    Science.gov (United States)

    Szalaiová, Eva; Rabbel, Wolfgang; Marquart, Gabriele; Vogt, Christian

    2015-11-01

    The area of the 9.1-km-deep Continental Deep Drillhole (KTB) in Germany is used as a case study for a geothermal reservoir situated in folded and faulted metamorphic crystalline crust. The presented approach is based on the analysis of 3-D seismic reflection data combined with borehole data and hydrothermal numerical modelling. The KTB location exemplarily contains all elements that make seismic prospecting in a crystalline environment often more difficult than in sedimentary units, basically complicated tectonics and fracturing and low-coherent strata. In a first step, major rock units, including two known nearly parallel fault zones, are identified down to a depth of 12 km. These units form the basis of a gridded 3-D numerical model for investigating temperature and fluid flow. Conductive and advective heat transport takes place mainly in a metamorphic block composed of gneisses and metabasites that show considerable differences in thermal conductivity and heat production. Therefore, in a second step, the structure of this unit is investigated by seismic waveform modelling. The third step of interpretation consists of applying wavenumber filtering and log-Gabor filtering for locating fractures. Since fracture networks are the major fluid pathways in the crystalline rock, we associate the fracture density distribution with distributions of relative porosity and permeability that can be calibrated by logging data and forward modelling of the temperature field. The resulting permeability distribution shows values between 10⁻¹⁶ and 10⁻¹⁹ m² and does not correlate with particular rock units. Once thermohydraulic rock properties are attributed to the numerical model, the differential equations for heat and fluid transport in porous media are solved numerically based on a finite difference approach. The hydraulic potential caused by topography and a heat flux of 54 mW m⁻² were applied as boundary conditions at the top and bottom of the model. Fluid flow is generally slow and

  2. Experimental verification of the model for formation of double Shockley stacking faults in highly doped regions of PVT-grown 4H–SiC wafers

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yu; Guo, Jianqiu; Goue, Ouloide; Raghothamachar, Balaji; Dudley, Michael; Chung, Gil; Sanchez, Edward; Quast, Jeff; Manning, Ian; Hansen, Darren

    2016-10-01

    Recently, we reported on the formation of overlapping rhombus-shaped stacking faults, during high-temperature annealing of a PVT-grown 4H–SiC wafer, from scratches left by chemical mechanical polishing. These stacking faults are restricted to highly N-doped regions of the wafer. The type of these stacking faults was determined to be Shockley by analyzing the behavior of their area contrast using synchrotron white-beam X-ray topography. A model was proposed to explain the formation mechanism of the rhombus-shaped stacking faults based on double Shockley fault nucleation and propagation. In this paper, we have experimentally verified this model by characterizing the configuration of the bounding partials of the stacking faults on both surfaces using synchrotron topography in back-reflection geometry. As predicted by the model, on both the Si and C faces the leading partials bounding the rhombus-shaped stacking faults are 30° Si-core and the trailing partials are 30° C-core. Finally, using high-resolution transmission electron microscopy, we have verified that the enclosed stacking fault is of the double Shockley type.

  3. Predictive permeability model of faults in crystalline rocks; verification by joint hydraulic factor (JH) obtained from water pressure tests

    Science.gov (United States)

    Barani, Hamidreza Rostami; Lashkaripour, Gholamreza; Ghafoori, Mohammad

    2014-08-01

    In the present study, a new model is proposed to predict the permeability per fracture in the fault zones by a new parameter named joint hydraulic factor (JH). JH is obtained from Water Pressure Tests (WPT) and modified by the degree of fracturing. The results of JH correspond with quantitative fault zone descriptions, qualitative fracture, and fault rock properties. In this respect, a case study was done based on the data collected from the Seyahoo dam site located in the east of Iran to provide the permeability prediction model of fault zone structures. Datasets including scan-lines, drill cores, and water pressure tests in the terrain of Andesite and Basalt rocks were used to analyse the variability of in-situ relative permeability of a range from fault zones to host rocks. The rock mass joint permeability quality, therefore, is defined by the JH. JH data analysis showed that the background sub-zone commonly had < 3 Lu (less than 5 × 10⁻⁵ m³/s) per fracture, whereas the fault core had permeability characteristics nearly as low as the outer damage zone, represented by 8 Lu (1.3 × 10⁻⁴ m³/s) per fracture, with occasional peaks towards 12 Lu (2 × 10⁻⁴ m³/s) per fracture. The maximum JH value belongs to the inner damage zone, marginal to the fault core, with 14-22 Lu (2.3 × 10⁻⁴-3.6 × 10⁻⁴ m³/s) per fracture, locally exceeding 25 Lu (4.1 × 10⁻⁴ m³/s) per fracture. This gives a proportional relationship for JH of approximately 1:4:2 between the fault core, inner damage zone, and outer damage zone of extensional fault zones in crystalline rocks. The results of the verification exercise revealed that the new approach would be efficient and that the JH parameter is a reliable scale for the fracture permeability change. It can be concluded that using short duration hydraulic tests (WPTs) and fracture frequency (FF) to calculate the JH parameter provides a possibility to describe a complex situation and compare, discuss, and weigh the hydraulic quality to make predictions as to the permeability models and permeation amounts of different fault zones.

  4. Predictive model of San Andreas fault system paleogeography, Late Cretaceous to early Miocene, derived from detailed multidisciplinary conglomerate correlations

    Science.gov (United States)

    Burnham, Kathleen

    2009-01-01

    shapes of tectonic blocks in a paleogeographic resolution synthesizing both the proposed 315 and ~30 km San Andreas fault offsets, as well as honoring the lithologic correlation of Anchor Bay with Eagle Rest peak. The model has proved predictive. Since its first introduction, in April and June 1998, other authors have reported seven subsequently identified correlative pairs of geological and geophysical features consistent with it. These required both lateral and temporal expansion of the model: the paleogeography now incorporates 58 pairs of correlative features, covers the period from 70 to 21.3 Ma, and extends from Pelona and Orocopia to the Mendocino triple junction. The model supports the view that the boundary between the Pacific and North American plates is not a single fault but a wide zone encompassing the San Andreas fault system. However, the model suggests the San Andreas fault is a temporary assemblage of separate segments having different motions, and is neither the primary plate boundary nor the dominant fault of the San Andreas fault system. By improving resolution of the complex spatial-temporal distribution of slip along this evolving tectonic margin, the model provides a firmer foundation for resolution of geophysical issues such as slab window and Pacific Plate subduction models.

  5. The effects of pre-existing discontinuities on the surface expression of normal faults: Insights from wet-clay analog modeling

    Science.gov (United States)

    Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Burrato, Pierfrancesco; Seno, Silvio; Valensise, Gianluca

    2016-08-01

    We use wet-clay analog models to investigate how pre-existing discontinuities (i.e. structures inherited from previous tectonic phases) affect the evolution of a normal fault at the Earth's surface. To this end we first perform a series of three reference experiments driven by a 45° dipping master fault unaffected by pre-existing discontinuities to generate a mechanically isotropic learning set of models. We then replicate the experiment six times introducing a 60°-dipping precut in the clay cake, each time with a different attitude and orientation with respect to an initially-blind, 45°-dipping, master normal fault. In all experiments the precut intersects the vertical projection of the master fault halfway between the center and the right-hand lateral tip. All other conditions are identical for all seven models. By comparing the results obtained from the mechanically isotropic experiments with results from experiments with precuts we find that the surface evolution of the normal fault varies depending on the precut orientation. In most cases the parameters of newly-forming faults are strongly influenced. The largest influence is exerted by synthetic and antithetic discontinuities trending respectively at 30° and 45° from the strike of the master fault, whereas a synthetic discontinuity at 60° and an antithetic discontinuity at 30° show moderate influence. Little influence is exerted by a synthetic discontinuity at 45° and an antithetic discontinuity at 60° from the strike of the master fault. We provide a ranking chart to assess fault-to-discontinuity interactions with respect to essential surface fault descriptors, such as segmentation, vertical-displacement profile, maximum displacement, and length, often used as proxies to infer fault properties at depth. Considering a single descriptor, the amount of deviation induced by different precuts varies from case to case in a rather unpredictable fashion. Multiple observables should be taken into

  6. Episodic slow slip events in a non-planar subduction fault model for northern Cascadia

    Science.gov (United States)

    Li, D.; Liu, Y.; Matsuzawa, T.; Shibazaki, B.

    2014-12-01

    Episodic tremor and slow slip (ETS) events have been detected along the Cascadia margin, as well as in many other subduction zones, by increasingly dense seismic and geodetic networks over the past decade. In northern Cascadia, ETS events arise on the thrust fault interface at 30-50 km depth, coincident with metamorphic dehydration of the subducting oceanic slab around temperatures of 350 °C. Previous numerical simulations (e.g., Liu and Rice 2007) suggested that near-lithostatic pore pressure in the rate-state friction stability transition zone could give rise to slow slip events (SSEs) down-dip of the seismogenic zone, which provides a plausible physical mechanism for these phenomena. Here we present a 3-D numerical simulation of inter-seismic SSEs based on the rate- and state-dependent friction law, incorporating a non-planar, realistic northern Cascadia slab geometry compiled by McCrory et al. (2012) using triangular dislocation elements. Preliminary results show that the width and pore pressure level of the transition zone can markedly affect the recurrence of SSEs. With an effective normal stress of ~1-2 MPa and a characteristic slip distance of ~1.4 mm, inter-seismic SSEs can arise about every year. The duration of each event is about 2-3 weeks, with an along-strike propagation speed on the order of kilometres per day. Furthermore, the slab bending beneath southern Vancouver Island and northern Washington State appears to accelerate the along-strike propagation of SSEs. Our next step is to constrain the rate-state frictional properties using geodetic inversion of SSE slip and inter-SSE plate coupling from the Plate Boundary Observatory (PBO) GPS measurements. Incorporating the realistic fault geometry into a physics-based model constrained by geodetic data will enable us to transition from a conceptual towards a quantitative and predictive understanding of the SSE mechanism.
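
    The rate- and state-dependent friction framework invoked here can be illustrated with the standard quasi-dynamic single-degree-of-freedom spring-slider using the aging law; the parameter values below are illustrative only (chosen merely to sit below critical stiffness) and are not those of the Cascadia model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate-and-state parameters; NOT the Cascadia values of the study.
a_f, b_f = 0.010, 0.014     # friction parameters (velocity-weakening since b > a)
Dc       = 1.0e-3           # characteristic slip distance (m)
sigma    = 2.0e6            # effective normal stress (Pa); low, mimicking high pore pressure
k_crit   = sigma * (b_f - a_f) / Dc
k        = 0.8 * k_crit     # loading stiffness just below critical -> oscillatory slip
eta      = 5.0e6            # radiation-damping coefficient ~G/(2*c_s) (Pa*s/m), assumed
V_pl     = 1.0e-9           # plate loading rate (m/s)

def rhs(t, y):
    """Quasi-dynamic spring-slider with the aging state-evolution law."""
    V, theta = y
    dtheta = 1.0 - V * theta / Dc   # aging law
    # Stress rate balance: k*(V_pl - V) - eta*dV = sigma*(a/V)*dV + sigma*(b/theta)*dtheta
    dV = (k * (V_pl - V) - sigma * b_f * dtheta / theta) / (a_f * sigma / V + eta)
    return [dV, dtheta]

y0 = [2.0 * V_pl, Dc / V_pl]        # slightly perturbed velocity, steady-state theta
sol = solve_ivp(rhs, (0.0, 3.0e8), y0, method="LSODA",
                rtol=1e-6, atol=[1e-15, 1e-6], max_step=1e5)

print("peak slip rate / plate rate:", sol.y[0].max() / V_pl)
```

    Whether such a slider produces damped oscillations, repeating slow-slip episodes or fast instabilities depends mainly on the stiffness ratio k/k_crit and the damping term, which mirrors the sensitivity to transition-zone properties described in the abstract.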

  7. Fault Diagnosis and Detection in Industrial Motor Network Environment Using Knowledge-Level Modelling Technique

    Directory of Open Access Journals (Sweden)

    Saud Altaf

    2017-01-01

    Full Text Available In this paper, the broken rotor bar (BRB) fault is investigated by utilizing the Motor Current Signature Analysis (MCSA) method. In an industrial environment, an induction motor is nominally symmetrical, yet it may exhibit noticeable electrical signal components at different fault frequencies due to manufacturing errors, inappropriate motor installation, and other influencing factors. The misalignment experiments revealed that improper motor installation could lead to an unexpected frequency peak, which will affect the motor fault diagnosis process. Furthermore, a noisy manufacturing and operating environment could also disturb the motor fault diagnosis process. This paper presents an efficient supervised Artificial Neural Network (ANN) learning technique that is able to identify the fault type when the diagnosis situation is uncertain. Significant features are extracted from the electric current, based on the different frequency points and the amplitude values associated with each fault type. The simulation results showed that the proposed technique was able to diagnose the target fault type. The ANN architecture worked well with the selection of a significant number of feature data sets. According to the results, accuracy in fault detection with the feature vectors has been achieved through the classification performance, and the confusion error percentage between the healthy and faulty conditions of the motor is acceptable.
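
    A minimal stand-in for the supervised ANN stage described above: amplitudes at a few assumed fault-related sideband frequencies form the feature vector for a small scikit-learn MLPClassifier. The network size, the number of features and the synthetic data are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def features(n, broken_bar):
    """Toy feature vectors: amplitudes at assumed sideband frequencies of the supply.

    Healthy motors show only noise there; a broken rotor bar raises the sideband amplitudes.
    """
    base = 0.30 if broken_bar else 0.02
    return base + 0.02 * rng.standard_normal((n, 4))

X = np.vstack([features(200, False), features(200, True)])
y = np.array([0] * 200 + [1] * 200)            # 0 = healthy, 1 = broken rotor bar
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

    In a real MCSA setting the feature amplitudes would come from the current spectrum rather than from a synthetic generator, but the classification step is the same.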

  8. Fault Slip Model of 2013 Lushan Earthquake Retrieved Based on GPS Coseismic Displacements

    Institute of Scientific and Technical Information of China (English)

    Mengkui Li; Shuangxi Zhang; Chaoyu Zhang; Yu Zhang

    2015-01-01

    The Lushan earthquake (~Mw 6.6), which occurred in Sichuan Province, China, on 20 April 2013, was the largest earthquake on the Longmenshan fault belt since the 2008 Wenchuan earthquake. To better understand its rupture pattern, we focused on the influence of fault parameters on fault slip and performed a fault slip inversion using Akaike's Bayesian Information Criterion (ABIC) method. Based on GPS coseismic data, our inverted results showed that the fault slip was mainly confined at depth. The maximum slip amplitude is about 0.7 m, and the scalar seismic moment is about 9.47 × 10¹⁸ N·m. The slip pattern reveals that the earthquake occurred on a thrust fault with large dip-slip and small strike-slip components; such a simple slip pattern indicates that no second sub-event occurred. The Coulomb stress changes (ΔCFF) matched most of the aftershocks with negative anomalies. The inverted results demonstrate that the source parameters have significant impacts on the fault slip distribution, especially on the slip direction and maximum displacement.
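
    For readers unfamiliar with the quantity ΔCFF used above, the standard textbook definition of the Coulomb failure stress change resolved on a receiver fault is given below; this is general background, not a formula taken from the paper.

```latex
% Coulomb failure stress change on a receiver fault plane:
\Delta \mathrm{CFF} \;=\; \Delta\tau_{s} \;+\; \mu'\,\Delta\sigma_{n}
```

    Here Δτ_s is the shear-stress change resolved in the slip direction, Δσ_n is the normal-stress change (taken positive for unclamping), and μ' is the effective friction coefficient; positive values bring the receiver fault closer to failure and negative values move it away.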

  9. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2015-01-01

    to describe the system over a frequency range. Two methods based on the Kalman Filter and Statistical Change Detection techniques are proposed for detecting degradation faults and component failures, respectively. Finally, a reference correction setup is used to compensate for degradation faults....

  10. Variogram Identification Aided by a Structural Framework for Improved Geometric Modeling of Faulted Reservoirs: Jeffara Basin, Southeastern Tunisia

    Energy Technology Data Exchange (ETDEWEB)

    Chihi, Hayet, E-mail: hayet_chihi@yahoo.fr; Bedir, Mourad [University of Cartage, Georesources Laboratory, Centre for Water Researches and Technologies (Tunisia); Belayouni, Habib [University of Tunis El Manar, Department of Geology, Faculty of Sciences of Tunis (Tunisia)

    2013-06-15

    This article describes a proposed work-sequence to generate accurate reservoir-architecture models, describing the geometry of bounding surfaces (i.e., fault locations and extents), of a structurally complex geologic setting in the Jeffara Basin (South East Tunisia) by means of geostatistical modeling. This uses the variogram as the main tool to measure the spatial variability of the studied geologic medium before making any estimation or simulation. However, it is not always easy to fit complex experimental variograms to theoretical models. Thus, our primary purpose was to establish a relationship between the geology and the components of the variograms to fit a mathematically consistent and geologically interpretable variogram model for improved predictions of surface geometries. We used a three-step approach based on available well data and seismic information. First, we determined the structural framework: a seismo-tectonic data analysis was carried out, and we showed that the study area is cut mainly by NW-SE-trending normal faults, which were classified according to geometric criteria (strike, throw magnitude, dip, and dip direction). We showed that these normal faults are at the origin of a large-scale trend structure (surfaces tilted toward the north-east). At a smaller scale, the normal faults create a distinct compartmentalization of the reservoirs. Then, a model of the reservoir system architecture was built by geostatistical methods. An efficient methodology was developed, to estimate the bounding faulted surfaces of the reservoir units. Emphasis was placed on (i) elaborating a methodology for variogram interpretation and modeling, whereby the importance of each variogram component is assessed in terms of probably geologic factor controlling the behavior of each structure; (ii) integrating the relevant fault characteristics, which were deduced from the previous fault classification analysis, as constraints in the kriging estimation of bounding surfaces
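
    The variogram that anchors this workflow has a simple empirical estimator. The sketch below computes an omnidirectional experimental semivariogram for scattered well picks; the coordinates, horizon depths, lags and tolerance are synthetic stand-ins, not the Jeffara data.

```python
import numpy as np

def experimental_variogram(coords, values, lags, tol):
    """Classical (Matheron) estimator: gamma(h) = 1/(2 N(h)) * sum (z_i - z_j)^2
    over all pairs whose separation falls within `tol` of each lag `h`."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # count each pair once
    d, s = dist[iu], sq[iu]
    gamma = []
    for h in lags:
        sel = (d >= h - tol) & (d < h + tol)
        gamma.append(0.5 * s[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)

# Synthetic "well" locations (km) and horizon depths (m) with a regional trend plus noise.
rng = np.random.default_rng(6)
xy = rng.uniform(0, 50, size=(80, 2))
depth = 1200 + 8.0 * xy[:, 0] - 5.0 * xy[:, 1] + rng.normal(0, 15, 80)

lags = np.arange(2.5, 40, 5.0)
print(np.round(experimental_variogram(xy, depth, lags, tol=2.5), 1))
```

    Fitting a nested theoretical model (for example a nugget plus one or more structured components) to such points, and interpreting each component geologically, is the step the authors formalize for the faulted Jeffara reservoirs.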

  11. Fault slip-rates derived from modeling of on-shore marine terraces in the western Corinth Gulf

    Science.gov (United States)

    Palyvos, Nikos; De Martini, Paolo Marco; Mancini, Marco; Pantosti, Daniela

    2013-04-01

    Data available for estimating fault slip-rates from accurate modeling of uplifted on-shore marine terraces, although limited, are all derivatives of the research activity performed from 2004 to 2007 within the 3HAZ-Corinth project. We concentrated our efforts in the Aravonitsa area, where we had a nicely preserved staircase of marine terraces. In particular, we recognized and mapped in detail all the marine terraces by qualitative DEM analysis, airphoto interpretation and field survey, and we adopted a forward modeling procedure to fit the data. The modeling approach used in this work does not take into account any effect related to sedimentation, compaction and erosion, nor any interseismic adjustment; thus, being purely based on coseismic deformation, the obtained results should be considered as maxima. In the study area we were able to recognize several surfaces related to sea-level still stands. Their areal distribution and elevation are strongly influenced by past intense erosion of the underlying weakly consolidated sediments, and by the activity of secondary faults at the footwall of the Neos Erineos Fault, which is part of the Lambiri - Neos Erineos - Aigion Fault zone (LANEfz). The Neos Erineos Fault has been studied and investigated in detail and appears as one of the main N-dipping normal faults bounding the southern shore of the Corinth Gulf and accommodating part of the observed N-S extension. U/Th-series age dates and nannoplankton analyses, performed on selected samples collected at different heights on the studied surfaces, allowed us to reconstruct an almost complete and chronologically well constrained transect of uplifted marine terraces belonging to the Late Quaternary (as old as 350 ka). A tentative correlation with marine isotopic stages (MIS), and specifically with the main highstands of the Late Quaternary eustatic sea-level curve, was attempted in order to calculate the footwall uplift rate for the Neos Erineos Fault. The calculated

  12. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

    In the past couple of decades, the massive computational power provided by the most modern supercomputers has resulted in simulation of higher-order computational chemistry methods, previously considered intractable. As the system sizes continue to increase, the computational chemistry domain continues to escalate this trend using parallel computing with programming models such as Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) programming models such as Global Arrays. The ever increasing scale of these supercomputers comes at a cost of reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme scale systems. While traditional disk-based check pointing methods are ubiquitous for storing intermediate solutions, they suffer from high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative to handling the aggravating issue of reducing MTBF. In this paper, we address this challenge by designing and implementing an efficient fault tolerant version of the Coupled Cluster (CC) method with NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state of the art implementation without faults.

  13. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  14. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    Science.gov (United States)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASAHDBK- 1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provide hints to a modeling approach to

  15. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes the challe...

  16. Virtual prototype and experimental research on gear multi-fault diagnosis using wavelet-autoregressive model and principal component analysis method

    Science.gov (United States)

    Li, Zhixiong; Yan, Xinping; Yuan, Chengqing; Peng, Zhongxiao; Li, Li

    2011-10-01

    Gear systems are an essential element widely used in a variety of industrial applications. Since approximately 80% of the breakdowns in transmission machinery are caused by gear failure, the efficiency of early fault detection and accurate fault diagnosis are therefore critical to normal machinery operations. Reviewed literature indicates that only limited research has considered the gear multi-fault diagnosis, especially for single, coupled distributed and localized faults. Through virtual prototype simulation analysis and experimental study, a novel method for gear multi-fault diagnosis has been presented in this paper. This new method was developed based on the integration of Wavelet transform (WT) technique, Autoregressive (AR) model and Principal Component Analysis (PCA) for fault detection. The WT method was used in the study as the de-noising technique for processing raw vibration signals. Compared with the noise removing method based on the time synchronous average (TSA), the WT technique can be performed directly on the raw vibration signals without the need to calculate any ensemble average of the tested gear vibration signals. More importantly, the WT can deal with coupled faults of a gear pair in one operation while the TSA must be carried out several times for multiple fault detection. The analysis results of the virtual prototype simulation prove that the proposed method is a more time efficient and effective way to detect coupled fault than TSA, and the fault classification rate is superior to the TSA based approaches. In the experimental tests, the proposed method was compared with the Mahalanobis distance approach. However, the latter turns out to be inefficient for the gear multi-fault diagnosis. Its defect detection rate is below 60%, which is much less than that of the proposed method. Furthermore, the ability of the AR model to cope with localized as well as distributed gear faults is verified by both the virtual prototype simulation and
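
    An outline of the WT-AR-PCA chain, with each stage reduced to its simplest form (PyWavelets soft-threshold denoising, least-squares AR coefficients, scikit-learn PCA); the signal, wavelet, model order and thresholds are placeholders, not the settings used in the paper.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold) and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def ar_coefficients(x, order=8):
    """AR model fitted by least squares; the coefficients serve as the feature vector."""
    X = np.column_stack([x[order - i - 1 : len(x) - i - 1] for i in range(order)])
    y = x[order:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic gear vibration records: healthy vs. a toy "localized fault" with impacts.
rng = np.random.default_rng(7)
t = np.arange(0.0, 1.0, 1.0 / 5000.0)                     # 1 s at 5 kHz (assumed)
records, labels = [], []
for fault in (0, 1):
    for _ in range(20):
        x = np.sin(2 * np.pi * 150 * t) + 0.3 * rng.standard_normal(t.size)
        if fault:
            x += 0.8 * (np.sin(2 * np.pi * 25 * t) > 0.995)   # sparse impact train
        records.append(ar_coefficients(wavelet_denoise(x)))
        labels.append(fault)

scores = PCA(n_components=2).fit_transform(np.array(records))
print("mean PC1, healthy vs faulty:", scores[:20, 0].mean(), scores[20:, 0].mean())
```

    Working on the denoised raw signal, rather than on a time-synchronous average, is what lets a chain like this handle coupled faults of a gear pair in a single pass, which is the advantage over TSA emphasized above.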

  17. Fault plane modelling of the 2003 August 14 Lefkada Island (Greece) earthquake based on the analysis of ENVISAT SAR interferograms

    Science.gov (United States)

    Ilieva, M.; Briole, P.; Ganas, A.; Dimitrov, D.; Elias, P.; Mouratidis, A.; Charara, R.

    2016-12-01

    On 2003 August 14, a Mw = 6.2 earthquake occurred offshore the Lefkada Island in the eastern Ionian Sea, one of the most seismically active areas in Europe. The earthquake caused extended damages in the island, and a number of ground failures, especially along the north-western coast. Seven ascending ENVISAT/ASAR images are used to process six co-seismic interferograms. The ROI-PAC package is used for interferogram generation with the SRTM DEM applied in a two-pass method. The formation of the co-seismic pairs is limited due to the existence of one pre-seismic image only. Dense vegetation is covering the island, which is an obstacle in getting good coherence, since C-band images are used. Nevertheless, ground deformation, of > 56 mm (two fringes) in the line of sight of the satellite, is detected in all six co-seismic interferograms. By inversion of the data from the observed fringes, a best fitting model of the activated fault is calculated assuming a dislocation in an elastic half space. The inferred fault is a pure dextral strike-slip fault, dipping 59 ± 5° eastward, 16 ± 2 km long and 10 ± 2 km wide. It is located north of the fault of the Mw = 6.5 2015 November 17 earthquake, and a 10-15 km gap remains between the two faults. The 2003 fault does not reach the surface and its upper edge is at a depth of 3.5 ± 1 km. No evidence is found of slip south of the Lefkada Island as suggested by some seismological studies.

  18. Earthquake behavior of variable rupture-scale on active faults and application of the cascade-rupturing model

    Institute of Scientific and Technical Information of China (English)

    闻学泽

    2001-01-01

    This study presents a preliminary analysis of the behavior of variable rupture scale on active faults of the Chinese mainland: on an individual fault portion, the earthquake rupture scale varies from cycle to cycle, and hence the earthquake strength changes with time, with no deterministic tendency in this variation. After defining the relative sizes of rupture scales, a statistical analysis shows that ruptures of the same scale in two successive cycles have the lowest probability. When the rupture scale in the preceding cycle is "small", the probability of the following rupture being "large" is as high as 0.48. When the preceding rupture is "middle", the probability of the succeeding rupture being "small" or "large" is 0.69 or 0.25, respectively. When the preceding rupture is "large", the probability of the following rupture also being "large" is zero, and the probability of it being "small" or "middle" is 0.36 or 0.64, respectively. The author introduces and improves the cascade-rupturing model and uses it to describe the variability and complexity of rupture scale on individual fault portions. Basic features of some active strike-slip faults on which cascade ruptures have occurred are summarized. Based on these features, the author proposes principles of cascade-rupture segmentation for this type of fault. As an example application, the author segments one portion of the Anninghe fault zone, western Sichuan, for its future cascade rupture, and further assesses the probable strength of the coming earthquake and its corresponding probability.
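
    The succession probabilities quoted above act like a one-step transition table over the rupture-scale classes. The sketch below simply encodes those quoted numbers (leaving unspecified entries blank) and looks up the distribution of the next rupture scale given the preceding one; it is an illustration, not the author's segmentation procedure, and the middle-to-middle value is inferred as the remainder.

        # One-step "transition matrix" over rupture-scale classes, using only the
        # probabilities quoted in the abstract; unspecified entries are left as NaN.
        import numpy as np

        scales = ["small", "middle", "large"]
        P = np.array([
            #  small   middle  large        (next-cycle scale)
            [np.nan,  np.nan,  0.48],   # preceding "small": only P(next = large) = 0.48 is quoted
            [  0.69,    0.06,  0.25],   # preceding "middle": small 0.69, large 0.25 (middle = remainder, assumed)
            [  0.36,    0.64,  0.00],   # preceding "large": same-scale repetition has probability zero
        ])

        def next_scale_distribution(preceding):
            """Return the quoted probabilities of the next rupture scale."""
            row = P[scales.index(preceding)]
            return dict(zip(scales, row))

        print(next_scale_distribution("large"))   # {'small': 0.36, 'middle': 0.64, 'large': 0.0}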

  19. Dynamic Model and Fault Feature Research of Dual-Rotor System with Bearing Pedestal Looseness

    Directory of Open Access Journals (Sweden)

    Nanfei Wang

    2016-01-01

    Full Text Available The paper presents a finite element model of a dual-rotor system with pedestal looseness stemming from loosened bolts. A dynamic model including bearing pedestal looseness is established based on the dual-rotor test rig. The three-degree-of-freedom (DOF) planar rigid motion of the loose bearing pedestal is fully considered, and a collision recovery coefficient is also introduced into the model. Based on Timoshenko beam elements and using the finite element method, rigid-body kinematics and the Newmark-β algorithm for numerical simulation, the dynamic characteristics of the inner and outer rotors and the planar rigid-body motion of the bearing pedestal under the looseness condition are studied. Looseness experiments under two different speed combinations are also carried out, and the experimental results are largely consistent with each other. Comparison of the simulation results with the experimental results indicates that the vibration displacement waveforms of the loosened rotor exhibit a "clipping" phenomenon. When the bearing pedestal looseness fault occurs, the vibration spectra of the inner and outer rotors contain not only the difference and sum frequencies of the two rotors' fundamental frequencies but also the 2X and 3X components of the rotor with the loosened support; the low-frequency part of the spectrum is richer, containing frequency-division components; the rotor displacement spectra also contain fewer combination frequency components; and when one side of the inner rotor bearing pedestal is loosened, the inner rotor axis trajectory is drawn into an ellipse-like shape.
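
    For readers unfamiliar with the time integration named above, the sketch below shows a generic Newmark-β step (constant average acceleration) for a small linear system M x'' + C x' + K x = f(t). The 2-DOF matrices and forcing are placeholders; this is not the authors' dual-rotor finite element code.

        # Generic Newmark-beta integrator (average acceleration: beta = 1/4, gamma = 1/2)
        # for M x'' + C x' + K x = f(t). Placeholder 2-DOF matrices for illustration.
        import numpy as np

        def newmark(M, C, K, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
            x, v = x0.copy(), v0.copy()
            a = np.linalg.solve(M, f(0.0) - C @ v - K @ x)          # initial acceleration
            Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            history = [x.copy()]
            for k in range(1, n_steps + 1):
                t = k * dt
                rhs = (f(t)
                       + M @ (x / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
                       + C @ (gamma/(beta*dt) * x + (gamma/beta - 1) * v
                              + dt * (gamma/(2*beta) - 1) * a))
                x_new = np.linalg.solve(Keff, rhs)
                a_new = (x_new - x)/(beta*dt**2) - v/(beta*dt) - (1/(2*beta) - 1)*a
                v_new = v + dt*((1 - gamma)*a + gamma*a_new)
                x, v, a = x_new, v_new, a_new
                history.append(x.copy())
            return np.array(history)

        # usage with a toy 2-DOF system driven at one coordinate
        M = np.diag([1.0, 1.0])
        K = np.array([[200.0, -100.0], [-100.0, 100.0]])
        C = 0.02 * K
        resp = newmark(M, C, K, lambda t: np.array([0.0, np.sin(10*t)]),
                       np.zeros(2), np.zeros(2), dt=1e-3, n_steps=2000)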

  20. Development of a GIA (Glacial Isostatic Adjustment) - Fault Model of Greenland

    Science.gov (United States)

    Steffen, R.; Lund, B.

    2015-12-01

    The increase in sea level due to climate change is an intensely discussed phenomenon, while less attention is being paid to the change in earthquake activity that may accompany disappearing ice masses. The melting of the Greenland Ice Sheet, for example, induces changes in the crustal stress field, which could result in the activation of existing faults and the generation of destructive earthquakes. Such glacially induced earthquakes are known to have occurred in Fennoscandia 10,000 years ago. Within a new project ("Glacially induced earthquakes in Greenland", start in October 2015), we will analyse the potential for glacially induced earthquakes in Greenland due to the ongoing melting. The objectives include the development of a three-dimensional (3D) subsurface model of Greenland, which is based on geologic, geophysical and geodetic datasets, and which also fulfils the boundary conditions of glacial isostatic adjustment (GIA) modelling. Here we will present an overview of the project, including the most recently available datasets and the methodologies needed for model construction and the simulation of GIA induced earthquakes.

  1. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
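
    As a worked illustration of the renewal-model probability underlying calculations of this kind, the sketch below evaluates the conditional probability of rupture within a forecast horizon given the elapsed time since the last event, using a lognormal recurrence distribution. The choice of distribution, its parameters and the use of SciPy are assumptions for the example; they do not reproduce the paper's along-fault averaging of renewal parameters.

        # Conditional rupture probability from a renewal model:
        #   P(rupture in (Te, Te + dT] | no rupture up to Te)
        #     = (F(Te + dT) - F(Te)) / (1 - F(Te))
        # using a lognormal recurrence-time distribution (assumed parameterization).
        import numpy as np
        from scipy.stats import lognorm

        def conditional_probability(mean_ri, aperiodicity, elapsed, horizon):
            # lognormal with the requested mean and coefficient of variation (aperiodicity)
            sigma = np.sqrt(np.log(1.0 + aperiodicity ** 2))
            scale = mean_ri / np.exp(sigma ** 2 / 2.0)    # exp(mu), so the mean equals mean_ri
            dist = lognorm(s=sigma, scale=scale)
            survive = 1.0 - dist.cdf(elapsed)
            return (dist.cdf(elapsed + horizon) - dist.cdf(elapsed)) / survive

        # e.g. mean recurrence 200 yr, aperiodicity 0.5, 150 yr elapsed, 30 yr forecast window
        print(conditional_probability(mean_ri=200.0, aperiodicity=0.5,
                                      elapsed=150.0, horizon=30.0))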

  2. Predictive modelling of fault related fracturing in carbonate damage-zones: analytical and numerical models of field data (Central Apennines, Italy)

    Science.gov (United States)

    Mannino, Irene; Cianfarra, Paola; Salvini, Francesco

    2010-05-01

    Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks achieve fracturing both during diagenesis and tectonic processes. Attitude, spatial distribution and connectivity of brittle deformation features rule the secondary permeability of carbonatic rocks and therefore the accumulation and the pathway of deep fluids (ground-water, hydrocarbon). This is particularly true in fault zones, where the damage zone and the fault core show different hydraulic properties from the pristine rock as well as between them. To improve the knowledge of fault architecture and faults hydraulic properties we study the brittle deformation patterns related to fault kinematics in carbonate successions. In particular we focussed on the damage-zone fracturing evolution. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units represent field analogues of rock reservoir in the Southern Apennines. We combine the study of rock physical characteristics of 22 faults and quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracturing type, attitudes, and spatial intensity distribution by using the dimension/spacing ratio, namely H/S ratio where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring and H/S transect) were performed to infer a focussed, general algorithm that describes the expected intensity of fracturing process. The analytical model was fit to field measurements by a Montecarlo-convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameters and results are compared with field data. For each item an error mean value is
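
    The Montecarlo-convergent fitting mentioned above, drawing many candidate parameter sets, scoring each against the measured fracture intensities and keeping the best, can be sketched generically as follows. The assumed intensity law (exponential decay of H/S with distance from the fault core), the parameter ranges and the data values are illustrative assumptions only.

        # Monte Carlo fit of an assumed fracture-intensity law I(d) = I0 * exp(-d/L) + Ib
        # to measured H/S intensities versus distance d from the fault core.
        import numpy as np

        rng = np.random.default_rng(0)
        d_obs = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # m, hypothetical transect
        i_obs = np.array([8.0, 6.5, 4.0, 2.5, 1.5, 1.0])      # H/S ratio, hypothetical

        def intensity(d, I0, L, Ib):
            return I0 * np.exp(-d / L) + Ib

        best_err, best_params = np.inf, None
        for _ in range(100_000):
            I0 = rng.uniform(0.0, 20.0)     # fault-core intensity
            L = rng.uniform(0.5, 50.0)      # decay length of the damage zone
            Ib = rng.uniform(0.0, 3.0)      # background intensity of the pristine rock
            err = np.mean(np.abs(intensity(d_obs, I0, L, Ib) - i_obs))
            if err < best_err:
                best_err, best_params = err, (I0, L, Ib)

        print(best_params, best_err)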

  3. Ground-motion modeling of Hayward fault scenario earthquakes, part I: Construction of the suite of scenarios

    Science.gov (United States)

    Aagaard, Brad T.; Graves, Robert W.; Schwartz, David P.; Ponce, David A.; Graymer, Russell W.

    2010-01-01

    We construct kinematic earthquake rupture models for a suite of 39 Mw 6.6-7.2 scenario earthquakes involving the Hayward, Calaveras, and Rodgers Creek faults. We use these rupture models in 3D ground-motion simulations as discussed in Part II (Aagaard et al., 2010) to provide detailed estimates of the shaking for each scenario. We employ both geophysical constraints and empirical relations to provide realistic variation in the rupture dimensions, slip heterogeneity, hypocenters, rupture speeds, and rise times. The five rupture lengths include portions of the Hayward fault as well as combined rupture of the Hayward and Rodgers Creek faults and the Hayward and Calaveras faults. We vary rupture directivity using multiple hypocenters, typically three per rupture length, yielding north-to-south rupture, bilateral rupture, and south-to-north rupture. For each rupture length and hypocenter, we consider multiple random distributions of slip. We use two approaches to account for how aseismic creep might reduce coseismic slip. For one subset of scenarios, we follow the slip-predictable approach and reduce the nominal slip in creeping regions according to the creep rate and time since the most recent earthquake, whereas for another subset of scenarios we apply a vertical gradient to the nominal slip in creeping regions. The rupture models include local variations in rupture speed and use a ray-tracing algorithm to propagate the rupture front. Although we are not attempting to simulate the 1868 Hayward fault earthquake in detail, a few of the scenarios are designed to have source parameters that might be similar to this historical event.

  4. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in; this task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold does not trigger in an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
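
    The trade-off described above (false alarms if the threshold is too tight, missed alarms if it is too loose) can be made concrete by scoring candidate thresholds against labelled nominal and off-nominal samples, as in the sketch below. This is a generic illustration rather than the paper's Bayesian shape-selection method; the cost weights and the simulated sensor data are assumptions.

        # Score candidate thresholds for discretizing an analog sensor reading into
        # "nominal" / "too high", trading false alarms against missed detections.
        import numpy as np

        rng = np.random.default_rng(1)
        nominal = rng.normal(50.0, 3.0, size=5000)        # hypothetical healthy readings
        off_nominal = rng.normal(65.0, 5.0, size=500)     # hypothetical faulty readings

        def threshold_cost(thr, w_false_alarm=1.0, w_missed=5.0):
            false_alarm_rate = np.mean(nominal > thr)     # alarms in nominal conditions
            missed_rate = np.mean(off_nominal <= thr)     # off-nominal cases not flagged
            return w_false_alarm * false_alarm_rate + w_missed * missed_rate

        candidates = np.linspace(50.0, 70.0, 201)
        best = candidates[np.argmin([threshold_cost(t) for t in candidates])]
        print(f"selected threshold: {best:.2f}")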

  5. Application of the Duffing Chaotic Oscillator Model for Early Fault Diagnosis - Ⅰ. Basic Theory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the well-known Duffing equation and the nonlinear equation describing the vibration of the human eardrum are introduced from elastic nonlinear system theory. Motivated by the fact that the human ear can distinguish weak sounds with small differences, the idea that the Duffing oscillator can be used to detect a weak signal and diagnose an early fault in machinery is proposed. In order to obtain a model for weak-signal detection via the Duffing oscillator, the first step is to seek all forms of solutions of the Duffing equation. The second step is to study global bifurcations of the Duffing equation using the qualitative analysis theory of dynamical systems; that is, a series of bifurcation thresholds of the Duffing equation can be analyzed by the Melnikov function and a subharmonic Melnikov function. The three types of bifurcation thresholds, varying with damping and external excitation amplitude, are then discussed. The analysis concludes that the bifurcation threshold corresponding to the maximum orbit of solutions outside the homoclinic orbit of the Duffing equation can be used to detect a weak signal. Finally, the implementation model of the Duffing oscillator for weak-signal detection is given.
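
    A minimal numerical sketch of the detection idea, driving a Duffing oscillator just below a transition threshold and checking whether a weak added input changes its qualitative response, is given below. The damping, drive amplitude and amplitude-based heuristic are illustrative assumptions; the paper derives the actual thresholds analytically via Melnikov functions.

        # Weak-signal detection with a Duffing oscillator:
        #   x'' + k x' - x + x^3 = F cos(t) + weak signal
        # F is set just below an assumed transition threshold; a small coherent input
        # can push the oscillator across it, visible as a change in response amplitude.
        import numpy as np
        from scipy.integrate import solve_ivp

        def duffing_rhs(t, y, k, F, weak_amp):
            x, v = y
            drive = F * np.cos(t) + weak_amp * np.cos(t)   # weak signal at the reference frequency
            return [v, -k * v + x - x**3 + drive]

        def steady_state_amplitude(F, weak_amp, k=0.5, t_end=400.0):
            sol = solve_ivp(duffing_rhs, (0.0, t_end), [0.0, 0.0],
                            args=(k, F, weak_amp), max_step=0.05)
            tail = sol.y[0][sol.t > 0.75 * t_end]          # discard the transient
            return tail.max() - tail.min()

        F_ref = 0.825                                       # assumed near-threshold drive amplitude
        print("no signal   :", steady_state_amplitude(F_ref, 0.0))
        print("weak signal :", steady_state_amplitude(F_ref, 0.005))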

  6. Functional Fault Model Development Process to Support Design Analysis and Operational Assessment

    Science.gov (United States)

    Melcher, Kevin J.; Maul, William A.; Hemminger, Joseph A.

    2016-01-01

    A functional fault model (FFM) is an abstract representation of the failure space of a given system. As such, it simulates the propagation of failure effects along paths between the origin of the system failure modes and points within the system capable of observing the failure effects. As a result, FFMs may be used to diagnose the presence of failures in the modeled system. FFMs necessarily contain a significant amount of information about the design, operations, and failure modes and effects. One of the important benefits of FFMs is that they may be qualitative, rather than quantitative and, as a result, may be implemented early in the design process when there is more potential to positively impact the system design. FFMs may therefore be developed and matured throughout the monitored system's design process and may subsequently be used to provide real-time diagnostic assessments that support system operations. This paper provides an overview of a generalized NASA process that is being used to develop and apply FFMs. FFM technology has been evolving for more than 25 years. The FFM development process presented in this paper was refined during NASA's Ares I, Space Launch System, and Ground Systems Development and Operations programs (i.e., from about 2007 to the present). Process refinement took place as new modeling, analysis, and verification tools were created to enhance FFM capabilities. In this paper, standard elements of a model development process (i.e., knowledge acquisition, conceptual design, implementation & verification, and application) are described within the context of FFMs. Further, newer tools and analytical capabilities that may benefit the broader systems engineering process are identified and briefly described. The discussion is intended as a high-level guide for future FFM modelers.

  7. Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models

    Science.gov (United States)

    Styron, R. H.; Hetland, E. A.

    2014-12-01

    Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. Work in progress on

  8. Low order anti-aliasing filters for sparse signals in embedded applications

    Indian Academy of Sciences (India)

    J V Satyanarayana; A G Ramakrishnan

    2013-06-01

    Major emphasis, in compressed sensing (CS) research, has been on the acquisition of sub-Nyquist number of samples of a signal that has a sparse representation on some tight frame or an orthogonal basis, and subsequent reconstruction of the original signal using a plethora of recovery algorithms. In this paper, we present compressed sensing data acquisition from a different perspective, wherein a set of signals are reconstructed at a sampling rate which is a multiple of the sampling rate of the ADCs that are used to measure the signals. We illustrate how this can facilitate usage of anti-aliasing filters with relaxed frequency specifications and, consequently, of lower order.
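
    As a back-of-the-envelope illustration of the relaxed specification, if the signal set is reconstructed at several times the per-ADC rate, the stopband edge of the analog anti-aliasing filter can be pushed out and a lower filter order suffices. The sketch below merely compares the minimum Butterworth orders for two such specifications; the corner frequencies, ripple and attenuation figures are assumptions, not numbers from the paper.

        # Minimum Butterworth order for an analog anti-aliasing filter under two specs:
        #   (a) conventional: stopband must begin at the per-ADC Nyquist frequency,
        #   (b) relaxed: reconstruction at 4x the ADC rate pushes the stopband edge out.
        import math
        from scipy.signal import buttord

        passband_hz = 10e3                 # band of interest (assumed)
        adc_rate_hz = 40e3                 # per-ADC sampling rate (assumed)
        gpass_db, gstop_db = 1.0, 60.0     # passband ripple / stopband attenuation (assumed)

        wp = 2 * math.pi * passband_hz
        n_conventional, _ = buttord(wp, 2 * math.pi * (adc_rate_hz / 2),
                                    gpass_db, gstop_db, analog=True)
        n_relaxed, _ = buttord(wp, 2 * math.pi * (4 * adc_rate_hz / 2),
                               gpass_db, gstop_db, analog=True)
        print(n_conventional, n_relaxed)   # the relaxed specification needs a lower order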

  9. Anti-aliasing in Aircraft Cockpit Display System Based on Modified Bresenham Algorithm and Virtual Technology

    Directory of Open Access Journals (Sweden)

    Dan Sun

    2014-06-01

    Full Text Available In this paper, an improved Bresenham algorithm is proposed to improve the display quality of digital instrument display systems in aircraft and aviation simulators that follow the ARINC 661 specification. In the algorithm, pixel brightness is calculated in proportion to the pixel's distance from the ideal line in order to realize anti-aliasing. Combined with area-sampling and double-buffering image processing techniques, the approach improves operating efficiency compared with the traditional method. Following the analysis of ARINC 661, the air data system instrument is implemented in VAPS. Experimental results reveal that the improved algorithm and digital image processing techniques solve display distortion problems more effectively and accurately, and the display quality is noticeably improved. The implemented scheme achieves high performance for the airborne electronic display system and satisfies aircraft airworthiness requirements and standards.
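
    The brightness rule described above, splitting intensity between neighbouring pixels in proportion to their distance from the ideal line, can be sketched as a simple distance-weighted rasterizer. The code below is a generic illustration in that spirit, restricted to gentle-slope lines for brevity; it is not the paper's ARINC 661/VAPS implementation.

        # Distance-weighted anti-aliased line drawing (gentle slope, x-major case only).
        # At each integer x the exact y is computed; the two neighbouring pixels share
        # the intensity in proportion to their distance from the ideal line.
        import numpy as np

        def aa_line(img, x0, y0, x1, y1, intensity=1.0):
            dx, dy = x1 - x0, y1 - y0
            assert abs(dx) >= abs(dy) and dx > 0, "sketch handles x-major, left-to-right lines"
            gradient = dy / dx
            y = float(y0)
            for x in range(x0, x1 + 1):
                y_floor = int(np.floor(y))
                frac = y - y_floor                     # distance from the upper pixel centre
                img[y_floor, x] += intensity * (1.0 - frac)
                img[y_floor + 1, x] += intensity * frac
                y += gradient
            return img

        canvas = np.zeros((32, 64))
        aa_line(canvas, 2, 5, 60, 20)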

  10. Avoiding Aliasing in Allan Variance: an Application to Fiber Link Data Analysis

    CERN Document Server

    Calosso, Claudio E; Micalizio, Salvatore

    2015-01-01

    Optical fiber links are known as the best-performing tools for transferring ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data-processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we frame this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284 km coherent optical link for frequency dissemination, which we realized in Italy.
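
    A small sketch of the underlying point, low-pass the phase data (here by block averaging) before decimating to the analysis rate so that out-of-band noise does not fold into the Allan variance, is given below. The averaging scheme, the toy noise model and the overlapping Allan deviation estimator are generic textbook choices, not taken from the paper.

        # Anti-aliased decimation of phase data followed by an overlapping Allan deviation.
        # Averaging contiguous blocks acts as a low-pass filter before the rate reduction,
        # so high-frequency phase noise is attenuated instead of folding into the estimate.
        import numpy as np

        def decimate_phase(x, factor):
            n = (len(x) // factor) * factor
            return x[:n].reshape(-1, factor).mean(axis=1)   # block average, one sample per block

        def overlapping_adev(x, tau0, m):
            """Overlapping Allan deviation at averaging time m * tau0 from phase samples x."""
            d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
            avar = np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)
            return np.sqrt(avar)

        rng = np.random.default_rng(2)
        raw_rate = 1000.0                                   # Hz, hypothetical phase sampling rate
        phase = np.cumsum(rng.normal(0.0, 1e-12, size=200_000))   # toy phase record (seconds)
        dec = 1000
        x = decimate_phase(phase, dec)
        print(overlapping_adev(x, tau0=dec / raw_rate, m=10))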

  11. 78 FR 69927 - In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases...

    Science.gov (United States)

    2013-11-21

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF STATE In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based...

  12. Drag—An EXCEL visual basic program for modeling fault drag using cubic splines and calculation of minimum dip and strike separation

    Science.gov (United States)

    Ozkaya, S. I.; Mattner, J.

    1996-06-01

    An EXCEL visual basic program is presented for modeling fault drag using cubic splines. The objective of the program is to estimate minimum dip and strike separation using dip measurements in the vicinity of a fault. The program is useful especially for estimating stratigraphic separation in the subsurface environment where only limited structural information is available from dipmeter logs. A modified cubic spline curve fitting procedure is used to model bedding trace within the fault drag zone. The solution procedure is based on the assumption that the dip angle is the same at equal distances away from the fault trace on a cross-section or map projection within the fault drag zone on the same side of the fault. On a cross-section perpendicular to the strike of a fault, the distance between the points of intersection of the fault trace with dragged bed and projection of the undisturbed bed gives half of the minimum dip separation. On a map projection, this distance is equal to half of the strike separation.
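
    A much-simplified sketch of the geometric idea, fitting a spline to the dragged bedding trace on a cross-section, projecting the undisturbed bed as a straight line and reading off the offset at the fault trace, is given below. The synthetic bed geometry and the use of SciPy's CubicSpline are assumptions for illustration; the program described in this record is an EXCEL Visual Basic implementation.

        # Cross-section sketch: dragged bed modeled with a cubic spline, undisturbed bed
        # projected as a straight line; half the minimum dip separation is the distance
        # between their intersections with the (vertical) fault trace at x = 0.
        import numpy as np
        from scipy.interpolate import CubicSpline

        # hypothetical bed elevations at distances x (m) from the fault (drag zone at x <= 0)
        x_pts = np.array([-200.0, -150.0, -100.0, -50.0, -20.0, 0.0])
        z_pts = np.array([100.0, 99.0, 96.0, 88.0, 75.0, 60.0])   # bed drops toward the fault

        dragged = CubicSpline(x_pts, z_pts)

        # undisturbed bed: straight-line projection of the regional dip measured far from the fault
        slope = (z_pts[1] - z_pts[0]) / (x_pts[1] - x_pts[0])
        undisturbed_at_fault = z_pts[0] + slope * (0.0 - x_pts[0])

        half_dip_separation = undisturbed_at_fault - dragged(0.0)
        print(f"minimum dip separation >= {2 * abs(half_dip_separation):.1f} m")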

  13. Bayesian updating in a fault tree model for shipwreck risk assessment.

    Science.gov (United States)

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M

    2017-03-14

    Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made.
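
    The combination described above, generic hazard probabilities treated as priors, updated with wreck-specific observations and propagated through a fault tree by Monte Carlo simulation, can be sketched generically as follows. The tree structure, the Beta priors and the observation counts are hypothetical and are not taken from the paper's model.

        # Monte Carlo propagation of a small fault tree with Beta-distributed basic events.
        # Priors represent "generic" hazard probabilities; Bayesian updating with
        # site-specific observations is conjugate: Beta(a, b) -> Beta(a + hits, b + misses).
        import numpy as np

        rng = np.random.default_rng(3)
        N = 100_000

        def beta_posterior_samples(a, b, hits, misses):
            return rng.beta(a + hits, b + misses, size=N)

        # hypothetical basic events (priors and site-specific evidence)
        p_anchor = beta_posterior_samples(1, 20, hits=2, misses=8)      # anchor strike near the wreck
        p_trawling = beta_posterior_samples(1, 10, hits=5, misses=15)   # bottom trawling activity
        p_corrosion = beta_posterior_samples(2, 5, hits=3, misses=1)    # hull breach by corrosion

        # hypothetical fault tree: discharge if (anchor OR trawling) AND corrosion-weakened hull
        external = 1.0 - (1.0 - p_anchor) * (1.0 - p_trawling)          # OR gate
        p_discharge = external * p_corrosion                            # AND gate

        print("mean P(discharge):", p_discharge.mean())
        print("90% interval:", np.quantile(p_discharge, [0.05, 0.95]))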

  14. A new formalism that combines advantages of fault-trees and Markov models: Boolean logic driven Markov processes

    Energy Technology Data Exchange (ETDEWEB)

    Bouissou, Marc; Bon, Jean-Louis

    2003-11-01

    This paper introduces a modeling formalism that enables the analyst to combine concepts inherited from fault trees and Markov models in a new way. We call this formalism Boolean logic Driven Markov Processes (BDMP). It has two advantages over conventional models used in dependability assessment: it allows the definition of complex dynamic models while remaining nearly as readable and easy to build as fault-trees, and it offers interesting mathematical properties, which enable an efficient processing for BDMP that are equivalent to Markov processes with huge state spaces. We give a mathematical definition of BDMP, the demonstration of their properties, and several examples to illustrate how powerful and easy to use they are. From a mathematical point of view, a BDMP is nothing more than a certain way to define a global Markov process, as the result of several elementary processes which can interact in a given manner. An extreme case is when the processes are independent. Then we simply have a fault-tree, the leaves of which are associated to independent Markov processes.

  15. Preliminary simulation of a M6.5 earthquake on the Seattle Fault using 3D finite-difference modeling

    Science.gov (United States)

    Stephenson, William J.; Frankel, Arthur D.

    2000-01-01

    A three-dimensional finite-difference simulation of a moderate-sized (M 6.5) thrust-faulting earthquake on the Seattle fault demonstrates the effects of the Seattle Basin on strong ground motion in the Puget lowland. The model area includes the cities of Seattle, Bremerton and Bellevue. We use a recently developed detailed 3D-velocity model of the Seattle Basin in these simulations. The model extended to 20-km depth and assumed rupture on a finite fault with random slip distribution. Preliminary results from simulations of frequencies 0.5 Hz and lower suggest amplification can occur at the surface of the Seattle Basin by the trapping of energy in the Quaternary sediments. Surface waves generated within the basin appear to contribute to amplification throughout the modeled region. Several factors apparently contribute to large ground motions in downtown Seattle: (1) radiation pattern and directivity from the rupture; (2) amplification and energy trapping within the Quaternary sediments; and (3) basin geometry and variation in depth of both Quaternary and Tertiary sediments

  16. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system for linear continuous-time systems based on input and output data in the time domain. The core of the approach is to directly identify the parameters of the observer-based residual generator from a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  17. Modeling of fault activation and seismicity by injection directly into a fault zone associated with hydraulic fracturing of shale-gas reservoirs

    Science.gov (United States)

    LBNL, in consultation with the EPA, expanded upon a previous study by injecting directly into a 3D representation of a hypothetical fault zone located in the geologic units between the shale-gas reservoir and the drinking water aquifer.

  18. Investigating fault coupling: Creep and microseismicity on the Hayward fault

    Science.gov (United States)

    Evans, E. L.; Loveless, J. P.; Meade, B. J.; Burgmann, R.

    2009-12-01

    We seek to quantify the relationship between interseismic slip activity and microseismicity along the Hayward fault in the eastern San Francisco Bay Area. During the interseismic regime the Hayward fault is known to exhibit variable degrees of locking both along strike and down-dip. Background microseismicity on and near the fault has been suggested to provide independent information about the rates of interseismic creep and the boundaries of creeping regions. In particular, repeating earthquakes within the fault zone have been suggested as a proxy for fault creep rates. To investigate this relationship, we invert GPS data for microplate rotations, fault slip rates, and fault coupling using a block model that spans western United States and includes the San Andreas, Hayward, Calaveras, Rogers Creek, and Green Valley faults in the greater Bay area. The tectonic context provided by the regional scale model ensures that the slip budget across Bay Area faults is consistent with large scale tectonic motions and kinematically connected to the central San Andreas fault. We image the spatial distribution of interseismic slip on a triangulated mesh of the Hayward fault and compare the distribution of interseismic fault coupling with the number of earthquakes and the moment rate of all on-fault seismicity. We quantitatively test the hypothesis that microseismicity might define the transitions between locked and creeping regions. The calculated correlations are tested against a null hypothesis that microseismicity is randomly distributed. We further extend this investigation to the step over region between the Hayward and Calaveras faults to illuminate the interactions between linking faults.

  19. Sensor Fault-Tolerant Control of a Drivetrain Test Rig via an Observer-Based Approach within a Wind Turbine Simulation Model

    Science.gov (United States)

    Georg, Sören; Heyde, Stefan; Schulte, Horst

    2014-12-01

    This paper presents the implementation of an observer-based fault reconstruction and fault-tolerant control scheme on a rapid control prototyping system. The observer runs in parallel with a dynamic wind turbine simulation model and a speed controller, where the latter controls the shaft speed of a mechanical drivetrain according to the rotor speed calculated by the wind turbine simulation. An incipient offset fault is added to the measured value of one of the two speed sensors and is reconstructed by means of a Takagi-Sugeno sliding-mode observer. The reconstructed fault value is then subtracted from the faulty sensor value to compensate for the fault. The whole experimental set-up corresponds to a sensor-in-the-loop system.

  20. Cage-rotor induction motor inter-turn short circuit fault detection with and without saturation effect by MEC model.

    Science.gov (United States)

    Naderi, Peyman

    2016-09-01

    The inter-turn short-circuit fault of the Cage-Rotor Induction Machine (CRIM) is studied in this paper, and its local saturation is taken into account. In order to observe the exact behavior of the machine, the Magnetic Equivalent Circuit (MEC) and a nonlinear B-H curve are used to provide insight into the machine model and the saturation effect, respectively. Electrical machines are generally operated close to their saturation zone owing to design requirements. Hence, when the machine is exposed to a fault such as a short circuit or eccentricity, it operates within its saturation zone; time and space harmonics therefore combine and, as a result, current and torque harmonics are generated, a phenomenon that cannot be explored when saturation is neglected. An inter-turn short circuit may itself lead to local saturation, and this occurrence is studied in this paper using the MEC model. In order to achieve the mentioned objectives, two-pole and four-pole machines are modeled as two samples, and their performance is analyzed in healthy and faulty cases with and without the saturation effect. A novel strategy is proposed to precisely detect the inter-turn short-circuit fault according to the stator's line current signatures, and the accuracy of the proposed method is verified by experimental results.
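
    Detection from the stator's line current signatures is typically read off the current spectrum. The short sketch below computes such a spectrum and reports the amplitude near candidate frequency components; the toy current waveform, the supply frequency and the inspected harmonic are placeholders rather than the fault signature derived in the paper.

        # Motor current signature analysis: FFT of one stator line current and the
        # amplitude near candidate fault-related frequency components.
        import numpy as np

        fs = 10_000.0                                  # Hz, assumed sampling rate
        t = np.arange(0, 2.0, 1.0 / fs)
        f_supply = 50.0
        # toy current: fundamental plus a small component standing in for a fault signature
        i_a = np.sin(2*np.pi*f_supply*t) + 0.01*np.sin(2*np.pi*(3*f_supply)*t)

        spectrum = np.abs(np.fft.rfft(i_a * np.hanning(len(i_a)))) / len(i_a)
        freqs = np.fft.rfftfreq(len(i_a), d=1.0/fs)

        def band_amplitude(f_center, half_width=1.0):
            mask = (freqs > f_center - half_width) & (freqs < f_center + half_width)
            return spectrum[mask].max()

        print("fundamental :", band_amplitude(f_supply))
        print("3rd harmonic:", band_amplitude(3 * f_supply))   # one candidate fault indicator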

  1. Modelling and Numerical Simulations of In-Air Reverberation Images for Fault Detection in Medical Ultrasonic Transducers: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    W. Kochański

    2015-01-01

    Full Text Available A simplified two-dimensional finite element model which simulates the in-air reverberation image produced by medical ultrasonic transducers has been developed. The model simulates a linear array consisting of 128 PZT-5A crystals, a tungsten-epoxy backing layer, an Araldite matching layer, and a Perspex lens layer. The thickness of the crystal layer is chosen to simulate pulses centered at 4 MHz. The model is used to investigate whether changes in the electromechanical properties of the individual transducer layers (backing layer, crystal layer, matching layer, and lens layer have an effect on the simulated in-air reverberation image generated. Changes in the electromechanical properties are designed to simulate typical medical transducer faults such as crystal drop-out, lens delamination, and deterioration in piezoelectric efficiency. The simulations demonstrate that fault-related changes in transducer behaviour can be observed in the simulated in-air reverberation image pattern. This exploratory approach may help to provide insight into deterioration in transducer performance and help with early detection of faults.

  2. 3D Faulting Numerical Model Related To 2009 L'Aquila Earthquake Based On DInSAR Observations

    Science.gov (United States)

    Castaldo, Raffaele; Tizzani, Pietro; Solaro, Giuseppe; Pepe, Susi; Lanari, Riccardo

    2014-05-01

    We investigate the surface displacements in the area affected by the April 6, 2009 L'Aquila earthquake (Central Italy) through an advanced 3D numerical modeling approach, by exploiting DInSAR deformation velocity maps based on ENVISAT (ascending and descending orbits) and COSMO-SkyMed data (ascending orbit). We benefited from the available geological and geophysical information to investigate the impact of known buried structures on the modulation of the observed ground deformation field; in this context we implemented the a priori information in a Finite Element (FE) environment, considering a structural mechanical physical approach. The performed analysis demonstrates that the displacement pattern associated with the Mw 6.3 main-shock event is consistent with the activation of several segments of the Paganica fault. In particular, we analyzed the seismic events in a structural mechanical context under the plane-stress approximation to solve for the retrieved displacements. We defined the sub-domain setting of the 3D FEM model using information derived from the CROOP M-15 seismic line. We assumed stationarity and linear elasticity of the involved materials by considering a solution of the classical equilibrium mechanical equations. We evolved our model through two stages: first, the model compacted under the weight of the rock successions (gravity loading) until it reached a stable equilibrium; at the second (co-seismic) stage, where the stresses were released through slip along the faults, we used an optimization procedure to retrieve (i) the active seismogenic structures responsible for the observed ground deformation, (ii) the effects of the different mechanical constraints on the ground deformation pattern and (iii) the spatial distribution of the retrieved stress field. We evaluated the boundary-setting best-fit configuration responsible for the observed ground deformation. To this aim, we first generated several forward structural mechanical models

  3. Links between long-term and short-term rheology of the lithosphere: insights from strike-slip fault modelling

    Science.gov (United States)

    Le Pourhiet, Laetitia

    2014-05-01

    The study of geodetic data across strike-slip fault zones is believed to play a key role in our understanding of the mechanical behaviour of the lithosphere. InSAR and GPS measurements permit increasingly accurate determination of both the large, rapid co-seismic displacements and the slower deformation associated with the inter-seismic and post-seismic phases of the earthquake cycle on continents. However, no modern geodetic observation spans a complete earthquake cycle for any single fault in the world. Understanding this time variability through modelling is therefore crucial to reconstruct a global pattern. It is non-trivial to compare the effective parameters retrieved from the different simple models used to extract them from the geodetic data. Using the popular visco-elastic relaxation model leads to two paradoxes: (i) the lower crust must be very strong in order to fit the data long after the earthquake and very weak to fit the data during the early post-seismic period; (ii) the retrieved mantle lithosphere viscosity is as weak as 10^17 - 10^20 Pa.s and differs significantly from values deduced from post-glacial rebound models and from the requirements of long-term geodynamic models for generating self-consistent plate tectonics. Rather than assuming that the rheology of the lithosphere changes with time scale, it would be preferable to seek a rheological model of the Earth's lithosphere based on simple physics which would be equally valid at all time scales, from inter-seismic to orogenic. 3D models of long-term strain localisation in a wrenching context show that localisation of strain across strike-slip faults locally modifies the rheological architecture of the lithosphere and leads to a form of structural weakening. That weakening occurs because, as strain localises, the "jelly sandwich" type lithosphere evolves self-consistently into a "banana split" type rheological structure. This strain localisation process is very efficient when the lower

  4. Mapping Model of Groundwater Catchment Area based on Geological Fault: Case Study in Semarang City

    Directory of Open Access Journals (Sweden)

    Qudus, N.

    2016-04-01

    Full Text Available Groundwater is a naturally renewable resource because it is an integral part of the hydrological cycle. In reality, however, many limiting factors influence its usage in both quality and quantity, and the supply capacity of groundwater will decrease if its availability is exceeded. Problems of groundwater potential, in both quantity and quality, are always related to the characteristics of the rocks or geological units in which the groundwater resides. The present study aims at determining the groundwater catchment area based on the geological conditions of an area so that groundwater recharge can be accomplished. A groundwater catchment area must comply with the geological conditions; a geologically unsuitable area will only experience ground movement or landslides if it is used as a catchment area. The results of the geo-electrical analysis conducted in Semarang city show that there are three faults, the Sukorejo, Tinjomoyo and Jangli faults, which are explained in detail in the paper. These faults intersect the groundwater flow in Semarang, which runs from south to north towards the Java Sea. Most of the groundwater in Semarang flows from south to north; in contrast, the analysis shows that some locations, such as the southern and south-western areas of Semarang, act as local basins where the flow direction is reversed. In addition, the analysis shows that some coastal areas in Semarang have experienced salt-water intrusion.

  5. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models of discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  6. EMD and Wavelet Transform Based Fault Diagnosis for Wind Turbine Gear Box

    Directory of Open Access Journals (Sweden)

    Qingyu Yang

    2013-01-01

    Full Text Available Wind turbines are mainly located in harsh environment, and the maintenance is therefore very difficult. The wind turbine faults are mostly from the gear box, and the fault signal is generally nonlinear and nonstationary. The traditional fault diagnosis methods such as Fast Fourier transform (FFT and the inverted frequency spectrum identification method based on FFT are not satisfactory in processing this kind of signal. This paper proposes a hybrid fault diagnosis method which combines the empirical mode decomposition (EMD and wavelet transform. The vibration signal i