WorldWideScience

Sample records for fault model aliasing

  1. Closed Form Aliasing Probability For Q-ary Symmetric Errors

    Directory of Open Access Journals (Sweden)

    Geetani Edirisooriya

    1996-01-01

    In Built-In Self-Test (BIST) techniques, test data reduction can be achieved using Linear Feedback Shift Registers (LFSRs). A faulty circuit may escape detection due to the loss of information inherent in data compaction schemes; this is referred to as aliasing. The probability of aliasing in Multiple-Input Shift-Registers (MISRs) has been studied under various bit error models. By modeling the signature analyzer as a Markov process, we show that the closed-form expression previously derived for the aliasing probability of MISRs with primitive feedback polynomials under the q-ary symmetric error model holds for all MISRs irrespective of their feedback polynomials, and for group cellular automata signature analyzers as well. If the erroneous behaviour of a circuit can be modelled with q-ary symmetric errors, then the test circuit complexity and propagation delay associated with the signature analyzer can be minimized by using a set of m single-bit LFSRs without increasing the probability of aliasing.
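
    The aliasing mechanism described above can be sketched in a few lines. The following toy example (not the paper's construction; the polynomial and bit stream are hypothetical choices) compacts an 8-bit response with a 4-bit LFSR signature register, and uses the linearity of the compaction over GF(2) to exhibit an error pattern the signature cannot detect:

```python
def lfsr_signature(bits, width=4, taps=0b0011):
    """Compact a bit stream into a 4-bit signature with an internal-XOR
    LFSR for the (hypothetical) polynomial x^4 + x + 1."""
    state = 0
    mask = (1 << width) - 1
    for b in bits:
        msb = (state >> (width - 1)) & 1
        state = ((state << 1) & mask) | b
        if msb:
            state ^= taps          # feedback taps (x^1 + x^0)
    return state

# Fault-free response of some hypothetical circuit under test:
good = [1, 0, 1, 1, 0, 0, 1, 0]

# Because compaction is linear over GF(2), an error pattern e aliases
# exactly when e itself compacts to the all-zero signature.  Count them:
zero_sig_errors = [v for v in range(1, 256)
                   if lfsr_signature([(v >> i) & 1 for i in range(7, -1, -1)]) == 0]

# XOR one such undetectable error pattern onto the good response:
e = zero_sig_errors[0]
faulty = [g ^ ((e >> i) & 1) for g, i in zip(good, range(7, -1, -1))]

print(len(zero_sig_errors))                            # 15 nonzero aliasing patterns
print(lfsr_signature(good) == lfsr_signature(faulty))  # True: the fault escapes
```

    Of the 255 nonzero 8-bit error patterns, 15 map to the zero signature, matching the classic 2^-m aliasing-probability estimate for an m-bit signature under equally likely errors.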

  2. Aliasing modes in the lattice Schwinger model

    International Nuclear Information System (INIS)

    Campos, Rafael G.; Tututi, Eduardo S.

    2007-01-01

    We study the Schwinger model on a lattice consisting of zeros of the Hermite polynomials, which incorporates a lattice derivative and a discrete Fourier transform with many useful properties. Such a lattice produces a Klein-Gordon equation for the boson field, and the exact value of the mass in the asymptotic limit if the boundaries are not taken into account. On the contrary, if the lattice is considered with boundaries, new modes appear due to aliasing effects. In the continuum limit, however, this lattice also yields a Klein-Gordon equation, with a reduced mass.

  3. Audible Aliasing Distortion in Digital Audio Synthesis

    Directory of Open Access Journals (Sweden)

    J. Schimmel

    2012-04-01

    This paper deals with aliasing distortion in the digital synthesis of classic periodic waveforms with infinite Fourier series, for electronic musical instruments. When these waveforms are generated in the digital domain, aliasing appears due to their unlimited bandwidth. Several synthesis techniques have been designed to avoid or reduce this aliasing distortion; however, they have high computing demands. One could argue that today's computers have enough computing power to use these methods. However, today's computer-aided music production requires tens of multi-timbre voices generated simultaneously by software synthesizers, and most of the computing power must be reserved for the hard-disk recording subsystem and real-time processing of many audio channels with numerous audio effects. Trivially generated classic analog synthesizer waveforms are therefore still attractive for sound synthesis. The aliasing distortion cannot be avoided, but the spectral components it produces can be masked by harmonic components, and thus made inaudible, if a sufficient oversampling ratio is used. This paper assesses audible aliasing distortion with the help of a psychoacoustic model of simultaneous masking, and compares the computing demands of trivial generation with oversampling to those of other methods.
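
    The folding described above is easy to compute. A short sketch (the fundamental frequency is a made-up example value): harmonics of a trivially generated sawtooth that exceed the Nyquist limit fold back into the base band at inharmonic positions, while oversampling pushes the Nyquist limit up so the same harmonics can be filtered out before decimation:

```python
# Fold a frequency back into the base band [0, fs/2].
def alias_freq(f, fs):
    f = f % fs                 # frequency folding is periodic in fs
    return min(f, fs - f)

fs = 44100.0
f0 = 1245.0                    # hypothetical note fundamental, Hz
# Harmonics of a trivially generated sawtooth above Nyquist fold back:
for k in (20, 30):
    print(k, k * f0, alias_freq(k * f0, fs))

# With 4x oversampling the same harmonics stay below the new Nyquist
# limit, so a low-pass filter can remove them before decimation:
print(alias_freq(30 * f0, 4 * fs) == 30 * f0)
```

    Here the 30th harmonic (37350 Hz) folds down to 6750 Hz at 44.1 kHz, well inside the audible midrange and unrelated to the harmonic series, which is exactly the kind of component the masking model must judge.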

  4. Anti-Aliasing filter for reverse-time migration

    KAUST Repository

    Zhan, Ge

    2012-01-01

    We develop an anti-aliasing filter for reverse-time migration (RTM). It is similar to the traditional anti-aliasing filter used for Kirchhoff migration in that it low-pass filters the migration operator so that the dominant wavelength in the operator is greater than twice the trace sampling interval, except that it is applied to both primary and multiple reflection events. Instead of applying the filter to the data, as in the traditional RTM operation, we apply the anti-aliasing filter to the generalized diffraction-stack migration operator. This gives the same migration image as computed by anti-aliased RTM.

  5. Spatial aliasing and distortion of energy distribution in the wave vector domain under multi-spacecraft measurements

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2009-08-01

    Aliasing is a general problem in the analysis of any measurement that samples at discrete points. Sampling in the spatial domain results in a periodic pattern of spectra in the wave vector domain. This effect is called spatial aliasing, and it is of particular importance for multi-spacecraft measurements in space. We first present the theoretical background of aliasing problems in the frequency domain and generalize it to the wave vector domain, and then present model calculations of spatial aliasing. The model calculations are performed for various configurations of the reciprocal vectors and energy spectra or distributions placed at different positions in the wave vector domain, and exhibit two aliasing effects. One is weak aliasing, in which the true spectrum is distorted because of non-uniform aliasing contributions in the Brillouin zone. It is demonstrated that the energy distribution becomes elongated in the direction of the shortest reciprocal lattice vector in the wave vector domain. The other is strong aliasing, in which aliases make a significant contribution in the Brillouin zone and the energy distribution shows a false peak. These results provide a caveat for multi-spacecraft data analysis: spectral anisotropy obtained by a measurement has in general two origins: (1) natural, physical origins such as anisotropy imposed by a mean magnetic field or a flow direction; and (2) aliasing effects imposed by the configuration of the measurement array (or the set of reciprocal vectors). This manuscript also discusses a possible method to estimate aliasing contributions in the Brillouin zone based on the measured spectrum and to correct the spectra for aliasing.
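
    The reciprocal vectors and the aliasing condition can be sketched directly. A minimal example (spacecraft separations are made-up values; one common convention defines the reciprocal vectors with a 2*pi normalization, which some authors omit): any wave vector shifted by an integer combination of reciprocal vectors produces identical phases at every spacecraft, so the array cannot distinguish the two:

```python
import numpy as np

# Relative position vectors r_i (km) of three spacecraft w.r.t. a
# reference craft; the values are made up for illustration.
R = np.array([[100.0,  20.0,   0.0],
              [ 10.0, 120.0,  30.0],
              [  0.0,  40.0,  90.0]])

# Reciprocal vectors k_j defined by k_j . r_i = 2*pi*delta_ij:
K = 2.0 * np.pi * np.linalg.inv(R).T

# A wave vector shifted by an integer combination of reciprocal vectors
# yields identical phases at every spacecraft -- spatial aliasing:
k_true  = np.array([0.01, -0.02, 0.005])
k_alias = k_true + 1 * K[0] - 2 * K[2]
phases_true  = np.exp(1j * R @ k_true)
phases_alias = np.exp(1j * R @ k_alias)
print(np.allclose(phases_true, phases_alias))   # True
```

    The lattice of indistinguishable wave vectors generated by the K rows is exactly the periodic pattern of spectra the abstract refers to; its unit cell is the Brillouin zone.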

  6. Analysis of aliasing artifacts in 16-slice helical CT

    International Nuclear Information System (INIS)

    Chen Wei; Liu Jingkang; Ou Xiaoguang; Li Wenzheng; Liao Weihua; Yan Ang

    2006-01-01

    Objective: To recognize the features of aliasing artifacts on CT images, and to investigate the effects of imaging parameters on the magnitude of these artifacts. Methods: An adult dry skull was placed in a plastic water-filled container and scanned with a PHILIPS 16-slice helical CT scanner. All transaxial images, acquired with several different acquisition or reconstruction parameters, were examined for comparative assessment of the aliasing artifacts. Results: The aliasing artifacts could be seen in most instances, characterized as spoke-like patterns emanating from the edges of a high-contrast structure where its radius varies sharply in the longitudinal direction. Images scanned with pitches of 0.3, 0.6 and 0.9 all showed aliasing artifacts, and their severity increased with pitch (detector combination 16 x 1.5, reconstruction thickness 2 mm). There were more significant aliasing artifacts on images reconstructed with 0.8-mm slice width than with 1-mm slice width, and no aliasing artifacts were observed on images reconstructed with 2-mm slice width (detector combination 16 x 0.75, pitch 0.6). No artifacts were perceived on images scanned with detector combination 16 x 0.75, while they were evident with detector combination 16 x 1.5 (pitch 0.6, reconstruction thickness 2 mm). The degree of aliasing artifacts was unaltered when the reconstruction interval and tube current were changed. Conclusions: Aliasing artifacts are caused by undersampling. When the operator judiciously chooses a thinner sampling thickness, lower pitch and a wider reconstruction thickness, aliasing artifacts can be effectively mitigated or suppressed. (authors)

  7. Partial volume and aliasing artefacts in helical cone-beam CT

    International Nuclear Information System (INIS)

    Zou Yu; Sidky, Emil Y; Pan, Xiaochuan

    2004-01-01

    A generalization of the quasi-exact algorithms of Kudo et al (2000 IEEE Trans. Med. Imaging 19 902-21) is developed that allows for data acquisition in a 'practical' frame for clinical diagnostic helical, cone-beam computed tomography (CT). The algorithm is investigated using data that model nonlinear partial volume averaging. This investigation leads to an understanding of aliasing artefacts in helical, cone-beam CT image reconstruction. An ad hoc scheme is proposed to mitigate the nonlinear partial volume and aliasing artefacts.

  8. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    Science.gov (United States)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources of degradation in radial imaging quality. For a given fixed number of k-space projections, the data distributions along the radial and angular directions influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts the projections alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections, and a significant reduction of aliasing artifacts was observed using the SP sampling trajectory. The two trajectories were also compared at different sampling frequencies: an SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons with different white noise levels show that the two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR, and also provides a way for undersampling reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
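
    The sampling geometry can be sketched as coordinates. A minimal sketch, not the paper's implementation: each radial spoke is a line of samples through the k-space centre, and in our reading of the "shaking" idea every other spoke is offset along its own direction (the exact shift size used in the paper is not reproduced here):

```python
import numpy as np

def radial_spokes(n_spokes, n_samples, kmax, shake=0.0):
    """k-space sample coordinates for radial spokes through the centre.
    With shake != 0, every other spoke is shifted along its own
    direction by +/- shake (a hypothetical reading of the SP idea)."""
    t = np.linspace(-kmax, kmax, n_samples)
    spokes = []
    for p in range(n_spokes):
        theta = np.pi * p / n_spokes
        shift = shake if p % 2 == 0 else -shake
        r = t + shift
        spokes.append(np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1))
    return np.array(spokes)

kmax, n_samples = 1.0, 128
dk = 2 * kmax / (n_samples - 1)
conventional = radial_spokes(64, n_samples, kmax)             # aligned spokes
shaking      = radial_spokes(64, n_samples, kmax, shake=dk / 4)
print(conventional.shape)        # (64, 128, 2)
```

    The alternating offsets stagger the radial sample positions between neighbouring spokes, which is what breaks up the coherent ring of the PSF in the paper's simulations.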

  10. Research on the Frequency Aliasing of Resistance Acceleration Guidance for Reentry Flight

    Directory of Open Access Journals (Sweden)

    Han Pengxin

    2017-01-01

    According to the specific response of resistance (drag) acceleration during hypersonic reentry flight, different guidance frequencies result in very different flight and control responses. This paper puts forward analysis models for the response of resistance acceleration to the angle of attack and to the dynamic pressure, respectively, and reveals the frequency aliasing phenomenon in guidance. Simulation results for the same vehicle substantiate the frequency aliasing of resistance acceleration during reentry guidance.

  11. Adaptive attenuation of aliased ground roll using the shearlet transform

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause it to overlap with reflections in the f-k domain. The shearlet transform is a directional, multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After defining a filtering zone, an input shot record is divided into segments, each overlapping its adjacent segments. The shearlet transform is applied to each segment, and the subimages containing aliased and non-aliased ground roll, together with the locations of these events on each subimage, are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After applying the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation procedure was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using the f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated by the proposed adaptive procedure. We also applied the method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections than the stacked section before attenuation. The proposed method has some drawbacks, such as a longer run time than traditional methods like f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.
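
    The final merge step above relies on a standard property of overlapped windows. A minimal sketch (our own illustration, with a periodic Hann window and 50% overlap; the paper's exact segment sizes and overlaps may differ): the window weights of adjacent segments sum to exactly one at every interior sample, so merging windowed segments introduces no seams:

```python
import numpy as np

# Merge overlapping (filtered) segments with a periodic Hann window.
# With 50% overlap the weights satisfy w[m] + w[m + L/2] = 1, the
# constant-overlap-add condition, so the merge is seam-free.
L, hop = 64, 32
w = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(L) / L))   # periodic Hann

x = np.sin(0.05 * np.arange(512)) + 0.3 * np.sin(0.4 * np.arange(512))
starts = range(0, len(x) - L + 1, hop)
segments = [x[s:s + L] for s in starts]   # pass-through; no filtering here

out = np.zeros(len(x))
for i, seg in enumerate(segments):
    out[i * hop:i * hop + L] += w * seg   # window and overlap-add

# Away from the unpadded edges the signal is reconstructed exactly:
print(np.allclose(out[hop:-hop], x[hop:-hop]))   # True
```

    When each segment is actually filtered (muted in the shearlet domain) before the overlap-add, the same property guarantees that the segment boundaries themselves add no artifacts.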

  12. Method for suppressing aliasing artifacts in R/R-type CT

    International Nuclear Information System (INIS)

    Mori, Issei; Igarashi, Narumi; Kazama, Masahiro; Taguchi, Katsuyuki

    2003-01-01

    The Quarter-Quarter (QQ) offset method is a well-established technique for suppressing aliasing artifacts in R/R-type CT. However, the perfect alignment required for the QQ offset is generally difficult to achieve in practice. Depending on the scanner design, it may even be impossible to achieve. Because of this imperfection, the images contain some aliasing artifacts. This problem is becoming more serious with the spread of multislice CT and the increasingly common use of thin-slice imaging, in which aliasing is inherently stronger. We propose a simple method that suppresses such aliasing artifacts effectively while minimizing degradation in the spatial resolution. We exploit the fact that the frequency transfer functions are different for the main spectrum and the aliasing spectrum when the back-projection offset is not identical to the sampling offset. The selection of an appropriate back-projection offset results in an effective notch filter against the aliasing spectrum and a wide-band filter for the main spectrum. The results of simulation experiments support our theory, and experiments using an actual machine have shown that the image quality obtained by our method is comparable to that of perfect QQ even when the QQ alignment conditions are severely violated. (author)

  13. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6-7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4-5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one, or any combination, of the unresolved higher degrees, which helps identify the major error source among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion if the neglected higher degrees are well known from other sources.

  14. RAY TRACING RENDERING USING FRAGMENT ANTI-ALIASING

    Directory of Open Access Journals (Sweden)

    Febriliyan Samopa

    2008-07-01

    Rendering is generating surface and three-dimensional effects on an object displayed on a monitor screen. Ray tracing, as a rendering method that traces a ray for each image pixel, has a drawback: aliasing (the jaggies effect). There are several methods for performing anti-aliasing. One of them is OGSS (Ordered Grid Super Sampling). OGSS performs anti-aliasing well; however, it requires more computation time, since the sampling of every pixel in the image is increased. Fragment Anti-Aliasing (FAA) is a new alternative method that can cope with this drawback. FAA checks the image while rendering a scene. The jaggies effect occurs only at curved and gradient objects, so only these parts of an object undergo sampling magnification. After this sampling magnification and the pixel values are computed, downsampling is performed to retrieve the original pixel values. Experimental results show that the software can implement ray tracing well to form images, and can implement the FAA and OGSS techniques to perform anti-aliasing. In general, rendering using FAA is faster than using OGSS.
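
    The OGSS step is simple to illustrate. A minimal sketch (our own toy renderer, not the paper's software): each pixel is the average of an ordered s x s grid of subsamples, so boundary pixels get fractional coverage values instead of the hard 0/1 edge that produces jaggies:

```python
import numpy as np

def render_circle(w, h, ss=1):
    """Render a filled circle with ordered-grid supersampling: each
    pixel averages ss x ss subsamples on a regular grid (OGSS);
    ss=1 is the plain one-sample-per-pixel renderer."""
    ys, xs = np.mgrid[0:h * ss, 0:w * ss]
    px = (xs + 0.5) / ss                   # subsample centres, pixel units
    py = (ys + 0.5) / ss
    hit = ((px - w / 2) ** 2 + (py - h / 2) ** 2 < (w / 3) ** 2).astype(float)
    return hit.reshape(h, ss, w, ss).mean(axis=(1, 3))   # box downsample

hard = render_circle(32, 32, ss=1)   # binary edges -> jaggies
ogss = render_circle(32, 32, ss=4)   # fractional coverage on the boundary
print(sorted(set(hard.ravel())))              # only 0.0 and 1.0
print(bool(np.any((ogss > 0) & (ogss < 1))))  # True: softened edge pixels
```

    FAA's saving, as the abstract explains, comes from running this magnified sampling only on the curved and gradient fragments instead of every pixel.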

  15. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
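
    The azimuthal aliasing mechanism can be sketched with a toy model (our own illustration, not the paper's simulations; beam position and detector geometry are made-up values): sampling the wall signal of an off-centre filament beam at N equally spaced azimuths folds the higher azimuthal harmonics onto the first harmonic used for the position estimate, and the error shrinks rapidly as N grows:

```python
import numpy as np

def wall_signal(theta, r0, th0, a=1.0):
    """Image-current density at azimuth theta on a conducting pipe of
    radius a, for a pencil beam at polar position (r0, th0)."""
    return (a**2 - r0**2) / (
        2 * np.pi * a * (a**2 - 2 * a * r0 * np.cos(theta - th0) + r0**2))

def centroid_x(n_det, r0, th0, a=1.0):
    """Horizontal centroid from n_det equally spaced detectors via the
    first azimuthal Fourier moment of the sampled wall signal."""
    th = 2 * np.pi * np.arange(n_det) / n_det
    s = wall_signal(th, r0, th0, a)
    return a * np.sum(s * np.cos(th)) / np.sum(s)

r0, th0 = 0.5, 0.3                  # hypothetical off-centre beam
x_true = r0 * np.cos(th0)
err4 = abs(centroid_x(4, r0, th0) - x_true)  # harmonics 3, 5, ... alias in
err8 = abs(centroid_x(8, r0, th0) - x_true)  # only harmonics 7, 9, ... alias
print(err4, err8)                    # err8 is much smaller
```

    With four detectors the third harmonic, whose relative amplitude scales as (r0/a)^3, aliases directly onto the position estimate, which is why the paper finds that adding detectors reduces the error so effectively.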

  17. On the aliasing of the solar cycle in the lower stratospheric tropical temperature

    Science.gov (United States)

    Kuchar, Ales; Ball, William T.; Rozanov, Eugene V.; Stenke, Andrea; Revell, Laura; Miksovsky, Jiri; Pisoft, Petr; Peter, Thomas

    2017-09-01

    The double-peaked response of the tropical stratospheric temperature profile to the 11 year solar cycle (SC) has been well documented. However, there are concerns about the origin of the lower peak due to potential aliasing with volcanic eruptions or the El Niño-Southern Oscillation (ENSO) detected using multiple linear regression analysis. We confirm the aliasing using the results of the chemistry-climate model (CCM) SOCOLv3 obtained in the framework of the International Global Atmospheric Chemistry/Stratosphere-troposphere Processes And their Role in Climate Chemistry-Climate Model Initiative phase 1. We further show that even without major volcanic eruptions included in transient simulations, the lower stratospheric response exhibits a residual peak when historical sea surface temperatures (SSTs)/sea ice coverage (SIC) are used. Only the use of climatological SSTs/SICs in addition to background stratospheric aerosols removes volcanic and ENSO signals and results in an almost complete disappearance of the modeled solar signal in the lower stratospheric temperature. We demonstrate that the choice of temporal subperiod considered for the regression analysis has a large impact on the estimated profile signal in the lower stratosphere: at least 45 consecutive years are needed to avoid the large aliasing effect of SC maxima with volcanic eruptions in 1982 and 1991 in historical simulations, reanalyses, and observations. The application of volcanic forcing compiled for phase 6 of the Coupled Model Intercomparison Project (CMIP6) in the CCM SOCOLv3 reduces the warming overestimation in the tropical lower stratosphere and the volcanic aliasing of the temperature response to the SC, although it does not eliminate it completely.

  18. Aliasing in the Complex Cepstrum of Linear-Phase Signals

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen

    1997-01-01

    Assuming linear phase of the associated time signal, this paper presents an approximate analytical description of the unavoidable aliasing in practical use of complex cepstrums. The linear-phase assumption covers two major applications of complex cepstrums: linear- to minimum-phase FIR-filter transformation, and minimum-phase estimation from amplitude specifications. The description is made in the cepstrum domain, in the Fourier transform of the complex cepstrum, and in the frequency domain. Two examples are given: one for verification of the derived equations, and one using the description to reduce aliasing in minimum-phase estimation.
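
    The cepstral aliasing in question is easy to exhibit numerically. A minimal sketch (our own toy example, not the paper's linear-phase analysis): for a minimum-phase sequence whose spectrum stays in the right half-plane, no phase unwrapping is needed, and the N-point complex cepstrum is the true infinite cepstrum time-aliased with period N, so the error at a given quefrency shrinks as N grows:

```python
import numpy as np

def complex_cepstrum(x, n):
    """N-point complex cepstrum; a finite n time-aliases the true
    (infinite) cepstrum, folding c[m + r*n] onto c[m]."""
    X = np.fft.fft(x, n)
    return np.fft.ifft(np.log(X)).real   # real input -> real cepstrum

# Minimum-phase toy signal h = [1, 0.5]: log(1 + 0.5 z^-1) gives the
# exact cepstrum c[m] = -(-0.5)**m / m for m >= 1, so c[1] = 0.5.
h = [1.0, 0.5]
c_true_1 = 0.5
err_short = abs(complex_cepstrum(h, 8)[1]  - c_true_1)   # visible aliasing
err_long  = abs(complex_cepstrum(h, 64)[1] - c_true_1)   # negligible
print(err_short, err_long)
```

    The 8-point error is exactly the folded tail c[9] + c[17] + ..., which is the quantity the paper's analytical description approximates for the harder linear-phase case.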

  19. Shearlet transform in aliased ground roll attenuation and its comparison with f-k filtering and curvelet transform

    Science.gov (United States)

    Abolfazl Hosseini, Seyed; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-06-01

    Ground roll, a Rayleigh surface wave present in land seismic data, may mask reflections, and is sometimes spatially aliased. Attenuation of aliased ground roll is of importance in seismic data processing, and different methods have been developed to attenuate it. The shearlet transform is a directional, multidimensional transform that generates subimages of an input image in different directions and scales; events with different dips are separated in these subimages. In this study, the shearlet transform is used to attenuate the aliased ground roll. To do this, a shot record is divided into several segments, and an appropriate mute zone is defined for each segment. The shearlet transform is applied to each segment. The subimages related to the non-aliased and aliased ground roll are identified by plotting the energy distributions of the subimages and checking them visually. Muting filters are then applied to the selected subimages, and the inverse shearlet transform is applied to the filtered segment. This procedure is repeated for all segments, and finally all filtered segments are merged using the Hanning window. This method of aliased ground roll attenuation was tested on a synthetic dataset and a field shot record from the west of Iran. The synthetic shot record included strong aliased ground roll, whereas the field shot record did not; to produce strong aliased ground roll in the field record, the data were resampled in the offset direction from 30 m to 60 m. To show the performance of the shearlet transform in attenuating the aliased ground roll, we compared it with f-k filtering and the curvelet transform, and showed that its performance is better than that of both in the synthetic and field shot records. However, when the dip and frequency content of the aliased ground roll are the same as those of the reflections, the ability of the shearlet transform to separate them from the reflections is reduced.

  20. Pitch dependence of longitudinal sampling and aliasing effects in multi-slice helical computed tomography (CT)

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2002-01-01

    In this work, we investigate longitudinal sampling and aliasing effects in multi-slice helical CT. We demonstrate that longitudinal aliasing can be a significant, complicated, and potentially detrimental effect in multi-slice helical CT reconstructions. Multi-slice helical CT scans are generally undersampled longitudinally for all pitches of clinical interest, and the resulting aliasing effects are spatially variant. As in the single-slice case, aliasing is shown to be negligible at the isocentre for circularly symmetric objects due to a fortuitous aliasing cancellation phenomenon. However, away from the isocentre, aliasing effects can be significant, spatially variant, and highly pitch dependent. This implies that measures more sophisticated than isocentre slice sensitivity profiles are needed to characterize longitudinal properties of multi-slice helical CT systems. Such measures are particularly important in assessing the question of whether there are preferred pitches in helical CT. Previous analyses have generally focused only on isocentre sampling patterns, and our more global analysis leads to somewhat different conclusions than have been reached before, suggesting that pitches 3, 4, 5, and 6 are favourable, and that half-integer pitches are somewhat suboptimal. (author)

  1. Optimization of the reconstruction and anti-aliasing filter in a Wiener filter system

    NARCIS (Netherlands)

    Wesselink, J.M.; Berkhoff, Arthur P.

    2006-01-01

    This paper discusses the influence of the reconstruction and anti-aliasing filters on the performance of a digital implementation of a Wiener filter for active noise control. The overall impact will be studied in combination with a multi-rate system approach. A reconstruction and anti-aliasing

  2. A simple approach to Fourier aliasing

    International Nuclear Information System (INIS)

    Foadi, James

    2007-01-01

    In the context of discrete Fourier transforms, the idea of aliasing as arising from approximation errors in the integral defining the Fourier coefficients is introduced and explained. This has the positive pedagogical effect of getting to the heart of sampling and the discrete Fourier transform without having to delve into the effective, but otherwise long and structured, introductions to the topic commonly met in advanced, specialized books.
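
    The phenomenon itself takes only a few lines to demonstrate (our own illustration of the standard folding identity, not the paper's derivation): a 9 Hz cosine sampled at 10 Hz produces exactly the same samples as a 1 Hz cosine, and the folded frequency is the distance to the nearest multiple of the sampling rate:

```python
import numpy as np

fs = 10.0                      # sampling rate, Hz
n = np.arange(40)
t = n / fs

# A 9 Hz cosine and its 1 Hz alias give identical samples at fs = 10 Hz:
print(np.allclose(np.cos(2 * np.pi * 9 * t),
                  np.cos(2 * np.pi * 1 * t)))   # True

# The folded frequency is the distance to the nearest multiple of fs:
f = 9.0
f_alias = abs(f - fs * round(f / fs))
print(f_alias)   # 1.0
```

    In the article's language, the 10-point trapezoid-style approximation of the Fourier-coefficient integral cannot tell the two frequencies apart, which is precisely the approximation error it identifies with aliasing.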

  3. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that, with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled, with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field, resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats.

  4. Frequency-Shift a way to Reduce Aliasing in the Complex Cepstrum

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen

    1998-01-01

    The well-known relation between a time signal and its frequency-shifted spectrum is introduced as an excellent tool for reduction of aliasing in the complex cepstrum. Using N points DFTs the frequency-shift property, when used in the right way, will reduce the aliasing error to a size which...... on average is identical to the one normally requiring 2N points DFTs. The cost is an insignificant increase in the number of operations compared to the total number needed for the transformation to the complex cepstrum domain...

  5. Spectral analysis of highly aliased sea-level signals

    Science.gov (United States)

    Ray, Richard D.

    1998-10-01

    Observing high-wavenumber ocean phenomena with a satellite altimeter generally calls for "along-track" analyses of the data: measurements along a repeating satellite ground track are analyzed in a point-by-point fashion, as opposed to spatially averaging data over multiple tracks. The sea-level aliasing problems encountered in such analyses can be especially challenging. For TOPEX/POSEIDON, all signals with frequency greater than 18 cycles per year (cpy), including both tidal and subdiurnal signals, are folded into the 0-18 cpy band. Because the tidal bands are wider than 18 cpy, residual tidal cusp energy, plus any subdiurnal energy, is capable of corrupting any low-frequency signal of interest. The practical consequences of this are explored here by using real sea-level measurements from conventional tide gauges, for which the true oceanographic spectrum is known and to which a simulated "satellite-measured" spectrum, based on coarsely subsampled data, may be compared. At many locations the spectrum is sufficiently red that interannual frequencies remain unaffected. Intra-annual frequencies, however, must be interpreted with greater caution, and even interannual frequencies can be corrupted if the spectrum is flat. The results also suggest that whenever tides must be estimated directly from the altimetry, response methods of analysis are preferable to harmonic methods, even in nonlinear regimes; this will remain so for the foreseeable future. We concentrate on three example tide gauges: two coastal stations on the Malay Peninsula, where the closely aliased K1 and Ssa tides are strong, and one at Canton Island, where trapped equatorial waves are aliased.
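
    The folding of a tidal line into the low-frequency band follows the standard alias relation; a small sketch (the 9.9156-day repeat is the TOPEX/POSEIDON value, the rest is generic):

```python
def alias_period(period_days, dt_days):
    """Period (days) at which a signal appears when sampled every dt_days:
    the frequency folds to its distance from the nearest multiple of the
    sampling frequency."""
    f = 1.0 / period_days
    f_alias = abs(f - round(f * dt_days) / dt_days)
    return 1.0 / f_alias

# M2 tide (12.4206 h period) under the 9.9156-day TOPEX/POSEIDON repeat
p_alias = alias_period(12.4206 / 24.0, 9.9156)   # ~62 days
```

    The semidiurnal M2 tide thus masquerades as a roughly 62-day signal in along-track records, which is why residual tidal energy can leak into intra-annual frequency bands.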

  6. RADIAL VELOCITY PLANETS DE-ALIASED: A NEW, SHORT PERIOD FOR SUPER-EARTH 55 Cnc e

    International Nuclear Information System (INIS)

    Dawson, Rebekah I.; Fabrycky, Daniel C.

    2010-01-01

    Radial velocity measurements of stellar reflex motion have revealed many extrasolar planets, but gaps in the observations produce aliases, spurious frequencies that are frequently confused with the planets' orbital frequencies. In the case of Gl 581 d, the distinction between an alias and the true frequency was the distinction between a frozen, dead planet and a planet possibly hospitable to life. To improve the characterization of planetary systems, we describe how aliases originate and present a new approach for distinguishing between orbital frequencies and their aliases. Our approach harnesses features in the spectral window function to compare the amplitude and phase of predicted aliases with peaks present in the data. We apply it to confirm prior alias distinctions for the planets GJ 876 d and HD 75898 b. We find that the true periods of Gl 581 d and HD 73526 b/c remain ambiguous. We revise the periods of HD 156668 b and 55 Cnc e, which were afflicted by daily aliases. For HD 156668 b, the correct period is 1.2699 days and the minimum mass is (3.1 ± 0.4) M⊕. For 55 Cnc e, the correct period is 0.7365 days, the shortest of any known planet, and the minimum mass is (8.3 ± 0.3) M⊕. This revision produces a significantly improved five-planet Keplerian fit for 55 Cnc, and a self-consistent dynamical fit describes the data just as well. As radial velocity techniques push to ever-smaller planets, often found in systems of multiple planets, distinguishing true periods from aliases will become increasingly important.
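
    The two reported periods of 55 Cnc e illustrate the daily-alias relation directly: their frequencies differ by almost exactly one cycle per sidereal day (a quick check; the ~2.817-day value is the previously published period):

```python
P_new, P_prev = 0.7365, 2.817          # days: revised and previously published periods
f_new, f_prev = 1.0 / P_new, 1.0 / P_prev
sidereal = 1.0027379                   # cycles per day: one cycle per sidereal day
offset = f_new - f_prev                # difference of the two orbital frequencies
```

    Because nightly observations sample the star at roughly one-sidereal-day intervals, the two frequencies produce nearly identical periodogram peaks, which is exactly the ambiguity the window-function method resolves.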

  7. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be fully appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  8. Dynamic modeling of gearbox faults: A review

    Science.gov (United States)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, and consequently results in safer operation and greater cost reduction. Recently, many studies have been done to develop gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.

  9. Aliasing characteristics of the tau-P transform and its application to signal and noise separation; Tau-P henkan no aliasing tokusei to hakei iji wo koryoshita S/N bunri

    Energy Technology Data Exchange (ETDEWEB)

    Kawabuchi, H; Rokugawa, S; Matsushima, J; Ichie, Y [The University of Tokyo, Tokyo (Japan); Minegishi, M; Tsuburaya, Y [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    With respect to the tau-P transform as a signal-to-noise (S/N) separation technology used in reflection seismic exploration, conditions under which S/N separation by the tau-P transform functions more effectively are discussed. Averaging the energy when performing the tau-P transform scatters the wave energy over a certain range. As a result, an aliasing phenomenon appears in which noise is superimposed on the post-processing record. The discussion verified that satisfying the two equations of G. Turner is effective for reducing the aliasing while maintaining the relative amplitude. However, at practical calculation accuracy, some waveform change was recognized, particularly amplification of low-frequency events and poor restorability at higher frequencies. It was also observed that applying a two-dimensional Fourier transform to the tau-P region and performing the same processing as an f-k filter can remove aliasing more simply and effectively than the HVF, improving the S/N ratio while maintaining the amplitude at the current level. 5 refs., 13 figs.
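
    A minimal discrete tau-P (slant stack) transform shows where the aliasing comes from: each trace is shifted by a whole number of samples per trace and summed, so events whose move-out exceeds the sampling limits scatter energy across the transform. The following is an illustrative NumPy sketch, not the processing flow used in the study:

```python
import numpy as np

def tau_p(data, dt, dx, slownesses):
    """Discrete linear tau-P (slant stack): sum each trace along t = tau + p*x.

    data: (n_traces, n_samples); dt: sample interval (s); dx: trace spacing (m).
    Shifts are rounded to whole samples, so move-out finer than one sample per
    trace is not resolved and energy can fold back (aliasing)."""
    n_tr, n_t = data.shape
    out = np.zeros((len(slownesses), n_t))
    for i, p in enumerate(slownesses):
        for j in range(n_tr):
            shift = int(round(p * j * dx / dt))
            if 0 <= shift < n_t:
                out[i, : n_t - shift] += data[j, shift:]
    return out

# a noise-free linear event with slowness 0.8 ms/m stacks coherently at that p
dt, dx, n_tr, n_t = 0.004, 10.0, 8, 64
data = np.zeros((n_tr, n_t))
for j in range(n_tr):
    data[j, int(round(0.0008 * j * dx / dt))] = 1.0
tp = tau_p(data, dt, dx, np.array([0.0, 0.0004, 0.0008]))
```

    At the matching slowness all eight traces add in phase at tau = 0; at other slownesses the energy smears along tau, which is the scatter the abstract describes.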

  10. Aliasing effects in digital images of line-pair phantoms

    International Nuclear Information System (INIS)

    Albert, Michael; Beideck, Daniel J.; Bakic, Predrag R.; Maidment, Andrew D.A.

    2002-01-01

    Line-pair phantoms are commonly used for evaluating screen-film systems. When imaged digitally, aliasing effects give rise to additional periodic patterns. This paper examines one such effect that medical physicists are likely to encounter, and which can be used as an indicator of super-resolution
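
    The effect is easy to reproduce: sampling a bar pattern whose fundamental lies above the Nyquist frequency of the detector produces a periodic pattern at the folded frequency. A small sketch with hypothetical numbers (0.1 mm pixel pitch, 6 lp/mm bars):

```python
import numpy as np

pitch = 0.1                     # pixel pitch in mm -> Nyquist = 5 lp/mm
f_bars = 6.0                    # line-pair phantom frequency, above Nyquist
x = np.arange(500) * pitch
sampled = np.sign(np.cos(2 * np.pi * f_bars * x))   # sampled square-wave bars

spec = np.abs(np.fft.rfft(sampled))
freqs = np.fft.rfftfreq(sampled.size, d=pitch)
f_seen = freqs[1:][np.argmax(spec[1:])]   # skip DC; 6 lp/mm folds to 10 - 6 = 4 lp/mm
```

    The digital image shows bars at 4 lp/mm rather than 6 lp/mm, the additional periodic pattern the abstract refers to.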

  11. FSN-based fault modelling for fault detection and troubleshooting in CANDU stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, E., E-mail: elnara.nasimi@brucepower.com [Bruce Power LLP., Tiverton, Ontario(Canada); Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)

    2013-07-01

    An accurate fault modeling and troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities of current and future generation of CANDU designs. This paper presents fault modeling approach using Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

  12. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
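
    Faults of the sensor-bias family are typically modeled as an offset, possibly with drift, added to the true value before it reaches the controller. A minimal illustrative sketch (the function name and parameters are hypothetical, not the OpenStudio API):

```python
def faulty_sensor(true_value, bias=0.0, drift_per_hour=0.0, t_hours=0.0):
    """Hypothetical sensor-fault model: constant bias plus linear drift,
    in the spirit of the supply-air temperature sensor bias faults above."""
    return true_value + bias + drift_per_hour * t_hours

# a 22 degC supply-air temperature read through a biased, drifting sensor
reading = faulty_sensor(22.0, bias=1.5, drift_per_hour=0.1, t_hours=10.0)
```

    Feeding such corrupted readings into a simulated control loop is what lets the fault models quantify the downstream impact on energy use and comfort.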

  13. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
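
    Given minimal cut sets and per-component fault probabilities, the top-event probability can be computed by inclusion-exclusion over the cut sets. The sketch below uses hypothetical components and probabilities, not the chapter's Uppaal models:

```python
from itertools import combinations

# hypothetical component fault probabilities over the mission time
p = {"pump": 0.02, "valve": 0.01, "sensor": 0.05, "controller": 0.03}

# hypothetical minimal cut sets: the system fails when every component
# in at least one cut set has failed
cut_sets = [{"pump", "valve"}, {"sensor"}, {"pump", "controller"}]

def unreliability(p, cut_sets):
    """Exact top-event probability by inclusion-exclusion over the cut sets,
    assuming independent component faults."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)
            prob = 1.0
            for c in union:
                prob *= p[c]
            total += (-1) ** (r + 1) * prob
    return total

risk = unreliability(p, cut_sets)
```

    For small models this exact computation is feasible; the statistical model checking mentioned above becomes necessary once timing and state-space properties enter the reliability question.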

  14. An Improved Method Based on CEEMD for Fault Diagnosis of Rolling Bearing

    Directory of Open Access Journals (Sweden)

    Meijiao Li

    2014-11-01

    In order to improve the effectiveness of identifying rolling bearing faults at an early stage, the present paper proposes a method that combines the complementary ensemble empirical mode decomposition (CEEMD) method with correlation theory for fault diagnosis of rolling element bearings. The cross-correlation coefficient between the original signal and each intrinsic mode function (IMF) is calculated in order to reduce noise and select effective IMFs. Using the present method, a rolling bearing fault experiment with vibration signals measured by acceleration sensors was carried out, and inner race and outer race defects of different severities were analyzed at varying rotating speeds. The proposed method was compared with several empirical mode decomposition (EMD) algorithms to verify its effectiveness. Experimental results showed that the proposed method can detect bearing faults, including at an early stage. It has higher computational efficiency and is capable of overcoming mode mixing and aliasing. Therefore, the proposed method is more suitable for rolling bearing diagnosis.
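
    The selection rule, keeping only IMFs whose cross-correlation with the original signal exceeds a threshold, can be sketched as follows. The decomposition itself is stood in for by known components (a real run would obtain the IMFs from a CEEMD implementation such as the PyEMD package); the 0.3 threshold and the 37 Hz "fault tone" are arbitrary:

```python
import numpy as np

def select_imfs(signal, imfs, threshold=0.3):
    """Keep the IMFs whose cross-correlation coefficient with the
    original signal exceeds the threshold."""
    keep = []
    for imf in imfs:
        c = np.corrcoef(signal, imf)[0, 1]
        if abs(c) >= threshold:
            keep.append(imf)
    return keep

# stand-in decomposition: a fault tone plus noise, with the "IMFs" taken
# here as the known parts rather than a real CEEMD output
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
tone = np.sin(2 * np.pi * 37.0 * t)        # hypothetical bearing fault frequency
noise = 0.15 * np.random.default_rng(1).standard_normal(t.size)
signal = tone + noise
kept = select_imfs(signal, [noise, tone])
```

    The noise-dominated component falls below the threshold and is discarded, while the fault-related component is retained for envelope or spectral analysis.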

  15. Research on Weak Fault Extraction Method for Alleviating the Mode Mixing of LMD

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    2018-05-01

    Compared with the strong background noise, the energy of early fault signals of bearings is weak under actual working conditions. Therefore, extracting bearings' early fault features has always been a major difficulty in the fault diagnosis of rotating machinery. To address this problem, the masking method is introduced into the Local Mean Decomposition (LMD) process, and a weak fault extraction method based on LMD and a mask signal (MS) is proposed. Due to the mode mixing of the product function (PF) components decomposed by LMD in a noisy background, it is difficult to distinguish the authenticity of the fault frequency. Therefore, the MS method is introduced to deal with the PF components that are decomposed by the LMD and have a strong correlation with the original signal, so as to suppress the modal aliasing phenomenon and extract the fault frequencies. In this paper, an actual fault signal of a rolling bearing is analyzed. By combining the MS method with the LMD method, the fault signal mixed with noise is processed. The kurtosis value at the fault frequency is increased eightfold, and the signal-to-noise ratio (SNR) is increased by 19.1%. The fault signal is successfully extracted by the proposed composite method.
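
    The two figures of merit quoted above, kurtosis and SNR, can be computed as follows; the sketch uses synthetic data in which periodic impacts stand in for a bearing defect:

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment: ~3 for Gaussian noise, much larger for
    impulsive fault signatures."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from separate signal and noise records."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
impacts = np.zeros_like(noise)
impacts[::1000] = 20.0                 # periodic impacts, as from a bearing defect
vibration = noise + impacts

k_plain, k_faulty = kurtosis(noise), kurtosis(vibration)
snr = snr_db(impacts, noise)           # negative: the fault is buried in the noise
```

    Kurtosis is sensitive to exactly the impulsiveness that mode mixing smears away, which is why an eightfold kurtosis gain after masking is a meaningful result.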

  16. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights:
    • Discuss the significance of capturing operational faults in existing buildings.
    • Develop a novel feature in EnergyPlus to model operational faults of HVAC systems.
    • Compare three approaches to fault modeling using EnergyPlus.
    • A case study demonstrates the use of the fault-modeling feature.
    • Future developments of new faults are discussed.
    Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level, an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant comfort.

  17. Stator Fault Modelling of Induction Motors

    DEFF Research Database (Denmark)

    Thomsen, Jesper Sandberg; Kallesøe, Carsten

    2006-01-01

    In this paper a model of an induction motor affected by stator faults is presented. Two different types of faults are considered: disconnection of a supply phase, and inter-turn and turn-turn short circuits inside the stator. The output of the derived model is compared to real measurements from a specially designed induction motor. With this motor it is possible to simulate both terminal disconnections and inter-turn and turn-turn short circuits. The results show good agreement between the measurements and the simulated signals obtained from the model. In the tests focus...

  18. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang

    2009-01-01

    Understanding the dynamics and kinematics of fault-propagation folding is important for evaluating the associated hydrocarbon play, for accomplishing reliable section balancing (structural reconstruction), and for assessing seismic hazards. Accordingly, the deformation style of fault-propagation folds is studied here under varying fault dips and variations in Mohr-Coulomb parameters, including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset. This gives a precise indication of when faults develop and hence also the sequential evolution of secondary faults. Here we focus on the generation of a fault-propagated fold with a reverse sense of motion at the master fault, varying only the dip of the master fault and the mechanical behaviour of the deformed layers.

  19. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M; Haenninen, S [VTT Energy, Espoo (Finland); Seppaenen, M [North-Carelian Power Co (Finland); Antila, E; Markkila, E [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium-voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate of the fault distance is obtained. This information is then combined with the data obtained from the fault indicators at the line branching points in order to find the actual fault point. As a third technique, in the absence of better fault location data, statistical information on line-section fault frequencies can also be used. Fuzzy logic is used to combine the different fault location information, yielding probability weights for the fault being located in the different line sections. Once the faulty section is identified, it is automatically isolated by remote control of line switches, and the supply is restored to the remaining parts of the network. If needed, reserve connections from adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked, among them the load-carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter describes the practical experiences from the test use period; the benefits of this kind of automation are also assessed and future developments outlined
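
    The first technique, inverting the measured fault current for a distance estimate, reduces to dividing out the per-kilometre line impedance once the source impedance is known. A simplified single-line sketch with hypothetical feeder parameters (fault resistance neglected):

```python
def fault_distance_km(i_fault_a, u_ll_kv, z_source_ohm, z_line_ohm_per_km):
    """Distance to a three-phase fault on a radial feeder, estimated from
    the measured fault current. Hypothetical lossless single-line model."""
    # impedance magnitude seen from the substation bus (phase voltage / current)
    z_seen = (u_ll_kv * 1e3 / 3 ** 0.5) / i_fault_a
    return (z_seen - z_source_ohm) / z_line_ohm_per_km

# 800 A fault current on a 20 kV feeder, 2 ohm source, 0.4 ohm/km line
d_est = fault_distance_km(i_fault_a=800.0, u_ll_kv=20.0,
                          z_source_ohm=2.0, z_line_ohm_per_km=0.4)
```

    Such an estimate is inherently ambiguous on a branched feeder, which is why the model fuses it with fault-indicator data and statistical section fault rates through fuzzy logic.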

  20. Modeling and Fault Simulation of Propellant Filling System

    International Nuclear Information System (INIS)

    Jiang Yunchun; Liu Weidong; Hou Xiaobo

    2012-01-01

    The propellant filling system is one of the key ground plants at the launching site of rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meeting it. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling, in this paper the working process of the propellant filling system under fault conditions was studied by simulation based on AMESim. Firstly, based on an analysis of its structure and function, the filling system was decomposed into modules, the mathematical models of every module were given, and the whole filling system was modeled in AMESim. Secondly, a general method of injecting faults into a dynamic system was proposed, and as an example, two typical faults - leakage and blockage - were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulations were run and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults, and can be used to provide guidance for filling system maintenance and improvement.
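
    The two injected faults can be mimicked in a steady-state toy model: a blockage shrinks the effective flow coefficient of the line, while a leak diverts a fraction of the flow before delivery. This is an illustrative sketch, not the AMESim model:

```python
def filling_flow(dp_bar, c_nominal=2.0, blockage=0.0, leakage=0.0):
    """Steady-state delivered flow of a hypothetical filling line.
    blockage in [0, 1) shrinks the effective flow coefficient;
    leakage in [0, 1) diverts a fraction of the flow before delivery."""
    q = c_nominal * (1.0 - blockage) * dp_bar ** 0.5
    return q * (1.0 - leakage)

healthy = filling_flow(4.0)                 # nominal delivered flow
blocked = filling_flow(4.0, blockage=0.3)   # restricted line
leaking = filling_flow(4.0, leakage=0.1)    # leak upstream of delivery
```

    Both faults reduce the delivered flow, but through different parameters, which is what lets simulated signatures be mapped back to distinct FMEA failure modes.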

  1. Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.

    Science.gov (United States)

    Wang, Tao; Xie, Wenfang; Zhang, Youmin

    2012-05-01

    In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known. Different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first one, developed to accommodate uncertainties and faults without exact bounds information. The stability of the overall control systems is proved by using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
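
    The basic mechanism behind the first algorithm, a switching term whose gain dominates the known uncertainty bound, can be sketched on a scalar system dx/dt = u + d with |d| <= D (illustrative code, not the paper's Boeing 747 model):

```python
import numpy as np

def simulate_smc(x0=1.0, d_bound=0.5, eta=1.0, dt=1e-3, steps=5000):
    """Sliding mode control of dx/dt = u + d(t), with unknown |d| <= d_bound.
    The law u = -(d_bound + eta) * sign(x) drives x toward the surface x = 0
    at rate at least eta, whatever the bounded disturbance does."""
    x = x0
    for i in range(steps):
        d = d_bound * np.sin(2 * np.pi * i * dt)   # unknown bounded disturbance
        u = -(d_bound + eta) * np.sign(x)
        x += dt * (u + d)
    return x

x_final = simulate_smc()   # after reaching, x chatters in a band of width ~dt
```

    The adaptive version in the paper replaces the fixed d_bound with an estimate updated online, removing the need to know the bound in advance.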

  2. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    Science.gov (United States)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  3. Wide-Angle Multistatic Synthetic Aperture Radar: Focused Image Formation and Aliasing Artifact Mitigation

    National Research Council Canada - National Science Library

    Luminati, Jonathan E

    2005-01-01

    ...) imagery from a Radar Cross Section (RCS) chamber validates this approach. The second implementation problem stems from the large Doppler spread in the wide-angle scene, leading to severe aliasing problems...

  4. Bond graph model-based fault diagnosis of hybrid systems

    CERN Document Server

    Borutzky, Wolfgang

    2015-01-01

    This book presents a bond graph model-based approach to fault diagnosis in mechatronic systems appropriately represented by a hybrid model. The book begins by giving a survey of the fundamentals of fault diagnosis and failure prognosis, then recalls state-of-art developments referring to latest publications, and goes on to discuss various bond graph representations of hybrid system models, equations formulation for switched systems, and simulation of their dynamic behavior. The structured text: • focuses on bond graph model-based fault detection and isolation in hybrid systems; • addresses isolation of multiple parametric faults in hybrid systems; • considers system mode identification; • provides a number of elaborated case studies that consider fault scenarios for switched power electronic systems commonly used in a variety of applications; and • indicates that bond graph modelling can also be used for failure prognosis. In order to facilitate the understanding of fault diagnosis and the presented...

  5. Fault condition stress analysis of NET 16 TF coil model

    International Nuclear Information System (INIS)

    Jong, C.T.J.

    1992-04-01

    As part of the design process of the NET/ITER toroidal field coils (TFCs), the mechanical behaviour of the magnetic system under fault conditions has to be analysed in some detail. Under fault conditions, either electrical or mechanical, the magnetic loading of the coils becomes extreme and further mechanical failure of parts of the overall structure might occur (e.g. failure of the coil, gravitational support, or intercoil structure). The mechanical behaviour of the magnetic system under fault conditions has been analysed with a finite element model of the complete TFC system. The analysed fault conditions consist of a thermal fault, electrical faults, and mechanical faults; the mechanical faults were applied simultaneously with an electrical fault. This report describes the work carried out to create the finite element model of 16 TFCs and contains an extensive presentation of the results obtained with this model for a normal operating condition analysis and 9 fault condition analyses. Chapters 2-5 contain a detailed description of the finite element model, boundary conditions and loading conditions of the analyses made. Chapters 2-4 can be skipped if the reader is only interested in results. To understand the results presented, chapter 6 is recommended; it contains a detailed description of all analysed fault conditions. The dimensions and geometry of the model correspond to the status of the NET/ITER TFC design of May 1990. Compared with previous models of the complete magnetic system, the finite element model of 16 TFCs is 'detailed', and can be used for linear elastic analysis with faulted loads. (author). 8 refs.; 204 figs.; 134 tabs

  6. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    Science.gov (United States)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults. Understanding these young faults and their impact on strain partitioning and fault slip rates is important for understanding the evolution of this plate boundary zone and for assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes have occurred in recent years.

  7. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently propose the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
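    The conflict/consistency idea can be illustrated on a toy strong-fault model. The Python sketch below is not the paper's LTMS; the two-inverter circuit and the mode names are invented for illustration. It enumerates mode assignments (each component has a normal mode plus fault modes with specific behaviors) and keeps those consistent with an observation:

```python
from itertools import product

# Hypothetical two-inverter chain a -> inv1 -> b -> inv2 -> c.
# Strong-fault model: each component has a normal mode ("ok") and fault
# modes with specific behaviors ("stuck0"/"stuck1" force the output).
def behavior(mode, x):
    if mode == "ok":
        return 1 - x
    return 0 if mode == "stuck0" else 1

def consistent(modes, a, c_obs):
    b = behavior(modes[0], a)
    return behavior(modes[1], b) == c_obs

def diagnose(a, c_obs):
    """Return all mode assignments consistent with the observation."""
    all_modes = ("ok", "stuck0", "stuck1")
    return [m for m in product(all_modes, repeat=2) if consistent(m, a, c_obs)]

# With a = 0 the healthy circuit outputs 0, so observing c = 1 is a conflict:
# every consistent candidate must assign a fault mode somewhere.
cands = diagnose(0, 1)
print(cands)
```

    The real method avoids this brute-force enumeration by reasoning over the CNF encoding, but the candidates it searches for are exactly such consistent mode assignments.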

  8. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  9. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    Science.gov (United States)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

    Spatio-temporal aliasing affects pilot performance and control behavior. With increasing refresh rates, the authors report: 1) a significant change in control behavior, namely an increase in visual gain and neuromuscular frequency and a decrease in visual time delay; and 2) an increase in tracking performance, namely a decrease in RMSe and an increase in crossover frequency.

  10. Fault Diagnosis of Nonlinear Systems Using Structured Augmented State Models

    Institute of Scientific and Technical Information of China (English)

    Jochen Aßfalg; Frank Allgöwer

    2007-01-01

    This paper presents an internal model approach for modeling and diagnostic functionality design for nonlinear systems subject to single and multiple faults. We therefore provide the framework of structured augmented state models. Fault characteristics are considered to be generated by dynamical exosystems that are switched via equality constraints to overcome the limitation that augmented-state observability places on the number of diagnosable faults. Based on the proposed model, the fault diagnosis problem is specified as an optimal hybrid augmented state estimation problem. Sub-optimal solutions are motivated and exemplified for the fault diagnosis of the well-known three-tank benchmark. As the considered class of fault diagnosis problems is large, the suggested approach is not only of theoretical interest but also of high practical relevance.

  11. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    Full Text Available The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently propose the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  12. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimations: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost is higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks for constant timing-resolution behavior of a given timing pick-off method regardless of source location. Lastly, a performance comparison of several digital timing methods is also shown.
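    The aliasing artifact at the heart of this study is easy to reproduce. The following NumPy sketch (illustrative numbers only, not the LSO/PET setup) samples a tone above half the sample rate and shows the spectral peak landing at the alias frequency:

```python
import numpy as np

fs = 2000.0                          # sample rate (Hz); Nyquist requires > 2400 Hz
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 1200.0 * t)   # 1200 Hz tone, undersampled

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak = float(freqs[np.argmax(spec)])
print(peak)   # the spectral peak appears near |1200 - 2000| = 800 Hz
```

    An anti-aliasing filter applied before sampling would remove the 1200 Hz component instead of folding it down to 800 Hz.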

  13. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  14. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    Science.gov (United States)

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors on the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency fsaa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above fsaa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than fsaa. A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA. We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using
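    Sinc-function (Whittaker-Shannon) interpolation of the kind referred to above can be sketched as follows; the sample rate, tone frequency, and record length are made up for illustration:

```python
import numpy as np

def sinc_interp(x, fs, t_new):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)."""
    n = np.arange(x.size)
    return np.array([np.sum(x * np.sinc(fs * t - n)) for t in t_new])

fs = 100.0                                  # original sample rate (Hz)
n = np.arange(64)
x = np.sin(2 * np.pi * 5.0 * n / fs)        # 5 Hz tone, well below Nyquist

# Evaluate midway between samples, away from the edges of the finite record
t_new = np.arange(20, 44) / fs + 0.5 / fs
x_true = np.sin(2 * np.pi * 5.0 * t_new)
x_sinc = sinc_interp(x, fs, t_new)
print(float(np.max(np.abs(x_sinc - x_true))))   # small reconstruction error
```

    For a band-limited signal and an infinite record the reconstruction is exact; the small residual here comes from truncating the sinc sum to the finite record.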

  15. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost-effectively.

  16. Mechanical Models of Fault-Related Folding

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically-active areas. The objectives of the proposed research were to provide both a unified, mechanical infrastructure for studies of fault-related foldings and to present the results in computer programs that have graphical users interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  17. High-resolution 3D-GRE imaging of the abdomen using controlled aliasing acceleration technique - a feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    AlObaidy, Mamdoh; Ramalho, Miguel; Busireddy, Kiran K.R.; Liu, Baodong; Burke, Lauren M.; Altun, Ersan; Semelka, Richard C. [University of North Carolina at Chapel Hill, Department of Radiology, Chapel Hill, NC (United States); Dale, Brian M. [Siemens Medical Solutions, MR Research and Development, Morrisville, NC (United States)

    2015-12-15

    To assess the feasibility of high-resolution 3D-gradient-recalled echo (GRE) fat-suppressed T1-weighted imaging using a controlled aliasing acceleration technique (CAIPIRINHA-VIBE), and to compare image quality and lesion detection to standard-resolution 3D-GRE images using a conventional acceleration technique (GRAPPA-VIBE). Eighty-four patients (41 males, 43 females; age range: 14-90 years, 58.8 ± 15.6 years) underwent abdominal MRI at 1.5 T with CAIPIRINHA-VIBE [spatial resolution, 0.76 ± 0.04 mm] and GRAPPA-VIBE [spatial resolution, 1.17 ± 0.14 mm]. Two readers independently reviewed image quality, presence of artefacts, lesion conspicuity, and lesion detection. The kappa statistic was used to assess interobserver agreement. The Wilcoxon signed-rank test was used for qualitative pairwise image comparisons. Logistic regression with post-hoc testing was used to evaluate the statistical significance of the lesion evaluation. Interobserver agreement ranged between 0.45-0.93. Pre-contrast CAIPIRINHA-VIBE showed significantly (p < 0.001) sharper images and higher lesion conspicuity with decreased residual aliasing, but more noise enhancement and inferior image quality. Post-contrast CAIPIRINHA-VIBE showed significantly (p < 0.001) sharper images and higher lesion conspicuity, with fewer respiratory motion and residual aliasing artefacts. Inferior fat suppression was noticeable on CAIPIRINHA-VIBE sequences (p < 0.001). High in-plane resolution abdominal 3D-GRE fat-suppressed T1-weighted imaging using a controlled-aliasing acceleration technique is feasible and yields sharper images compared to standard-resolution images using standard acceleration, with higher post-contrast image quality and a trend toward improved hepatic lesion detection. (orig.)

  18. High-resolution 3D-GRE imaging of the abdomen using controlled aliasing acceleration technique - a feasibility study

    International Nuclear Information System (INIS)

    AlObaidy, Mamdoh; Ramalho, Miguel; Busireddy, Kiran K.R.; Liu, Baodong; Burke, Lauren M.; Altun, Ersan; Semelka, Richard C.; Dale, Brian M.

    2015-01-01

    To assess the feasibility of high-resolution 3D-gradient-recalled echo (GRE) fat-suppressed T1-weighted imaging using a controlled aliasing acceleration technique (CAIPIRINHA-VIBE), and to compare image quality and lesion detection to standard-resolution 3D-GRE images using a conventional acceleration technique (GRAPPA-VIBE). Eighty-four patients (41 males, 43 females; age range: 14-90 years, 58.8 ± 15.6 years) underwent abdominal MRI at 1.5 T with CAIPIRINHA-VIBE [spatial resolution, 0.76 ± 0.04 mm] and GRAPPA-VIBE [spatial resolution, 1.17 ± 0.14 mm]. Two readers independently reviewed image quality, presence of artefacts, lesion conspicuity, and lesion detection. The kappa statistic was used to assess interobserver agreement. The Wilcoxon signed-rank test was used for qualitative pairwise image comparisons. Logistic regression with post-hoc testing was used to evaluate the statistical significance of the lesion evaluation. Interobserver agreement ranged between 0.45-0.93. Pre-contrast CAIPIRINHA-VIBE showed significantly (p < 0.001) sharper images and higher lesion conspicuity with decreased residual aliasing, but more noise enhancement and inferior image quality. Post-contrast CAIPIRINHA-VIBE showed significantly (p < 0.001) sharper images and higher lesion conspicuity, with fewer respiratory motion and residual aliasing artefacts. Inferior fat suppression was noticeable on CAIPIRINHA-VIBE sequences (p < 0.001). High in-plane resolution abdominal 3D-GRE fat-suppressed T1-weighted imaging using a controlled-aliasing acceleration technique is feasible and yields sharper images compared to standard-resolution images using standard acceleration, with higher post-contrast image quality and a trend toward improved hepatic lesion detection. (orig.)

  19. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided an inclusive component database is developed. (author)
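    The trace-back idea can be sketched with a toy function-table representation. The components, effect names, and nested-dict tree format below are hypothetical, not taken from the paper:

```python
# Sketch: component function tables map an output deviation to the input
# deviations or internal failures that can cause it; tracing back from the
# top event expands each cause into an OR-gate until only basic events remain.
function_tables = {
    # component: {effect: [possible causes]}
    "valve": {"no_flow_out": ["no_flow_in", "valve_stuck_closed"]},
    "pump":  {"no_flow_in":  ["pump_failed", "no_power"]},
}

def trace_back(event, tables):
    """Recursively expand an event into a fault tree (nested dict of OR-gates)."""
    for comp, table in tables.items():
        if event in table:
            return {"OR": [trace_back(cause, tables) for cause in table[event]]}
    return event  # basic event: no component explains it further

tree = trace_back("no_flow_out", function_tables)
print(tree)
```

    A full implementation would also consult state transition tables and emit AND-gates where several conditions must hold simultaneously; this sketch shows only the OR-expansion backbone.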

  20. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is given to the analytical model-based fault-detection and fault...

  1. Geometric analysis of alternative models of faulting at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Young, S.R.; Stirewalt, G.L.; Morris, A.P.

    1993-01-01

    Realistic cross-section tectonic models must be retrodeformable to geologically reasonable pre-deformation states. Furthermore, it must be shown that geologic structures depicted on cross-section tectonic models can have formed by kinematically viable deformation mechanisms. Simple shear (i.e., listric fault models) is consistent with extensional geologic structures and fault patterns described at Yucca Mountain, Nevada. Flexural slip models yield results similar to oblique simple shear mechanisms, although there is no strong geological evidence for flexural slip deformation. Slip-line deformation is shown to generate fault block geometries that are a close approximation to observed fault block structures. However, slip-line deformation implies a degree of general ductility for which there is no direct geological evidence. Simple and hybrid 'domino' (i.e., planar fault) models do not adequately explain observed variations of fault block dip or the development of 'rollover' folds adjacent to major bounding faults. Overall tectonic extension may be underestimated because of syn-tectonic deposition (growth faulting) of the Tertiary pyroclastic rocks that comprise Yucca Mountain. A strong diagnostic test of the applicability of the domino model may be provided by improved knowledge of the Tertiary volcanic stratigraphy.

  2. Model-based fault diagnosis in PEM fuel cell systems

    Energy Technology Data Exchange (ETDEWEB)

    Escobet, T; de Lira, S; Puig, V; Quevedo, J [Automatic Control Department (ESAII), Universitat Politecnica de Catalunya (UPC), Rambla Sant Nebridi 10, 08222 Terrassa (Spain); Feroldi, D; Riera, J; Serra, M [Institut de Robotica i Informatica Industrial (IRI), Consejo Superior de Investigaciones Cientificas (CSIC), Universitat Politecnica de Catalunya (UPC) Parc Tecnologic de Barcelona, Edifici U, Carrer Llorens i Artigas, 4-6, Planta 2, 08028 Barcelona (Spain)

    2009-07-01

    In this work, a model-based fault diagnosis methodology for PEM fuel cell systems is presented. The methodology is based on computing residuals: indicators obtained by comparing measured inputs and outputs with analytical relationships derived from system modelling. The innovation of this methodology is the characterization of the relative residual fault sensitivity. To illustrate the results, a non-linear fuel cell simulator proposed in the literature is used, with modifications, to include the set of fault scenarios proposed in this work. Finally, the diagnosis results corresponding to these fault scenarios are presented. It is remarkable that with this methodology it is possible to diagnose and isolate all the faults in the proposed set, in contrast with other well-known methodologies that use the binary signature matrix of analytical residuals and faults. (author)
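    A minimal residual-generation sketch in this spirit is shown below. The linear nominal model, the bias fault, and the 3-sigma threshold are invented for illustration; they are not the fuel cell model or the sensitivity characterization of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_output(u):
    # Assumed nominal input-output relation (illustrative only)
    return 2.0 * u + 1.0

u = np.linspace(0.0, 5.0, 200)
noise_std = 0.05
measured = model_output(u) + rng.normal(0.0, noise_std, u.size)
measured += np.where(u > 3.0, 0.8, 0.0)   # additive bias fault appears after u = 3

# Residual: discrepancy between measurement and the analytical relationship
residual = np.abs(measured - model_output(u))
fault_flag = residual > 3.0 * noise_std   # 3-sigma detection threshold
print(fault_flag[-5:].all())              # the faulty region is flagged
```

    Isolation then works by checking which of several such residuals respond to the fault, which is where the paper's relative residual fault sensitivity refines the usual binary signature matrix.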

  3. Development of a fault test experimental facility model using Matlab

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez; Moraes, Davi Almeida, E-mail: martinez@ipen.br, E-mail: dmoraes@dk8.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The Fault Test Experimental Facility can be operated to generate normal and fault data; faults can be introduced at an initially small magnitude that is then increased gradually. This work presents the Fault Test Experimental Facility model developed using the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, which consists of a set of functions designed to create interfaces in an easy and fast way. The system model is based on the mass and energy inventory balance equations. Physical as well as operational aspects are taken into consideration. The interface layout looks like a process flowchart and the user can set the input variables. Besides the normal operation conditions, there is the possibility to choose a faulty variable from a list. The program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both normal and fault, with different noise levels added to the input variables. Data generated by the model will be compared with Fault Test Experimental Facility data. The Fault Test Experimental Facility theoretical model results will be used for the development of a Monitoring and Fault Detection System. (author)
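    In the same spirit, here is a minimal Python sketch (not the IPEN Matlab model) of a single-tank mass balance that generates normal and fault data, with a leak fault whose magnitude grows gradually and configurable sensor noise; all coefficients are invented:

```python
import numpy as np

def simulate(leak_growth=0.0, noise_std=0.01, steps=500, dt=1.0):
    """Mass balance for one tank: area * d(level)/dt = inflow - outflow - leak."""
    rng = np.random.default_rng(1)
    area, inflow, k_out = 2.0, 0.10, 0.05     # illustrative plant constants
    level, levels = 1.0, []
    for i in range(steps):
        leak = leak_growth * i * dt            # fault magnitude grows gradually
        dlevel = (inflow - k_out * level - leak) / area
        level = max(level + dt * dlevel, 0.0)
        levels.append(level + rng.normal(0.0, noise_std))  # noisy level sensor
    return np.array(levels)

normal = simulate()
faulty = simulate(leak_growth=2e-4)
print(normal[-1] > faulty[-1])   # the growing leak drives the level down
```

    Running the same model with and without the fault, at several noise levels, yields exactly the kind of labelled data set described above for training and testing a monitoring system.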

  4. Development of a fault test experimental facility model using Matlab

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Moraes, Davi Almeida

    2015-01-01

    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The Fault Test Experimental Facility can be operated to generate normal and fault data; faults can be introduced at an initially small magnitude that is then increased gradually. This work presents the Fault Test Experimental Facility model developed using the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, which consists of a set of functions designed to create interfaces in an easy and fast way. The system model is based on the mass and energy inventory balance equations. Physical as well as operational aspects are taken into consideration. The interface layout looks like a process flowchart and the user can set the input variables. Besides the normal operation conditions, there is the possibility to choose a faulty variable from a list. The program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both normal and fault, with different noise levels added to the input variables. Data generated by the model will be compared with Fault Test Experimental Facility data. The Fault Test Experimental Facility theoretical model results will be used for the development of a Monitoring and Fault Detection System. (author)

  5. Model-Based Methods for Fault Diagnosis: Some Guide-Lines

    DEFF Research Database (Denmark)

    Patton, R.J.; Chen, J.; Nielsen, S.B.

    1995-01-01

    This paper provides a review of model-based fault diagnosis techniques. Starting from basic principles, the properties...

  6. A Lateral Tensile Fracturing Model for Listric Fault

    Science.gov (United States)

    Qiu, Z.

    2007-12-01

    The new discovery of a major seismic fault of the great 1976 Tangshan earthquake suggests a lateral tensile fracturing process at the seismic source. The fault has a listric shape but cannot be explained with the prevailing model of listric faulting. A double-couple of forces without moment is demonstrated to be applicable to simulate the source mechanism. Based on fracture mechanics, laboratory experiments and numerical simulations, the model argues against the assumption of stick-slip on an existing fault as the cause of the earthquake, but is not in conflict with seismological observations. Global statistics of CMT solutions of great earthquakes lend significant support to the idea that lateral tensile fracturing might account for not only the Tangshan earthquake but also others.

  7. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, more parameters affect the delay of a component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps.

  8. Modeling Fluid Flow in Faulted Basins

    Directory of Open Access Journals (Sweden)

    Faille I.

    2014-07-01

    Full Text Available This paper presents a basin simulator designed to better take faults into account, either as conduits or as barriers to fluid flow. It computes hydrocarbon generation, fluid flow and heat transfer on the 4D (space and time) geometry obtained by 3D volume restoration. Contrary to classical basin simulators, this calculator does not require a structured mesh based on vertical pillars nor a multi-block structure associated with the fault network. The mesh follows the sediments during the evolution of the basin. It deforms continuously with respect to time to account for sedimentation, erosion, compaction and kinematic displacements. The simulation domain is structured in layers, in order to handle properly the corresponding heterogeneities and to follow the sedimentation processes (thickening of the layers). In each layer, the mesh is unstructured: it may include several types of cells such as tetrahedra, hexahedra, pyramids, prisms, etc. However, a mesh composed mainly of hexahedra is preferred as they are well suited to the layered structure of the basin. Faults are handled as internal boundaries across which the mesh is non-matching. Different models are proposed for fault behavior, such as an impervious fault, flow across the fault or a conductive fault. The calculator is based on a cell-centred Finite Volume discretisation, which ensures conservation of physical quantities (mass of fluid, heat) at a discrete level and which accounts properly for heterogeneities. The numerical scheme handles the non-matching meshes and guarantees appropriate connection of cells across faults. Results on a synthetic basin demonstrate the capabilities of this new simulator.
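    The conservation property of cell-centred finite-volume schemes mentioned above can be demonstrated on a minimal 1D diffusion problem. This is illustrative only (no faults, layers, or non-matching meshes); all numbers are made up:

```python
import numpy as np

n, dx, dt, d = 50, 1.0, 0.1, 1.0     # cells, cell size, time step, diffusivity
u = np.zeros(n)
u[n // 2] = 1.0                      # initial "mass" concentrated in one cell
mass0 = u.sum() * dx

for _ in range(1000):
    inner = -d * np.diff(u) / dx                  # fluxes at interior faces
    flux = np.concatenate(([0.0], inner, [0.0]))  # zero-flux boundary faces
    u += dt / dx * (flux[:-1] - flux[1:])         # conservative cell update

print(abs(u.sum() * dx - mass0))     # total mass is conserved to machine precision
```

    Because each interior face flux enters one cell and leaves its neighbour, the cell sums telescope and only the (zero) boundary fluxes can change the total, which is the property the basin simulator relies on across fault faces as well.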

  9. Verification of Fault Tree Models with RBDGG Methodology

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    Currently, fault tree analysis is widely used in the field of probabilistic safety assessment (PSA) of nuclear power plants (NPPs). To guarantee the correctness of fault tree models, which are usually constructed manually by analysts, a review by other analysts is widely used for verifying constructed fault tree models. Recently, an extension of the reliability block diagram was developed, named RBDGG (reliability block diagram with general gates). The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system; therefore, the modeling of a system for system reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that of the RGGG (Reliability Graph with General Gates) methodology. The difference is that the RBDGG methodology focuses on block failures, while the RGGG methodology focuses on connection-line failures. However, it is also known that an RGGG model can be converted to an RBDGG model and vice versa. In this paper, a new method for the verification of constructed fault tree models using the RBDGG methodology is proposed and demonstrated.
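    The verification idea, checking that a fault tree and a block-diagram view of the same system agree on the top-event probability, can be sketched by brute-force state enumeration. This is an illustrative stand-in for the RBDGG method, and the three-block system and failure probabilities are invented:

```python
from itertools import product

# Hypothetical system: blocks A and B redundant (parallel), in series with C.
p_fail = {"A": 0.1, "B": 0.2, "C": 0.05}

def system_fails(state):
    # Fault tree TOP event: (A AND B) OR C
    return (state["A"] and state["B"]) or state["C"]

def top_probability(fails):
    """Exact top-event probability by enumerating all component states."""
    total = 0.0
    for bits in product([False, True], repeat=len(p_fail)):
        state = dict(zip(p_fail, bits))
        p = 1.0
        for name, failed in state.items():
            p *= p_fail[name] if failed else 1.0 - p_fail[name]
        if fails(state):
            total += p
    return total

enumerated = top_probability(system_fails)
# Block-diagram reduction of the same system: A parallel B, in series with C
analytic = 1.0 - (1.0 - p_fail["A"] * p_fail["B"]) * (1.0 - p_fail["C"])
print(enumerated, analytic)
```

    Enumeration is exponential in the number of components, which is why the RBDGG methodology instead exploits the structural correspondence between the block diagram and the system; but for small examples it gives an independent check of both models.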

  10. Study on seismic hazard assessment of large active fault systems. Evolution of fault systems and associated geomorphic structures: fault model test and field survey

    International Nuclear Information System (INIS)

    Ueta, Keichi; Inoue, Daiei; Miyakoshi, Katsuyoshi; Miyagawa, Kimio; Miura, Daisuke

    2003-01-01

    Sandbox experiments and field surveys were performed to investigate fault system evolution and fault-related deformation of the ground surface, the Quaternary deposits and rocks. The results are summarized below. 1) In the case of strike-slip faulting, the basic fault sequence runs from early en echelon faults and pressure ridges to a linear trough. The fault systems associated with the 2000 western Tottori earthquake show the en echelon pattern that characterizes the early stage of wrench tectonics; therefore, no thoroughgoing surface faulting was found above the rupture as defined by the main shock and aftershocks. 2) Low-angle and high-angle reverse faults commonly migrate basinward with time. With increasing normal fault displacement in the bedrock, a normal fault develops within the range after a reverse fault has formed along the range front. 3) The horizontal distance of the surface rupture from the bedrock fault, normalized by the height of the Quaternary deposits, agrees well with that of the model tests. 4) An upward-widening damage zone, where secondary fractures develop, forms on the hanging-wall side of the high-angle reverse fault at the Kamioka mine. (author)

  11. Alternative model of thrust-fault propagation

    Science.gov (United States)

    Eisenstadt, Gloria; de Paor, Declan G.

    1987-07-01

    A widely accepted explanation for the geometry of thrust faults is that initial failures occur on deeply buried planes of weak rock and that thrust faults propagate toward the surface along a staircase trajectory. We propose an alternative model that applies Gretener's beam-failure mechanism to a multilayered sequence. Invoking compatibility conditions, which demand that a thrust propagate both upsection and downsection, we suggest that ramps form first, at shallow levels, and are subsequently connected by flat faults. This hypothesis also explains the formation of many minor structures associated with thrusts, such as backthrusts, wedge structures, pop-ups, and duplexes, and provides a unified conceptual framework in which to evaluate field observations.

  12. Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Coppersmith, K.J.; Youngs, R.R.

    1992-01-01

    In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors have developed a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model, which describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model, which describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, the authors allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.
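The simulation idea in this record can be illustrated with a minimal Monte Carlo sketch. Everything numeric here is an assumption for illustration (the truncated Gutenberg-Richter parameters, the empirical length-magnitude coefficients, and the fault/repository geometry); it is a sketch of the general procedure, not the EPRI model itself.

```python
import math
import random

def rupture_intersection_probability(n_sims=20_000, b=1.0, m_min=5.0, m_max=7.5,
                                     fault_length_km=20.0, repository_span_km=5.0,
                                     seed=1):
    """Monte Carlo sketch: sample magnitudes from a doubly truncated
    Gutenberg-Richter distribution, convert each to a rupture length with
    an illustrative empirical relation, place the rupture uniformly along
    the fault trace, and count how often it crosses the repository
    footprint. Multiplying by an annual event rate would give a hazard."""
    rng = random.Random(seed)
    beta = b * math.log(10)
    norm = 1.0 - math.exp(-beta * (m_max - m_min))
    repo_lo = (fault_length_km - repository_span_km) / 2  # centered footprint
    repo_hi = repo_lo + repository_span_km
    hits = 0
    for _ in range(n_sims):
        m = m_min - math.log(1.0 - rng.random() * norm) / beta  # inverse CDF
        rup_len = 10 ** (-2.44 + 0.59 * m)  # illustrative length (km) vs magnitude
        start = rng.uniform(0.0, max(fault_length_km - rup_len, 0.0))
        if start < repo_hi and start + rup_len > repo_lo:
            hits += 1
    return hits / n_sims
```

With a fixed seed the estimate is monotone in the repository span, which gives a quick sanity check on the geometry logic.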

  13. Identifying technical aliases in SELDI mass spectra of complex mixtures of proteins

    Science.gov (United States)

    2013-01-01

    Background Biomarker discovery datasets created using mass spectrum protein profiling of complex mixtures of proteins contain many peaks that represent the same protein with different charge states. Correlated variables such as these can confound the statistical analyses of proteomic data. Previously we developed an algorithm that clustered mass spectrum peaks that were biologically or technically correlated. Here we demonstrate an algorithm that clusters correlated technical aliases only. Results In this paper, we propose a preprocessing algorithm that can be used for grouping technical aliases in mass spectrometry protein profiling data. The stringency of the variance allowed for clustering is customizable, thereby affecting the number of peaks that are clustered. Subsequent analysis of the clusters, instead of individual peaks, helps reduce difficulties associated with technically-correlated data, and can aid more efficient biomarker identification. Conclusions This software can be used to pre-process and thereby decrease the complexity of protein profiling proteomics data, thus simplifying the subsequent analysis of biomarkers by decreasing the number of tests. The software is also a practical tool for identifying which features to investigate further by purification, identification and confirmation. PMID:24010718
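As a rough illustration of what "technical aliases" are: peaks observed at different charge states z imply the same neutral mass via M = z * (m/z - m_proton). The sketch below groups peaks on that basis alone; the published algorithm also uses correlation across spectra, and the charge range and tolerance here are assumed values.

```python
PROTON_DA = 1.00728  # proton mass, daltons

def neutral_mass(mz, z):
    """Neutral mass implied by an observed m/z at charge state z."""
    return z * (mz - PROTON_DA)

def are_aliases(mz_a, mz_b, max_charge=4, ppm_tol=200.0):
    """True if some pair of charge states makes the two peaks' implied
    neutral masses agree within ppm_tol (tolerances are assumed values)."""
    for za in range(1, max_charge + 1):
        for zb in range(1, max_charge + 1):
            ma, mb = neutral_mass(mz_a, za), neutral_mass(mz_b, zb)
            if abs(ma - mb) / ma * 1e6 <= ppm_tol:
                return True
    return False

def cluster_aliases(peaks):
    """Greedy single-link grouping of charge-state aliases."""
    clusters = []
    for mz in peaks:
        for cluster in clusters:
            if any(are_aliases(mz, other) for other in cluster):
                cluster.append(mz)
                break
        else:
            clusters.append([mz])
    return clusters
```

For example, singly and doubly protonated forms of a ~2000 Da protein (m/z 2001.007 and 1001.007) fall into one cluster, while an unrelated peak stays alone.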

  14. Doppler Aliasing Reduction in Wide-Angle Synthetic Aperture Radar Using Phase Modulated Random Stepped-Frequency Waveforms

    National Research Council Canada - National Science Library

    Hyatt, Andrew W

    2006-01-01

    ...) waveforms in a Wide-Angle Synthetic Aperture Radar (WA-SAR) scenario. RSF waveforms have been demonstrated to have desirable properties which allow for cancelling of Doppler aliased scatterers in WA-SAR images...

  15. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Full Text Available Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction. Thus, removal of a fault is performed after the fault is detected. In addition, the detection process and the correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources under the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  16. Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan

    Directory of Open Access Journals (Sweden)

    Yu-Wen Chang

    2017-01-01

    Full Text Available Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of particular faults for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of active faults over a particular lifetime up to the present time. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source, and serves as a reference for a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the characteristic behavior of the faults. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on the hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than in the current hazard estimation when the time-dependent model is used on those faults, particularly for faults whose elapsed time since the last event (such as the Chelungpu fault) is short.
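The Gutenberg-Richter relation and the Poisson (time-independent) branch mentioned above are easy to make concrete. The a and b values below are illustrative defaults, not Taiwan-specific estimates:

```python
import math

def gr_annual_rate(m, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= m from the Gutenberg-
    Richter relation log10 N = a - b*m (a and b are illustrative)."""
    return 10 ** (a - b * m)

def poisson_exceedance(m, years, a=4.0, b=1.0):
    """Time-independent (Poisson) probability of at least one event of
    magnitude >= m within `years`: 1 - exp(-N(m) * years)."""
    return 1.0 - math.exp(-gr_annual_rate(m, a, b) * years)
```

A time-dependent (e.g., BPT) model would replace the constant rate with one conditioned on the elapsed time since the last event, which is exactly why the near-fault demands change in the study above.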

  17. Extended reach OFDM-PON using super-Nyquist image induced aliasing.

    Science.gov (United States)

    Guo, Changjian; Liang, Jiawei; Liu, Jie; Liu, Liu

    2015-08-24

    We investigate a novel dispersion-compensating technique in double sideband (DSB) modulated and direct-detection (DD) passive optical network (PON) systems using super-Nyquist image induced aliasing. We show that diversity is introduced to the higher frequency components by deliberate aliasing using the super-Nyquist images. We then propose to use fractional sampling and per-subcarrier maximum ratio combining (MRC) to harvest this diversity. We evaluate the performance of conventional orthogonal frequency division multiplexing (OFDM) signals along with discrete Fourier transform spread (DFT-S) OFDM and code-division multiplexing OFDM (CDM-OFDM) signals using the proposed scheme. The results show that the DFT-S OFDM signal has the best performance due to spectrum spreading and its superior peak-to-average power ratio (PAPR). By using the proposed scheme, the reach of a 10-GHz bandwidth QPSK modulated OFDM-PON can be extended to around 90 km. We also experimentally show that the achievable data rate of the OFDM signals can be effectively increased using the proposed scheme when adaptive bit loading is applied, depending on the transmission distance. A 10.5% and 5.2% increase in the achievable bit rate can be obtained for DSB modulated OFDM-PONs in 48.3-km and 83.2-km standard single mode fiber (SSMF) transmission cases, respectively, without any modification on the transmitter. A 40-Gb/s OFDM transmission over 83.2-km SSMF is successfully demonstrated.
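Per-subcarrier maximum ratio combining, as used above to harvest the aliasing-induced diversity, has a standard closed form: observations r_i = h_i * s + n_i are weighted by the conjugate channel gains. A minimal sketch (gains assumed known from channel estimation):

```python
def mrc_combine(observations, gains):
    """Per-subcarrier maximum ratio combining: with r_i = h_i * s + n_i,
    the estimate sum(conj(h_i) * r_i) / sum(|h_i|^2) weights stronger
    branches more and is the SNR-optimal linear combiner."""
    num = sum(h.conjugate() * r for r, h in zip(observations, gains))
    den = sum(abs(h) ** 2 for h in gains)
    return num / den
```

In the noiseless case the combiner returns the transmitted symbol exactly, regardless of how unevenly the gain is split across branches.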

  18. Systematic evaluation of fault trees using real-time model checker UPPAAL

    International Nuclear Information System (INIS)

    Cha, Sungdeok; Son, Hanseong; Yoo, Junbeom; Jee, Eunkyung; Seong, Poong Hyun

    2003-01-01

    Fault tree analysis, the most widely used safety analysis technique in industry, is often applied manually. Although techniques such as cutset analysis or probabilistic analysis can be applied on the fault tree to derive further insights, they are inadequate for locating flaws when failure modes in fault tree nodes are incorrectly identified or when causal relationships among failure modes are inaccurately specified. In this paper, we demonstrate that the model checking technique is a powerful tool that can formally validate the accuracy of fault trees. We used the real-time model checker UPPAAL because the system used as the case study, nuclear power emergency shutdown software named Wolsong SDS2, has real-time requirements. By translating functional requirements written in SCR-style tabular notation into timed automata, two types of properties were verified: (1) whether the failure mode described in a fault tree node is consistent with the system's behavioral model; and (2) whether a fault tree node has been accurately decomposed. A group of domain engineers with detailed technical knowledge of Wolsong SDS2 and safety analysis techniques developed the fault tree used in the case study. Nevertheless, the model checking technique detected subtle ambiguities present in the fault tree.

  19. Determination of the relationship between major fault and zinc mineralization using fractal modeling in the Behabad fault zone, central Iran

    Science.gov (United States)

    Adib, Ahmad; Afzal, Peyman; Mirzaei Ilani, Shapour; Aliyari, Farhang

    2017-10-01

    The aim of this study is to determine a relationship between zinc mineralization and a major fault in the Behabad area, central Iran, using the Concentration-Distance to Major Fault (C-DMF), Area of Mineralized Zone-Distance to Major Fault (AMZ-DMF), and Concentration-Area (C-A) fractal models for Zn deposit/mine classification according to their distance from the Behabad fault. Application of the C-DMF and the AMZ-DMF models for Zn mineralization classification in the Behabad fault zone reveals that the main Zn deposits have a good correlation with the major fault in the area. The distance from the known zinc deposits/mines with Zn values higher than 29% and the area of the mineralized zone of more than 900 m2 to the major fault is lower than 1 km, which shows a positive correlation between Zn mineralization and the structural zone. As a result, the AMZ-DMF and C-DMF fractal models can be utilized for the delineation and the recognition of different mineralized zones in different types of magmatic and hydrothermal deposits.

  20. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, digital protection systems (DPSs) employ multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor in the reliability contribution of an FTT. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in the reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
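The role fault coverage plays in unavailability can be sketched with a standard textbook approximation (not the paper's integrated-coverage model): covered faults are detected and repaired quickly, while uncovered faults stay latent until the next periodic test.

```python
def unavailability(failure_rate_per_h, coverage, mttr_h, test_interval_h):
    """Steady-state unavailability sketch for a repairable component:
    a fraction `coverage` of faults is caught by the fault-tolerant
    technique and repaired within `mttr_h`; the remainder stays latent
    until the next periodic test (mean latency = half the interval).
    This is a standard textbook approximation, not the paper's
    integrated-fault-coverage model."""
    detected = coverage * mttr_h
    latent = (1.0 - coverage) * test_interval_h / 2.0
    return failure_rate_per_h * (detected + latent)
```

Because the latent term dominates for long test intervals, raising coverage lowers unavailability, which is the qualitative effect the sensitivity studies above examine.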

  1. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines, and the related contents, to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively, using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions, the general procedures of FTA, and the essential and basic guidelines for revising the fault trees. Accordingly, the guidelines will be capable of guiding the FTA to the level of capability category II of the ASME PRA standard.

  2. Guidelines for system modeling: fault tree analysis

    International Nuclear Information System (INIS)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines, and the related contents, to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively, using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions, the general procedures of FTA, and the essential and basic guidelines for revising the fault trees. Accordingly, the guidelines will be capable of guiding the FTA to the level of capability category II of the ASME PRA standard.

  3. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support the modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and the different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
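A toy version of the trace-back idea can be sketched as follows. The flow deviations, component tables, and failure modes are hypothetical stand-ins loosely inspired by the paper's sprinkler example, not taken from it:

```python
# Hypothetical miniature of the idea: each flow deviation maps to the
# upstream deviation that could propagate it (or None at a source
# component) plus the local basic failure modes that could cause it.
MODEL = {
    "no_water_at_head":  ("no_flow_in_branch", ["head_blocked"]),
    "no_flow_in_branch": ("no_flow_in_main",   ["branch_pipe_rupture"]),
    "no_flow_in_main":   (None, ["pump_failed", "valve_stuck_closed"]),
}

def build_fault_tree(event, model=MODEL):
    """Trace-back: recursively expand a deviation into an OR-tree whose
    leaves are basic component failure modes."""
    upstream, basics = model[event]
    children = [("BASIC", b) for b in basics]
    if upstream is not None:
        children.append(build_fault_tree(upstream, model))
    return ("OR", event, children)

def basic_events(tree):
    """Collect the leaf (basic) events of a generated tree."""
    if tree[0] == "BASIC":
        return {tree[1]}
    return set().union(*(basic_events(c) for c in tree[2]))
```

A full implementation would also consult state transition tables and emit AND gates where multiple inputs must fail together; this sketch shows only the OR-chain trace-back.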

  4. Neotectonics of Asia: Thin-shell finite-element models with faults

    Science.gov (United States)

    Kong, Xianghong; Bird, Peter

    1994-01-01

    As India pushed into and beneath the south margin of Asia in Cenozoic time, it added a great volume of crust, which may have been (1) emplaced locally beneath Tibet, (2) distributed as regional crustal thickening of Asia, (3) converted to mantle eclogite by high-pressure metamorphism, or (4) extruded eastward to increase the area of Asia. The amount of eastward extrusion is especially controversial: plane-stress computer models of finite strain in a continuum lithosphere show minimal escape, while laboratory and theoretical plane-strain models of finite strain in a faulted lithosphere show escape as the dominant mode. We suggest computing the present (or neo)tectonics by use of the known fault network and available data on fault activity, geodesy, and stress to select the best model. We apply a new thin-shell method which can represent a faulted lithosphere of realistic rheology on a sphere, and provides predictions of present velocities, fault slip rates, and stresses for various trial rheologies and boundary conditions. To minimize artificial boundaries, the models include all of Asia east of 40 deg E and span 100 deg on the globe. The primary unknowns are the friction coefficient of faults within Asia and the amounts of shear traction applied to Asia in the Himalayan and oceanic subduction zones at its margins. Data on Quaternary fault activity prove to be most useful in rating the models. Best results are obtained with a very low fault friction of 0.085. This major heterogeneity shows that unfaulted continuum models cannot be expected to give accurate simulations of the orogeny. But, even with such weak faults, only a fraction of the internal deformation is expressed as fault slip; this means that rigid microplate models cannot represent the kinematics either. A universal feature of the better models is that eastern China and southeast Asia flow rapidly eastward with respect to Siberia.
The rate of escape is very sensitive to the level of shear traction in the

  5. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide simultaneous, high-spatial-resolution measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than

  6. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: • new material on fault isolation and identification, and fault detection in feedback control loops; • extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; • addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and • enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  7. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years, there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which at present remains elusive. Furthermore, an improved understanding of induced earthquake physics is pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to an event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced-earthquake hazard assessment.
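The velocity-weakening/velocity-strengthening distinction in the rate-and-state framework comes down to the sign of a - b in the steady-state friction law mu_ss = mu0 + (a - b) * ln(V / V0). A minimal sketch with illustrative parameter values (not those of the study's fault model):

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.015, b=0.019, v0=1e-6):
    """Steady-state rate-and-state friction, mu_ss = mu0 + (a - b) * ln(v / v0).
    With a - b < 0 the patch is velocity weakening (friction drops as slip
    accelerates), which is what permits stick-slip events; parameter
    values here are illustrative."""
    return mu0 + (a - b) * math.log(v / v0)
```

Setting a > b instead makes friction rise with slip rate, which is why the surrounding velocity-strengthening region creeps stably rather than nucleating events.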

  8. Modeling of a Switched Reluctance Motor under Stator Winding Fault Condition

    DEFF Research Database (Denmark)

    Chen, Hao; Han, G.; Yan, Wei

    2016-01-01

    A new method for modeling a stator winding fault with one shorted coil in a switched reluctance motor (SRM) is presented in this paper. The method is based on an artificial neural network (ANN), incorporated with a simple analytical model in electromagnetic analysis to estimate the flux-linkage characteristics of the SRM under the stator winding fault. The magnetic equivalent circuit method with the ANN is applied to calculate the nonlinear flux-linkage characteristics under the stator winding fault condition. A 12/8 SRM prototype system with a stator winding fault is developed to verify the effectiveness of the proposed method. The results for a stator winding fault with one shorted coil are obtained from the proposed method and from experimental work on the developed prototype. It is shown that the simulation results are close to the test results.

  9. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual fault. Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized.

  10. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design .... To find the quality of non-robust tests, a fuzzy delay ..... Dubois D and Prade H 1989 Processing Fuzzy temporal knowledge. IEEE Transactions ...

  11. Automated Generation of Fault Management Artifacts from a Simple System Model

    Science.gov (United States)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to incorporate the ideas of State Analysis more fully, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as fault trees, to portray system behavior efficiently and to depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
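The artifact-generation idea can be miniaturized as follows: given a toy system model (components, failure modes, and a "feeds" relation standing in for the SysML relationships; all names here are hypothetical, not from the SMAP model), traversing it yields FMEA rows mechanically:

```python
# Hypothetical miniature system model: the "feeds" relation stands in
# for the SysML relationships the paper's software queries.
SYSTEM = {
    "battery":         {"feeds": "power_bus",       "modes": ["cell_short"]},
    "power_bus":       {"feeds": "flight_computer", "modes": ["bus_undervolt"]},
    "flight_computer": {"feeds": None,              "modes": ["processor_halt"]},
}

def generate_fmea(model):
    """Walk each failure mode downstream along "feeds" and emit one FMEA
    row per mode: item, mode, propagation path, and end effect."""
    rows = []
    for comp, spec in model.items():
        for mode in spec["modes"]:
            path, cur = [comp], comp
            while model[cur]["feeds"] is not None:
                cur = model[cur]["feeds"]
                path.append(cur)
            rows.append({"item": comp,
                         "failure_mode": mode,
                         "propagation": " -> ".join(path),
                         "end_effect": "loss of " + path[-1]})
    return rows
```

A real traversal would query the SysML model through an API and handle branching propagation and severities; the point here is only that the FMEA becomes a deterministic function of the model rather than of engineer intuition.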

  12. Natural Environment Modeling and Fault-Diagnosis for Automated Agricultural Vehicle

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2008-01-01

    This paper presents results for an automatic navigation system for agricultural vehicles. The system uses stereo-vision, inertial sensors and GPS. Special emphasis has been placed on modeling the natural environment in conjunction with a fault-tolerant navigation system. The results are exemplified by an agricultural vehicle following cut grass (swath). It is demonstrated how faults in the system can be detected and diagnosed using state-of-the-art techniques from the fault-tolerant literature. Results in performing fault diagnosis and fault accommodation are presented using real data.

  13. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of time and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).
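The approach can be sketched with the forward algorithm on a two-state (normal/faulty) hidden Markov model over a discretized sensor reading. All transition and emission probabilities below are illustrative assumptions, not values from the NASA method:

```python
def fault_posterior(obs, trans, emit, init):
    """Forward algorithm for a two-state hidden Markov model
    (state 0 = normal, state 1 = faulty) over discretized sensor
    readings; returns the posterior probability of the faulty state
    after the whole observation sequence."""
    alpha = [init[s] * emit[s][obs[0]] for s in (0, 1)]
    for o in obs[1:]:
        alpha = [emit[s][o] * (alpha[0] * trans[0][s] + alpha[1] * trans[1][s])
                 for s in (0, 1)]
        total = sum(alpha)
        alpha = [a / total for a in alpha]  # rescale to avoid underflow
    return alpha[1] / sum(alpha)

# Illustrative parameters: faults are rare but persistent, and an
# anomalous reading (1) is far more likely in the faulty state.
TRANS = [[0.99, 0.01], [0.05, 0.95]]
EMIT = [[0.95, 0.05], [0.20, 0.80]]  # P(reading | state)
INIT = [0.99, 0.01]
```

Because a single anomalous sample only nudges the posterior while a run of them drives it toward 1, thresholding this posterior rather than the raw readings is what reduces false alarms.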

  14. Insights in Fault Flow Behaviour from Onshore Nigeria Petroleum System Modelling

    Directory of Open Access Journals (Sweden)

    Woillez Marie-Noëlle

    2017-09-01

    Full Text Available Faults are complex geological features acting either as a permeability barrier, baffle or drain to fluid flow in sedimentary basins. Their role can be crucial for over-pressure build-up and hydrocarbon migration, therefore they have to be properly integrated into basin modelling. The ArcTem basin simulator included in the TemisFlow software has been specifically designed to improve the modelling of faulted geological settings and to get a numerical representation of fault zones closer to the geological description. Here we present new developments in the simulator to compute fault properties through time as a function of available geological parameters, for single-phase 2D simulations. We have used this new prototype to model pressure evolution on a siliciclastic 2D section located onshore in the Niger Delta. The section is crossed by several normal growth faults which subdivide the basin into several sedimentary units and appear to be lateral limits of strongly over-pressured zones. Faults are also thought to play a crucial role in hydrocarbon migration from the deep source rocks to shallow reservoirs. We automatically compute the Shale Gouge Ratio (SGR) along the fault planes through time, as well as the fault displacement velocity. The fault core permeability is then computed as a function of the SGR, including threshold values to account for shale smear formation. Longitudinal fault fluid flow is enhanced during periods of high fault slip velocity. The method allows us to simulate both along-fault drainage during the basin history and overpressure build-up at present day. The simulated pressures are, to first order, within the range of the observed pressures at our disposal.
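The Shale Gouge Ratio computed along the fault planes is, in its basic form, the thickness-weighted shale fraction of the beds that have slipped past a point: SGR = sum(Vsh_i * dz_i) / throw. The sketch below, including the SGR-to-permeability mapping and its smear threshold, uses assumed values rather than the simulator's calibration:

```python
import math

def shale_gouge_ratio(beds, throw_m):
    """SGR at a point on the fault: thickness-weighted shale fraction of
    the beds that have slipped past it. `beds` is a list of
    (thickness_m, shale_fraction) from the top of the slipped interval."""
    remaining, weighted = throw_m, 0.0
    for thickness, vsh in beds:
        dz = min(thickness, remaining)
        weighted += vsh * dz
        remaining -= dz
        if remaining <= 0.0:
            break
    if remaining > 0.0:
        raise ValueError("throw exceeds the described stratigraphy")
    return weighted / throw_m

def fault_core_permeability_md(sgr, k_sand_md=100.0, k_seal_md=1e-4, sgr_seal=0.5):
    """Illustrative SGR-to-permeability mapping with a shale-smear
    threshold: log-linear decay up to sgr_seal, sealing beyond it.
    All values are assumptions, not the simulator's calibration."""
    if sgr >= sgr_seal:
        return k_seal_md
    frac = sgr / sgr_seal
    return 10 ** (math.log10(k_sand_md) + frac * (math.log10(k_seal_md) - math.log10(k_sand_md)))
```

In a basin simulation these two functions would be re-evaluated through time as throw accumulates, which is how the fault switches between drain and seal behaviour.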

  15. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Full Text Available Fault diagnosis in rotating machinery is significant for avoiding serious accidents; thus, an accurate and timely diagnosis method is necessary. With breakthroughs in deep learning algorithms, intelligent methods such as the deep belief network (DBN) and the deep convolutional neural network (DCNN) have been developed and perform machinery fault diagnosis satisfactorily. However, only a few of these methods properly deal with the noise that exists in practical situations, and the denoising methods they rely on require extensive professional experience. Accordingly, rethinking fault diagnosis methods based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. An integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE), trained in a greedy layer-wise fashion, is applied both to denoise random noise in the raw signals and to represent fault features in pattern diagnosis of both rolling bearing and gearbox faults. Finally, experimental validation demonstrates that the proposed method has better diagnosis accuracy than the DBN, particularly in the presence of noise, with an advantage of approximately 7% in fault diagnosis accuracy.

  16. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    Science.gov (United States)

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors relies mainly on a double-redundancy technique, which cannot arbitrate in some situations for lack of a third judgment, while additional hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
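    The triplex comparison described above (two hardware channels plus the analytical model channel) can be illustrated with a simple voting rule. The function name, tolerance handling and fault labels below are hypothetical, not the paper's FDD logic:

```python
def diagnose_sensor(ch_a, ch_b, model_value, tol):
    """
    Analytical-redundancy vote among two hardware channels and the on-board
    model estimate. Returns (selected_value, diagnosis). Illustrative only.
    """
    ab = abs(ch_a - ch_b) <= tol          # hardware channels agree?
    am = abs(ch_a - model_value) <= tol   # channel A agrees with the model?
    bm = abs(ch_b - model_value) <= tol   # channel B agrees with the model?
    if ab:                                # dual channels agree: accept them
        return (ch_a + ch_b) / 2.0, "no fault"
    if am and not bm:                     # B disagrees with both others
        return ch_a, "channel B faulty"
    if bm and not am:                     # A disagrees with both others
        return ch_b, "channel A faulty"
    return model_value, "undetermined: fall back to model"

print(diagnose_sensor(100.0, 150.0, 100.2, tol=1.0))
```

    This is the redundancy-recovery idea: when one hardware channel drifts or steps away, the model channel breaks the tie and the healthy measurement is kept.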

  17. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    Directory of Open Access Journals (Sweden)

    Yaodong Xing

    2012-08-01

    Full Text Available Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors relies mainly on a double-redundancy technique, which cannot arbitrate in some situations for lack of a third judgment, while additional hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  18. Using Magnetics and Topography to Model Fault Splays of the Hilton Creek Fault System within the Long Valley Caldera

    Science.gov (United States)

    De Cristofaro, J. L.; Polet, J.

    2017-12-01

    The Hilton Creek Fault (HCF) is a range-bounding extensional fault that forms the eastern escarpment of California's Sierra Nevada mountain range, near the town of Mammoth Lakes. The fault is well mapped along its main trace to the south of the Long Valley Caldera (LVC), but the location and nature of its northern terminus is poorly constrained. The fault terminates as a series of left-stepping splays within the LVC, an area of active volcanism that most notably erupted 760 ka, and currently experiences continuous geothermal activity and sporadic earthquake swarms. The timing of the most recent motion on these fault splays is debated, as is the threat posed by this section of the Hilton Creek Fault. The Third Uniform California Earthquake Rupture Forecast (UCERF3) model depicts the HCF as a single strand projecting up to 12 km into the LVC. However, Bailey (1989) and Hill and Montgomery-Brown (2015) have argued against this model, suggesting that extensional faulting within the Caldera has been accommodated by the ongoing volcanic uplift and thus the intracaldera section of the HCF has not experienced motion since 760 ka. We intend to map the intracaldera fault splays and model their subsurface characteristics to better assess their rupture history and potential. This will be accomplished using high-resolution topography and subsurface geophysical methods, including ground-based magnetics. Preliminary work was performed using high-precision Nikon Nivo 5.C total stations to generate elevation profiles and a backpack mounted GEM GS-19 proton precession magnetometer. The initial results reveal a correlation between magnetic anomalies and topography. East-West topographic profiles show terrace-like steps, sub-meter in height, which correlate to changes in the magnetic data. Continued study of the magnetic data using Oasis Montaj 3D modeling software is planned. Additionally, we intend to prepare a high-resolution terrain model using structure-from-motion techniques

  19. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

    To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify single open-switch faults is proposed in this paper. The introduced fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to conventional modulation strategies such as the carrier-based modulation method, indirect space vector modulation and optimum Alesina-Venturini, the FCS-MPC has a known and unchanged switching state in a sampling period. It is simpler to diagnose the exact location of the open switch in an MC with FCS-MPC. To achieve better quality of the output current under single open…
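    The FCS-MPC selection step named in this record (evaluate every admissible switching state against a cost function, keep the best one for the next sampling period) can be sketched generically. The toy one-step prediction model below is illustrative only; it is not a matrix converter model, and all names and gains are invented:

```python
def fcs_mpc_select(states, predict, i_ref):
    """Evaluate the cost function for every admissible switching state and
    return the state with the lowest predicted current-tracking error."""
    return min(states, key=lambda s: sum((p - r) ** 2
                                         for p, r in zip(predict(s), i_ref)))

# Toy one-step prediction (illustrative only, not a converter model):
# predicted current = present current + gain * switch command per phase.
i_now = [0.0, 0.0]
def predict(s):
    return [i_now[0] + 0.5 * s[0], i_now[1] + 0.5 * s[1]]

best = fcs_mpc_select([(0, 0), (1, 0), (0, 1), (1, 1)], predict, i_ref=[0.4, 0.1])
print(best)  # → (1, 0)
```

    Because the controller knows exactly which state it applied, a measured load current that contradicts the prediction for that state points directly at the faulty switch, which is the diagnosis idea the abstract describes.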

  20. Modeling of the fault-controlled hydrothermal ore-forming systems

    International Nuclear Information System (INIS)

    Pek, A.A.; Malkovsky, V.I.

    1993-07-01

    A necessary precondition for the formation of hydrothermal ore deposits is a strong focusing of hydrothermal flow as fluids move from the fluid source to the site of ore deposition. The spatial distribution of hydrothermal deposits favors the concept that such fluid flow focusing is controlled, for the most part, by regional faults which provide a low-resistance path for hydrothermal solutions. Results of electric analog simulations, analytical solutions, and computer simulations of the fluid flow in a fault-controlled single-pass advective system confirm this concept. The influence of fluid flow focusing on heat and mass transfer in a single-pass advective system was investigated for a simplified version of the metamorphic model for the genesis of greenstone-hosted gold deposits. The spatial distribution of ore mineralization predicted by computer simulation is in reasonable agreement with geological observations. Computer simulations of the fault-controlled thermoconvective system revealed a complex pattern of mixing hydrothermal solutions in the model, which also simulates the development of modern hydrothermal systems on the ocean floor. A specific feature of the model considered is the development, under certain conditions, of an intra-fault convective cell that operates essentially independently of the large-scale circulation. These and other results obtained during the study indicate that modeling of natural fault-controlled hydrothermal systems is instructive for the analysis of transport processes in man-made hydrothermal systems that could develop in geologic high-level nuclear waste repositories.

  1. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...

  2. Selecting the optimal anti-aliasing filter for multichannel biosignal acquisition intended for inter-signal phase shift analysis

    International Nuclear Information System (INIS)

    Keresnyei, Róbert; Hejjel, László; Megyeri, Péter; Zidarics, Zoltán

    2015-01-01

    The availability of microcomputer-based portable devices facilitates high-volume multichannel biosignal acquisition and the analysis of instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers, or artificially synthesised series, was passed through 100–80–60–40–20 Hz 2nd–4th–6th–8th order Bessel and Butterworth filters digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical model of the impulse and step responses of the filters. The investigated filters have signal delays as diverse as 2–46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. Its magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. In conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low-pass filtering are recommended for biosignal acquisition intended for inter-signal phase shift analysis. (note)
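    The order-of-magnitude of the filter delays discussed above can be reproduced with SciPy's digital filter design and group-delay routines. The 1 kHz sampling rate is an assumed acquisition setting, and only the low-frequency (1 Hz) group delay is evaluated; this is a sketch, not the paper's measurement procedure:

```python
from scipy import signal

fs = 1000.0   # sampling rate (Hz): assumed acquisition setting
fc = 40.0     # anti-aliasing corner frequency (Hz)
order = 4

delays_ms = {}
for name, design in (("bessel", signal.bessel), ("butter", signal.butter)):
    b, a = design(order, fc, fs=fs)                     # digital design (bilinear transform)
    _, gd = signal.group_delay((b, a), w=[1.0], fs=fs)  # delay in samples at 1 Hz
    delays_ms[name] = float(gd[0]) / fs * 1000.0        # convert samples to ms
print(delays_ms)
```

    Delays on the order of 10 ms for a 40 Hz 4th-order filter are consistent with the 2–46 ms range reported in the abstract; repeating the loop over orders and corner frequencies reproduces the spread.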

  3. Investigation of faulted tunnel models by combined photoelasticity and finite element analysis

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Yuping

    1994-01-01

    Models of square and circular tunnels with short faults cutting through their surfaces are investigated by photoelasticity. These models, when duplicated by finite element analysis, can predict the stress states of square or circular faulted tunnels adequately. Finite element analysis, using gap elements, may be used to investigate full-size faulted tunnel systems.

  4. Phase response curves for models of earthquake fault dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Franović, Igor, E-mail: franovic@ipb.ac.rs [Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Kostić, Srdjan [Institute for the Development of Water Resources “Jaroslav Černi,” Jaroslava Černog 80, 11226 Belgrade (Serbia); Perc, Matjaž [Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, SI-2000 Maribor (Slovenia); CAMTP—Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Klinshov, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Nekorkin, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); University of Nizhny Novgorod, 23 Prospekt Gagarina, 603950 Nizhny Novgorod (Russian Federation); Kurths, Jürgen [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2016-06-15

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  5. Phase response curves for models of earthquake fault dynamics

    International Nuclear Information System (INIS)

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-01-01

    We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  6. Model-Based Fault Diagnosis in Electric Drive Inverters Using Artificial Neural Network

    National Research Council Canada - National Science Library

    Masrur, Abul; Chen, ZhiHang; Zhang, Baifang; Jia, Hongbin; Murphey, Yi-Lu

    2006-01-01

    … A normal model and various faulted models of the inverter-motor combination were developed, and voltage and current signals were generated from those models to train an artificial neural network for fault diagnosis…

  7. Observer and data-driven model based fault detection in Power Plant Coal Mills

    DEFF Research Database (Denmark)

    Fogh Odgaard, Peter; Lin, Bao; Jørgensen, Sten Bay

    2008-01-01

    This paper presents and compares model-based and data-driven fault detection approaches for coal mill systems. The first approach detects faults with an optimal unknown input observer developed from a simplified energy balance model. Due to the time-consuming effort of developing a first-principles model with motor power as the controlled variable, data-driven methods for fault detection are also investigated. Regression models that represent normal operating conditions (NOCs) are developed with both static and dynamic principal component analysis and partial least squares methods. The residual between process measurement and the NOC model prediction is used for fault detection. A hybrid approach, where a data-driven model is employed to derive an optimal unknown input observer, is also implemented. The three methods are evaluated with case studies on coal mill data, which includes a fault…

  8. Dynamics Modeling and Analysis of Local Fault of Rolling Element Bearing

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2015-01-01

    Full Text Available This paper presents a nonlinear vibration model of rolling element bearings with 5 degrees of freedom, based on Hertz contact theory and relevant bearing knowledge of kinematics and dynamics. The slipping of the balls, the oil film stiffness, and the nonlinear time-varying stiffness of the bearing are taken into consideration in the model proposed here. A single-point local fault model of a rolling element bearing is introduced into the nonlinear 5-degree-of-freedom model, according to the loss of contact deformation as a ball rolls into and out of the local fault location. The functions of spall depth corresponding to defects of different shapes are discussed separately in this paper. The ODE solver in MATLAB is then used to solve the nonlinear vibration model numerically and simulate the vibration response of rolling element bearings with a local fault. Analysis of the simulated signals shows behavior and patterns similar to those observed in the processed experimental signals of rolling element bearings, in both the time domain and the frequency domain, which validates the proposed nonlinear vibration model as a source of typical local-fault signals for research on effective fault diagnostic algorithms.
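    Frequency-domain validation of simulated local-fault signals is conventionally done against the classical kinematic bearing fault frequencies. A sketch with the textbook formulas and an invented bearing geometry (not the paper's bearing):

```python
import math

def bearing_fault_frequencies(n_balls, f_rot, d_ball, d_pitch, contact_deg=0.0):
    """Classical kinematic fault frequencies for a rolling element bearing
    with a stationary outer race (standard formulas, not from the paper)."""
    r = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    return {
        "BPFO": 0.5 * n_balls * f_rot * (1.0 - r),  # ball pass freq., outer race
        "BPFI": 0.5 * n_balls * f_rot * (1.0 + r),  # ball pass freq., inner race
        "FTF":  0.5 * f_rot * (1.0 - r),            # cage (fundamental train) freq.
        "BSF":  0.5 * (d_pitch / d_ball) * f_rot * (1.0 - r * r),  # ball spin freq.
    }

# Illustrative geometry: 9 balls, 25 Hz shaft speed, 7.94 mm balls, 39.04 mm pitch.
freqs = bearing_fault_frequencies(9, 25.0, 7.94e-3, 39.04e-3)
print(freqs)
```

    A simulated outer-race spall, for instance, should place impact energy at BPFO and its harmonics; the identity BPFO + BPFI = N·f_rot is a quick sanity check on any implementation.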

  9. Workflow Fault Tree Generation Through Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2014-01-01

    We present a framework for the automated generation of fault trees from models of real-world process workflows, expressed in a formalised subset of the popular Business Process Modelling and Notation (BPMN) language. To capture uncertainty and unreliability in workflows, we extend this formalism

  10. Certain Type Turbofan Engine Whole Vibration Model with Support Looseness Fault and Casing Response Characteristics

    Directory of Open Access Journals (Sweden)

    H. F. Wang

    2014-01-01

    Full Text Available Support looseness is a common fault type in aeroengines. Serious looseness faults emerge under large unbalanced forces, causing excessive vibration and even leading to rubbing faults, so it is important to analyze and recognize looseness faults effectively. In this paper, based on the structural features of a certain type of turbofan engine, a whole rotor-support-casing model for that engine is established. The rotor and casing systems are modeled by the finite element beam method; the support systems are modeled by a lumped-mass model; a support looseness fault model is also introduced. The coupled system response is obtained by numerical integration. Based on the casing acceleration signals, the impact characteristics of symmetric-stiffness and asymmetric-stiffness models are analyzed, showing that the looseness fault leads to longitudinal asymmetry of the acceleration waveform in the time domain and to multiple-frequency characteristics, which is consistent with vibration signals from real trial runs. The asymmetric-stiffness model is thus verified as a suitable looseness fault model for aeroengines.
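    The asymmetric-stiffness looseness idea above can be illustrated with a single-degree-of-freedom oscillator whose support stiffness drops when contact is lost. All parameter values are invented for illustration, and the integrator is a plain semi-implicit Euler scheme, not the paper's whole-engine model:

```python
import numpy as np

def k_of_x(x, k_contact=1.0e6, k_loose=2.0e5):
    """Piecewise support stiffness: stiff in contact (x <= 0), soft when loose."""
    return k_contact if x <= 0.0 else k_loose

def simulate(m=1.0, c=50.0, f0=200.0, omega=120.0, dt=1e-4, n=20000):
    """Semi-implicit Euler integration of m*x'' + c*x' + k(x)*x = f0*cos(omega*t)."""
    x, v, xs = 0.0, 0.0, []
    for i in range(n):
        a = (f0 * np.cos(omega * i * dt) - c * v - k_of_x(x) * x) / m
        v += a * dt          # update velocity with the current acceleration
        x += v * dt          # then position with the new velocity
        xs.append(x)
    return np.array(xs)

xs = simulate()
ss = xs[-5000:]              # steady-state portion of the response
print(ss.max(), -ss.min())   # positive (loose-side) excursions dominate
```

    The bilinear stiffness makes the steady-state waveform asymmetric about zero (larger excursions toward the soft, loose side) and injects harmonics of the forcing frequency into the spectrum, which is the qualitative signature described in the abstract.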

  11. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi

    2018-02-12

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performances of the inspected PV system is designed. Residuals, which are the difference between the measured and estimated output data, are used as a fault indicator. Indeed, residuals are used as the input for the Multivariate CUmulative SUM (MCUSUM) algorithm to detect potential faults. We evaluated the proposed method by using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
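    The residual-based CUSUM detection can be sketched in its simplest one-sided univariate form (the paper applies a multivariate CUSUM). The allowance and threshold values and the deterministic residual series are illustrative only:

```python
def cusum_alarm(residuals, k, h):
    """One-sided CUSUM: s_i = max(0, s_{i-1} + r_i - k); alarm when s > h.
    k = allowance (roughly half the mean shift to detect), h = threshold.
    Returns the index of the first alarm, or -1 if none fires."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)
        if s > h:
            return i
    return -1

# Illustrative residuals: near zero while healthy, mean shift of 2 after a fault.
series = [0.2] * 200 + [2.0] * 50
idx = cusum_alarm(series, k=0.5, h=5.0)
print(idx)  # → 203: three samples after the shift at index 200
```

    Small residuals are absorbed by the allowance k, so the statistic stays at zero during normal operation; a sustained shift accumulates and crosses h within a few samples, which is why CUSUM-type charts detect small persistent faults faster than fixed-threshold tests.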

  12. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations, dynamic fault tree (DFT) analysis is suggested in the literature, together with different approaches for solving DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex, with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
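    The core idea of replacing full Markov simulation with standard Monte Carlo sampling can be illustrated on a toy tree. The gates below are only the traditional OR and AND (failure time of an AND gate = max of input failure times, OR = min), without the paper's compensation gates; rates and mission time are invented:

```python
import random

def mc_fault_tree_prob(t_mission, rates, n=100_000, seed=1):
    """Monte Carlo estimate of the top-event probability at t_mission for the
    toy tree TOP = OR(AND(A, B), C), with exponential component lifetimes.
    AND = max of input failure times, OR = min."""
    random.seed(seed)
    lam_a, lam_b, lam_c = rates
    failures = 0
    for _ in range(n):
        ta = random.expovariate(lam_a)   # sample each component's failure time
        tb = random.expovariate(lam_b)
        tc = random.expovariate(lam_c)
        if min(max(ta, tb), tc) <= t_mission:
            failures += 1
    return failures / n

p = mc_fault_tree_prob(1.0, (0.5, 0.5, 0.1))
print(p)  # close to the analytic 1 - (1 - pA*pB) * (1 - pC) ≈ 0.235
```

    For this static tree the analytic answer is available, which makes it a convenient check; the value of the sampling approach is that order-dependent dynamic gates, repairs, and compensation behaviour fit into the same loop without building the combined Markov state space.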

  13. Experimental Modeling of Dynamic Shallow Dip-Slip Faulting

    Science.gov (United States)

    Uenishi, K.

    2010-12-01

    In our earlier study (AGU 2005, SSJ 2005, JPGU 2006), using a finite difference technique, we have conducted some numerical simulations related to the source dynamics of shallow dip-slip earthquakes, and suggested the possibility of the existence of corner waves, i.e., shear waves that carry concentrated kinematic energy and generate extremely strong particle motions on the hanging wall of a nonvertical fault. In the numerical models, a dip-slip fault is located in a two-dimensional, monolithic linear elastic half space, and the fault plane dips either vertically or 45 degrees. We have investigated the seismic wave field radiated by crack-like rupture of this straight fault. If the fault rupture, initiated at depth, arrests just below or reaches the free surface, four Rayleigh-type pulses are generated: two propagating along the free surface into the opposite directions to the far field, the other two moving back along the ruptured fault surface (interface) downwards into depth. These downward interface pulses may largely control the stopping phase of the dynamic rupture, and in the case the fault plane is inclined, on the hanging wall the interface pulse and the outward-moving Rayleigh surface pulse interact with each other and the corner wave is induced. On the footwall, the ground motion is dominated simply by the weaker Rayleigh pulse propagating along the free surface because of much smaller interaction between this Rayleigh and the interface pulse. The generation of the downward interface pulses and corner wave may play a crucial role in understanding the effects of the geometrical asymmetry on the strong motion induced by shallow dip-slip faulting, but it has not been well recognized so far, partly because those waves are not expected for a fault that is located and ruptures only at depth. However, the seismological recordings of the 1999 Chi-Chi, Taiwan, the 2004 Niigata-ken Chuetsu, Japan, earthquakes as well as a more recent one in Iwate-Miyagi Inland

  14. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    Science.gov (United States)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-stepping faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may have been overcome by the stress concentration around the fault jog.

  15. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection and to treat fault correction as a delayed process. On the other hand, artificial neural network models, as a data-driven approach, try to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set.
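    The analytical baseline described above (fault detection modelled by a software reliability growth model, correction as a delayed copy of detection) can be sketched with a Goel-Okumoto mean value function. The choice of Goel-Okumoto, the parameter values, and the constant delay are all assumptions for illustration, not the paper's settings:

```python
import math

def detected(t, a=100.0, b=0.3):
    """Goel-Okumoto mean value function: expected cumulative faults detected
    by time t, with a = total fault content, b = detection rate."""
    return a * (1.0 - math.exp(-b * t))

def corrected(t, delay=2.0, **kw):
    """Correction modelled as the detection curve shifted by a constant
    debugging delay; no faults are corrected before the delay elapses."""
    return detected(max(0.0, t - delay), **kw)

print(detected(10.0), corrected(10.0))  # correction always lags detection
```

    The recurrent-network approach in the paper drops exactly these assumptions (the parametric curve shape and the fixed delay) and lets the network learn the coupling between the two processes from the data.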

  16. Ductile bookshelf faulting: A new kinematic model for Cenozoic deformation in northern Tibet

    Science.gov (United States)

    Zuza, A. V.; Yin, A.

    2013-12-01

    It has been long recognized that the most dominant features on the northern Tibetan Plateau are the >1000 km left-slip strike-slip faults (e.g., the Altyn Tagh, Kunlun, and Haiyuan faults). Early workers used the presence of these faults, especially the Kunlun and Haiyuan faults, as evidence for eastward lateral extrusion of the plateau, but their low documented offsets (hundreds of kilometers or less) cannot account for the 2500 km of convergence between India and Asia. Instead, these faults may result from north-south right-lateral simple shear due to the northward indentation of India, which leads to the clockwise rotation of the strike-slip faults and left-lateral slip (i.e., bookshelf faulting). With this idea, deformation is still localized on discrete fault planes, and 'microplates' or blocks rotate and/or translate with little internal deformation. As significant internal deformation occurs across northern Tibet within strike-slip-bounded domains, there is need for a coherent model to describe all of the deformational features. We also note the following: (1) geologic offsets and Quaternary slip rates of both the Kunlun and Haiyuan faults vary along strike and appear to diminish to the east, (2) the faults appear to kinematically link with thrust belts (e.g., Qilian Shan, Liupan Shan, Longmen Shan, and Qimen Tagh) and extensional zones (e.g., Shanxi, Yinchuan, and Qinling grabens), and (3) temporal relationships exist between the major deformation zones and the strike-slip faults (e.g., simultaneous enhanced deformation and offset in the Qilian Shan, the Liupan Shan, and the Haiyuan fault at 8 Ma). We propose a new kinematic model to describe the active deformation in northern Tibet: a ductile-bookshelf-faulting model. With this model, right-lateral simple shear leads to clockwise vertical-axis rotation of the Qaidam and Qilian blocks and left-slip faulting. This motion creates regions of compression and extension, dependent on the local boundary conditions (e.g., rigid

  17. 3D Strain Modelling of Tear Fault Analogues

    Science.gov (United States)

    Hindle, D.; Vietor, T.

    2005-12-01

    Tear faults can be described as vertical discontinuities, with near fault-parallel displacements terminating on some sort of shallow detachment. As such, they are difficult to study in "cross section", i.e. 2 dimensions, as is often done for fold-thrust systems. Hence, little attempt has been made to model the evolution of strain around tear faults and the processes of strain localisation in such structures, due to the necessity of describing these systems in 3 dimensions and the problems this poses for both numerical and analogue modelling. Field studies suggest that strain in such regions can be distributed across broad zones on minor tear systems, which are often not easily mappable. Such strain is probably due to distributed deformation and to the displacement gradients which are themselves necessary for the initiation of the tear itself. We present a numerical study of the effects of a sharp, basal discontinuity parallel to the transport direction in a shortening wedge of material. The discontinuity is represented by two adjacent basal surfaces with strongly contrasting friction coefficients (0.5 and 0.05). The material is modelled using PFC3D distinct element software for simulating granular material, whose properties are chosen to simulate upper crustal, sedimentary rock. The model geometry is a rectangular bounding box, 2 km x 1 km and 0.35-0.5 km deep, with a single driving wall of constant velocity. We show the evolution of strain in the model in horizontal and vertical sections, and interpret strain localization as showing the spontaneous development of tear-fault-like features. The strain field in the model is asymmetrical, rotated towards the strong side of the model. Strain increments seem to oscillate in time, suggesting achievement of a steady state. We also note that our model cannot be treated as a critical wedge, since the 3rd dimension and the lateral variations of strength rule out this type of 2D approximation.

  18. Dynamic Models of Earthquake Rupture along branch faults of the Eastern San Gorgonio Pass Region in CA using Complex Fault Structure

    Science.gov (United States)

    Douilly, R.; Oglesby, D. D.; Cooke, M. L.; Beyer, J. L.

    2017-12-01

Compilations of geomorphic and paleoseismic data have shown that the right-lateral Coachella segment of the southern San Andreas Fault is past its average recurrence time. On its western edge, this fault segment splits into two branches: the Mission Creek strand and the Banning strand of the San Andreas. Depending on how rupture propagates through this region, there is the possibility of a through-going rupture that could channel damaging seismic energy into the Los Angeles Basin. The fault structures and rupture scenarios on these two strands are potentially very different, so it is important to determine which strand is the more likely rupture path, and under which circumstances rupture will take either one. In this study, we focus on the effect of different assumptions about fault geometry and stress pattern on the rupture process, to test these scenarios and thus investigate the most likely path of a rupture that starts on the Coachella segment. We consider two types of fault geometry based on the SCEC Community Fault Model and create a 3D finite element mesh for each. These two meshes are then incorporated into the finite element code FaultMod to compute a physical model of the rupture dynamics. We use the slip-weakening friction law, and we consider different assumptions about the background stress, such as constant tractions, regional stress regimes of different orientations, heterogeneous off-fault stresses, and the results of long-term stressing rates from quasi-static crustal deformation models that account for the time since the last event on each fault segment. Both the constant and regional stress distributions show that it is more likely for the rupture to branch from the Coachella segment onto the Mission Creek strand than onto the Banning fault segment. For the regional stress distribution, we encounter cases of supershear rupture for one type of fault geometry and subshear rupture for the other.
The fault connectivity at this branch

  19. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

Among the vast variety of fuzzy model-based observers reported in the literature, which is the proper one to use for fault detection in a class of chemical reactor? In this study, four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
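The residual-generation idea underlying all four observer designs can be sketched for the plain (non-fuzzy) Luenberger case: the observer tracks the plant, and a sensor fault shows up as a jump in the output residual. Everything below (the 2-state model, the gain, and the fault profile) is invented for illustration and is not taken from the paper:

```python
import numpy as np

# Hypothetical discrete-time 2-state linear model standing in for a
# locally linearized reactor; A, B, C, L_gain and the fault are illustrative.
A = np.array([[0.95, 0.05],
              [0.00, 0.90]])
B = np.array([[0.10], [0.05]])
C = np.array([[1.0, 0.0]])
L_gain = np.array([[0.5], [0.2]])   # observer gain (assumed stabilizing)

def simulate(n=60, fault_at=30, fault_mag=0.5):
    """Run plant and Luenberger observer; return |residual| per step."""
    x = np.zeros((2, 1))
    x_hat = np.zeros((2, 1))
    residuals = []
    for k in range(n):
        u = np.array([[1.0]])
        y = C @ x + (fault_mag if k >= fault_at else 0.0)  # additive sensor fault
        r = (y - C @ x_hat).item()                         # output residual
        residuals.append(abs(r))
        x = A @ x + B @ u
        x_hat = A @ x_hat + B @ u + L_gain * r             # Luenberger correction
    return residuals

res = simulate()
print(max(res[:30]), max(res[30:]))  # residual is zero before the fault, nonzero after
```

Thresholding the residual (here, anything above the pre-fault noise floor) is the actual fault-detection decision; the fuzzy variants in the paper blend several such local observers.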

  20. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

Deformation along faults in the shallow crust (<1 km) strongly affects fluid flow, and understanding fault zone hydrogeology therefore requires the combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods, with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems, in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  1. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

Full Text Available Subject of Research. Software reliability and test planning models are studied taking into account the probabilistic nature of error detection and resolution. Modeling of software testing enables planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested, using an error detection probability for each software module. The Erlang distribution is used to approximate an arbitrary distribution of the fault resolution duration, and the exponential distribution is used to approximate the fault detection process. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes, and the debugging time required to achieve specified quality goals was calculated. The calculation results are used for time and resource planning of new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of arbitrary time distributions for the fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With these models we can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
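The Erlang approximation mentioned above can be illustrated with a moment-matching sketch: given empirical fault-resolution times, choose the Erlang shape k from the squared coefficient of variation and the rate from the mean. The lognormal "observed" data below are a stand-in, not data from the paper:

```python
import random
import statistics

# Stand-in empirical fault-resolution times (hours); distribution is arbitrary.
random.seed(1)
observed = [random.lognormvariate(1.0, 0.4) for _ in range(5000)]

m = statistics.fmean(observed)
v = statistics.pvariance(observed)
k = max(1, round(m * m / v))   # Erlang shape: squared CV of Erlang(k) is 1/k
lam = k / m                    # Erlang rate, matching the mean k/lam = m

# An Erlang(k, lam) variate is the sum of k exponentials with rate lam,
# which is what lets the labeled-graph/ODE machinery stay Markovian.
def erlang_sample():
    return sum(random.expovariate(lam) for _ in range(k))

approx = [erlang_sample() for _ in range(5000)]
print(k, round(m, 2), round(statistics.fmean(approx), 2))  # matched means
```

Replacing the arbitrary resolution-time distribution by this Erlang stage chain is what allows the non-exponential duration to be embedded in a system of linear differential equations.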

  2. Finite element models of earthquake cycles in mature strike-slip fault zones

    Science.gov (United States)

    Lynch, John Charles

The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, two main questions are posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a

  3. Research on Fault Diagnosis of HTR-PM Based on Multilevel Flow Model

    International Nuclear Information System (INIS)

    Zhang Yong; Zhou Yangping

    2014-01-01

In this paper, we focus on the application of the Multilevel Flow Model (MFM) to automatic real-time fault diagnosis of High Temperature Gas-cooled Reactor Pebble-bed Module (HTR-PM) accidents. In the MFM, the plant process is described abstractly at the function level by mass, energy and information flows, which reveal the interaction between different components and enable causal reasoning between functions according to the flow properties. Thus, in an abnormal status, a goal-function-component oriented fault diagnosis can be performed with the model very quickly, and abnormal alarms can also be precisely explained by the reasoning relationships of the model. Using MFM, a fault diagnosis model of the HTR-PM plant is built, and the detailed process of fault diagnosis is shown in flowcharts. Owing to the lack of simulation data for the HTR-PM, experiments were not conducted to evaluate the fault diagnosis performance, but analysis of algorithm feasibility and complexity shows that the diagnosis system will have a good ability to detect and diagnose accidents in a timely manner. (author)

  4. A Model of Intelligent Fault Diagnosis of Power Equipment Based on CBR

    Directory of Open Access Journals (Sweden)

    Gang Ma

    2015-01-01

Full Text Available The demand for power supply reliability has increased strongly as the power industry develops rapidly, and meeting such demand requires a substantial power grid. Power equipment running and testing data contain vast amounts of information that underpin online monitoring and fault diagnosis, and ultimately state-based maintenance. In this paper, an intelligent fault diagnosis model for power equipment based on case-based reasoning (IFDCBR) is proposed. The model discovers the potential rules of equipment faults by data mining. It constructs a condition case base for the equipment by analyzing four categories of data: online recording data, history data, basic test data, and environmental data. SVM regression analysis is applied in mining the case base so as to establish an equipment condition fingerprint. The running data of equipment can then be diagnosed against this condition fingerprint to detect whether there is a fault. Finally, the paper verifies the intelligent model and the three-ratio method on a set of practical data. The results demonstrate that the intelligent model is more effective and accurate in fault diagnosis.

  5. Vibration model of rolling element bearings in a rotor-bearing system for fault diagnosis

    Science.gov (United States)

    Cong, Feiyun; Chen, Jin; Dong, Guangming; Pecht, Michael

    2013-04-01

    Rolling element bearing faults are among the main causes of breakdown in rotating machines. In this paper, a rolling bearing fault model is proposed based on the dynamic load analysis of a rotor-bearing system. The rotor impact factor is taken into consideration in the rolling bearing fault signal model. The defect load on the surface of the bearing is divided into two parts, the alternate load and the determinate load. The vibration response of the proposed fault signal model is investigated and the fault signal calculating equation is derived through dynamic and kinematic analysis. Outer race and inner race fault simulations are realized in the paper. The simulation process includes consideration of several parameters, such as the gravity of the rotor-bearing system, the imbalance of the rotor, and the location of the defect on the surface. The simulation results show that different amplitude contributions of the alternate load and determinate load will cause different envelope spectrum expressions. The rotating frequency sidebands will occur in the envelope spectrum in addition to the fault characteristic frequency. This appearance of sidebands will increase the difficulty of fault recognition in intelligent fault diagnosis. The experiments given in the paper have successfully verified the proposed signal model simulation results. The test rig design of the rotor bearing system simulated several operating conditions: (1) rotor bearing only; (2) rotor bearing with loader added; (3) rotor bearing with loader and rotor disk; and (4) bearing fault simulation without rotor influence. The results of the experiments have verified that the proposed rolling bearing signal model is important to the rolling bearing fault diagnosis of rotor-bearing systems.
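The envelope-spectrum behaviour analysed above can be reproduced on a toy signal: repeated impacts at an assumed fault frequency excite a structural resonance, and demodulating with the analytic signal recovers the fault frequency in the envelope spectrum. All numbers here are illustrative, not the paper's parameters:

```python
import numpy as np

# Simulated outer-race-style fault signal: impacts at f_fault exciting a
# resonance at f_res, each decaying exponentially (values are assumptions).
fs, T = 20000, 1.0
t = np.arange(0, T, 1/fs)
f_fault, f_res = 90.0, 3000.0

sig = np.zeros_like(t)
for t0 in np.arange(0, T, 1/f_fault):         # one decaying burst per impact
    idx = t >= t0
    sig[idx] += np.exp(-800*(t[idx]-t0)) * np.sin(2*np.pi*f_res*(t[idx]-t0))

# Analytic signal via FFT (Hilbert transform), then the envelope spectrum.
n = len(sig)
S = np.fft.fft(sig)
h = np.zeros(n)
h[0] = 1; h[1:n//2] = 2; h[n//2] = 1          # one-sided spectrum weights (n even)
envelope = np.abs(np.fft.ifft(S * h))
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(n, 1/fs)
print(freqs[np.argmax(env_spec)])             # peak lands near f_fault
```

Modulating the impact amplitudes at the shaft rotation rate (the alternate/determinate load split in the paper) would add the rotating-frequency sidebands around this peak in the same spectrum.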

  6. Systems analysis approach to probabilistic modeling of fault trees

    International Nuclear Information System (INIS)

    Bartholomew, R.J.; Qualls, C.R.

    1985-01-01

    A method of probabilistic modeling of fault tree logic combined with stochastic process theory (Markov modeling) has been developed. Systems are then quantitatively analyzed probabilistically in terms of their failure mechanisms including common cause/common mode effects and time dependent failure and/or repair rate effects that include synergistic and propagational mechanisms. The modeling procedure results in a state vector set of first order, linear, inhomogeneous, differential equations describing the time dependent probabilities of failure described by the fault tree. The solutions of this Failure Mode State Variable (FMSV) model are cumulative probability distribution functions of the system. A method of appropriate synthesis of subsystems to form larger systems is developed and applied to practical nuclear power safety systems
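The state-vector formulation described above reduces, for a toy two-component AND gate, to a small linear ODE system dp/dt = pQ whose absorbing state is the top event. The failure rates below are illustrative:

```python
import numpy as np

# States: 0=(OK,OK), 1=(Failed,OK), 2=(OK,Failed), 3=(Failed,Failed)=top event.
lam1, lam2 = 1e-3, 2e-3     # per-hour failure rates (assumed, no repair)
Q = np.array([
    [-(lam1+lam2), lam1,  lam2,  0.0 ],
    [0.0,         -lam2,  0.0,   lam2],
    [0.0,          0.0,  -lam1,  lam1],
    [0.0,          0.0,   0.0,   0.0 ],
])  # Markov generator: row i lists transition rates out of state i

p = np.array([1.0, 0.0, 0.0, 0.0])   # start with both components working
dt, T = 0.1, 1000.0
for _ in range(int(T/dt)):           # forward-Euler integration of dp/dt = p Q
    p = p + dt * (p @ Q)

# For independent non-repairable components the AND-gate top event has a
# closed form, which checks the numerical state-vector solution.
analytic = (1 - np.exp(-lam1*T)) * (1 - np.exp(-lam2*T))
print(p[3], analytic)
```

Repair, common-cause coupling, or time-dependent rates would simply change entries of Q (or make it time-varying), leaving the same first-order linear state-vector structure described in the abstract.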

  7. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    Science.gov (United States)

    Thomas, M. Y.; Bhat, H. S.

    2017-12-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alter the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high-strain rates. We consider 2D in-plane models, with a 1D right lateral fault featuring slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could be potentially seen as a geological signature of this transition. These scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  8. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    Science.gov (United States)

    Thomas, Marion Y.; Bhat, Harsha S.

    2018-05-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alter the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high-strain rates. We consider 2D in-plane models, with a 1D right lateral fault featuring slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could be potentially seen as a geological signature of this transition. These scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  9. Transposing an active fault database into a seismic hazard fault model for nuclear facilities. Pt. 1. Building a database of potentially active faults (BDFA) for metropolitan France

    Energy Technology Data Exchange (ETDEWEB)

    Jomard, Herve; Cushing, Edward Marc; Baize, Stephane; Chartier, Thomas [IRSN - Institute of Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); Palumbo, Luigi; David, Claire [Neodyme, Joue les Tours (France)

    2017-07-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. BDFA reports to date 136 faults and represents a first step toward the implementation of seismic source models that would be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15% of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  10. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
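The failure-effect propagation that an FFM encodes can be sketched as reachability in a directed graph from failure modes to observation points; the component names and edges below are invented for illustration:

```python
from collections import deque

# Hypothetical propagation graph: failure modes -> intermediate effects
# -> observation points. None of these names come from the AGSM/IHM model.
edges = {
    "valve_stuck":   ["low_flow"],
    "pump_degraded": ["low_flow", "high_current"],
    "low_flow":      ["pressure_sensor_low"],
    "high_current":  ["current_sensor_high"],
}
obs_points = {"pressure_sensor_low", "current_sensor_high"}

def effects(failure_mode):
    """Observation points reachable from a failure mode (BFS propagation)."""
    seen, q = {failure_mode}, deque([failure_mode])
    while q:
        node = q.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return sorted(seen & obs_points)

print(effects("pump_degraded"))   # -> ['current_sensor_high', 'pressure_sensor_low']
print(effects("valve_stuck"))     # -> ['pressure_sensor_low']
```

Diagnosis inverts this map: given the set of triggered observation points, the candidate failure modes are those whose effect sets are consistent with it, which is why standardized per-component subgraphs (the generic models in the paper) yield consistent diagnostic behavior when composed.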

  11. Modeling, control and fault diagnosis of an isolated wind energy conversion system with a self-excited induction generator subject to electrical faults

    International Nuclear Information System (INIS)

    Attoui, Issam; Omeiri, Amar

    2014-01-01

Highlights: • A new model of the SEIG is developed to simulate both rotor and stator faults. • This model takes iron loss, main flux and cross flux saturation into account. • A new control strategy based on a Fractional-Order Controller (FOC) is proposed. • The control strategy is developed for the control of the wind turbine speed. • An on-line diagnostic procedure based on stator current analysis is presented. - Abstract: In this paper, a contribution to the modeling and fault diagnosis of rotor and stator faults of a Self-Excited Induction Generator (SEIG) in an Isolated Wind Energy Conversion System (IWECS) is proposed. In order to control the speed of the wind turbine, based on a linear model of the wind turbine system about a specified operating point, a new Fractional-Order Controller (FOC) with a simple and practical design method is proposed. The FOC ensures the stability of the nonlinear system in both healthy and faulty conditions. Furthermore, in order to detect stator and rotor faults in the squirrel-cage self-excited induction generator, an on-line fault diagnostic technique based on the spectral analysis of the stator currents of the squirrel-cage SEIG by a Fast Fourier Transform (FFT) algorithm is used. Additionally, a generalized model of the squirrel-cage SEIG is developed to simulate both rotor and stator faults, taking iron loss, main flux and cross flux saturation into account. The efficiency of the generalized model, the control strategy and the diagnostic procedure is illustrated with simulation results.
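The FFT-based diagnostic idea can be illustrated on a synthetic stator current: rotor faults produce sidebands at (1 ± 2s)f around the supply frequency, which a simple spectrum estimate picks out. The slip and sideband amplitudes below are assumptions, not the paper's values:

```python
import numpy as np

# Synthetic stator current: 50 Hz fundamental plus small fault sidebands
# at (1 +/- 2s)*f0; slip s and amplitudes are illustrative.
fs, T, f0, s = 1000.0, 10.0, 50.0, 0.03
t = np.arange(0, T, 1/fs)
i_stator = (np.sin(2*np.pi*f0*t)
            + 0.02*np.sin(2*np.pi*(1-2*s)*f0*t)    # lower sideband (fault)
            + 0.02*np.sin(2*np.pi*(1+2*s)*f0*t))   # upper sideband (fault)

# One-sided amplitude spectrum; T = 10 s gives 0.1 Hz resolution, so the
# 47/50/53 Hz components fall exactly on FFT bins.
spec = np.abs(np.fft.rfft(i_stator)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1/fs)

def amp_at(f):
    """Spectrum amplitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(amp_at(f0), amp_at((1-2*s)*f0), amp_at((1+2*s)*f0))
```

In practice the diagnostic decision compares sideband amplitude to the fundamental (here roughly -28 dB), with a threshold set from the healthy-machine baseline.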

  12. Analysis of Fault Spacing in Thrust-Belt Wedges Using Numerical Modeling

    Science.gov (United States)

    Regensburger, P. V.; Ito, G.

    2017-12-01

Numerical modeling is invaluable for studying the mechanical processes governing the evolution of geologic features such as thrust-belt wedges. The mechanisms controlling thrust fault spacing in wedges are not well understood. Our numerical model treats the thrust belt as a visco-elastic-plastic continuum and uses a finite-difference, marker-in-cell method to solve for conservation of mass and momentum. From these conservation laws, stress is calculated, and Byerlee's law is used to determine the shear stress required for a fault to form. Each model consists of a layer of crust, initially 3 km thick, carried on top of a basal décollement, which moves at a constant speed towards a rigid backstop. A series of models were run with varied material properties, focusing on the angle of basal friction at the décollement, the angle of friction within the crust, and the cohesion of the crust. We investigate how these properties affect the spacing between the thrusts that have the greatest time-integrated history of slip and therefore the greatest effect on the large-scale undulations in surface topography. The surface positions of these faults, which extend through most of the crustal layer, are identifiable as local maxima in the positive curvature of surface topography. Tracking the temporal evolution of faults, we find that thrust blocks are widest when they first form at the front of the wedge and then tend to contract over time as more crustal material is carried to the wedge. Within each model, thrust blocks form with similar initial widths, but individual thrust blocks develop differently and may approach an asymptotic width over time. The median of thrust block widths across the whole wedge tends to decrease with time. Median fault spacing shows a positive correlation with both wedge cohesion and internal friction. In contrast, median fault spacing exhibits a negative correlation at small angles of basal friction, suggesting laws that can be used to predict fault spacing in thrust-belt wedges.

  13. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    Science.gov (United States)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on model parameters, and a huge computational effort. Quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  14. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

Full Text Available This paper deals with the modeling of faults for analog circuits. A two-dimensional (2D) fault model is first proposed based on collaborative analysis of supply current and output voltage. This model is a family of circle loci on the complex plane, and it greatly simplifies the algorithms for test point selection and potential fault simulations, which are primary difficulties in the fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D) complex space is proposed, which achieves a far better fault detection ratio (FDR) against measurement error and parametric tolerance. To address the problem of fault masking in both the 2D and 3D fault models, this paper proposes an effective design-for-testability (DFT) method. By adding redundant bypassing components in the circuit under test (CUT), this method achieves an excellent fault isolation ratio (FIR) in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through experimental results provided in this paper.

  15. Fault Tolerant Control Using Gaussian Processes and Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Yang Xiaoke

    2015-03-01

    Full Text Available Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.
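The first ingredient, representing post-fault behaviour with a Gaussian process, can be sketched with a plain numpy GP posterior mean (RBF kernel). The training data and hyperparameters below are illustrative, not from the paper:

```python
import numpy as np

# RBF kernel; length-scale and signal variance are assumed hyperparameters.
def rbf(a, b, ell=0.3, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d/ell)**2)

# Stand-in "post-fault plant response" samples (input -> output), noisy.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0, 15)
y = 0.6*np.sin(3*X) + 0.01*rng.standard_normal(15)

# GP posterior mean at test inputs: mu = K(X*,X) (K(X,X)+sigma^2 I)^-1 y
K = rbf(X, X) + 1e-4*np.eye(15)       # jitter doubles as noise variance
Xs = np.array([0.5, 1.5])
mu = rbf(Xs, X) @ np.linalg.solve(K, y)
print(mu, 0.6*np.sin(3*Xs))           # prediction vs underlying function
```

In the fault-tolerant-control setting, a model predictive controller would optimize its control sequence against this learned predictor (and its posterior variance) instead of the nominal plant model.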

  16. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  17. Dynamic rupture models of subduction zone earthquakes with off-fault plasticity

    Science.gov (United States)

    Wollherr, S.; van Zelst, I.; Gabriel, A. A.; van Dinther, Y.; Madden, E. H.; Ulrich, T.

    2017-12-01

Modeling tsunami genesis based on purely elastic seafloor displacement typically underpredicts tsunami sizes. Dynamic rupture simulations allow us to analyse whether plastic energy dissipation is a missing rheological component, by capturing the complex interplay of the rupture front, emitted seismic waves and the free surface in the accretionary prism. Strike-slip models with off-fault plasticity suggest decreasing rupture speed and extensive plastic yielding, mainly at shallow depths. For simplified subduction geometries, inelastic deformation on the verge of Coulomb failure may enhance vertical displacement, which in turn favors the generation of large tsunamis (Ma, 2012). However, constraining appropriate initial conditions in terms of fault geometry, initial fault stress and strength remains challenging. Here, we present dynamic rupture models of subduction zones constrained by long-term seismo-thermo-mechanical (STM) modeling, without any a priori assumption about regions of failure. The STM model provides self-consistent slab geometries, as well as stress and strength initial conditions, which evolve in response to tectonic stresses, temperature, gravity, plasticity and pressure (van Dinther et al. 2013). Coseismic slip and coupled seismic wave propagation are modelled using the software package SeisSol (www.seissol.org), suited for complex fault zone structures and topography/bathymetry. SeisSol allows for local time-stepping, which drastically reduces the time-to-solution (Uphoff et al., 2017). This is particularly important in large-scale scenarios resolving small-scale features, such as the shallow angle between the megathrust fault and the free surface. Our dynamic rupture model uses a Drucker-Prager plastic yield criterion and accounts for thermal pressurization around the fault, mimicking the effect of pore pressure changes due to frictional heating. We first analyze the influence of this rheology on rupture dynamics and tsunamigenic properties, i.e. seafloor
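The Drucker-Prager yield criterion mentioned above can be sketched as a simple stress-state check; the cohesion, friction angle, and the Mohr-Coulomb matching convention below are illustrative assumptions, not the parameters of the study:

```python
import numpy as np

# Drucker-Prager yield check on a full stress tensor (compression negative).
# f = sqrt(J2) + alpha*I1 - k; f >= 0 means the state is at/beyond yield.
def drucker_prager_yield(sigma, c=5e6, phi=np.deg2rad(30)):
    I1 = np.trace(sigma)
    s = sigma - (I1/3.0) * np.eye(sigma.shape[0])   # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)                   # second deviatoric invariant
    # One common plane-strain matching of Mohr-Coulomb parameters:
    denom = np.sqrt(9 + 12*np.tan(phi)**2)
    alpha = np.tan(phi) / denom
    k = 3*c / denom
    return np.sqrt(J2) + alpha*I1 - k

# A modest hydrostatic (confined) state stays elastic; a state with large
# differential stress yields.
elastic = drucker_prager_yield(np.diag([-10e6, -10e6, -10e6]))
plastic = drucker_prager_yield(np.diag([-60e6, -5e6, -5e6]))
print(elastic, plastic)   # negative (elastic) vs positive (yielding)
```

In the dynamic rupture code this check is evaluated pointwise each time step, and stresses beyond yield are returned to the yield surface, which is the plastic energy dissipation discussed in the abstract.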

  18. Stochastic Modeling and Simulation of Near-Fault Ground Motions for Performance-Based Earthquake Engineering

    OpenAIRE

    Dabaghi, Mayssa

    2014-01-01

    A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as ...

  19. Modeling and Simulation of Transient Fault Response at Lillgrund Wind Farm when Subjected to Faults in the Connecting 130 kV Grid

    Energy Technology Data Exchange (ETDEWEB)

    Eliasson, Anders; Isabegovic, Emir

    2009-07-01

    The purpose of this thesis was to investigate which types of faults in the connecting grid should be dimensioning cases for future wind farms. An investigation of over- and undervoltages at the main transformer and at the turbines inside Lillgrund wind farm was the main goal. The results will be used in the planning stage of future wind farms when performing insulation coordination and determining protection settings. A model of the Lillgrund wind farm and a part of the connecting 130 kV grid was built in PSCAD/EMTDC. The farm consists of 48 Siemens SWT-2.3-93 2.3 MW wind turbines with full power converters. The turbines were modeled as controllable current sources providing a constant active power output up to the current limit of 1.4 pu. The transmission lines and cables were modeled as frequency-dependent (phase) models. The load flows and bus voltages were verified against a PSS/E model, and the transient response was verified against measurement data from two faults: a line-to-line fault in the vicinity of Barsebaeck (BBK) and a single line-to-ground fault close to the Bunkeflo (BFO) substation. For the simulation, three-phase-to-ground, single line-to-ground and line-to-line faults were applied at different locations in the connecting grid, and the phase-to-ground voltages at different buses in the connecting grid and at the turbines were studied. These faults were applied for different configurations of the farm. For single line-to-ground faults, the highest overvoltage on a turbine was 1.22 pu (32.87 kV), due to the clearing of a fault at BFO (the PCC). For line-to-line faults, the highest overvoltage on a turbine was 1.59 pu (42.83 kV), at the beginning of a fault at KGE, one bus away from BFO. Both these cases occurred when all radials were connected and the turbines ran at full power. The highest overvoltage observed at Lillgrund was 1.65 pu (44.45 kV). This overvoltage was caused by a three-phase-to-ground fault applied at KGE and occurred at the beginning of the fault and when

  20. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the strengths of the different models to obtain better accuracy.
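    The Markov chain approach in this record reduces, in its simplest form, to propagating a condition-state distribution through a transition matrix. A minimal sketch, with an entirely hypothetical four-state one-year transition matrix (the probabilities below are invented for illustration):

```python
import numpy as np

# Hypothetical one-year transition matrix over four faulting condition
# states (best to failed); row i gives the probabilities of moving from
# state i to each state in one year. The failed state is absorbing.
P = np.array([
    [0.85, 0.12, 0.03, 0.00],
    [0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])

def predict_state_distribution(initial, years):
    """Propagate a condition-state distribution `years` steps forward."""
    return np.asarray(initial, float) @ np.linalg.matrix_power(P, years)
```

    Calibrating such a matrix from repeated visual inspections is the data-driven part; prediction itself is a single matrix power.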

  1. Bond graphs for modelling, control and fault diagnosis of engineering systems

    CERN Document Server

    2017-01-01

    This book presents theory and latest application work in Bond Graph methodology with a focus on: • Hybrid dynamical system models, • Model-based fault diagnosis, model-based fault tolerant control, fault prognosis • and also addresses • Open thermodynamic systems with compressible fluid flow, • Distributed parameter models of mechanical subsystems. In addition, the book covers various applications of current interest ranging from motorised wheelchairs, in-vivo surgery robots, walking machines to wind-turbines.The up-to-date presentation has been made possible by experts who are active members of the worldwide bond graph modelling community. This book is the completely revised 2nd edition of the 2011 Springer compilation text titled Bond Graph Modelling of Engineering Systems – Theory, Applications and Software Support. It extends the presentation of theory and applications of graph methodology by new developments and latest research results. Like the first edition, this book addresses readers in a...

  2. Logical Specification and Analysis of Fault Tolerant Systems through Partial Model Checking

    NARCIS (Netherlands)

    Gnesi, S.; Etalle, Sandro; Mukhopadhyay, S.; Lenzini, Gabriele; Lenzini, G.; Martinelli, F.; Roychoudhury, A.

    2003-01-01

    This paper presents a framework for a logical characterisation of fault tolerance and its formal analysis based on partial model checking techniques. The framework requires a fault tolerant system to be modelled using a formal calculus, here the CCS process algebra. To this aim we propose a uniform

  3. Model based Fault Detection and Isolation for Driving Motors of a Ground Vehicle

    Directory of Open Access Journals (Sweden)

    Young-Joon Kim

    2016-04-01

    This paper proposes a model-based current sensor and position sensor fault detection and isolation algorithm for the driving motors of an in-wheel independent-drive electric vehicle. Fault diagnosis is conducted and analyzed at the low level to enhance robustness and stability. Using the state equation of the interior permanent magnet synchronous motor (IPMSM), current sensor and position sensor faults are diagnosed with parity equations. The validity and usefulness of the algorithm are confirmed with simulated IPMSM fault-occurrence data.
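    A parity-equation residual check of the kind described in this record reduces, at its simplest, to thresholding the difference between measured and model-predicted sensor values. A toy sketch (the function name and threshold are invented; a real parity equation is derived from the IPMSM state equation):

```python
import numpy as np

def parity_residual_fault(measured, predicted, threshold):
    """Flag a sensor fault when the parity residual exceeds a threshold.

    measured, predicted: arrays of sensor samples; in this sketch the
    parity residual is simply r = measured - model prediction.
    """
    residual = np.asarray(measured, float) - np.asarray(predicted, float)
    return float(np.max(np.abs(residual))) > threshold
```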

  4. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, an essential part of the safety design, operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulation quantitative data onto the qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design, as well as to support recovery and shutdown operation and disaster management.

  5. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs, the DOM can have a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be separated quite easily from the host rock (tonalite) using spectral analysis. In particular, band ratio and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier, because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous.
After
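    The band-ratio step of such a workflow can be sketched in a few lines of array arithmetic. This is a minimal illustration, not the ImageJ-Fiji procedure; the band names, threshold, and epsilon guard are all assumptions:

```python
import numpy as np

def band_ratio_mask(band_a, band_b, threshold, eps=1e-6):
    """Segment pixels by the ratio of two spectral bands.

    Returns a boolean mask that is True where band_a / band_b exceeds the
    threshold -- a crude stand-in for separating, e.g., green cataclasite
    from host rock on a calibrated image pair.
    """
    ratio = np.asarray(band_a, float) / (np.asarray(band_b, float) + eps)
    return ratio > threshold
```

    In practice the threshold is tuned against manually segmented reference images, as the record describes.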

  6. A rheologically layered three-dimensional model of the San Andreas fault in central and southern California

    Science.gov (United States)

    Williams, Charles A.; Richardson, Randall M.

    1991-01-01

    The effects of rheological parameters and the fault slip distribution on the horizontal and vertical deformation in the vicinity of the fault are investigated using 3D kinematic finite element models of the San Andreas fault in central and southern California. It is shown that fault models with different rheological stratification schemes and slip distributions predict characteristic deformation patterns. Models that do not include aseismic slip below the fault locking depth predict deformation patterns that are strongly dependent on time since the last earthquake, while models that incorporate the aseismic slip below the locking depth depend on time to a significantly lesser degree.

  7. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement a more effective fault diagnosis and condi

  8. Aliasing of the Schumann resonance background signal by sprite-associated Q-bursts

    Science.gov (United States)

    Guha, Anirban; Williams, Earle; Boldi, Robert; Sátori, Gabriella; Nagy, Tamás; Bór, József; Montanyà, Joan; Ortega, Pascal

    2017-12-01

    spectral aliasing can occur even when 12-min spectral integrations are considered. The statistical result shows that, for a 12-min spectrum, events above 16 CSD are capable of producing significant frequency aliasing of the modal frequencies, although the intensity aliasing may have a negligible effect unless the events are exceptionally large (∼200 CSD). The spectral CSD methodology may be used to extract the time of arrival of the Q-burst transients. This methodology may be combined with hyperbolic ranging, thus becoming an effective tool to detect TLEs globally with a modest number of networked observational stations.

  9. A Fault Diagnosis Approach for Gears Based on IMF AR Model and SVM

    Directory of Open Access Journals (Sweden)

    Yu Yang

    2008-05-01

    An accurate autoregressive (AR) model can reflect the characteristics of a dynamic system, from which the fault features of a gear vibration signal can be extracted without constructing a mathematical model or studying the fault mechanism of the gear vibration system, steps that time-frequency analysis methods require. However, an AR model can only be applied to stationary signals, while gear fault vibration signals usually present nonstationary characteristics. Therefore, empirical mode decomposition (EMD), which can decompose a vibration signal into a finite number of intrinsic mode functions (IMFs), is introduced into the feature extraction of gear vibration signals as a preprocessor before AR models are generated. On the other hand, to address the difficulty of obtaining sufficient fault samples in practice, the support vector machine (SVM) is introduced into gear fault pattern recognition. In the method proposed in this paper, vibration signals are first decomposed into a finite number of intrinsic mode functions; then the AR model of each IMF component is established; finally, the corresponding autoregressive parameters and the variance of the residual are regarded as the fault characteristic vectors and used as input parameters of an SVM classifier to classify the working condition of gears. The experimental analysis results show that the proposed approach, in which the IMF AR model and SVM are combined, can identify the working condition of gears with a success rate of 100%, even with a small number of samples.
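    The AR feature-extraction step in this pipeline can be sketched with a plain least-squares fit (the record does not specify the estimator; Burg or Yule-Walker would be common alternatives, so treat this as an assumption):

```python
import numpy as np

def ar_features(signal, order):
    """Estimate AR(order) coefficients of a 1-D signal by least squares.

    Returns (coefficients, residual_variance); together they form the kind
    of fault feature vector fed to a classifier in the approach above.
    """
    x = np.asarray(signal, float)
    # Each row of X holds `order` past samples used to predict the next one.
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual_variance = float(np.var(y - X @ coeffs))
    return coeffs, residual_variance
```

    Applied per IMF, the concatenated coefficients and residual variances become the classifier input.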

  10. Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2014-01-01

    This paper focuses on routing in two-dimensional mesh networks. We propose a novel faulty-block model, the cracky rectangular block, for fault-tolerant adaptive routing. All the faulty nodes and faulty links are surrounded by this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to keep in touch with the nodes inside each block. The procedure for block construction is dynamic and totally distributed, and the construction algorithm is simple and easy to implement. The block is fully adaptive: it dynamically adjusts its scale in accordance with the situation of the network, whether fault emergence or fault recovery, without shutdown of the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm, and we give a formal proof that messages always reach their destinations if and only if the destination nodes remain connected to the mesh network. The new model and routing algorithm thus maximize the availability of the nodes in the network, a noticeable overall improvement in the fault tolerance of the system.
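    The core idea, routing around a convex faulty region in a mesh, can be illustrated with a toy dimension-order router that sidesteps one rectangular block. This is not the paper's distributed algorithm (which handles cracky blocks and avoids livelock formally); every name and the misrouting rule here are assumptions for illustration:

```python
def route_xy_around_block(src, dst, block):
    """Greedy XY-style routing that detours around one rectangular faulty
    block, given as (x_min, x_max, y_min, y_max). Returns the visited nodes.
    """
    x_min, x_max, y_min, y_max = block

    def faulty(x, y):
        return x_min <= x <= x_max and y_min <= y <= y_max

    path = [src]
    x, y = src
    while (x, y) != dst:
        candidates = []
        # Preferred moves: one hop toward dst in X, then in Y.
        if x != dst[0]:
            candidates.append((x + (1 if dst[0] > x else -1), y))
        if y != dst[1]:
            candidates.append((x, y + (1 if dst[1] > y else -1)))
        # Misroute moves (one hop sideways) as a last resort.
        candidates.append((x, y + 1))
        candidates.append((x, y - 1))
        for step in candidates:
            if not faulty(*step) and step not in path:
                x, y = step
                path.append(step)
                break
        else:
            raise RuntimeError("no fault-free move available")
    return path
```

    The visited-node check stands in for the livelock avoidance that the convex-block construction provides in the real algorithm.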

  11. Evolution of strike-slip fault systems and associated geomorphic structures. Model test

    International Nuclear Information System (INIS)

    Ueta, Keichi

    2003-01-01

    Sandbox experiments were performed to investigate the evolution of fault systems and the associated geomorphic structures caused by strike-slip motion on basement faults. A 200 cm long, 40 cm wide, 25 cm high sandbox was used in the strike-slip fault model test. Computerized X-ray tomography applied to the sandbox experiments made it possible to analyze the kinematic evolution, as well as the three-dimensional geometry, of the faults. The deformation of the sand pack surface was analyzed using a laser 3D scanner, a three-dimensional noncontact surface profiling instrument. A comparison of the experimental results with natural cases of active faults reveals the following. In the left-lateral strike-slip fault experiments, the deformation of the sand pack with increasing basement displacement is observed as follows: 1) In three dimensions, right-stepping shears with a 'cirque'/'shell'/'shipbody' shape develop on both sides of the basement fault. The shears on one side of the basement fault join those on the other side, resulting in helicoidal-shaped shear surfaces. Shears reach the surface of the sand near or above the basement fault, and en echelon Riedel shears are observed at the surface of the sand. The region between two Riedels is always an up-squeezed block. 2) Lower-angle shears generally branch off from the first Riedel shears. 3) Pressure ridges develop within the zone defined by the right-stepping helicoidal-shaped lower-angle shears. 4) Grabens develop between the pressure ridges. 5) Y-shears offset the pressure ridges. 6) With displacement concentrated on the central throughgoing fault zone, a linear trough develops directly above the basement fault; R1 shears and P foliation are observed in the linear trough. This evolution of the shears and associated structures in the fault model tests agrees well with that of strike-slip fault systems and their associated geomorphic structures. (author)

  12. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization problem. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  13. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining the psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed from those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Second, Draphys reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues in presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  14. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
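    The reachability concept this record uses to evaluate Fault Graphs is ordinary graph reachability, which handles cycles naturally. A minimal sketch with a breadth-first search over an adjacency-dict representation (the representation is an assumption, not the codes' actual data structure):

```python
from collections import deque

def reachable(graph, start):
    """All nodes reachable from `start` in a directed graph that may
    contain cycles.

    graph: dict mapping node -> iterable of successor nodes.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for successor in graph.get(node, ()):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return seen
```

    The `seen` set is what keeps the traversal from looping forever on cycles, which is exactly why Fault Graphs can allow them while fault trees cannot.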

  15. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  16. Lattice functions, wavelet aliasing, and SO(3) mappings of orthonormal filters

    Science.gov (United States)

    John, Sarah

    1998-01-01

    A formulation of multiresolution in terms of a family of dyadic lattices {Sj;j∈Z} and filter matrices Mj⊂U(2)⊂GL(2,C) illuminates the role of aliasing in wavelets and provides exact relations between scaling and wavelet filters. By showing the {DN;N∈Z+} collection of compactly supported, orthonormal wavelet filters to be strictly SU(2)⊂U(2), its representation in the Euler angles of the rotation group SO(3) establishes several new results: a 1:1 mapping of the {DN} filters onto a set of orbits on the SO(3) manifold; an equivalence of D∞ to the Shannon filter; and a simple new proof for a criterion ruling out pathologically scaled nonorthonormal filters.

  17. Application of Shannon Wavelet Entropy and Shannon Wavelet Packet Entropy in Analysis of Power System Transient Signals

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2016-12-01

    In a power system, the analysis of transient signals is the theoretical basis of fault diagnosis and transient protection theory. Shannon wavelet entropy (SWE) and Shannon wavelet packet entropy (SWPE) are powerful mathematical tools for transient signal analysis. Drawing on recent achievements regarding SWE and SWPE, their applications in feature extraction of transient signals and transient fault recognition are summarized. For wavelet aliasing at adjacent scales of the wavelet decomposition, the impact of aliasing on the feature extraction accuracy of SWE and SWPE is analyzed, and their differences are compared. The analyses are verified by partial discharge (PD) feature extraction for a power cable. Finally, some new ideas and further research directions are proposed concerning the wavelet entropy mechanism, computation speed, and how to overcome wavelet aliasing.
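    Shannon wavelet entropy, in its usual form, is the Shannon entropy of the relative wavelet energy across decomposition scales. A minimal sketch operating on precomputed coefficient arrays (how the coefficients are obtained, i.e. the wavelet and decomposition depth, is left out here):

```python
import numpy as np

def shannon_wavelet_entropy(coeffs_per_scale):
    """Shannon entropy of the relative wavelet energy across scales.

    coeffs_per_scale: list of coefficient arrays, one per scale.
    With p_j = E_j / E_total, returns SWE = -sum_j p_j ln p_j.
    """
    energies = np.array([np.sum(np.square(c)) for c in coeffs_per_scale])
    p = energies / energies.sum()
    p = p[p > 0]          # skip empty scales so 0*log(0) never occurs
    return float(-np.sum(p * np.log(p)))
```

    Energy spread evenly over scales maximizes the entropy; energy concentrated at one scale drives it to zero, which is what makes the measure useful as a transient-signal feature.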

  18. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.; Haves, Philip; Sohn, Michael D.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.

  19. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B.; Sun, G.D.; Zhang, L.Y.; Wang, J.G.; Hu, J.

    2017-01-01

    For a fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy is proposed, based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine, to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different modes by variational mode decomposition. The fault features of each mode are then formed into a high-dimensional feature vector set based on permutation entropy. Second, the ELM output function is expressed by the inner product of a Gaussian kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
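    The permutation entropy used here as a feature is the Shannon entropy of ordinal patterns in sliding windows of the signal. A minimal sketch (embedding delay fixed at 1; the normalization by ln(order!) is a common convention, assumed here):

```python
import numpy as np
from math import log, factorial

def permutation_entropy(signal, order=3, normalize=True):
    """Permutation entropy of a 1-D signal via ordinal-pattern statistics."""
    x = np.asarray(signal, float)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        # The ordinal pattern is the ranking of samples within the window.
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), float) / n
    h = -np.sum(probs * np.log(probs))
    return float(h / log(factorial(order))) if normalize else float(h)
```

    A monotone signal yields a single pattern and zero entropy; irregular vibration produces many patterns and entropy near 1, which is what makes the quantity discriminative for bearing states.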

  20. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin

    2016-04-06

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  1. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin; Shearer, Peter; Ampuero, Jean‐Paul; Lay, Thorne

    2016-01-01

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  2. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees is put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis shows that the logic of mapping MFM to fault trees is correct. MFM are easily understood, created and modified; compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)

  3. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    Science.gov (United States)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, the fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any of the failures in the set were removed, the occurrence of the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal
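    The minimum-cut-set notion defined in this record can be made concrete with a brute-force enumerator for tiny fault trees. This is a didactic sketch, not the efficient algorithms the record's processing codes use (its data layout is an assumption):

```python
from itertools import combinations

def minimal_cut_sets(top, gates, basic_events):
    """Minimal cut sets of a small fault tree by exhaustive enumeration.

    gates: dict node -> ('AND' | 'OR', [children]); leaves are basic events.
    Practical only for tiny trees, but it mirrors what cut-set codes compute.
    """
    def fails(node, failed):
        if node in basic_events:
            return node in failed
        kind, children = gates[node]
        results = [fails(c, failed) for c in children]
        return all(results) if kind == 'AND' else any(results)

    events = sorted(basic_events)
    cuts = []
    # Enumerate candidate sets smallest-first so supersets are pruned.
    for r in range(1, len(events) + 1):
        for combo in combinations(events, r):
            s = set(combo)
            if fails(top, s) and not any(c <= s for c in cuts):
                cuts.append(s)
    return cuts
```

    For TOP = OR(AND(a, b), c) the minimal cut sets are {c} and {a, b}: losing c alone fails the top event, while a and b only fail it together.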

  4. Simulation model of a transient fault controller for an active-stall wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Jauch, C.; Soerensen, P.; Bak Jensen, B.

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operation in case of such faults. The design of the controller is described and its performance assessed by simulations. The control strategies are explained and the behaviour of the turbine discussed. (author)

  5. Triggered dynamics in a model of different fault creep regimes.

    Science.gov (United States)

    Kostić, Srđan; Franović, Igor; Perc, Matjaž; Vasović, Nebojša; Todorović, Kristina

    2014-06-23

    The study is focused on the effect of transient external force induced by a passing seismic wave on fault motion in different creep regimes. Displacement along the fault is represented by the movement of a spring-block model, whereby the uniform and oscillatory motion correspond to the fault dynamics in post-seismic and inter-seismic creep regime, respectively. The effect of the external force is introduced as a change of block acceleration in the form of a sine wave scaled by an exponential pulse. Model dynamics is examined for variable parameters of the induced acceleration changes in reference to periodic oscillations of the unperturbed system above the supercritical Hopf bifurcation curve. The analysis indicates the occurrence of weak irregular oscillations if external force acts in the post-seismic creep regime. When fault motion is exposed to external force in the inter-seismic creep regime, one finds the transition to quasiperiodic- or chaos-like motion, which we attribute to the precursory creep regime and seismic motion, respectively. If the triggered acceleration changes are of longer duration, a reverse transition from inter-seismic to post-seismic creep regime is detected on a larger time scale.
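
    The perturbation scheme described above can be caricatured with a single driven block. The actual study uses a spring-block model with rate-dependent friction; the sketch below replaces that with a plain damped oscillator, keeps only the paper's forcing form a(t) = A sin(w t) e^(-t/tau), and all parameter values are invented:

```python
import math

def simulate(A=0.0, w=2.0, tau=5.0, v_plate=0.1, k=1.0, c=0.3,
             dt=1e-3, t_end=50.0):
    """Damped spring-block pulled at constant plate velocity, with an
    exponentially decaying sinusoidal acceleration pulse added."""
    u, v, t = 0.0, 0.0, 0.0          # block displacement, velocity, time
    load = 0.0                        # loading-point position
    out = []
    while t < t_end:
        load += v_plate * dt
        accel = k * (load - u) - c * v                       # spring minus damping
        accel += A * math.sin(w * t) * math.exp(-t / tau)    # transient pulse
        v += accel * dt                                      # semi-implicit Euler
        u += v * dt
        t += dt
        out.append(u)
    return out

traj = simulate(A=0.5)   # perturbed run; A=0.0 gives the unperturbed creep
```

    In the unperturbed run the block settles into uniform creep (tracking the loading point with a constant lag c*v_plate/k); the pulse superimposes a decaying oscillation, a crude analogue of the triggered transients discussed above.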

  6. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    Science.gov (United States)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

    It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with high shear-to-effective normal stress ratio may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015); however, these calculations are done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on background shear/effective normal stress. In this study, we use a 2-D elasto-dynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as inputs to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.

  7. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    Science.gov (United States)

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information already available in the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
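
    The core FCS-MPC idea the abstract relies on, enumerating every admissible switching state, predicting one step ahead, and applying the lowest-cost state, can be sketched as follows. This toy uses two phases and a first-order R-L current model; the function name, cost, and all parameter values are illustrative and are not the paper's five-phase drive:

```python
from itertools import product

def fcs_mpc_step(i_now, i_ref, v_dc, R, L, dt, n_phases):
    """Pick one leg voltage per phase minimizing the predicted current error.
    One-step model di/dt = (v - R*i)/L; everything here is a sketch."""
    levels = [-v_dc / 2, v_dc / 2]                 # two-level leg voltages
    best_cost, best_u = float("inf"), None
    for combo in product(levels, repeat=n_phases): # finite control set
        cost = 0.0
        for v, i, ref in zip(combo, i_now, i_ref):
            i_pred = i + dt * (v - R * i) / L      # one-step Euler prediction
            cost += (ref - i_pred) ** 2            # current-tracking cost
        if cost < best_cost:
            best_cost, best_u = cost, combo
    return best_u

u = fcs_mpc_step(i_now=[0.0, 0.0], i_ref=[1.0, -1.0], v_dc=48.0,
                 R=0.5, L=1e-3, dt=1e-4, n_phases=2)
```

    Fault tolerance enters naturally here: removing a faulty phase from the enumeration restricts the control set, with no controller redesign, which is one reason FCS-MPC is attractive for FT drives.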

  8. Modeling of periodic great earthquakes on the San Andreas fault: Effects of nonlinear crustal rheology

    Science.gov (United States)

    Reches, Ze'ev; Schubert, Gerald; Anderson, Charles

    1994-01-01

    We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models are for antiplane strain in a 25-km-thick crustal layer having a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle, which lasts 160 years, a constant plate velocity v_p/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models which are 10-100 times less viscous than a quartzite-like rheology. Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude
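
    The rheology described, effective viscosity exponential in inverse absolute temperature and nonlinear in shear stress, corresponds to the standard power-law creep form. A minimal sketch with invented constants (not the paper's calibrated quartzite or diabase parameters):

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def effective_viscosity(sigma, T, A=1e-20, n=3.0, Q=1.5e5):
    """Power-law creep: strain rate edot = A * sigma^n * exp(-Q/(R*T)),
    so effective viscosity eta = sigma / (2 * edot). A, n, Q are illustrative."""
    edot = A * sigma ** n * math.exp(-Q / (R_GAS * T))
    return sigma / (2.0 * edot)

# hotter crust and higher shear stress both weaken the layer
eta_cool = effective_viscosity(sigma=50e6, T=600.0)   # ~1e17 Pa s range
eta_hot = effective_viscosity(sigma=50e6, T=800.0)
```

    With n > 1 the viscosity falls as stress rises (eta scales as sigma^(1-n)), which is why the post-seismic stress concentration near the fault produces the wide transient zones the models predict.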

  9. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers, both for continuous processes that are described by analytical models and for discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant control.

  10. Fault Tolerance Assistant (FTA): An Exception Handling Programming Model for MPI Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Aiman [Univ. of Chicago, IL (United States). Dept. of Computer Science; Laguna, Ignacio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sato, Kento [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Islam, Tanzima [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-23

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
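
    The try/catch model FTA-MPI exploits can be pictured in plain Python. Here `send_to_rank` and `PeerFailure` are hypothetical stand-ins for a communication call and a failure notification, not FTA-MPI's actual API:

```python
class PeerFailure(Exception):
    """Raised when a peer process is detected as failed (hypothetical)."""
    def __init__(self, rank):
        super().__init__(f"rank {rank} failed")
        self.rank = rank

def send_to_rank(rank, data, alive):
    """Stand-in for a point-to-point send; 'alive' mocks the failure detector."""
    if rank not in alive:
        raise PeerFailure(rank)
    return ("delivered", rank, data)

def broadcast_with_recovery(data, ranks, alive):
    """Failure localization + recovery: the exception names the failed rank,
    and the handler continues instead of terminating the whole application."""
    delivered, lost = [], []
    for r in ranks:
        try:
            delivered.append(send_to_rank(r, data, alive))
        except PeerFailure as f:
            lost.append(f.rank)     # recovery policy: record and skip
    return delivered, lost

ok, lost = broadcast_with_recovery("step-42", ranks=[0, 1, 2], alive={0, 2})
# rank 1 is lost; ranks 0 and 2 still receive the data
```

    The point of the pattern is that the failure is both localized (the raising call identifies the rank) and recoverable in place, rather than propagating into a global abort as in standard MPI.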

  11. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Muhammad Sohaib

    2017-12-01

    Full Text Available Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).

  12. A Hybrid Feature Model and Deep-Learning-Based Bearing Fault Diagnosis.

    Science.gov (United States)

    Sohaib, Muhammad; Kim, Cheol-Hong; Kim, Jong-Myon

    2017-12-11

    Bearing fault diagnosis is imperative for the maintenance, reliability, and durability of rotary machines. It can reduce economical losses by eliminating unexpected downtime in industry due to failure of rotary machines. Though widely investigated in the past couple of decades, continued advancement is still desirable to improve upon existing fault diagnosis techniques. Vibration acceleration signals collected from machine bearings exhibit nonstationary behavior due to variable working conditions and multiple fault severities. In the current work, a two-layered bearing fault diagnosis scheme is proposed for the identification of fault pattern and crack size for a given fault type. A hybrid feature pool is used in combination with sparse stacked autoencoder (SAE)-based deep neural networks (DNNs) to perform effective diagnosis of bearing faults of multiple severities. The hybrid feature pool can extract more discriminating information from the raw vibration signals, to overcome the nonstationary behavior of the signals caused by multiple crack sizes. More discriminating information helps the subsequent classifier to effectively classify data into the respective classes. The results indicate that the proposed scheme provides satisfactory performance in diagnosing bearing defects of multiple severities. Moreover, the results also demonstrate that the proposed model outperforms other state-of-the-art algorithms, i.e., support vector machines (SVMs) and backpropagation neural networks (BPNNs).
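
    A toy version of such a feature pool, a few time-domain statistics commonly computed from vibration signals before classification. The choice of statistics here is generic, not the paper's exact pool:

```python
import math

def features(signal):
    """RMS, peak, crest factor, and kurtosis of a vibration record."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    kurtosis = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2)
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurtosis}

# an impulsive (fault-like) record versus a smooth sinusoidal one
sine = [math.sin(0.1 * i) for i in range(1000)]
impulsive = [0.01] * 1000
for i in range(0, 1000, 100):   # periodic impacts, as from a cracked raceway
    impulsive[i] = 1.0
```

    Impulsiveness shows up as high kurtosis and crest factor, which is why such statistics discriminate bearing defects well before any learned representation is applied.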

  13. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  14. Model Based Fault Diagnosis in a Centrifugal Pump Application using Structural Analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  15. Model-based fault diagnosis approach on external short circuit of lithium-ion battery used in electric vehicles

    International Nuclear Information System (INIS)

    Chen, Zeyu; Xiong, Rui; Tian, Jinpeng; Shang, Xiong; Lu, Jiahuan

    2016-01-01

    Highlights: • The characteristics of the ESC fault of lithium-ion batteries are investigated experimentally. • The proposed method to simulate the electrical behavior of the ESC fault is viable. • Ten parameters in the presented fault model were optimized using a DPSO algorithm. • A two-layer model-based fault diagnosis approach for battery ESC is proposed. • The effectiveness and robustness of the proposed algorithm have been evaluated. - Abstract: This study investigates the external short circuit (ESC) fault characteristics of lithium-ion batteries experimentally. An experiment platform is established and the ESC tests are implemented on ten 18650-type lithium cells considering different state-of-charges (SOCs). Based on the experiment results, several efforts have been made. (1) The ESC process can be divided into two periods, and the electrical and thermal behaviors within these two periods are analyzed. (2) A modified first-order RC model is employed to simulate the electrical behavior of the lithium cell in the ESC fault process. The model parameters are re-identified by a dynamic-neighborhood particle swarm optimization algorithm. (3) A two-layer model-based ESC fault diagnosis algorithm is proposed. The first layer conducts preliminary fault detection and the second layer gives a precise model-based diagnosis. Four new cells are short-circuited to evaluate the proposed algorithm. It shows that the ESC fault can be diagnosed within 5 s, and the error between the model and measured data is less than 0.36 V. The effectiveness of the fault diagnosis algorithm is not sensitive to the precision of the battery SOC; the proposed algorithm can still make the correct diagnosis even if there is 10% error in SOC estimation.
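
    The first-order RC equivalent-circuit structure mentioned in point (2) can be sketched as follows. Parameter values are illustrative, not the re-identified ones from the paper:

```python
import math

def terminal_voltage(i_load, dt, ocv=3.7, r0=0.05, r1=0.03, c1=2000.0):
    """First-order RC cell model: V_t = OCV - R0*i - V_rc, with the RC-branch
    voltage updated by its exact discretization for a constant current per step."""
    v_rc, out = 0.0, []
    tau = r1 * c1
    a = math.exp(-dt / tau)
    for i in i_load:
        v_rc = a * v_rc + r1 * (1.0 - a) * i   # RC branch relaxation + charging
        out.append(ocv - r0 * i - v_rc)
    return out

# a large sustained current, a crude stand-in for an external short, drags the
# terminal voltage well below OCV; the model-vs-measurement residual flags it
v = terminal_voltage([20.0] * 600, dt=0.1)
```

    In a model-based diagnosis scheme like the one described, the measured terminal voltage is compared against this prediction; a residual beyond a threshold in the second layer confirms the ESC fault.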

  16. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moment, and the result of fault identification is used to find the solution of the control allocation problem. The approach enables continuous fault identification for the underwater vehicle (UV). Results from the experiment are provided to illustrate the performance of the proposed method in uncertain, continuous-fault situations.

  17. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2009-12-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moment, and the result of fault identification is used to find the solution of the control allocation problem. The approach enables continuous fault identification for the underwater vehicle (UV). Results from the experiment are provided to illustrate the performance of the proposed method in uncertain, continuous-fault situations.

  18. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing test work, ensuring that the system is tested under the constraints of conventional indexes such as fault detection rate (FDR) and fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed by considering the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test group. The objective function of the proposed model minimizes the test cost under the constraints of FDR and FIR. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. The experimental result verifies the effectiveness of the proposed method.
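
    The fault-test dependency matrix formulation can be illustrated with a much simpler selection rule than chaotic discrete PSO: a greedy cover that minimizes cost while still detecting every fault. The matrix and costs below are invented:

```python
def greedy_select(D, cost):
    """D[f][t] = 1 if test t detects fault f. Greedily pick tests by the
    (newly covered faults)/cost ratio until every fault is detectable."""
    n_faults, n_tests = len(D), len(D[0])
    uncovered = set(range(n_faults))
    chosen = []
    while uncovered:
        best = max(range(n_tests),
                   key=lambda t: sum(D[f][t] for f in uncovered) / cost[t])
        if sum(D[f][best] for f in uncovered) == 0:
            break                     # remaining faults are undetectable
        chosen.append(best)
        uncovered -= {f for f in uncovered if D[f][best]}
    return chosen

# three faults, three candidate tests
D = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 0]]
sel = greedy_select(D, cost=[1.0, 1.0, 1.0])
```

    The greedy rule captures only the FDR constraint; the paper's model additionally penalizes ambiguity groups (faults whose test signatures coincide), which is what motivates the discrete PSO search over the full matrix.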

  19. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.

  20. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. 
This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  1. Three-dimensional numerical modeling of the influence of faults on groundwater flow at Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Andrew J.B. [Univ. of California, Berkeley, CA (United States)

    1999-06-01

    Numerical simulations of groundwater flow at Yucca Mountain, Nevada are used to investigate how the faulted hydrogeologic structure influences groundwater flow from a proposed high-level nuclear waste repository. Simulations are performed using a 3-D model that has a unique grid block discretization to accurately represent the faulted geologic units, which have variable thicknesses and orientations. Irregular grid blocks enable explicit representation of these features. Each hydrogeologic layer is discretized into a single layer of irregular and dipping grid blocks, and faults are discretized such that they are laterally continuous and displacement varies along strike. In addition, the presence of altered fault zones is explicitly modeled, as appropriate. The model has 23 layers and 11 faults, and approximately 57,000 grid blocks and 200,000 grid block connections. In the past, field measurement of upward vertical head gradients and high water table temperatures near faults were interpreted as indicators of upwelling from a deep carbonate aquifer. Simulations show, however, that these features can be readily explained by the geometry of hydrogeologic layers, the variability of layer permeabilities and thermal conductivities, and by the presence of permeable fault zones or faults with displacement only. In addition, a moderate water table gradient can result from fault displacement or a laterally continuous low permeability fault zone, but not from a high permeability fault zone, as others postulated earlier. Large-scale macrodispersion results from the vertical and lateral diversion of flow near the contact of high and low permeability layers at faults, and from upward flow within high permeability fault zones. Conversely, large-scale channeling can occur due to groundwater flow into areas with minimal fault displacement. 
Contaminants originating at the water table can flow in a direction significantly different than that of the water table gradient, and isolated

  2. Three-dimensional numerical modeling of the influence of faults on groundwater flow at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Cohen, Andrew J.B.

    1999-01-01

    Numerical simulations of groundwater flow at Yucca Mountain, Nevada are used to investigate how the faulted hydrogeologic structure influences groundwater flow from a proposed high-level nuclear waste repository. Simulations are performed using a 3-D model that has a unique grid block discretization to accurately represent the faulted geologic units, which have variable thicknesses and orientations. Irregular grid blocks enable explicit representation of these features. Each hydrogeologic layer is discretized into a single layer of irregular and dipping grid blocks, and faults are discretized such that they are laterally continuous and displacement varies along strike. In addition, the presence of altered fault zones is explicitly modeled, as appropriate. The model has 23 layers and 11 faults, and approximately 57,000 grid blocks and 200,000 grid block connections. In the past, field measurement of upward vertical head gradients and high water table temperatures near faults were interpreted as indicators of upwelling from a deep carbonate aquifer. Simulations show, however, that these features can be readily explained by the geometry of hydrogeologic layers, the variability of layer permeabilities and thermal conductivities, and by the presence of permeable fault zones or faults with displacement only. In addition, a moderate water table gradient can result from fault displacement or a laterally continuous low permeability fault zone, but not from a high permeability fault zone, as others postulated earlier. Large-scale macrodispersion results from the vertical and lateral diversion of flow near the contact of high and low permeability layers at faults, and from upward flow within high permeability fault zones. Conversely, large-scale channeling can occur due to groundwater flow into areas with minimal fault displacement. 
Contaminants originating at the water table can flow in a direction significantly different than that of the water table gradient, and isolated

  3. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Approaches that enable modeling of FP concerns in the same model as the system hardware and software design allow the establishment of formal relationships that have great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in flight software (FSW) engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  4. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode in the system. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
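
    The qualitative reasoning over residual deviation signs reduces, at its core, to matching observations against a fault signature table. The signatures and fault names below are invented for illustration; real ones come from the decomposed submodels:

```python
SIGNATURES = {
    # fault: expected signs of (residual r1, residual r2)
    "valve_stuck": ("+", "0"),
    "pipe_leak":   ("-", "+"),
    "sensor_bias": ("0", "+"),
}

def consistent(observed, expected):
    """A '0' observation may just mean the deviation has not shown up yet
    (e.g., due to an observation delay), so it rules nothing out."""
    return all(o == e or o == "0" for o, e in zip(observed, expected))

def isolate(observed):
    """Return the set of fault candidates consistent with the observed signs."""
    return {f for f, sig in SIGNATURES.items() if consistent(observed, sig)}

early = isolate(("0", "+"))   # only r2 has deviated so far: still ambiguous
later = isolate(("-", "+"))   # both residuals deviated: a single candidate
```

    Structural decomposition shrinks this table per residual: each residual's signature involves only the faults of its submodel, so a mode change forces reconfiguration of only the affected rows.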

  5. Model-Based Fault Detection and Isolation of a Liquid-Cooled Frequency Converter on a Wind Turbine

    DEFF Research Database (Denmark)

    Li, Peng; Odgaard, Peter Fogh; Stoustrup, Jakob

    2012-01-01

    advanced fault detection and isolation schemes. In this paper, an observer-based fault detection and isolation method for the cooling system in a liquid-cooled frequency converter on a wind turbine, which is built up in a scaled version in the laboratory, is presented. A dynamic model of the scaled cooling...... system is derived based on an energy balance equation. A fault analysis is conducted to determine the severity and occurrence rate of possible component faults and their end effects in the cooling system. A method using an unknown input observer is developed in order to detect and isolate the faults based...... on the developed dynamical model. The designed fault detection and isolation algorithm is applied on a set of measured experiment data in which different faults are artificially introduced to the scaled cooling system. The experimental results conclude that the different faults are successfully detected...

  6. Data-Reconciliation Based Fault-Tolerant Model Predictive Control for a Biomass Boiler

    Directory of Open Access Journals (Sweden)

    Palash Sarkar

    2017-02-01

    Full Text Available This paper presents a novel, effective method to handle critical sensor faults affecting a control system devised to operate a biomass boiler. In particular, the proposed method consists of integrating a data reconciliation algorithm in a model predictive control loop, so as to annihilate the effects of faults occurring in the sensor of the flue gas oxygen concentration, by feeding the controller with the reconciled measurements. Indeed, the oxygen content in flue gas is a key variable in control of biomass boilers due to its close connections with both combustion efficiency and polluting emissions. The main benefit of including the data reconciliation algorithm in the loop, as a fault tolerant component, with respect to applying standard fault tolerant methods, is that controller reconfiguration is not required anymore, since the original controller operates on the restored, reliable data. The integrated data reconciliation–model predictive control (MPC) strategy has been validated by running simulations on a specific type of biomass boiler—the KPA Unicon BioGrate boiler.
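    The data-reconciliation step can be sketched as a weighted least-squares projection of the raw measurements onto a linear balance constraint. The numbers and the single mass-balance constraint below are illustrative, not the BioGrate boiler model:

```python
import numpy as np

# Minimal data-reconciliation sketch: measured values y should satisfy a
# linear balance A x = 0 (e.g. flows in = flows out). The reconciled estimate
# is the minimum weighted-norm correction of y onto the constraint:
#   x = y - V A^T (A V A^T)^{-1} A y,   with V the measurement covariance.
# A faulty sensor is down-weighted by assigning it a large variance.
def reconcile(y, A, V):
    correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
    return y - correction

A = np.array([[1.0, 1.0, -1.0]])   # hypothetical balance: f1 + f2 - f3 = 0
V = np.diag([0.1, 0.1, 0.4])       # third sensor suspected faulty -> high var
y = np.array([10.0, 5.0, 18.0])    # raw readings violate the balance
x = reconcile(y, A, V)
print(x, A @ x)                    # reconciled values; balance residual ~ 0
```

    The controller then consumes `x` instead of `y`, which is why no reconfiguration is needed: the control law itself is untouched.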

  7. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    Science.gov (United States)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, strong a priori information on the geometry of the fault that produced the earthquake is not always available, and the data are not always sufficient to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane.
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress

  8. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
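    The screening criterion reduces to simple ensemble statistics per map cell. A minimal numpy sketch, using a synthetic illustrative ensemble rather than the Kleifarvatn source models:

```python
import numpy as np

# Reliability screening of an ensemble of stress-change maps: a cell is deemed
# reliable where the ensemble mean is at least twice the ensemble standard
# deviation, i.e. coefficient of variation CV = std / |mean| <= 0.5.
rng = np.random.default_rng(1)
true_map = np.linspace(-0.3, 0.3, 100).reshape(10, 10)    # toy "lobe" pattern
ensemble = true_map + 0.05 * rng.normal(size=(2500, 10, 10))

mean = ensemble.mean(axis=0)
std = ensemble.std(axis=0)
cv = std / np.abs(mean)
reliable = cv <= 0.5
print(reliable.mean())    # fraction of cells passing the criterion
```

    Cells whose mean value is near zero (the sign-change region between lobes) fail the criterion regardless of how many samples are drawn, which mirrors the unreliable zones reported near the causative fault.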

  9. Analysis of Fault Permeability Using Mapping and Flow Modeling, Hickory Sandstone Aquifer, Central Texas

    Energy Technology Data Exchange (ETDEWEB)

    Nieto Camargo, Jorge E., E-mail: jorge.nietocamargo@aramco.com; Jensen, Jerry L., E-mail: jjensen@ucalgary.ca [University of Calgary, Department of Chemical and Petroleum Engineering (Canada)

    2012-09-15

    Reservoir compartments, typical targets for infill well locations, are commonly created by faults that may reduce permeability. A narrow fault may consist of a complex assemblage of deformation elements that result in spatially variable and anisotropic permeabilities. We report on the permeability structure of a km-scale fault sampled through drilling a faulted siliciclastic aquifer in central Texas. Probe and whole-core permeabilities, serial CAT scans, and textural and structural data from the selected core samples are used to understand permeability structure of fault zones and develop predictive models of fault zone permeability. Using numerical flow simulation, it is possible to predict permeability anisotropy associated with faults and evaluate the effect of individual deformation elements in the overall permeability tensor. We found relationships between the permeability of the host rock and those of the highly deformed (HD) fault-elements according to the fault throw. The lateral continuity and predictable permeability of the HD fault elements enhance capability for estimating the effects of subseismic faulting on fluid flow in low-shale reservoirs.

  10. Improving fault image by determination of optimum seismic survey parameters using ray-based modeling

    Science.gov (United States)

    Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali

    2018-06-01

    In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for assessing reservoir potential, since faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure, and obtain very realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of structures can be constructed by integrating the 2D seismic data, geological reports and the well information. The effects of various survey designs can be investigated by the analysis of illumination maps and flower plots. Also, seismic processing of the synthetic data output can describe the target image using different survey parameters. Therefore, seismic modeling is one of the most economical ways to establish and test the optimum acquisition parameters to obtain the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation to achieve fault zone structures through ray-tracing seismic modeling. The results prove that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.

  11. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  12. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta.
Using 3D seismic reflection data and new

  13. Guaranteed Cost Fault-Tolerant Control for Networked Control Systems with Sensor Faults

    Directory of Open Access Journals (Sweden)

    Qixin Zhu

    2015-01-01

    Full Text Available Owing to the large scale and complicated structure of networked control systems, time-varying sensor faults inevitably occur when the system operates in a poor environment. A guaranteed cost fault-tolerant controller for networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control systems with sensor faults are modeled as a discrete-time system with uncertain parameters, and the model is related to the boundary values of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is verified to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.

  14. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantity changes of component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) against clean engine performance parameters, free of any engine faults, calculated by the base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and long training times when the learning database is large, and require a very complex structure to effectively identify single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude-operation UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods, namely Fuzzy Logic and a Neural Network. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using the NN trained on a fault learning database obtained from the developed base performance model. In training the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  15. Stresses in faulted tunnel models by photoelasticity and adaptive finite element

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Y.

    1995-01-01

    Research efforts in this area continue to investigate the development of a proper technique to analyze the stresses in the Ghost Dance fault and the effect of the fault on the stability of drifts in the proposed repository. Results from two parallel techniques are being compared to each other - Photoelastic models and Finite Element (FE) models. The Photoelastic plexiglass model (88.89 mm thick and 256.1 mm long and wide) has two adjacent square openings (57.95 mm long and wide) and a central round opening (57.95 mm diameter) placed at a clear distance approximately equal to its diameter from the square openings. The vertical loading on top of the model is 2269 N (500 lb.). Saw cuts (0.5388 mm wide), representing a fault, are being propagated from the tunnels outward with stress measurements taken at predefined locations, as the saw cuts increase in length. The FE model duplicates exactly the Photoelastic models. The adaptive mesh generation method is used to refine the FE grid at every step of the analysis. This nonlinear iterative computational technique uses various percent-tolerance errors in the convergence of stress values as the criterion for ending the iterative process

  16. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    Science.gov (United States)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The standard SVM is extended to a nonlinear, multi-class SVM, particle swarm optimization is applied to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, proving that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
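    The PSO part of such a scheme is a generic global search over the SVM hyperparameters. A minimal pure-Python sketch follows; the quadratic objective is a hypothetical stand-in for the cross-validation error surface over (log C, log gamma), which in the actual method would be evaluated by training and validating an SVM on the DGA-ratio features:

```python
import random

# Stand-in objective: pretend CV-error surface over (log C, log gamma).
def objective(p):
    logC, logg = p
    return (logC - 1.0) ** 2 + (logg + 2.0) ** 2

def pso(obj, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm: inertia w, cognitive c1, social c2."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                 # each particle's best position
    gbest = min(pbest, key=obj)               # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (pbest[i][d] - X[i][d])
                           + c2 * rnd.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if obj(X[i]) < obj(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=obj)
    return gbest

best = pso(objective)
print(best)   # converges near the optimum (1.0, -2.0)
```

    Swapping the stand-in objective for an actual k-fold SVM validation error recovers the structure of the proposed PSO-SVM model.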

  17. Application of improved degree of grey incidence analysis model in fault diagnosis of steam generator

    International Nuclear Information System (INIS)

    Zhao Xinwen; Ren Xin

    2014-01-01

    In order to further reduce misoperation after faults occur in a marine nuclear-powered system, a model based on the weighted degree of grey incidence with entropy-optimized weights and a corresponding fault diagnosis system are proposed, and simulation experiments on typical faults of the steam generator of a marine nuclear-powered system are conducted. The results show that the diagnosis system based on the improved degree of grey incidence model is stable and reaches correct conclusions, can satisfy real-time diagnosis requirements, and achieves a higher resolving power between fault membership degrees. (authors)
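    A weighted grey incidence (grey relational) degree can be sketched in a few lines. The symptom vectors, fault modes, and weights below are illustrative placeholders, not a real steam-generator fault table, and the per-pair min/max normalization is a common simplification of the full grey relational formulation:

```python
RHO = 0.5  # distinguishing coefficient, conventionally 0.5

def grey_degree(reference, candidate, weights):
    """Weighted grey relational degree between two normalized sequences."""
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    dmin, dmax = min(deltas), max(deltas)
    coeffs = [(dmin + RHO * dmax) / (d + RHO * dmax) for d in deltas]
    return sum(w * g for w, g in zip(weights, coeffs))

observed = [0.9, 0.2, 0.7]                       # normalized symptom vector
modes = {                                        # hypothetical fault patterns
    "tube_rupture":   [1.0, 0.1, 0.8],
    "feedwater_loss": [0.2, 0.9, 0.3],
}
w = [0.5, 0.3, 0.2]                              # entropy-style weights, sum 1
scores = {m: grey_degree(observed, ref, w) for m, ref in modes.items()}
print(max(scores, key=scores.get))               # 'tube_rupture'
```

    The diagnosis is the mode with the highest weighted incidence degree; the "resolving power" mentioned above corresponds to the gap between the top scores.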

  18. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? 
Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  19. Fault diagnosis of locomotive electro-pneumatic brake through uncertain bond graph modeling and robust online monitoring

    Science.gov (United States)

    Niu, Gang; Zhao, Yajun; Defoort, Michael; Pecht, Michael

    2015-01-01

    To improve reliability, safety and efficiency, advanced methods of fault detection and diagnosis become increasingly important for many technical fields, especially for safety related complex systems like aircraft, trains, automobiles, power plants and chemical plants. This paper presents a robust fault detection and diagnostic scheme for a multi-energy domain system that integrates a model-based strategy for system fault modeling and a data-driven approach for online anomaly monitoring. The developed scheme uses LFT (linear fractional transformations)-based bond graph for physical parameter uncertainty modeling and fault simulation, and employs AAKR (auto-associative kernel regression)-based empirical estimation followed by SPRT (sequential probability ratio test)-based threshold monitoring to improve the accuracy of fault detection. Moreover, pre- and post-denoising processes are applied to eliminate the cumulative influence of parameter uncertainty and measurement uncertainty. The scheme is demonstrated on the main unit of a locomotive electro-pneumatic brake in a simulated experiment. The results show robust fault detection and diagnostic performance.

  20. Exploring tectonomagmatic controls on mid-ocean ridge faulting and morphology with 3-D numerical models

    Science.gov (United States)

    Howell, S. M.; Ito, G.; Behn, M. D.; Olive, J. A. L.; Kaus, B.; Popov, A.; Mittelstaedt, E. L.; Morrow, T. A.

    2016-12-01

    Previous two-dimensional (2-D) modeling studies of abyssal-hill scale fault generation and evolution at mid-ocean ridges have predicted that M, the ratio of magmatic to total extension, strongly influences the total slip, spacing, and rotation of large faults, as well as the morphology of the ridge axis. Scaling relations derived from these 2-D models broadly explain the globally observed decrease in abyssal hill spacing with increasing ridge spreading rate, as well as the formation of large-offset faults close to the ends of slow-spreading ridge segments. However, these scaling relations do not explain some higher resolution observations of segment-scale variability in fault spacing along the Chile Ridge and the Mid-Atlantic Ridge, where fault spacing shows no obvious correlation with M. This discrepancy between observations and 2-D model predictions illuminates the need for three-dimensional (3-D) numerical models that incorporate the effects of along-axis variations in lithospheric structure and magmatic accretion. To this end, we use the geodynamic modeling software LaMEM to simulate 3-D tectono-magmatic interactions in a visco-elasto-plastic lithosphere under extension. We model a single ridge segment subjected to an along-axis gradient in the rate of magma injection, which is simulated by imposing a mass source in a plane of model finite volumes beneath the ridge axis. Outputs of interest include characteristic fault offset, spacing, and along-axis gradients in seafloor morphology. We also examine the effects of along-axis variations in lithospheric thickness and off-axis thickening rate. The main objectives of this study are to quantify the relative importance of the amount of magmatic extension and the local lithospheric structure at a given along-axis location, versus the importance of along-axis communication of lithospheric stresses on the 3-D fault evolution and morphology of intermediate-spreading-rate ridges.

  1. Degradation Assessment and Fault Diagnosis for Roller Bearing Based on AR Model and Fuzzy Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2011-01-01

    Full Text Available This paper proposes a new approach combining an autoregressive (AR) model and fuzzy cluster analysis for bearing fault diagnosis and degradation assessment. The AR model is an effective approach for extracting fault features, but it is generally applied to stationary signals, whereas the fault vibration signals of a roller bearing are non-stationary and non-Gaussian. To address this problem, the parameters of the AR model are estimated based on higher-order cumulants. The AR parameters are then taken as the feature vectors, and fuzzy cluster analysis is applied to perform classification and pattern recognition. Experimental results show that the proposed method can be used to identify various types and severities of bearing faults. This study is significant for non-stationary and non-Gaussian signal analysis, fault diagnosis and degradation assessment.
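    The feature-extraction half of the approach can be sketched with an ordinary least-squares AR fit (the paper estimates the parameters from higher-order cumulants; plain least squares is used here only to keep the sketch short):

```python
import numpy as np

# Fit an order-p AR model  x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]
# by least squares; the coefficient vector is the feature used for clustering.
def ar_features(x, p):
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Simulate an AR(2) "vibration" signal with known coefficients to check the fit.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
print(ar_features(x, 2))   # approximately [0.6, -0.3]
```

    In the full method, feature vectors extracted from many signal segments are then grouped by fuzzy c-means, with cluster membership indicating fault type and degradation level.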

  2. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  3. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2017-01-01

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
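    The residual-monitoring stage can be sketched with a one-sided CUSUM test. The slack and threshold values and the residual sequence below are illustrative, not the tuned parameters of the Adrar plant:

```python
# One-sided CUSUM on a residual stream: the statistic accumulates excursions
# above a slack k and raises an alarm when it crosses threshold h. This makes
# small persistent shifts (incipient faults) detectable even when each
# individual residual stays within the noise band.
def cusum(residuals, k=0.5, h=4.0):
    s, alarms = 0.0, []
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)
        if s > h:
            alarms.append(i)
            s = 0.0                      # reset after an alarm
    return alarms

healthy = [0.1, -0.2, 0.3, 0.0, -0.1]    # Pmpp residuals under normal operation
faulty = healthy + [1.5] * 6             # an incipient fault appears as a drift
print(cusum(faulty))                     # alarm a few samples after fault onset
```

    Here the residuals stand in for the difference between measured and simulated peak power; the accumulation is what gives CUSUM its sensitivity to incipient faults compared with a fixed per-sample threshold.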

  5. 78 FR 69927 - In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases...

    Science.gov (United States)

    2013-11-21

    ... DEPARTMENT OF STATE [Public Notice 8527] In the Matter of the Review of the Designation of the Kurdistan Worker's Party (and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Record...

  6. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    Numerical model based virtual prototype of a system can serve as a tool to generate huge amount of data which replace the dependence on expensive and often difficult to conduct experiments. However, the model must be accurate enough to substitute the experiments. The abstraction level and details considered during model development depend on the purpose for which simulated data should be generated. This article concerns development of simulation models for deep groove ball bearings which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three different models with increasing level-of-complexity are considered: a bearing kinematics based planar motion block diagram model developed in MATLAB Simulink which does not explicitly consider cage and traction dynamics, a planar motion model with cage, traction and contact dynamics developed using multi-energy domain bond graph formalism in SYMBOLS software, and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using ADAMS software. Experiments are conducted using Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency domain characteristics of simulated and experimental vibration signals for different bearing faults are compared and conclusions are drawn regarding usefulness of the developed models.
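    The fault signatures compared above center on the standard kinematic defect frequencies of a rolling-element bearing, which follow directly from its geometry. A short sketch with illustrative geometry values (not those of the test bearings above):

```python
import math

# Characteristic bearing defect frequencies from shaft speed and geometry:
#   FTF  - fundamental train (cage) frequency
#   BPFO - ball pass frequency, outer race defect
#   BPFI - ball pass frequency, inner race defect
#   BSF  - ball spin frequency
def defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    c = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    ftf = shaft_hz / 2 * (1 - c)
    bpfo = n_balls * ftf
    bpfi = n_balls * shaft_hz / 2 * (1 + c)
    bsf = shaft_hz * pitch_d / (2 * ball_d) * (1 - c * c)
    return {"FTF": ftf, "BPFO": bpfo, "BPFI": bpfi, "BSF": bsf}

freqs = defect_frequencies(shaft_hz=29.95, n_balls=9, ball_d=7.94, pitch_d=39.04)
for name, f in freqs.items():
    print(f"{name}: {f:.1f} Hz")
```

    Peaks at these frequencies (and their harmonics and sidebands) in the envelope spectrum are the features against which simulated and experimental signals are compared.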

  7. Width and dip of the southern San Andreas Fault at Salt Creek from modeling of geophysical data

    Science.gov (United States)

    Langenheim, Victoria; Athens, Noah D.; Scheirer, Daniel S.; Fuis, Gary S.; Rymer, Michael J.; Goldman, Mark R.; Reynolds, Robert E.

    2014-01-01

    We investigate the geometry and width of the southernmost stretch of the San Andreas Fault zone using new gravity and magnetic data along line 7 of the Salton Seismic Imaging Project. In the Salt Creek area of Durmid Hill, the San Andreas Fault coincides with a complex magnetic signature, with high-amplitude, short-wavelength magnetic anomalies superposed on a broader magnetic anomaly that is at least 5 km wide centered 2–3 km northeast of the fault. Marine magnetic data show that high-frequency magnetic anomalies extend more than 1 km west of the mapped trace of the San Andreas Fault. Modeling of magnetic data is consistent with a moderate to steep (> 50 degrees) northeast dip of the San Andreas Fault, but also suggests that the sedimentary sequence is folded west of the fault, causing the short wavelength of the anomalies west of the fault. Gravity anomalies are consistent with the previously modeled seismic velocity structure across the San Andreas Fault. Modeling of gravity data indicates a steep dip for the San Andreas Fault, but does not resolve unequivocally the direction of dip. Gravity data define a deeper basin, bounded by the Powerline and Hot Springs Faults, than imaged by the seismic experiment. This basin extends southeast of Line 7 for nearly 20 km, with linear margins parallel to the San Andreas Fault. These data suggest that the San Andreas Fault zone is wider than indicated by its mapped surface trace.

  8. Fault detection in IRIS reactor secondary loop using inferential models

    International Nuclear Information System (INIS)

    Perillo, Sergio R.P.; Upadhyaya, Belle R.; Hines, J. Wesley

    2013-01-01

    Fault detection algorithms are well-suited to remotely deployed small and medium reactors, such as the IRIS, and to new small modular reactor (SMR) designs. However, an extensive number of tests must still be performed for new engineering aspects and components that are not yet proven technology in current PWRs; many of these features cannot be validated until a prototype plant is built, which presents technological challenges for deployment. In this work, an IRIS plant simulation platform was developed using a Simulink® model. The dynamic simulation was utilized to obtain inferential models that were used to detect faults artificially added to the secondary system simulations. The implementation of the data-driven models and the results are discussed. (author)

  9. A Fault Diagnosis Model of Surface to Air Missile Equipment Based on Wavelet Transformation and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Zhheng Ni

    2016-01-01

    Full Text Available At present, the fault signals of surface to air missile equipment are hard to collect and the accuracy of fault diagnosis is very low. To solve the above problems, based on the superiority of wavelet transformation on processing non-stationary signals and the advantage of SVM on pattern classification, this paper proposes a fault diagnosis model and takes the typical analog circuit diagnosis of one power distribution system as an example to verify the fault diagnosis model based on Wavelet Transformation and SVM. The simulation results show that the model is able to achieve fault diagnosis based on a small amount of training samples, which improves the accuracy of fault diagnosis.

  10. San Onofre/Zion auxiliary feedwater system seismic fault tree modeling

    International Nuclear Information System (INIS)

    Najafi, B.; Eide, S.

    1982-02-01

    As part of the study for the seismic evaluation of the San Onofre Unit 1 Auxiliary Feedwater System (AFWS), a fault tree model was developed capable of handling the effect of structural failure of the plant (in the event of an earthquake) on the availability of the AFWS. A compatible fault tree model was developed for the Zion Unit 1 AFWS in order to compare the results of the two systems. It was concluded that if a single failure of the San Onofre Unit 1 AFWS is to be prevented, some existing, locally operated, locked-open manual valves have to be used for isolation of a rupture in specific parts of the AFWS piping.

  11. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator fault is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, this scheme can minimize the time delay between fault occurrence and accommodation (the delay due to fault diagnosis) and reduce the adverse effect on system performance. In addition, the scheme has the advantages of a passive fault-tolerant control scheme as well as the properties of the traditional active fault-tolerant control scheme. Furthermore, it requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.

  12. Robust Mpc for Actuator–Fault Tolerance Using Set–Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    Full Text Available In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, which is built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). In the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties with relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive by using invariant sets, while fault isolation (FI) is active by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of inputs.

  13. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  14. Dynamic rupture models of earthquakes on the Bartlett Springs Fault, Northern California

    Science.gov (United States)

    Lozos, Julian C.; Harris, Ruth A.; Murray, Jessica R.; Lienkaemper, James J.

    2015-01-01

    The Bartlett Springs Fault (BSF), the easternmost branch of the northern San Andreas Fault system, creeps along much of its length. Geodetic data for the BSF are sparse, and surface creep rates are generally poorly constrained. The two existing geodetic slip rate inversions resolve at least one locked patch within the creeping zones. We use the 3-D finite element code FaultMod to conduct dynamic rupture models based on both geodetic inversions, in order to determine the ability of rupture to propagate into the creeping regions, as well as to assess possible magnitudes for BSF ruptures. For both sets of models, we find that the distribution of aseismic creep limits the extent of coseismic rupture, due to the contrast in frictional properties between the locked and creeping regions.

  15. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  16. One-dimensional modeling of thermal energy produced in a seismic fault

    Science.gov (United States)

    Konga, Guy Pascal; Koumetio, Fidèle; Yemele, David; Olivier Djiogang, Francis

    2017-12-01

    Generally, one observes an anomaly of temperature before a big earthquake. In this paper, we established the expression of thermal energy produced by friction forces between the walls of a seismic fault while considering the dynamic of a one-dimensional spring-block model. It is noted that, before the rupture of a seismic fault, displacements are caused by microseisms. The curves of variation of this thermal energy with time show that, for oscillatory and aperiodic displacement, the thermal energy is accumulated in the same way. The study reveals that thermal energy as well as temperature increases abruptly after a certain amount of time. We suggest that the corresponding time is the start of the anomaly of temperature observed which can be considered as precursory effect of a big seism. We suggest that the thermal energy can heat gases and dilate rocks until they crack. The warm gases can then pass through the cracks towards the surface. The cracks created by thermal energy can also contribute to the rupture of the seismic fault. We also suggest that the theoretical model of thermal energy, produced in seismic fault, associated with a large quantity of experimental data may help in the prediction of earthquakes.
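The frictional-heating mechanism described in this abstract can be caricatured with a single one-dimensional spring-block (Burridge-Knopoff-type) slider, accumulating heat Q = ∫ μ_d N |v| dt while the block slips. The stick-slip friction law and all parameter values below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def slider_heat(k=1.0, m=1.0, v_drive=0.1, mu_s=0.6, mu_d=0.4, N=1.0,
                dt=1e-3, t_end=200.0):
    """Single spring-block slider with static/dynamic friction; returns the
    cumulative frictional heat Q(t) = integral of mu_d*N*|v| dt while slipping."""
    x = v = 0.0            # block position and velocity
    load = 0.0             # driver (loading point) displacement
    q, heat = 0.0, []
    for _ in range(int(t_end / dt)):
        load += v_drive * dt
        f_spring = k * (load - x)
        if v == 0.0 and abs(f_spring) <= mu_s * N:
            pass                              # stuck: static friction balances the spring
        else:
            a = (f_spring - mu_d * N * np.sign(v if v != 0.0 else f_spring)) / m
            v += a * dt
            x += v * dt
            q += mu_d * N * abs(v) * dt       # frictional work converted to heat
            if v < 0.0:
                v = 0.0                       # re-stick; no back-slip
        heat.append(q)
    return np.array(heat)
```

With these toy parameters the heat curve stays flat during stick phases and rises in steps at each slip event, mirroring the abrupt thermal-energy increase the abstract describes before rupture.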

  17. Diagnosis and Fault-tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Kinnaert, Michel; Lunze, Jan

    The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process... The theoretical results are illustrated by two running examples which are used throughout the book. The book addresses engineering students, engineers in industry and researchers who wish to get a survey over the variety of approaches to process diagnosis and fault... the applicability of the presented methods.

  18. Fault-tolerant reference generation for model predictive control with active diagnosis of elevator jamming faults

    NARCIS (Netherlands)

    Ferranti, L.; Wan, Y.; Keviczky, T.

    2018-01-01

    This paper focuses on the longitudinal control of an Airbus passenger aircraft in the presence of elevator jamming faults. In particular, in this paper, we address permanent and temporary actuator jamming faults using a novel reconfigurable fault-tolerant predictive control design. Due to their

  19. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the earth crust occur within large range of scales from microscale over mesoscopic to large basin scale faults. Frequently deformation associated with faulting is not only limited to the fault plane alone, but rather forms a combination with continuous near field deformation in the wall rock, a phenomenon that is generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. On an outcrop scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that for this normal fault a listric shape can be excluded, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed.
A second part of this thesis investigates a large scale normal fault

  20. Fault detection in processes represented by PLS models using an EWMA control scheme

    KAUST Repository

    Harrou, Fouzi

    2016-10-20

    Fault detection is important for effective and safe process operation. Partial least squares (PLS) has been used successfully in fault detection for multivariate processes with highly correlated variables. However, the conventional PLS-based detection metrics, such as the Hotelling's T² and the Q statistics, are not well suited to detect small faults because they only use information about the process in the most recent observation. The exponentially weighted moving average (EWMA), however, has been shown to be more sensitive to small shifts in the mean of process variables. In this paper, a PLS-based EWMA fault detection method is proposed for monitoring processes represented by PLS models. The performance of the proposed method is compared with that of the traditional PLS-based fault detection method through a simulated example involving various fault scenarios that could be encountered in real processes. The simulation results clearly show the effectiveness of the proposed method over the conventional PLS method.
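The EWMA recursion the abstract refers to can be sketched on a single monitored residual stream (e.g. a PLS prediction error). This is a generic control-chart sketch, not the paper's method; the baseline window length and the λ and L values are illustrative assumptions:

```python
import numpy as np

def ewma_alarms(residuals, lam=0.2, L=3.0, baseline_n=50):
    """EWMA control chart on model residuals. Small sustained mean shifts,
    invisible to single-observation statistics, accumulate in z and trip
    the alarm once |z - mu| exceeds the control limit."""
    r = np.asarray(residuals, float)
    mu = r[:baseline_n].mean()                 # baseline from fault-free data
    sigma = r[:baseline_n].std(ddof=1)
    z, alarms = mu, []
    for i, x in enumerate(r):
        z = lam * x + (1.0 - lam) * z          # exponentially weighted mean
        # time-varying limit converging to the asymptotic value L*sigma*sqrt(lam/(2-lam))
        half_width = L * sigma * np.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (i + 1))))
        alarms.append(abs(z - mu) > half_width)
    return np.array(alarms)
```

A shift of about 1.5 standard deviations, too small for a 3σ Shewhart-style limit on individual observations, is typically flagged within a handful of samples because the asymptotic EWMA limit with λ = 0.2 is only ≈ σ.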

  1. Analysis of the influence of aliasing effect on the digital X ray images

    International Nuclear Information System (INIS)

    Niu Yantao; Liu Zhensheng; Wang Gexin; Zhao Bo; Hao Hui; Yan Shulin

    2007-01-01

    Objective: To investigate the causes of the aliasing effect in digital radiography and methods for eliminating it. Methods: A stationary grid and a rectangular wave test phantom were imaged on a Kodak CR900 system. The lead strips of the phantom were parallel to the laser scanning direction, or at an angle of 45 degrees, when they were exposed on the imaging plate. The ability of the two types of images to represent the resolution test phantom was observed. The grid was imaged with its lead strips parallel or perpendicular to the laser scanning direction. The two images were observed and compared on a monitor at various magnification rates. Results: In the phantom images, lead bars below the frequency of 3.93 line pairs per mm, the Nyquist frequency of this system, could be discriminated. However, lead bars with a frequency of 4.86 line pairs per mm could still be distinguished in the image of the test phantom angled at 45 degrees. When the grid lead strips were parallel to the imaging plate scanning direction, the resulting images displayed visible streak artifacts. The degree of visibility differed markedly depending on whether the grid strips were parallel or perpendicular to the laser scanning direction. The streaks were not clear when the image was displayed at true size on the monitor, but their widths varied over a large range when zooming in or out, and the directions of the streaks changed as well. Conclusions: Optimal stationary grids should be selected in clinical practice according to the limiting resolution of the CR system, because the aliasing effect has a detrimental influence; the grid frequency should be greater than the Nyquist frequency. The grid strip direction should be perpendicular to the laser scanning direction in clinical use to avoid streak artifacts. Different magnification rates notably affect image appearance on the monitor, and displaying at integer multiples of the real image size is suggested. (authors)
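The Nyquist folding behind these observations can be sketched numerically. The helper below folds a spatial frequency into the baseband, given a sampling frequency in samples/mm (twice the stated Nyquist frequency of 3.93 lp/mm, i.e. about 7.86 samples/mm — an assumption inferred from the abstract, not a quoted system specification):

```python
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency after sampling: fold f_signal into [0, f_sample/2].
    Frequencies above the Nyquist limit f_sample/2 reappear mirrored below it."""
    f = f_signal % f_sample
    return min(f, f_sample - f)
```

For example, a 4.86 lp/mm grid sampled at 7.86 samples/mm folds to about 3.0 lp/mm, appearing as a coarser moiré/streak pattern. Rotating the grid 45 degrees reduces the frequency component projected onto each sampling axis by a factor of cos 45° ≈ 0.71, which is consistent with the abstract's observation that a 4.86 lp/mm pattern (projected component ≈ 3.44 lp/mm, below Nyquist) can still be resolved at 45 degrees.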

  2. Fault Management: Degradation Signature Detection, Modeling, and Processing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault to Failure Progression (FFP) signature modeling and processing is a new method for applying condition-based signal data to detect degradation, to identify...

  3. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Full Text Available Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine, they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of the broken rotor bar fault using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing the anomalies of machine local variables such as torque, magnetic flux, stator current and neutral voltage signature analysis. The aim of this research is to summarize the existing models, to develop new models of squirrel cage induction motors with consideration of the neutral voltage, and to study the effect of broken rotor bars on different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing the simulation and experimental results. The obtained results show the effectiveness of the model, and allow detection and diagnosis of these defects.

  4. Crustal Density Variation Along the San Andreas Fault Controls Its Secondary Faults Distribution and Dip Direction

    Science.gov (United States)

    Yang, H.; Moresi, L. N.

    2017-12-01

    The San Andreas fault forms a dominant component of the transform boundary between the Pacific and the North American plate. The density and strength of the complex accretionary margin is very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize a 3D finite element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model is one with an extremely weak fault and with a Great Valley lower crust that is denser (by 200 kg/m3) than the surrounding blocks. In contrast, the Mojave block appears, from other geophysical surveys, to have lost its mafic lower crust. Our model indicates strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower crust material of the Great Valley tends to under-thrust beneath the Transverse Range near the big bend. This motion is likely to rotate the fault plane from the initial vertical direction to dip to the southwest. For the straight section, north of the big bend, the fault is nearly vertical. The geometry of the fault plane is consistent with field observations.

  5. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations, apart from early afterslip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.

  6. Economic modeling of fault tolerant flight control systems in commercial applications

    Science.gov (United States)

    Finelli, G. B.

    1982-01-01

    This paper describes the current development of a comprehensive model which will supply the assessment and analysis capability to investigate the economic viability of Fault Tolerant Flight Control Systems (FTFCS) for commercial aircraft of the 1990's and beyond. An introduction to the unique attributes of fault tolerance and how they will influence aircraft operations and consequent airline costs and benefits is presented. Specific modeling issues and elements necessary for accurate assessment of all costs affected by ownership and operation of FTFCS are delineated. Trade-off factors are presented, aimed at exposing economically optimal realizations of system implementations, resource allocation, and operating policies. A trade-off example is furnished to graphically display some of the analysis capabilities of the comprehensive simulation model now being developed.

  7. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197

  8. 75 FR 28849 - Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign...

    Science.gov (United States)

    2010-05-24

    ... DEPARTMENT OF STATE [Public Notice 7026] Review of the Designation of Ansar al-Islam (aka Ansar Al-Sunnah and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Records assembled in these...

  9. Newport-Inglewood-Carlsbad-Coronado Bank Fault System Nearshore Southern California: Testing models for Quaternary deformation

    Science.gov (United States)

    Bennett, J. T.; Sorlien, C. C.; Cormier, M.; Bauer, R. L.

    2011-12-01

    The San Andreas fault system is distributed across hundreds of kilometers in southern California. This transform system includes offshore faults along the shelf, slope and basin, comprising part of the Inner California Continental Borderland. Previously, offshore faults have been interpreted as being discontinuous and striking parallel to the coast between Long Beach and San Diego. Our recent work, based on several thousand kilometers of deep-penetration industry multi-channel seismic reflection data (MCS) as well as high-resolution U.S. Geological Survey MCS, indicates that many of the offshore faults are more geometrically continuous than previously reported. Stratigraphic interpretations of MCS profiles included the ca. 1.8 Ma Top Lower Pico, which was correlated from wells located offshore Long Beach (Sorlien et al. 2010). Based on this age constraint, four younger (Late) Quaternary unconformities are interpreted through the slope and basin. The right-lateral Newport-Inglewood fault continues offshore near Newport Beach. We map a single fault for 25 kilometers that continues to the southeast along the base of the slope. There, the Newport-Inglewood fault splits into the San Mateo-Carlsbad fault, which is mapped for 55 kilometers along the base of the slope to a sharp bend. This bend is the northern end of a right step-over of 10 kilometers to the Descanso fault and about 17 km to the Coronado Bank fault. We map these faults for 50 kilometers as they continue over the Mexican border. Both the San Mateo-Carlsbad with the Newport-Inglewood fault and the Coronado Bank with the Descanso fault are paired faults that form flower structures (positive and negative, respectively) in cross section. Preliminary kinematic models indicate ~1 km of right-lateral slip since ~1.8 Ma at the north end of the step-over. We are modeling the slip on the southern segment to test our hypothesis for a kinematically continuous right-lateral fault system. We are correlating four

  10. Predictive modelling of fault related fracturing in carbonate damage-zones: analytical and numerical models of field data (Central Apennines, Italy)

    Science.gov (United States)

    Mannino, Irene; Cianfarra, Paola; Salvini, Francesco

    2010-05-01

    Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e. pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks achieve fracturing both during diagenesis and tectonic processes. The attitude, spatial distribution and connectivity of brittle deformation features rule the secondary permeability of carbonate rocks and therefore the accumulation and the pathways of deep fluids (groundwater, hydrocarbons). This is particularly true in fault zones, where the damage zone and the fault core show hydraulic properties different from the pristine rock as well as from each other. To improve the knowledge of fault architecture and fault hydraulic properties we study the brittle deformation patterns related to fault kinematics in carbonate successions. In particular we focussed on the damage-zone fracturing evolution. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units represent field analogues of reservoir rocks in the Southern Apennines. We combine the study of rock physical characteristics of 22 faults and quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracturing type, attitudes, and spatial intensity distribution by using the dimension/spacing ratio, namely the H/S ratio, where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring and H/S transects) were performed to infer a focussed, general algorithm that describes the expected intensity of the fracturing process. The analytical model was fit to field measurements by a Monte Carlo convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameters and results are compared with field data. For each item an error mean value is
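The "Monte Carlo convergent" fitting step described above — draw many candidate parameter sets, compare model predictions with field data, keep the one with the smallest mean error — can be sketched generically. The linear model and bounds in the test usage are placeholders, not the authors' actual fracture-intensity algorithm:

```python
import numpy as np

def monte_carlo_fit(model, h_obs, s_obs, bounds, n_iter=20000, seed=0):
    """Random-search fit of model parameters to observed fracture
    dimension/spacing (H/S) data. 'model' maps (s_obs, *params) to predicted H;
    returns the best parameter vector and its mean absolute error."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    best_p, best_err = None, np.inf
    for _ in range(n_iter):
        p = lo + rng.random(len(bounds)) * (hi - lo)   # candidate parameter set
        err = np.mean(np.abs(model(s_obs, *p) - h_obs))  # mean error vs. field data
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err
```

Such a scheme trades efficiency for robustness: it makes no smoothness assumptions about the error surface, which suits fits with many interacting variables like the one described.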

  11. All Roads Lead to Fault Diagnosis : Model-Based Reasoning with LYDIA

    NARCIS (Netherlands)

    Feldman, A.B.; Pietersma, J.; Van Gemund, A.J.C.

    2006-01-01

    Model-Based Reasoning (MBR) over qualitative models of complex, real-world systems has proven successful for automated fault diagnosis, control, and repair. Expressing a system under diagnosis in a formal model and inferring a diagnosis given observations are both challenging problems. In this paper

  12. Modeling caprock fracture, CO2 migration and time dependent fault healing: A numerical study.

    Science.gov (United States)

    MacFarlane, J.; Mukerji, T.; Vanorio, T.

    2017-12-01

    The Campi Flegrei caldera, located near Naples, Italy, is one of the highest risk volcanoes on Earth due to its recent unrest and urban setting. A unique history of surface uplift within the caldera is characterized by long duration uplift and subsidence cycles which are periodically interrupted by rapid, short period uplift events. Several models have been proposed to explain this history; in this study we will present a hydro-mechanical model that takes into account the caprock that seismic studies show to exist at 1-2 km depth. Specifically, we develop a finite element model of the caldera and use a modified version of fault-valve theory to represent fracture within the caprock. The model accounts for fault healing using a simplified, time-dependent fault sealing model. Multiple fracture events are incorporated by using previous solutions to test prescribed conditions and determine changes in rock properties, such as porosity and permeability. Although fault-valve theory has been used to model single fractures and recharge, this model is unique in its ability to model multiple fracture events. By incorporating multiple fracture events we can assess changes in both long and short-term reservoir behavior at Campi Flegrei. By varying the model inputs, we model the poro-elastic response to CO2 injection at depth and the resulting surface deformation. The goal is to enable geophysicists to better interpret surface observations and predict outcomes from observed changes in reservoir conditions.

  13. Applying a Cerebellar Model Articulation Controller Neural Network to a Photovoltaic Power Generation System Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Kuei-Hsiang Chao

    2013-01-01

    Full Text Available This study employed a cerebellar model articulation controller (CMAC) neural network to conduct fault diagnoses on photovoltaic power generation systems. We composed a module array using 9 series and 2 parallel connections of SHARP NT-R5E3E 175 W photovoltaic modules. In addition, we used data that were output under various fault conditions as the training samples for the CMAC and used this model to conduct the module array fault diagnosis after completing the training. The results of the training process and simulations indicate that the method proposed in this study requires fewer training iterations than other methods. In addition to significantly increasing the accuracy rate of the fault diagnosis, this model features a short training duration because the training process only tunes the weights of the excited memory addresses. Therefore, the fault diagnosis is rapid, and the detection tolerance of the diagnosis system is enhanced.
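The local-update property the abstract highlights — only the weights of the excited memory cells are adjusted on each training step — can be sketched with a minimal one-dimensional CMAC. The tiling counts, learning rate, and toy target function below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

class SimpleCMAC:
    """Minimal 1-D CMAC: several offset tilings map an input to one cell
    each; the output is the sum of the excited weights, and training
    adjusts only those weights (LMS rule)."""

    def __init__(self, n_tilings=8, cells_per_tiling=32,
                 x_min=0.0, x_max=1.0, lr=0.2):
        self.n_tilings = n_tilings
        self.cells = cells_per_tiling
        self.x_min, self.x_max = x_min, x_max
        self.lr = lr
        self.w = np.zeros((n_tilings, cells_per_tiling))

    def _addresses(self, x):
        # Each tiling is shifted by a fraction of one cell width.
        u = (x - self.x_min) / (self.x_max - self.x_min)
        idx = []
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * (self.cells - 1))
            i = int(np.clip((u + offset) * (self.cells - 1),
                            0, self.cells - 1))
            idx.append(i)
        return idx

    def predict(self, x):
        return sum(self.w[t, i] for t, i in enumerate(self._addresses(x)))

    def train(self, x, target):
        # Only the excited cells are touched -- the CMAC's fast-training property.
        err = target - self.predict(x)
        for t, i in enumerate(self._addresses(x)):
            self.w[t, i] += self.lr * err / self.n_tilings

# Train on a toy target; only excited cells are updated each step.
net = SimpleCMAC()
xs = np.linspace(0.0, 1.0, 50)
for _ in range(300):
    for x in xs:
        net.train(x, np.sin(2 * np.pi * x))
```

Because each update touches a handful of cells rather than every weight, convergence on the training set is fast — the property the paper exploits for rapid diagnosis.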

  14. A Novel Method of Fault Diagnosis for Rolling Bearing Based on Dual Tree Complex Wavelet Packet Transform and Improved Multiscale Permutation Entropy

    Directory of Open Access Journals (Sweden)

    Guiji Tang

    2016-01-01

    Full Text Available A novel method of fault diagnosis for rolling bearings, which combines the dual tree complex wavelet packet transform (DTCWPT), the improved multiscale permutation entropy (IMPE), and the linear local tangent space alignment (LLTSA) with the extreme learning machine (ELM), is put forward in this paper. In this method, in order to effectively discover the underlying feature information, DTCWPT, which has attractive properties such as near shift invariance and reduced aliasing, is first utilized to decompose the original signal into a set of subband signals. Then, IMPE, which is designed to reduce the variability of entropy measures, is applied to characterize the properties of each obtained subband signal at different scales. Furthermore, the feature vectors are constructed by combining the IMPE of each subband signal. After the feature vector construction, LLTSA is employed to compress the high dimensional vectors of the training and testing samples into low dimensional vectors with better distinguishability. Finally, the ELM classifier is used to automatically accomplish the condition identification with the low dimensional feature vectors. The experimental data analysis results validate the effectiveness of the presented diagnosis method and demonstrate that this method can be applied to distinguish the different fault types and fault degrees of rolling bearings.
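The entropy step can be illustrated with the baseline it improves on: standard Bandt–Pompe permutation entropy with simple coarse-graining across scales. The paper's *improved* MPE refines this construction, so treat the sketch below as the conventional multiscale variant, not the authors' algorithm:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D signal:
    count ordinal patterns of length m, return Shannon entropy of the
    pattern distribution divided by log(m!) so the result lies in [0, 1]."""
    counts = {}
    n = len(x) - (m - 1) * delay
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values())) / n
    return float(-np.sum(probs * np.log(probs)) / np.log(factorial(m)))

def multiscale_pe(x, m=3, max_scale=4):
    """Coarse-grain the signal (non-overlapping means) at each scale,
    then compute the permutation entropy of each coarse-grained series."""
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m))
    return out
```

A monotone signal has a single ordinal pattern (entropy 0), while white noise uses all patterns nearly uniformly (entropy near 1) — the contrast a bearing-fault feature extractor relies on.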

  15. A universal, fault-tolerant, non-linear analytic network for modeling and fault detection

    International Nuclear Information System (INIS)

    Mott, J.E.; King, R.W.; Monson, L.R.; Olson, D.L.; Staffon, J.D.

    1992-01-01

    The similarities and differences of a universal network to normal neural networks are outlined. The description and application of a universal network is discussed by showing how a simple linear system is modeled by normal techniques and by universal network techniques. A full implementation of the universal network as universal process modeling software on a dedicated computer system at EBR-II is described and example results are presented. It is concluded that the universal network provides different feature recognition capabilities than a neural network and that the universal network can provide extremely fast, accurate, and fault-tolerant estimation, validation, and replacement of signals in a real system.

  16. A universal, fault-tolerant, non-linear analytic network for modeling and fault detection

    Energy Technology Data Exchange (ETDEWEB)

    Mott, J.E. [Advanced Modeling Techniques Corp., Idaho Falls, ID (United States); King, R.W.; Monson, L.R.; Olson, D.L.; Staffon, J.D. [Argonne National Lab., Idaho Falls, ID (United States)

    1992-03-06

    The similarities and differences of a universal network to normal neural networks are outlined. The description and application of a universal network is discussed by showing how a simple linear system is modeled by normal techniques and by universal network techniques. A full implementation of the universal network as universal process modeling software on a dedicated computer system at EBR-II is described and example results are presented. It is concluded that the universal network provides different feature recognition capabilities than a neural network and that the universal network can provide extremely fast, accurate, and fault-tolerant estimation, validation, and replacement of signals in a real system.

  17. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...

  18. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and the weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.
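The weighted pseudo-inverse step admits a compact sketch: given a thruster configuration matrix B mapping thruster forces to generalized body forces, u = W⁻¹Bᵀ(BW⁻¹Bᵀ)⁻¹τ minimizes uᵀWu subject to Bu = τ, so a thruster diagnosed as faulty can be frozen out by assigning it a very large weight. The 3-thruster geometry below is a made-up example, not the vehicle from the paper:

```python
import numpy as np

def allocate(B, tau, w):
    """Weighted pseudo-inverse control allocation.
    B   : (m, n) thruster configuration matrix (B @ u = generalized force tau)
    tau : (m,) demanded generalized forces
    w   : (n,) per-thruster penalty weights; a very large weight effectively
          removes a (faulty) thruster from the solution.
    Returns u minimizing u^T diag(w) u subject to B u = tau."""
    Winv = np.diag(1.0 / np.asarray(w, dtype=float))
    return Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, tau)

# Two surge thrusters plus one sway thruster (hypothetical geometry).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
tau = np.array([2.0, 1.0])

u_healthy = allocate(B, tau, [1.0, 1.0, 1.0])  # surge load shared equally
u_faulty = allocate(B, tau, [1.0, 1e9, 1.0])   # thruster 2 penalized as faulty
```

In the faulty case the demanded force is still met exactly, but essentially all surge effort shifts to the healthy thruster — the accommodation behavior the paper's weighting matrix produces.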

  19. 75 FR 62173 - In the Matter of the Review of the Designation of Jemaah Islamiya (JI and Other Aliases) as a...

    Science.gov (United States)

    2010-10-07

    ... DEPARTMENT OF STATE [Public Notice: 7196] In the Matter of the Review of the Designation of Jemaah Islamiya (JI and Other Aliases) as a Foreign Terrorist Organization Pursuant to Section 219 of the Immigration and Nationality Act, as Amended Based upon a review of the Administrative Record assembled in this...

  20. Model-based fault detection for proton exchange membrane fuel cell ...

    African Journals Online (AJOL)

    In this paper, an intelligent model-based fault detection (FD) is developed for proton exchange membrane fuel cell (PEMFC) dynamic systems using an independent radial basis function (RBF) networks. The novelty is that this RBF networks is used to model the PEMFC dynamic systems and residuals are generated based ...

  1. Geomechanical Modeling of Fault Responses and the Potential for Notable Seismic Events during Underground CO2 Injection

    Science.gov (United States)

    Rutqvist, J.; Cappa, F.; Mazzoldi, A.; Rinaldi, A.

    2012-12-01

    The importance of geomechanics associated with large-scale geologic carbon storage (GCS) operations is now widely recognized. There are concerns related to the potential for triggering notable (felt) seismic events and how such events could impact the long-term integrity of a CO2 repository (as well as how they could impact the public perception of GCS). In this context, we review a number of modeling studies and field observations related to the potential for injection-induced fault reactivations and seismic events. We present recent model simulations of CO2 injection and fault reactivation, including both aseismic and seismic fault responses. The model simulations were conducted using a slip-weakening fault model enabling sudden (seismic) fault rupture, and some of the numerical analyses were extended to fully dynamic modeling of the seismic source, wave propagation, and ground motion. The model simulations illustrated what it would take to create a magnitude 3 or 4 earthquake that would not result in any significant damage at the ground surface, but could raise concerns in the local community and could also affect the deep containment of the stored CO2. The analyses show that the local in situ stress field, fault orientation, fault strength, and injection-induced overpressure are critical factors in determining the likelihood and magnitude of such an event. We would like to clarify, though, that in our modeling we had to apply very high injection pressure to be able to intentionally induce any fault reactivation. Consequently, our model simulations represent extreme cases, which in a real GCS operation could be avoided by estimating the maximum sustainable injection pressure and carefully controlling the injection pressure. In fact, no notable seismic event has been reported from any of the current CO2 storage projects, although some unfelt microseismic activities have been detected by geophones. On the other hand, potential future commercial GCS operations from large power plants

  2. Cellular modeling of fault-tolerant multicomputers

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, G

    1987-01-01

    Work described was concerned with a novel method for investigation of fault tolerance in large regular networks of computers. Motivation was to provide a technique useful in rapid evaluation of highly reliable systems that exploit the low cost and ease of volume production of simple microcomputer components. First, a system model and simulator based upon cellular automata are developed. This model is characterized by its simplicity and ease of modification when adapting to new types of network. Second, in order to test and verify the predictive capabilities of the cellular system, a more-detailed simulation is performed based upon an existing computational model, that of the Transputer. An example application is used to exercise various systems designed using the cellular model. Using this simulator, experimental results are obtained both for existing well-understood configurations and for more novel types also developed here. In all cases it was found that the cellular model and simulator successfully predicted the ranking in reliability improvement of the systems studied.

  3. Study of fault diagnosis software design for complex system based on fault tree

    International Nuclear Information System (INIS)

    Yuan Run; Li Yazhou; Wang Jianye; Hu Liqin; Wang Jiaqun; Wu Yican

    2012-01-01

    Complex systems always have high-level reliability and safety requirements, and so does their diagnosis. As many fault tree models have been acquired during the design and operation phases, a fault diagnosis method which combines fault tree analysis with knowledge-based technology has been proposed. The prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)
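The combination of fault tree models with knowledge-based diagnosis can be sketched as follows: encode the tree as nested AND/OR gates, then search for the minimal combinations of basic-event failures (minimal cut sets) that explain an observed top event. The pump/valve tree below is a hypothetical example, not the LIDAR system from the paper:

```python
from itertools import combinations

def evaluate(gate, states):
    """Evaluate a fault tree given basic-event truth states.
    A node is either a basic-event name (str) or a tuple
    ('AND' | 'OR', [children])."""
    if isinstance(gate, str):
        return states[gate]
    op, children = gate
    vals = [evaluate(c, states) for c in children]
    return all(vals) if op == 'AND' else any(vals)

def minimal_cut_sets(gate, events, max_order=2):
    """Brute-force minimal cut sets up to max_order: every smallest
    combination of failed basic events that triggers the top event."""
    cuts = []
    for k in range(1, max_order + 1):
        for combo in combinations(events, k):
            states = {e: e in combo for e in events}
            # Keep only combos not containing an already-found cut set.
            if evaluate(gate, states) and \
                    not any(set(c) <= set(combo) for c in cuts):
                cuts.append(combo)
    return cuts

# Hypothetical tree: system fails if the pump fails, or both valves fail.
tree = ('OR', ['pump', ('AND', ['valveA', 'valveB'])])
events = ['pump', 'valveA', 'valveB']
```

A diagnosis engine can then match observed symptoms against the cut sets to rank candidate causes; real tools derive the cut sets symbolically rather than by enumeration.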

  4. Stochastic Model Predictive Fault Tolerant Control Based on Conditional Value at Risk for Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Yun-Tao Shi

    2018-01-01

    Full Text Available Wind energy has been drawing considerable attention in recent years. However, due to the random nature of wind and high failure rate of wind energy conversion systems (WECSs, how to implement fault-tolerant WECS control is becoming a significant issue. This paper addresses the fault-tolerant control problem of a WECS with a probable actuator fault. A new stochastic model predictive control (SMPC fault-tolerant controller with the Conditional Value at Risk (CVaR objective function is proposed in this paper. First, the Markov jump linear model is used to describe the WECS dynamics, which are affected by many stochastic factors, like the wind. The Markov jump linear model can precisely model the random WECS properties. Second, the scenario-based SMPC is used as the controller to address the control problem of the WECS. With this controller, all the possible realizations of the disturbance in prediction horizon are enumerated by scenario trees so that an uncertain SMPC problem can be transformed into a deterministic model predictive control (MPC problem. Finally, the CVaR object function is adopted to improve the fault-tolerant control performance of the SMPC controller. CVaR can provide a balance between the performance and random failure risks of the system. The Min-Max performance index is introduced to compare the fault-tolerant control performance with the proposed controller. The comparison results show that the proposed method has better fault-tolerant control performance.
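For a discrete scenario tree the CVaR objective has a simple closed form: average the losses in the worst (1 − α) probability tail of the scenario distribution. A minimal sketch (the scenario losses and probabilities are illustrative, not from the paper's WECS model):

```python
import numpy as np

def cvar(losses, probs, alpha=0.95):
    """Conditional Value at Risk of a discrete loss distribution:
    the expected loss within the worst (1 - alpha) probability tail."""
    order = np.argsort(losses)[::-1]                 # worst scenarios first
    l = np.asarray(losses, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    tail = 1.0 - alpha
    acc, total = 0.0, 0.0
    for li, pi in zip(l, p):
        take = min(pi, tail - acc)                   # probability mass used
        if take <= 0:
            break
        total += take * li
        acc += take
    return total / tail
```

At α = 0 this reduces to the plain expected loss; as α → 1 it focuses entirely on the rare worst-case scenarios — which is how the CVaR objective balances nominal performance against random failure risk.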

  5. A numerical model for modeling microstructure and THM couplings in fault gouges

    Science.gov (United States)

    Veveakis, M.; Rattez, H.; Stefanou, I.; Sulem, J.; Poulet, T.

    2017-12-01

    When materials are subjected to large deformations, most of them experience inelastic deformations, accompanied by a localization of these deformations into a narrow zone leading to failure. Localization is seen as an instability from the homogeneous state of deformation. Therefore a first approach to study it consists of looking at the possible critical conditions for which the constitutive equations of the material allow a bifurcation point (Rudnicki & Rice 1975). But in some cases, we would like to know the evolution of the material after the onset of localization. For example, a fault in the crustal part of the lithosphere is a shear band, and the study of this localized zone enables us to extract information about seismic slip. For that, we need to approximate the solution of a nonlinear boundary value problem numerically. It is a challenging task due to the complications that arise while dealing with a softening behavior. Indeed, the classical continuum theory cannot be used because the governing system of equations is ill-posed (Vardoulakis 1985). This ill-posedness can be traced back to the fact that constitutive models do not contain material parameters with the dimension of a length. It leads to what is called "mesh dependency" for numerical simulations, as the deformations localize in only one element of the mesh and the behavior of the system thus depends on the mesh size. A way to regularize the problem is to resort to continuum models with microstructure, such as Cosserat continua (Sulem et al. 2011). Cosserat theory is particularly interesting as it can explicitly take into account the size of the microstructure in a fault gouge. Basically, it introduces 3 degrees of freedom of rotation on top of the 3 translations (Godio et al. 2016). The original work of (Mühlhaus & Vardoulakis 1987) is extended to 3D, and thermo-hydro-mechanical couplings are added to the model to study the fault system in the crustal part of the lithosphere. The system of equations is

  6. Fault Isolation for Shipboard Decision Support

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2010-01-01

    Fault detection and fault isolation for in-service decision support systems for marine surface vehicles will be presented in this paper. The stochastic wave elevation and the associated ship responses are modeled in the frequency domain. The paper takes as an example fault isolation of a containe... ... to the quality of decisions given to navigators....

  7. Modeling and Fault Diagnosis of Interturn Short Circuit for Five-Phase Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jian-wei Yang

    2015-01-01

    Full Text Available Taking advantage of their high reliability, multiphase permanent magnet synchronous motors (PMSMs), such as five-phase PMSM and six-phase PMSM, are widely used in fault-tolerant control applications. And one of the important fault-tolerant control problems is fault diagnosis. In most existing literature, the fault diagnosis problem focuses on the three-phase PMSM. In this paper, in contrast to most existing fault diagnosis approaches, a fault diagnosis method for the interturn short circuit (ITSC) fault of a five-phase PMSM based on the trust region algorithm is presented. This paper has two contributions. (1) Analyzing the physical parameters of the motor, such as resistances and inductances, a novel mathematical model for the ITSC fault of a five-phase PMSM is established. (2) Introducing an objective function related to the interturn short circuit ratio, the fault parameter identification problem is reformulated as an extremum seeking problem. A trust region algorithm based parameter estimation method is proposed for tracking the actual interturn short circuit ratio. The simulation and experimental results have validated the effectiveness of the proposed parameter estimation method.

  8. A Ship Propulsion System Model for Fault-tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    This report presents a propulsion system model for a low speed marine vehicle, which can be used as a test benchmark for Fault-Tolerant Control purposes. The benchmark serves the purpose of offering realistic and challenging problems relevant in both FDI and (autonomous) supervisory control area...

  9. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference included reliability, availability, safety and security issues in software and hardware systems as well. The sessions organized for the conference, which was completed by an industrial presentation, were: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  10. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    Science.gov (United States)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
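The finite-horizon dynamic program over a Markov fault model can be sketched in a few lines of backward induction. The two-state (healthy/failed), two-action (aggressive/derated) transition matrices and costs below are invented for illustration; they stand in for the paper's dice-throw and random-walk processes:

```python
import numpy as np

def finite_horizon_dp(P, cost, horizon):
    """Backward induction over a finite horizon.
    P[a]    : (n, n) transition matrix of the Markov fault model under action a
    cost[a] : (n,) immediate cost per state under action a
    Returns the terminal cost-to-go V and a per-stage policy list."""
    n = P[0].shape[0]
    V = np.zeros(n)
    policy = []
    for _ in range(horizon):
        # Q[a, s] = immediate cost + expected future cost under action a
        Q = np.array([cost[a] + P[a] @ V for a in range(len(P))])
        policy.insert(0, Q.argmin(axis=0))   # earliest stage ends up first
        V = Q.min(axis=0)
    return V, policy

# States: 0 = healthy, 1 = failed. Actions: 0 = aggressive, 1 = derated.
P = [np.array([[0.70, 0.30], [0.0, 1.0]]),   # aggressive: 30% failure chance
     np.array([[0.95, 0.05], [0.0, 1.0]])]   # derated: 5% failure chance
cost = [np.array([0.0, 10.0]),               # aggressive is free while healthy
        np.array([1.0, 10.0])]               # derating costs 1 per step
V, policy = finite_horizon_dp(P, cost, horizon=3)
```

The resulting policy shows the trade-off the paper studies: with future stages at stake the optimizer pays the derating cost to protect the component, but at the final stage, with no future to protect, it runs aggressively.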

  11. Comparing Two Different Approaches to the Modeling of the Common Cause Failures in Fault Trees

    International Nuclear Information System (INIS)

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2002-01-01

    The potential for common cause failures in systems that perform critical functions has been recognized as a very important contributor to the risk associated with operation of nuclear power plants. Consequently, modeling of common cause failures (CCF) in fault trees has become one of the essential elements in any probabilistic safety assessment (PSA). Detailed and realistic representation of CCF potential in a fault tree structure is sometimes a very challenging task. This is especially so in cases where a common cause group involves more than two components. During the last ten years the difficulties associated with this kind of modeling have been overcome to some degree by the development of integral PSA tools with high capabilities. Some of them allow for the definition of CCF groups and their automated expansion in the process of Boolean resolution and generation of minimal cutsets. On the other hand, in PSA models developed and run by more traditional tools, CCF potential had to be modeled in the fault trees explicitly. With explicit CCF modeling, fault trees can grow very large, especially in cases when they involve CCF groups with 3 or more members, which can become an issue for the management of fault trees and basic events with traditional non-integral PSA models. For these reasons various simplifications had to be made. Speaking in terms of an overall PSA model, there are also some other issues that need to be considered, such as maintainability and accessibility of the model. In this paper a comparison is made between the two approaches to CCF modeling. The analysis is based on a full-scope Level 1 PSA model for internal initiating events that had originally been developed with a traditional PSA tool and later transferred to a new-generation PSA tool with automated CCF modeling capabilities. Related aspects and issues mentioned above are discussed in the paper. (author)
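Why CCF modeling matters numerically can be made concrete with the simplest parametric CCF model, the beta factor, in which a fraction β of each component's failure probability is assumed to fail the whole group at once. The numbers below are purely illustrative, and real PSA tools use richer parameterizations (alpha factor, MGL) for groups of 3 or more:

```python
def group_failure_prob(q_total, beta, n=3):
    """Beta-factor CCF model: each component's failure probability
    q_total splits into an independent part (1 - beta) * q_total and a
    common cause part beta * q_total that takes down all n components
    together. Returns P(all n components fail), rare-event approximation."""
    q_ind = (1.0 - beta) * q_total
    q_ccf = beta * q_total
    return q_ind ** n + q_ccf

p_no_ccf = group_failure_prob(1e-3, beta=0.0)   # pure independence
p_ccf = group_failure_prob(1e-3, beta=0.1)      # 10% common cause fraction
```

Even a modest β raises the group failure probability by orders of magnitude over the independent case, which is why omitting or oversimplifying CCF terms distorts the whole PSA.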

  12. Hydrogeological measurements and modelling of the Down Ampney Fault Research site

    International Nuclear Information System (INIS)

    Brightman, M.A.; Sen, M.A.; Abbott, M.A.W.

    1991-01-01

    The British Geological Survey, in cooperation with ISMES of Italy, is carrying out a research programme into the properties of faults cutting clay formations. The programme has two major aims; firstly, to develop geophysical techniques to locate and measure the geophysical properties of a fault in clay; secondly, to measure the hydrogeological properties of the fault and its effect on the groundwater flow pattern through a sequence of clays and aquifers. Analysis of pulse tests performed in the clays at the Down Ampney Research site gave values of hydraulic conductivity ranging from 5 × 10⁻¹² to 2 × 10⁻⁸ m s⁻¹. Numerical modelling of the effects of groundwater abstraction from nearby wells on the site was performed using the finite element code FEMWATER. The results are discussed. (Author)

  13. Modeling the effect of preexisting joints on normal fault geometries using a brittle and cohesive material

    Science.gov (United States)

    Kettermann, M.; van Gent, H. W.; Urai, J. L.

    2012-04-01

    Brittle rocks, such as for example those hosting many carbonate or sandstone reservoirs, are often affected by different kinds of fractures that influence each other. Understanding the effects of these interactions on fault geometries and the formation of cavities and potential fluid pathways might be useful for reservoir quality prediction and production. Analogue modeling has proven to be a useful tool to study faulting processes, although the materials usually used provide neither cohesion nor tensile strength, which are essential to create open fractures. Therefore, very fine-grained, cohesive, hemihydrate powder was used for our experiments. The mechanical properties of the material scale well to natural prototypes. Due to the fine grain size, structures are preserved in great detail. The deformation box used allows the formation of a half-graben and has initial dimensions of 30 cm width, 28 cm length and 20 cm height. The maximum dip-slip along the 60° dipping predefined basement fault is 4.5 cm and was fully used in all experiments. To set up open joints prior to faulting, sheets of paper were placed vertically within the box to a depth of about 5 cm from the top. The powder was then sieved into the box, embedding the paper almost entirely. Finally, strings were used to remove the paper carefully, leaving open voids. Using this method allows the creation of cohesionless open joints while ensuring a minimum impact on the sensitive surrounding material. The presented series of experiments aims to investigate the effect of different angles between the strike of a rigid basement fault and a distinct joint set. All experiments were performed with a joint spacing of 2.5 cm, and the fault-joint angles incrementally covered 0°, 4°, 8°, 12°, 16°, 20° and 25°. During the deformation, time-lapse photography from the top and side captured every structural change and provided data for post-processing analysis using particle imaging velocimetry (PIV). Additionally

  14. A stacking-fault based microscopic model for platelets in diamond

    Science.gov (United States)

    Antonelli, Alex; Nunes, Ricardo

    2005-03-01

    We propose a new microscopic model for the {001} planar defects in diamond commonly called platelets. This model is based on the formation of a metastable stacking fault, which can occur because of the ability of carbon to stabilize in different bonding configurations. In our model the core of the planar defect is basically a double layer of three-fold coordinated sp² carbon atoms embedded in the common sp³ diamond structure. The properties of the model were determined using ab initio total energy calculations. All significant experimental signatures attributed to the platelets, namely, the lattice displacement along the [001] direction, the asymmetry between the [110] and [11̄0] directions, the infrared absorption peak B′, and broad luminescence lines that indicate the introduction of levels in the band gap, are naturally accounted for in our model. The model is also very appealing from the point of view of kinetics, since naturally occurring shearing processes will lead to the formation of the metastable fault. Authors acknowledge financial support from the Brazilian agencies FAPESP, CNPq, FAEP-UNICAMP, FAPEMIG, and Instituto do Milênio em Nanociências-MCT

  15. Fault diagnostics in power transformer model winding for different alpha values

    Directory of Open Access Journals (Sweden)

    G.H. Kusumadevi

    2015-09-01

    Full Text Available Transient overvoltages appearing at the line terminal of power transformer HV windings can cause failure of the winding insulation. The failure can be from winding to ground or between turns or sections of the winding. In most cases, failure from winding to ground can be detected by changes in the wave shape of the surge voltage appearing at the line terminal. However, detection of insulation failure between turns may be difficult due to the intricacies involved in identification of the faults. In this paper, simulation investigations carried out on a power transformer model winding for identifying faults between turns of the winding are reported. The power transformer HV winding has been represented by 8 sections, 16 sections and 24 sections. The neutral current waveform has been analyzed for the same model winding represented by different numbers of sections. The values of α (the square root of the ratio of total ground capacitance to total series capacitance of the winding) considered for the windings are 5, 10 and 20. The standard lightning impulse voltage (1.2/50 μs) wave shape has been considered for the analysis. Computer simulations have been carried out using the software PSPICE version 10.0. Neutral current and frequency response analysis methods have been used for identification of faults within sections of the transformer model winding.
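The role of α can be illustrated with the classical capacitive-ladder result for the initial impulse-voltage distribution along a winding with a grounded neutral, u(x) = sinh(α(1 − x))/sinh(α), where x is the per-unit distance from the line terminal: higher α concentrates the initial stress near the line end. This is the standard textbook formula, not code from the paper:

```python
import numpy as np

def alpha_factor(c_ground_total, c_series_total):
    """alpha = sqrt(total ground capacitance / total series capacitance)."""
    return np.sqrt(c_ground_total / c_series_total)

def initial_distribution(alpha, x):
    """Initial impulse-voltage distribution (per unit of the applied
    surge) along a winding with a grounded neutral; x in [0, 1] is the
    per-unit distance from the line terminal."""
    return np.sinh(alpha * (1.0 - x)) / np.sinh(alpha)
```

For α = 20 the voltage collapses within the first few winding sections, which is why inter-section fault signatures in the neutral current depend so strongly on the α value considered.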

  16. Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems

    Science.gov (United States)

    Ferrell, Bob; Oostdyk, Rebecca

    2010-01-01

    The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.

  17. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems have come to include more varied fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. So, in the reliability evaluation of digital systems, the fault tolerant techniques (FTTs) and their fault coverage must be considered. To consider the effects of FTTs in a digital system, there have been several studies on reliability models of digital systems. Therefore, this research, based on a literature survey, attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and process characteristics and human errors. Sensitivity analysis is performed to ascertain important variables of the fault management coverage and the unavailability based on the proposed model.
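How fault coverage enters such an unavailability model can be shown with the standard rare-event approximation used in digital I and C reliability studies: faults detected by the FTTs are out of service for the repair time, while undetected faults stay latent for half the periodic surveillance test interval on average. The numbers below are invented for illustration:

```python
def unavailability(fault_rate, coverage, mttr, test_interval):
    """Mean unavailability of a monitored component (rare-event approx.).
    fault_rate    : failure rate (per hour)
    coverage      : fraction of faults the fault tolerant techniques detect
    mttr          : mean time to repair a detected fault (hours)
    test_interval : periodic surveillance test interval (hours)
    Detected faults are down for mttr; undetected faults remain latent
    for test_interval / 2 on average."""
    detected = coverage * fault_rate * mttr
    undetected = (1.0 - coverage) * fault_rate * test_interval / 2.0
    return detected + undetected

# Illustrative values: 1e-4/h failure rate, 90% coverage, 8 h repair,
# monthly (720 h) surveillance test.
q = unavailability(fault_rate=1e-4, coverage=0.9, mttr=8.0, test_interval=720.0)
```

Because the latent-fault term scales with the long test interval, unavailability is typically far more sensitive to the coverage parameter than to the repair time — the kind of result a sensitivity analysis like the paper's makes explicit.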

  18. Three-dimensional cellular automata as a model of a seismic fault

    International Nuclear Information System (INIS)

    Gálvez, G; Muñoz, A

    2017-01-01

    The Earth's crust is broken into a series of plates, whose borders are the seismic fault lines, and it is where most earthquakes occur. This plate system can in principle be described by a set of nonlinear coupled equations describing the motion of the plates and their stresses, strains and other characteristics. Such a system of equations is very difficult to solve, and the nonlinear parts lead to chaotic behavior, which is not predictable. In 1989, Bak and Tang presented an earthquake model based on the sand pile cellular automaton. The model, though simple, provides results similar to those observed in actual earthquakes. In this work a cellular automaton in three dimensions is proposed as a better model to approximate a seismic fault. It is noted that the three-dimensional model reproduces properties similar to those observed in real seismicity, especially the Gutenberg-Richter law. (paper)
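The Bak–Tang sandpile generalizes directly to three dimensions: a site topples when it accumulates 2 × (number of dimensions) = 6 grains, shedding one grain to each face neighbour, with grains lost at the open boundary. A minimal sketch (the grid size and drive length are arbitrary choices, not taken from the paper):

```python
import numpy as np

def relax(grid, threshold=6):
    """Topple every unstable site of a 3-D sandpile (>= threshold grains),
    shedding one grain to each of the 6 face neighbours; grains crossing
    the open boundary are lost. Returns the avalanche size (topplings)."""
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    size = 0
    while True:
        unstable = np.argwhere(grid >= threshold)
        if len(unstable) == 0:
            return size
        for x, y, z in unstable:
            grid[x, y, z] -= threshold
            size += 1
            for dx, dy, dz in neighbours:
                nx, ny, nz = x + dx, y + dy, z + dz
                if (0 <= nx < grid.shape[0] and 0 <= ny < grid.shape[1]
                        and 0 <= nz < grid.shape[2]):
                    grid[nx, ny, nz] += 1

def drive(grid, steps, rng):
    """Add one grain at a random site per step, relax, record avalanche sizes."""
    sizes = []
    for _ in range(steps):
        site = tuple(rng.integers(0, s) for s in grid.shape)
        grid[site] += 1
        sizes.append(relax(grid))
    return sizes
```

Driven long enough, the pile self-organizes to a critical state in which avalanche sizes follow a power-law distribution, the analogue of the Gutenberg–Richter magnitude-frequency law the abstract refers to.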

  19. Modeling of flow in faulted and fractured media

    Energy Technology Data Exchange (ETDEWEB)

    Oeian, Erlend

    2004-03-01

    The work on this thesis has been done as part of a collaborative and interdisciplinary effort to improve the understanding of oil recovery mechanisms in fractured reservoirs. The project has been organized as a Strategic University Program (SUP) at the University of Bergen, Norway. The complex geometries of fractured reservoirs, combined with the flow of several fluid phases, lead to difficult mathematical and numerical problems. New techniques are required to decrease the gap between the geological description and numerical modeling capabilities. Thus, the main objective has been to improve the ATHENA flow simulator and utilize it within a fault-modeling context. Specifically, an implicit treatment of the advection-dominated mass transport equations has been implemented within a domain-decomposition-based local grid refinement framework. Since large computational tasks may arise, the implicit formulation has also been included in a parallel version of the code. Within the current limits of the simulator, appropriate upscaling techniques have also been considered. Part I of this thesis includes background material covering the basic geology of fractured porous media, the mathematical model behind the in-house flow simulator ATHENA, and the additions implemented to approach simulation of flow through fractured and faulted porous media. In Part II, a set of research papers stemming from Part I is presented. A brief outline of the thesis follows. In Chapt. 1, important aspects of the geological description and physical parameters of fractured and faulted porous media are presented, and on this basis the scope of the thesis is specified with numerical issues and their consequences in mind. In Chapt. 2, the mathematical model and discretizations in the flow simulator are given, followed by the derivation of the implicit mass transport formulation. In order to be fairly self-contained, most of the papers in Part II also include the mathematical model.

  1. Modelling of Surface Fault Structures Based on Ground Magnetic Survey

    Science.gov (United States)

    Michels, A.; McEnroe, S. A.

    2017-12-01

    The island of Leka hosts the exposure of the Leka Ophiolite Complex (LOC), which contains mantle and crustal rocks and provides a rare opportunity to study the magnetic properties and response of these formations. The LOC comprises five rock units: (1) strongly deformed harzburgite, shifting into an increasingly olivine-rich dunite; (2) ultramafic cumulates with layers of olivine, chromite, clinopyroxene and orthopyroxene; these cumulates are overlain by (3) metagabbros, which are cut by (4) metabasaltic dykes and (5) pillow lavas (Furnes et al. 1988). Over the course of three field seasons, a detailed ground-magnetic survey was made over the island, covering all units of the LOC, and samples were collected from 109 sites for magnetic measurements. NRM, susceptibility, density and hysteresis properties were measured. In total, 66% of the samples have a Q value > 1, suggesting that the magnetic anomalies should include both induced and remanent components in the model. The ophiolite originated from a suprasubduction zone near the coast of Laurentia (497±2 Ma), was obducted onto Laurentia (≈460 Ma) and was then transferred to Baltica during the Caledonide Orogeny (≈430 Ma). The LOC was faulted, deformed and serpentinized during these events. The gabbro and ultramafic rocks are separated by a normal fault, and the dominant magnetic anomaly that crosses the island correlates with this normal fault. A series of smaller-scale faults runs parallel to it, some corresponding to local highs that can be highlighted by a tilt derivative of the magnetic data. These fault boundaries, well delineated by distinct magnetic anomalies in both ground and aeromagnetic survey data, are likely caused by an increased degree of serpentinization of the ultramafic rocks in the fault areas.

  2. Frictional-faulting model for harmonic tremor before Redoubt Volcano eruptions

    Science.gov (United States)

    Dmitrieva, Ksenia; Hotovec-Ellis, Alicia J.; Prejean, Stephanie G.; Dunham, Eric M.

    2013-01-01

    Seismic unrest, indicative of subsurface magma transport and pressure changes within fluid-filled cracks and conduits, often precedes volcanic eruptions. An intriguing form of volcano seismicity is harmonic tremor, that is, sustained vibrations in the range of 0.5–5 Hz. Many source processes can generate harmonic tremor. Harmonic tremor in the 2009 eruption of Redoubt Volcano, Alaska, has been linked to repeating earthquakes of magnitudes around 0.5–1.5 that occur a few kilometres beneath the vent. Before many explosions in that eruption, these small earthquakes occurred in such rapid succession—up to 30 events per second—that distinct seismic wave arrivals blurred into continuous, high-frequency tremor. Tremor abruptly ceased about 30 s before the explosions. Here we introduce a frictional-faulting model to evaluate the credibility and implications of this tremor mechanism. We find that the fault stressing rates rise to values ten orders of magnitude higher than in typical tectonic settings. At that point, inertial effects stabilize fault sliding and the earthquakes cease. Our model of the Redoubt Volcano observations implies that the onset of volcanic explosions is preceded by active deformation and extreme stressing within a localized region of the volcano conduit, at a depth of several kilometres.

  3. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  4. Fuzzy fault diagnosis system of MCFC

    Institute of Scientific and Technical Information of China (English)

    Wang Zhenlei; Qian Feng; Cao Guangyi

    2005-01-01

    A fault diagnosis system for a molten carbonate fuel cell (MCFC) stack is proposed in this paper. It is composed of a fuzzy neural network (FNN) and a fault diagnosis element. The FNN can efficiently combine expert knowledge with experimental data and has the ability to approximate any smooth system; it is used to identify the fault diagnosis model of the MCFC stack. The fuzzy fault decision element diagnoses the state of the MCFC generating system, normal or faulty, and determines the type of fault based on the outputs of the FNN model and the MCFC system. Some simulation experiment results are presented in this paper.

  5. Tsunamigenic earthquakes in the Gulf of Cadiz: fault model and recurrence

    Directory of Open Access Journals (Sweden)

    L. M. Matias

    2013-01-01

    The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of large earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat posed by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose a generation model for destructive tsunamis to be applied in tsunami warnings. The derived fault model is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012) from thin-sheet neotectonic modelling. Finally, we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.
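A recurrence estimate from a slip rate, of the kind this abstract describes, is to first order a matter of dividing the coseismic slip of a characteristic event (derived from its seismic moment) by the long-term fault slip rate. The magnitude, fault dimensions and slip rate below are illustrative assumptions, not values from the paper.

```python
# First-order earthquake recurrence estimate from a fault slip rate
# (illustrative numbers only; not the paper's sources or rates).

def seismic_moment(mw):
    """Seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def recurrence_years(mw, fault_area_m2, slip_rate_mm_per_yr, rigidity=3.0e10):
    """Average repeat time of a characteristic Mw event on a fault that
    accumulates slip at `slip_rate_mm_per_yr`."""
    coseismic_slip_m = seismic_moment(mw) / (rigidity * fault_area_m2)
    return coseismic_slip_m / (slip_rate_mm_per_yr * 1e-3)

# Example: an Mw 8.5 event on a 200 km x 50 km thrust slipping at 4 mm/yr.
area = 200e3 * 50e3
print(f"slip per event: {seismic_moment(8.5) / (3.0e10 * area):.1f} m")
print(f"recurrence: {recurrence_years(8.5, area, 4.0):.0f} yr")
```

Such single-fault estimates are only a starting point; a scenario database like the one described would combine many fault geometries and slip-rate models.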

  6. Analogue Modeling of Oblique Convergent Strike-Slip Faulting and Application to The Seram Island, Eastern Indonesia

    Directory of Open Access Journals (Sweden)

    Benyamin Sapiie

    2014-12-01

    DOI:10.17014/ijog.v1i3.189 Sandbox experiments are a type of analogue modeling in the geological sciences whose main purpose is to simulate the deformation style and structural evolution of sedimentary basins. Sandbox modeling is an effective way to physically model and evaluate the complex deformation of sedimentary rocks. The main purpose of this paper is to evaluate the structural geometry and deformation history of oblique convergent deformation using an integrated analogue sandbox modeling technique applied to the deformation of the Seram Fold-Thrust-Belt (SFTB) in the Seram Island, Eastern Indonesia. Oblique convergent strike-slip deformation notoriously generates areas with complex structural geometry and patterns, resulting from the role of various local parameters that control stress distributions. Therefore, a special technique is needed to understand and solve such problems, in particular to relate 3D fault geometry to its evolution. The results of four modeling settings (Cases 1 to 4) indicated that two modeling variables clearly affected the sandbox modeling results: lithological variation (mainly the stratigraphy of Seram Island) and pre-existing basement fault geometry (basement configuration). Lithological variation mainly affected the total number of faults developed. On the other hand, pre-existing basement fault geometry strongly influenced the end results, particularly fault style and pattern, as demonstrated in the Case 4 modeling. In addition, this study concluded that deformation in the Seram Island is best described by an oblique convergent strike-slip (transpressional) stress system.

  7. Constellations of Next Generation Gravity Missions: Simulations regarding optimal orbits and mitigation of aliasing errors

    Science.gov (United States)

    Hauk, M.; Pail, R.; Gruber, T.; Purkhauser, A.

    2017-12-01

    The CHAMP and GRACE missions have demonstrated the tremendous potential for observing mass changes in the Earth system from space. In order to fulfil future user needs, monitoring of mass distribution and mass transport with higher spatial and temporal resolution is required. This can be achieved by a Bender-type Next Generation Gravity Mission (NGGM) consisting of a constellation of satellite pairs flying in (near-)polar and inclined orbits, respectively. For these satellite pairs, the observation concept of the GRACE Follow-on mission is adopted: a laser-based low-low satellite-to-satellite tracking (ll-SST) system with more precise accelerometers and state-of-the-art star trackers. By choosing optimal orbit constellations for these satellite pairs, high-frequency mass variations become observable, and temporal aliasing errors from under-sampling will no longer be the limiting factor. As part of the European Space Agency (ESA) study "ADDCON" (ADDitional CONstellation and Scientific Analysis Studies of the Next Generation Gravity Mission), a variety of mission design parameters for such constellations are investigated by full numerical simulations. These simulations aim at quantifying the impact of several orbit design choices and at mitigating aliasing errors in the gravity field retrieval by co-parametrization for various constellations of Bender-type NGGMs. Choices of orbit design parameters such as altitude profiles during the mission lifetime, length of the retrieval period, value of sub-cycles, and prograde versus retrograde orbits are investigated as well. Results of these simulations are presented and optimal constellations for NGGMs are identified. Finally, a short outlook is given towards new geophysical applications such as a near-real-time service for hydrology.

  8. Fault Detection for Automotive Shock Absorber

    Science.gov (United States)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the nonlinear dynamics and the strong influence of disturbances such as the road profile. The first obstacle is modeling the fault, which has been shown to be of a multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.

  9. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    Science.gov (United States)

    Elbanna, A. E.

    2015-12-01

    The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in models, together with the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the structure of the seismogenic zone. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the sub-meter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art: (1) self-consistent rate-dependent plasticity with a physically motivated set of internal variables, (2) non-locality that alleviates the mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  10. V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant

    International Nuclear Information System (INIS)

    Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook

    2013-01-01

    Highlights: ► A software fault estimation model based on Bayesian Nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► Faults insertion and elimination process was modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability in cases of safety–critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety–critical software. In addition software Verification and Validation (V and V) play an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian Nets to estimate the remaining faults for safety–critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes during the whole development phases, the proposed method systematically estimates the expected number of remaining faults.

  11. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    Science.gov (United States)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  12. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures are determined by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity depending on its history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  13. Low-frequency scaling applied to stochastic finite-fault modeling

    Science.gov (United States)

    Crane, Stephen; Motazedian, Dariush

    2014-01-01

    Stochastic finite-fault modeling is an important tool for simulating moderate to large earthquakes. It has proven useful in applications that require a reliable estimation of ground motions, mostly in the spectral frequency range of 1 to 10 Hz, the range of most interest to engineers. However, since there can be little resemblance between the low-frequency spectra of large and small earthquakes, this portion can be difficult to simulate using stochastic finite-fault techniques. This paper introduces two different methods for scaling low-frequency spectra in stochastic finite-fault modeling. The first multiplies the subfault source spectrum by an empirical function with three parameters: the level of scaling and the start and end frequencies of the taper. This function adjusts the earthquake spectrum only between the desired frequencies, conserving seismic moment in the simulated spectra. The second method is an empirical low-frequency coefficient added to the subfault corner frequency. This new parameter changes the ratio between high and low frequencies; because the entire earthquake spectrum is adjusted in each simulation, seismic moment may not be conserved for a simulated earthquake. These low-frequency scaling methods were used to reproduce recorded earthquake spectra from several earthquakes in the Pacific Earthquake Engineering Research Center (PEER) Next Generation Attenuation Models (NGA) database. Two methods were used to determine the best-fit stochastic parameters for each earthquake: a general residual analysis and an earthquake-specific residual analysis. Both resulted in comparable values for stress drop and the low-frequency scaling parameters; however, the earthquake-specific residual analysis obtained a more accurate distribution of the averaged residuals.
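The first scaling method, multiplying the subfault source spectrum by a three-parameter empirical function, can be sketched as follows. The log-linear taper shape and the omega-squared source spectrum used for illustration are assumptions; the abstract does not specify the exact functional forms.

```python
import math

# Sketch of a three-parameter low-frequency scaling function: `level`
# below f_start, 1.0 above f_end, log-linear in between, so the spectrum
# is modified only within the chosen low-frequency band.

def lf_scale(f, level, f_start, f_end):
    """Multiplicative scaling applied to the source spectrum at frequency f."""
    if f <= f_start:
        return level
    if f >= f_end:
        return 1.0
    t = math.log(f / f_start) / math.log(f_end / f_start)
    return level * (1.0 - t) + 1.0 * t    # interpolate from `level` down to 1

def source_spectrum(f, moment, f_corner):
    """Illustrative omega-squared (Brune-type) source spectrum shape."""
    return moment * (2 * math.pi * f) ** 2 / (1 + (f / f_corner) ** 2)

freqs = [0.05, 0.1, 0.5, 1.0, 5.0]
scaled = [source_spectrum(f, 1e17, 0.2) * lf_scale(f, 2.0, 0.1, 1.0)
          for f in freqs]
```

Because `lf_scale` returns exactly 1.0 above `f_end`, the engineering band of 1 to 10 Hz is left untouched while the low-frequency level is raised or lowered.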

  14. Ring-fault activity at subsiding calderas studied from analogue experiments and numerical modeling

    Science.gov (United States)

    Liu, Y. K.; Ruch, J.; Vasyura-Bathke, H.; Jonsson, S.

    2017-12-01

    Several subsiding calderas, such as those in the Galápagos archipelago and the Axial Seamount in the Pacific Ocean, have shown a complex but similar ground deformation pattern, composed of a broad deflation signal affecting the entire volcanic edifice and a localized subsidence signal focused within the caldera. However, it is still debated how deep processes at subsiding calderas, including magmatic pressure changes, source locations and ring-faulting, relate to this observed surface deformation pattern. We combine analogue sandbox experiments with numerical modeling to study the processes involved from the initial subsidence to the later collapse of calderas. The sandbox apparatus is composed of a motor-driven subsiding half-piston connected to the bottom of a glass box. During the experiments, observations are made with five digital cameras photographing from various perspectives. We use Photoscan, a photogrammetry package, and PIVLab, a time-resolved digital image correlation tool, to retrieve time series of digital elevation models and velocity fields from the acquired photographs. This setup allows tracking of the processes acting both at depth and at the surface, and assessment of their relative importance as the subsidence evolves into a collapse. We also use the Boundary Element Method to build a numerical model of the experimental setup, comprising a contracting sill-like source interacting with a ring-fault in an elastic half-space. We then compare the results from these two approaches with examples observed in nature. Our preliminary experimental and numerical results show that at the initial stage of magmatic withdrawal, when the ring-fault is not yet well formed, broad and smooth deflation dominates at the surface. As the withdrawal increases, a narrower subsidence bowl develops, accompanied by the upward propagation of the ring-faulting. This indicates that the broad deflation, affecting the entire volcano edifice, is primarily driven by the contraction of the

  15. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    Science.gov (United States)

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This question is complicated by several issues, one of which is the potential correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models that analyze the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals, and the linear form of stochastic dependence implied by such models, may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data in which exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than that of at-fault vehicle speed on the at-fault injury outcome. Conversely, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability

  16. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model.

    Science.gov (United States)

    Daout, S; Barbot, S; Peltzer, G; Doin, M-P; Liu, Z; Jolivet, R

    2016-11-16

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence, with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, and 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology for estimating the tectonic loading and seismic potential of buried fault networks.

  17. Application of damping mechanism model and stacking fault probability in Fe-Mn alloy

    International Nuclear Information System (INIS)

    Huang, S.K.; Wen, Y.H.; Li, N.; Teng, J.; Ding, S.; Xu, Y.G.

    2008-01-01

    In this paper, the damping mechanism model of Fe-Mn alloy is analyzed using dislocation theory. Moreover, the effect of stacking fault probability, an important parameter in Fe-Mn-based alloys, on the damping capacity of Fe-19.35Mn alloy after deep-cooling or tensile deformation is also studied. The damping capacity was measured using a reversal torsion pendulum. The stacking fault probability of γ-austenite and ε-martensite was determined by means of X-ray diffraction (XRD) profile analysis. The microstructure was observed using a scanning electron microscope (SEM). The results indicated that as the strain amplitude increases above a critical value, the damping capacity of Fe-19.35Mn alloy increases rapidly, which can be explained by the breakaway model of Shockley partial dislocations. Deep-cooling and suitable tensile deformation can improve the damping capacity owing to the increase in the stacking fault probability of Fe-19.35Mn alloy.

  18. Stress near geometrically complex strike-slip faults - Application to the San Andreas fault at Cajon Pass, southern California

    Science.gov (United States)

    Saucier, Francois; Humphreys, Eugene; Weldon, Ray, II

    1992-01-01

    A model is presented to rationalize the state of stress near a geometrically complex major strike-slip fault. Slip on such a fault creates residual stresses that, after several slip events, can dominate the stress field near the fault. The model is applied to the San Andreas fault near Cajon Pass. The results are consistent with the geological features, the seismicity, the existence of left-lateral stress on the Cleghorn fault, and the in situ stress orientation in the scientific well, found to be sinistral when resolved on a plane parallel to the San Andreas fault. It is suggested that the creation of residual stresses caused by slip on a wiggly San Andreas fault is the dominant process there.

  19. A Method to Quantify Plant Availability and Initiating Event Frequency Using a Large Event Tree, Small Fault Tree Model

    International Nuclear Information System (INIS)

    Kee, Ernest J.; Sun, Alice; Rodgers, Shawn; Popova, ElmiraV; Nelson, Paul; Moiseytseva, Vera; Wang, Eric

    2006-01-01

    South Texas Project uses a large fault tree to produce the scenarios (minimal cut sets) used in quantifying plant availability and event frequency predictions. On the other hand, the South Texas Project probabilistic risk assessment model uses a large event tree, small fault tree model for quantifying core damage and radioactive release frequency predictions. The South Texas Project is converting its availability and event frequency model to a large event tree, small fault tree model in an effort to streamline application support and to provide additional detail in results. The availability and event frequency model, as well as the applications it supports (maintenance and operational risk management, system engineering health assessment, preventive maintenance optimization, and RIAM), are briefly described. A methodology for performing availability modeling in a large event tree, small fault tree framework is described in detail, as is how the methodology can be used to support South Texas Project maintenance and operations risk management. Differences from other fault tree methods and other recently proposed methods are discussed in detail. While the methods described are novel to the South Texas Project Risk Management program and to large event tree, small fault tree models, the concepts in the area of application support and availability modeling have wider applicability to the industry. (authors)

  20. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are helpful not only for finding the root cause when a plant-wide disturbance occurs, but also for revealing how an abnormal event evolves as it propagates through the plant. This paper addresses the information-flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
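The TDMI computation at the heart of this strategy can be sketched as a plug-in histogram estimate of mutual information evaluated over a range of lags, with the argmax taken as the delay estimate. This is a minimal stand-in for the paper's estimator, demonstrated on synthetic data (the bin count, noise level, and delay are assumptions for illustration):

```python
import math
import random
from collections import Counter

def discretize(v, bins=8):
    # Equal-width binning between the sample min and max.
    lo, hi = min(v), max(v)
    w = (hi - lo) / bins or 1.0
    return [min(int((x - lo) / w), bins - 1) for x in v]

def mutual_info(x, y, bins=8):
    # Plug-in (histogram) estimate of I(X;Y) in nats.
    xd, yd = discretize(x, bins), discretize(y, bins)
    n = len(xd)
    pxy, px, py = Counter(zip(xd, yd)), Counter(xd), Counter(yd)
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def tdmi(x, y, max_lag):
    # I(x_t ; y_{t+lag}) for lag = 0..max_lag; the argmax estimates the delay.
    return [mutual_info(x[:len(x) - lag] if lag else x, y[lag:])
            for lag in range(max_lag + 1)]

# Synthetic pair: y is x delayed by 3 samples plus measurement noise.
random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
y = [0.0] * 3 + [v + random.gauss(0.0, 0.1) for v in x[:-3]]
scores = tdmi(x, y, max_lag=6)
print("estimated delay:", scores.index(max(scores)))
```

In the TD-SDG construction, such pairwise delay estimates supply both the arc direction (from cause to effect) and the arc weight (the propagation time).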

  1. Heterogeneous slip and rupture models of the San Andreas fault zone based upon three-dimensional earthquake tomography

    Energy Technology Data Exchange (ETDEWEB)

    Foxall, William [Univ. of California, Berkeley, CA (United States)

    1992-11-01

    Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the Central Creeping Section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data, in order to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.

  2. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  3. Empirical Relationships Among Magnitude and Surface Rupture Characteristics of Strike-Slip Faults: Effect of Fault (System) Geometry and Observation Location, Derived From Numerical Modeling

    Science.gov (United States)

    Zielke, O.; Arrowsmith, J.

    2007-12-01

    In order to determine the magnitude of pre-historic earthquakes, surface rupture length and average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will produce surface features of correlated size. The well-known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations, and they are based on only a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This, however, cannot be done from natural seismicity alone because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992), to create synthetic seismic records for a given fault or fault system under either slip or stress boundary conditions. Our model features (A) the introduction of an upper and lower aseismic zone, (B) a simple Coulomb friction law, (C) bulk parameters simulating fault heterogeneity, and (D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well-behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics, well within the error of the results of Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magnitude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics, by incorporating the
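For reference, the first-order relationships being improved upon are simple log-linear regressions. A sketch using the commonly quoted Wells and Coppersmith (1994) strike-slip fit of moment magnitude against surface rupture length; treat the exact coefficient values as assumptions to be checked against the original paper:

```python
import math

def magnitude_from_srl(srl_km, a=5.16, b=1.12):
    """M = a + b * log10(SRL), SRL in kilometres.

    Defaults are the commonly quoted Wells & Coppersmith (1994)
    strike-slip coefficients; the published fit carries scatter of
    roughly 0.3 magnitude units, which is the spread the abstract
    refers to."""
    return a + b * math.log10(srl_km)

# A 100 km strike-slip surface rupture maps to roughly M 7.4.
print(round(magnitude_from_srl(100.0), 2))
```

The thesis of the abstract is that a single (a, b) pair ignores where along the rupture the observation was made, which is one source of that scatter.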

  4. Correlation of data on strain accumulation adjacent to the San Andreas Fault with available models

    Science.gov (United States)

    Turcotte, Donald L.

    1986-01-01

    Theoretical and numerical studies of deformation on strike-slip faults were performed and the results applied to geodetic observations made in the vicinity of the San Andreas Fault in California. The initial efforts were devoted to an extensive series of finite element calculations of the deformation associated with cyclic displacements on a strike-slip fault. Measurements of strain accumulation adjacent to the San Andreas Fault indicate that the zone of strain accumulation extends only a few tens of kilometers away from the fault. There is a concern about the tendency to make geodetic observations along the line to the source; this technique has serious problems for strike-slip faults since the vector velocity is also along the fault. The use of a series of stations lying perpendicular to the fault, whose positions are measured relative to a reference station, is suggested to correct the problem. The complexity of faulting adjacent to the San Andreas Fault indicated that the homogeneous elastic and viscoelastic approach to deformation had serious limitations. These limitations led to the proposal of an approach that treats a fault as a distribution of asperities and barriers on all scales; an earthquake on a fault is then treated as the failure of a fractal tree. Work continued on the development of a fractal-based model for deformation in the western United States. In order to better understand the distribution of seismicity on the San Andreas Fault system, a fractal analog was developed. The fractal concept also provides a means of testing whether clustering in time or space is a scale-invariant process.

  5. Fault Detection for Shipboard Monitoring – Volterra Kernel and Hammerstein Model Approaches

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2009-01-01

    In this paper nonlinear fault detection for in-service monitoring and decision support systems for ships will be presented. The ship is described as a nonlinear system, and the stochastic wave elevation and the associated ship responses are conveniently modelled in the frequency domain. The transformation from time domain to frequency domain has been conducted by use of Volterra theory. The paper takes as an example fault detection of a containership on which a decision support system has been installed.

  6. Analysis of the fault geometry of a Cenozoic salt-related fault close to the D-1 well, Danish North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Roenoe Clausen, O.; Petersen, K.; Korstgaard, A.

    1995-12-31

    A normal detaching fault in the Norwegian-Danish Basin around the D-1 well (the D-1 fault) has been mapped using seismic sections. The fault has been analysed in detail by constructing backstripped-decompacted sections across the fault, contoured displacement diagrams along the fault, and vertical displacement maps. The results show that the listric D-1 fault follows the displacement patterns for blind normal faults. Deviations from the ideal displacement pattern are suggested to be caused by salt movements, which are the main driving mechanism for the faulting. Zechstein salt moves primarily from the hanging wall to the footwall, and this movement is superposed by later minor lateral flow beneath the footwall. Back-stripping of depth-converted and decompacted sections yields an estimate of the salt surface and the shape of the fault through time. This procedure then enables a simple modelling of the hanging wall deformation using a Chevron model with hanging wall collapse along dipping surfaces. The modelling indicates that the fault followed the salt surface until the Middle Miocene, after which the offset on the fault may also have been accommodated along the Top Chalk surface. (au) 16 refs.

  7. Fault Tolerant Wind Farm Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2013-01-01

    In recent years the wind turbine industry has focused on optimizing the cost of energy. One of the important factors in this is to increase the reliability of the wind turbines. Advanced fault detection, isolation and accommodation are important tools in this process. Clearly most faults are dealt...... scenarios. This benchmark model is used in an international competition dealing with Wind Farm fault detection and isolation and fault tolerant control....

  8. Integrated 3D Reservoir/Fault Property Modelling Aided Well Planning and Improved Hydrocarbon Recovery in a Niger Delta Field

    International Nuclear Information System (INIS)

    Onyeagoro, U. O.; Ebong, U. E.; Nworie, E. A.

    2002-01-01

    The large and varied portfolio of assets managed by oil companies requires quick decision-making and the deployment of best-in-class technologies in asset management. Timely decision-making and the application of the best technologies in reservoir management are, however, sometimes in conflict due to the large time requirements of the latter. Optimizing the location of development wells is critical to account for variable fluid contact movements and pressure interference effects between wells, which can be significant because of the high permeability (Darcy range) of Niger Delta reservoirs. With relatively high drilling costs, the optimization of well locations necessitates a realistic static and dynamic 3D reservoir description, especially in the recovery of remaining oil and in oil-rim type reservoirs. A detailed 3D reservoir model with fault properties was constructed for a Niger Delta producing field. This involved the integration of high-quality 3D seismic, core, petrophysics, reservoir engineering, production, and structural geology data to construct a realistic 3D reservoir/fault property model for the field. The key parameters considered during the construction of the internal architecture of the model were the vertical and horizontal reservoir heterogeneities, which control the fluid flow within the reservoir. In the production realm, the fault thickness and fault permeabilities are factors that control the impedance of fluid flow across the fault, i.e. the fault transmissibility. These key internal and external reservoir/structural variables were explicitly modeled in a 3D modeling software to produce different realizations and manage the uncertainties. The resulting 3D reservoir/fault property model was upscaled for simulation purposes such that grid blocks along the fault planes have realistic transmissibility multipliers of 0 to 1 attached to them. The model was also used in the well planner to optimize the positioning of a high-angle deviated well that penetrated

  9. Model-based fault detection and isolation of a PWR nuclear power plant using neural networks

    International Nuclear Information System (INIS)

    Far, R.R.; Davilu, H.; Lucas, C.

    2008-01-01

    The proper and timely fault detection and isolation of industrial plants is of paramount importance to guarantee their safe and reliable operation. This paper presents the application of a neural-network-based scheme for fault detection and isolation of the pressurizer of a PWR nuclear power plant. The scheme consists of two components: residual generation and fault isolation. The first component generates residuals via the discrepancy between measurements coming from the plant and a nominal model. The neural network estimator is trained with healthy data collected from a full-scale simulator. In the second component, detection thresholds are used to encode the residuals as bipolar vectors which represent fault patterns. These patterns are stored in an associative memory based on a recurrent neural network. The proposed fault diagnosis tool is evaluated on-line via a full-scale simulator and is able to detect and isolate the main faults appearing in the pressurizer of a PWR. (orig.)
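The two-stage residual-then-pattern-matching scheme can be sketched minimally. The signal names, thresholds, and fault signatures below are hypothetical, and a nearest-pattern (Hamming distance) lookup stands in for the recurrent-network associative memory:

```python
def residuals(measured, estimated):
    # Discrepancy between plant measurements and the nominal-model estimates.
    return [m - e for m, e in zip(measured, estimated)]

def encode_bipolar(res, thresholds):
    # +1 where a residual exceeds its detection threshold, -1 otherwise.
    return [1 if abs(r) > t else -1 for r, t in zip(res, thresholds)]

def isolate(pattern, library):
    # Stored fault pattern closest to the input (most matching entries).
    return max(library,
               key=lambda name: sum(p == q for p, q in zip(pattern, library[name])))

library = {                       # hypothetical pressurizer fault signatures
    "spray valve stuck":  [1, -1, -1],
    "heater failure":     [-1, 1, -1],
    "level sensor drift": [-1, -1, 1],
}
measured  = [2.1, 0.45, 0.03]     # plant measurements (hypothetical units)
estimated = [0.3, 0.50, 0.00]     # nominal-model (neural estimator) outputs
pattern = encode_bipolar(residuals(measured, estimated), [0.5, 0.3, 0.2])
print(isolate(pattern, library))
```

The bipolar (+1/-1) encoding is what makes an associative memory a natural second stage: a noisy pattern still recalls the nearest stored signature.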

  10. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional-integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is also analyzed. On this basis, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful for meeting the requirements of practical short-circuit calculations and for the construction of a relaying protection system for a power grid with penetration of DFIGs.

  11. Fault Detection of Reciprocating Compressors using a Model from Principal Component Analysis of Vibrations

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A D

    2012-01-01

    Traditional vibration monitoring techniques have found it difficult to determine a set of effective diagnostic features due to the high complexity of the vibration signals originating from the many different impact sources and the wide range of practical operating conditions. In this paper Principal Component Analysis (PCA) is used for selecting vibration features and detecting different faults in a reciprocating compressor. Vibration datasets were collected from the compressor under the baseline condition and five common faults: valve leakage, inter-cooler leakage, suction valve leakage, loose drive belt combined with inter-cooler leakage, and loose drive belt combined with suction valve leakage. A model using five PCs has been developed from the baseline datasets, and the presence of faults can be detected by comparing the T² and Q values computed from the features of faulty vibration signals with the corresponding thresholds developed from the baseline data. The Q-statistic procedure produces the better detection, however, as it can separate the five faults completely.
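A minimal sketch of PCA fault detection with T² and Q statistics on synthetic data, assuming NumPy is available. The three features, the single retained PC, and the quantile-based thresholds are illustrative stand-ins for the paper's compressor features and five-PC model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Baseline: three correlated "vibration features" plus small noise.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2.0 * t, -t]) + 0.05 * rng.normal(size=(500, 3))

mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 1                              # retain one PC (the paper's model used five)
P = Vt[:k].T                       # loadings of the retained PCs
lam = S[:k] ** 2 / (len(Z) - 1)    # variance captured by each PC

def t2_q(x):
    z = (x - mu) / sd
    score = z @ P
    t2 = float(score ** 2 @ (1.0 / lam))   # distance inside the PC subspace
    resid = z - score @ P.T                # part the model cannot explain
    return t2, float(resid @ resid)        # (T^2, Q)

# Detection thresholds from baseline quantiles (chi-square-based limits
# are the usual alternative).
T2s, Qs = zip(*(t2_q(x) for x in X))
t2_lim, q_lim = np.quantile(T2s, 0.99), np.quantile(Qs, 0.99)

faulty = np.array([1.0, 2.0, 1.0])   # breaks the learned feature correlation
t2, q = t2_q(faulty)
print("fault flagged by Q:", q > q_lim)
```

This also illustrates why Q can outperform T² here: a fault that violates the correlation structure barely moves the retained scores but inflates the residual.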

  12. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with the GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge, and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  13. Fault Tolerant Control of Wind Turbines

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Kinnaert, Michel

    2013-01-01

    This paper presents a test benchmark model for the evaluation of fault detection and accommodation schemes. This benchmark model deals with the wind turbine on a system level, and it includes sensor, actuator, and system faults, namely faults in the pitch system, the drive train, the generator......, and the converter system. Since it is a system-level model, converter and pitch system models are simplified because these are controlled by internal controllers working at higher frequencies than the system model. The model represents a three-bladed pitch-controlled variable-speed wind turbine with a nominal power...

  14. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault estimation (FE) for systems with model uncertainties, (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  15. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead from the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, and fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  16. A dependability modeling of software under memory faults for digital system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, J. G.; Seong, P. H.

    1997-01-01

    In this work, an analytic approach to the dependability of software in the operational phase is suggested, with special attention to the effects of hardware faults on software behavior: the hardware faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and on graph theory, which represents the software as a graph composed of nodes and arcs. Through proper transformation, the graph can be reduced to a simple two-node graph, and the software reliability is derived from this graph. Using this model, we predict the reliability of an application software in the digital system (ILS) in the nuclear power plant and show the sensitivity of the software reliability to the major physical parameters affecting software failure in the normal operation phase. We also found that the effects of hardware faults on software failure should be considered to predict software dependability accurately in the operational phase, especially for software that is executed frequently. This modeling method is particularly attractive for medium-size programs such as the microprocessor-based nuclear safety logic program. (author)
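The reduction of an execution graph to a two-node (source-to-sink) graph rests on repeated series and parallel combinations of node reliabilities. A minimal sketch of those two reduction rules with hypothetical module reliabilities; this illustrates the graph-reduction idea only, not the authors' specific memory-fault model:

```python
def series(*rs):
    # Nodes executed in sequence: all must succeed.
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    # Redundant branches: the path fails only if every branch fails.
    fail = 1.0
    for r in rs:
        fail *= 1.0 - r
    return 1.0 - fail

# input handler -> duplicated logic module -> output handler
R = series(0.999, parallel(0.95, 0.95), 0.998)
print(round(R, 6))
```

Applying these rules until a single arc remains between entry and exit nodes is exactly the "reduction to a simple two-node graph" described above.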

  17. From Geodetic Imaging of Seismic and Aseismic Fault Slip to Dynamic Modeling of the Seismic Cycle

    Science.gov (United States)

    Avouac, Jean-Philippe

    2015-05-01

    Understanding the partitioning of seismic and aseismic fault slip is central to seismotectonics as it ultimately determines the seismic potential of faults. Thanks to advances in tectonic geodesy, it is now possible to develop kinematic models of the spatiotemporal evolution of slip over the seismic cycle and to determine the budget of seismic and aseismic slip. Studies of subduction zones and continental faults have shown that aseismic creep is common and sometimes prevalent within the seismogenic depth range. Interseismic coupling is generally observed to be spatially heterogeneous, defining locked patches of stress accumulation, to be released in future earthquakes or aseismic transients, surrounded by creeping areas. Clay-rich tectonites, high temperature, and elevated pore-fluid pressure seem to be key factors promoting aseismic creep. The generally logarithmic time evolution of afterslip is a distinctive feature of creeping faults that suggests a logarithmic dependency of fault friction on slip rate, as observed in laboratory friction experiments. Most faults can be considered to be paved with interlaced patches where the friction law is either rate-strengthening, inhibiting seismic rupture propagation, or rate-weakening, allowing for earthquake nucleation. The rate-weakening patches act as asperities on which stress builds up in the interseismic period; they might rupture collectively in a variety of ways. The pattern of interseismic coupling can help constrain the return period of the maximum-magnitude earthquake based on the requirement that seismic and aseismic slip sum to match long-term slip. Dynamic models of the seismic cycle based on this conceptual model can be tuned to reproduce geodetic and seismological observations. The promise and pitfalls of using such models to assess seismic hazard are discussed.

  18. Fault strength in Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    Science.gov (United States)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data points out that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering the fact that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment transfers the motion between the two E-W trending branches of the North Anatolian fault zone: one extending from the Gulf of Izmit towards the Çınarcık basin and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of the fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter, where a fault-normal or fault-parallel maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak-fault model. In the frame of the presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak-fault model.
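The "basic relation" invoked here is the two-dimensional Mohr resolution of the principal stresses onto a plane. A sketch with hypothetical stress magnitudes (compression positive, MPa), showing the resolved shear stress vanish when sigma-1 parallels the fault strike, i.e. the weak-fault geometry:

```python
import math

def resolved_stresses(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal makes angle theta
    with the sigma-1 axis (standard 2-D Mohr relations)."""
    th = math.radians(theta_deg)
    sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2.0 * th)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(2.0 * th)
    return sn, tau

for theta in (60.0, 90.0):   # 90 deg: sigma-1 parallel to the fault strike
    sn, tau = resolved_stresses(100.0, 40.0, theta)
    print(f"theta={theta:.0f}: sigma_n={sn:.1f} MPa, tau={tau:.2f} MPa, "
          f"tau/sigma_n={tau/sn:.3f}")
```

The ratio tau/sigma_n is the apparent friction the fault must sustain, which is the quantity the abstract uses to argue for a weak Prince's Islands segment.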

  19. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    Science.gov (United States)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste-fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, the induced stress analysis in a full space under the plane strain condition is implemented based on Eshelby's theory of inclusions for a homogeneous, isotropic, poroelastic medium. The stress intensity factor concept from linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in the surrounding rocks. To characterize fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to a unit fluid pressure change within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle, using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and the Poisson's ratio of the surrounding rock. Our case study demonstrates that, unlike conventional methodologies, the proposed model focuses on the mechanical behavior of the whole fault. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational implementation.
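The conventional point-wise alternative to the whole-fault criterion above is a simple effective-stress (Coulomb) calculation of the tolerable pressure change. A sketch of that baseline; this is not the paper's stress-intensity-factor criterion, and all numbers are hypothetical (MPa, compression positive):

```python
def critical_pressure_increase(tau, sigma_n, mu, cohesion=0.0):
    """Solve tau = cohesion + mu * (sigma_n - dp) for the pore-pressure
    rise dp that brings the plane to frictional failure (valid when the
    plane is initially stable, i.e. tau < cohesion + mu * sigma_n)."""
    return sigma_n - (tau - cohesion) / mu

dp = critical_pressure_increase(tau=20.0, sigma_n=55.0, mu=0.6)
print(f"critical pressure increase: {dp:.2f} MPa")
```

A point-wise check like this ignores fault length and tip stress concentrations, which is precisely the gap the proposed reactivation factor is designed to close.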

  20. Effect of Pore Pressure on Slip Failure of an Impermeable Fault: A Coupled Micro Hydro-Geomechanical Model

    Science.gov (United States)

    Yang, Z.; Juanes, R.

    2015-12-01

    The geomechanical processes associated with subsurface fluid injection/extraction are of central importance for many industrial operations related to energy and water resources. However, the mechanisms controlling the stability and slip motion of a preexisting geologic fault remain poorly understood and are critical for the assessment of seismic risk. In this work, we develop a coupled hydro-geomechanical model to investigate the effect of injection-induced pressure perturbation on the slip behavior of a sealing fault. The model couples single-phase flow in the pores with the mechanics of the solid phase. Granular packs (see example in Fig. 1a) are numerically generated in which the grains can be either bonded or not, depending on the degree of cementation. A pore network is extracted for each granular pack, with pore body volumes and pore throat conductivities calculated rigorously based on the geometry of the local pore space. The pore fluid pressure is solved via an explicit scheme, taking into account the deformation of the solid matrix. The mechanics part of the model is solved using the discrete element method (DEM). We first test the validity of the model against the classical one-dimensional consolidation problem, for which an analytical solution exists. We then demonstrate the ability of the coupled model to reproduce rock deformation behavior measured in triaxial laboratory tests under the influence of pore pressure. We proceed to study fault stability in the presence of a pressure discontinuity across the impermeable fault, which is implemented as a plane whose intersected pore throats are deactivated, thus obstructing fluid flow (Fig. 1b, c). We focus on the onset of shear failure along preexisting faults. We discuss the fault stability criterion in light of the numerical results obtained from the DEM simulations coupled with pore fluid flow. The implications for how faults should be treated in a large-scale continuum model are also presented.
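
    The one-dimensional consolidation benchmark mentioned has the standard Terzaghi series solution; a sketch, with z measured from the drained boundary and H the drainage path (this is the textbook solution, not the authors' coupled code):

```python
import math

def terzaghi_excess_pressure(z_over_H, Tv, n_terms=100):
    """Normalized excess pore pressure u/u0 at depth z (drainage path H)
    and dimensionless time Tv = c_v * t / H^2, from Terzaghi's 1-D
    consolidation series solution."""
    u = 0.0
    for m in range(n_terms):
        M = math.pi * (2 * m + 1) / 2.0
        u += (2.0 / M) * math.sin(M * z_over_H) * math.exp(-M * M * Tv)
    return u

# The drained boundary (z = 0) stays at zero excess pressure, while at
# the undrained base the excess pressure is still close to 1 at early
# time and decays as Tv grows.
print(terzaghi_excess_pressure(1.0, 0.05))
```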

  1. Faults, fluids and friction : effect of pressure solution and phyllosilicates on fault slip behaviour, with implications for crustal rheology

    NARCIS (Netherlands)

    Bos, B.

    2000-01-01

    In order to model the mechanics of motion and earthquake generation on large crustal fault zones, a quantitative description of the rheology of fault zones is a prerequisite. In the past decades, crustal strength has been modeled using a brittle or frictional failure law to represent fault slip

  2. Simulation of Co-Seismic Off-Fault Stress Effects: Influence of Fault Roughness and Pore Pressure Coupling

    Science.gov (United States)

    Fälth, B.; Lund, B.; Hökmark, H.

    2017-12-01

    Aiming at improved safety assessment of geological nuclear waste repositories, we use dynamic 3D earthquake simulations to estimate the potential for co-seismic off-fault distributed fracture slip. Our model comprises a 12.5 x 8.5 km strike-slip fault embedded in a full space continuum where we apply a homogeneous initial stress field. In the reference case (Case 1) the fault is planar and oriented optimally for slip, given the assumed stress field. To examine the potential impact of fault roughness, we also study cases where the fault surface has undulations with self-similar fractal properties. In both the planar and the undulated cases the fault has homogeneous frictional properties. In a set of ten rough fault models (Case 2), the fault friction is equal to that of Case 1, meaning that these models generate lower seismic moments than Case 1. In another set of ten rough fault models (Case 3), the fault dynamic friction is adjusted such that seismic moments on par with that of Case 1 are generated. For the propagation of the earthquake rupture we adopt the linear slip-weakening law and obtain Mw 6.4 in Case 1 and Case 3, and Mw 6.3 in Case 2 (35 % lower moment than Case 1). During rupture we monitor the off-fault stress evolution along the fault plane at 250 m distance and calculate the corresponding evolution of the Coulomb Failure Stress (CFS) on optimally oriented hypothetical fracture planes. For the stress-pore pressure coupling, we assume Skempton's coefficient B = 0.5 as a base case value, but also examine the sensitivity to variations of B. We observe the following: (I) The CFS values, and thus the potential for fracture slip, tend to increase with the distance from the hypocenter. This is in accordance with results by other authors. (II) The highest CFS values are generated by quasi-static stress concentrations around fault edges and around large scale fault bends, where we obtain values of the order of 10 MPa. 
(III) Locally, fault roughness may have a
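
    The Coulomb Failure Stress change with Skempton-type pore-pressure coupling can be sketched as below. This is a common formulation (compression positive, undrained response dp = B * mean-stress change), not necessarily the authors' exact sign convention or implementation:

```python
def delta_cfs(d_tau, d_sigma_n, d_sigma_mean, mu=0.6, skempton_B=0.5):
    """Change in Coulomb failure stress on a receiver plane,
    dCFS = d_tau - mu * (d_sigma_n - dp), with the undrained
    pore-pressure response dp = B * d_sigma_mean (compression positive).
    Positive dCFS brings the plane closer to failure."""
    dp = skempton_B * d_sigma_mean
    return d_tau - mu * (d_sigma_n - dp)

# Illustrative values (MPa): a 2 MPa shear-stress increase with 1 MPa of
# extra fault-normal compression and a 0.8 MPa mean-stress increase.
print(delta_cfs(2.0, 1.0, 0.8))  # positive -> promotes slip
```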

  3. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    Science.gov (United States)

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window lengths. However, most real systems are nonlinear, which leaves the linear PCA method unable to handle nonlinearity to any great extent. Thus, in this paper, first, we apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR via an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind a KPCA-based EWMA-GLRT fault detection algorithm is to
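
    The exponential-weighting idea can be illustrated with the EWMA recursion itself; the full GLRT statistic is more involved, so this shows only the weighting step applied to a residual sequence:

```python
def ewma_statistic(residuals, lam=0.2):
    """Exponentially weighted moving average of a residual sequence,
    z_t = lam * r_t + (1 - lam) * z_{t-1}, which gives recent samples
    geometrically more weight than older ones."""
    z = 0.0
    out = []
    for r in residuals:
        z = lam * r + (1.0 - lam) * z
        out.append(z)
    return out

# A step fault at sample 10 drives the statistic towards the fault
# magnitude, while the pre-fault samples keep it at zero.
clean = [0.0] * 10
faulty = [1.0] * 10
z = ewma_statistic(clean + faulty, lam=0.3)
print(round(z[-1], 3))  # prints 0.972, i.e. 1 - 0.7**10
```

A fault is declared when z crosses a control limit set from the in-control residual variance; smaller lam smooths noise more but responds to faults more slowly.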

  4. A Hamiltonian Approach to Fault Isolation in a Planar Vertical Take–Off and Landing Aircraft Model

    Directory of Open Access Journals (Sweden)

    Rodriguez-Alfaro Luis H.

    2015-03-01

    Full Text Available The problem of fault detection and isolation in a class of nonlinear systems having a Hamiltonian representation is considered. In particular, a model of a planar vertical take-off and landing aircraft with sensor and actuator faults is studied. A Hamiltonian representation is derived from an Euler-Lagrange representation of the system model. In this form, nonlinear decoupling is applied in order to obtain subsystems with (as far as possible) specific fault sensitivity properties. The resulting decoupled subsystem is represented as a Hamiltonian system, and observer-based residual generators are designed. The results are presented through simulations to show the effectiveness of the proposed approach.

  5. Which Fault Orientations Occur during Oblique Rifting? Combining Analog and Numerical 3d Models with Observations from the Gulf of Aden

    Science.gov (United States)

    Autin, J.; Brune, S.

    2013-12-01

    Oblique rift systems like the Gulf of Aden are intrinsically three-dimensional. In order to understand the evolution of these systems, one has to decode the fundamental mechanical similarities of oblique rifts. One way to accomplish this is to strip away the complexity that is generated by inherited fault structures. In doing so, we assume a laterally homogeneous segment of Earth's lithosphere and ask how many different fault populations are generated during oblique extension between initial deformation and final break-up. We combine results of an analog and a numerical model that feature a 3D segment of a layered lithosphere. In both cases, rift evolution is recorded quantitatively in terms of crustal fault geometries. For the numerical model, we adopt a novel post-processing method that allows us to infer small-scale crustal fault orientation from the surface stress tensor. Both models involve an angle of 40 degrees between the rift normal and the extensional direction, which allows comparison to the Gulf of Aden rift system. The resulting spatio-temporal fault pattern of our models shows three normal fault orientations: rift-parallel, extension-orthogonal, and intermediate, i.e., with a direction in between the two previous orientations. The rift evolution involves three distinct phases: (i) During the initial rift phase, widespread faulting with intermediate orientation occurs. (ii) Advanced lithospheric necking enables rift-parallel normal faulting at the rift flanks, while strike-slip faulting in the central part of the rift system indicates strain partitioning. (iii) During continental break-up, displacement-orthogonal as well as intermediate faults occur. We compare our results to the structural evolution of the Eastern Gulf of Aden. External parts of the rift exhibit intermediate and displacement-orthogonal faults, while rift-parallel faults are present at the rift borders. The ocean-continent transition mainly features intermediate and displacement

  6. Research on Model-Based Fault Diagnosis for a Gas Turbine Based on Transient Performance

    Directory of Open Access Journals (Sweden)

    Detang Zeng

    2018-01-01

    Full Text Available It is essential to monitor and diagnose faults in rotating machinery with a high thrust–weight ratio and complex structure for a variety of industrial applications, for which reliable signal measurements are required. However, the measured values consist of the true values of the parameters, the inertia of measurements, random errors, and systematic errors. Such signals cannot accurately reflect the true performance state and health state of rotating machinery. High-quality, steady-state measurements are necessary for most current diagnostic methods; unfortunately, such measurements are hard to obtain for most rotating machinery. Diagnosis based on transient performance is a useful tool that can potentially solve this problem. A model-based fault diagnosis method for gas turbines based on transient performance is proposed in this paper. The fault diagnosis consists of a dynamic simulation model, a diagnostic scheme, and an optimization algorithm. A high-accuracy, nonlinear, dynamic gas turbine model using a modular modeling method is presented that involves thermophysical properties, a component characteristic chart, and system inertia. The startup process is simulated using this model. The consistency between the simulation results and the field operation data shows the validity of the model and the advantages of transient accumulated deviation. In addition, a diagnostic scheme is designed to fulfill this process. Finally, cuckoo search is selected to solve the optimization problem in fault diagnosis. Comparative diagnostic results for a gas turbine before and after washing indicate the improved effectiveness and accuracy of the proposed method using data from transient processes, compared with traditional methods using data from the steady state.
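
    Cuckoo search itself is a generic metaheuristic; a minimal sketch on a toy objective standing in for the diagnosis residual (Lévy flights via Mantegna's algorithm, not the paper's implementation or tuning):

```python
import math
import random

def cuckoo_search(f, dim=2, n_nests=25, n_iter=300, pa=0.25,
                  lo=-5.0, hi=5.0, beta=1.5, step=0.1, seed=0):
    """Minimal cuckoo-search sketch: Levy-flight moves around existing
    nests plus abandonment of a fraction pa of the worst nests."""
    rng = random.Random(seed)
    # Mantegna's algorithm for Levy-stable step lengths of index beta.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def levy():
        return rng.gauss(0.0, sigma) / abs(rng.gauss(0.0, 1.0)) ** (1 / beta)

    def clip(x):
        return [min(max(xi, lo), hi) for xi in x]

    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    for _ in range(n_iter):
        for i in range(n_nests):
            new = clip([x + step * levy() for x in nests[i]])
            j = rng.randrange(n_nests)          # compare with a random nest
            if f(new) < fit[j]:
                nests[j], fit[j] = new, f(new)
        # abandon the worst pa fraction and rebuild them at random
        order = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in order[: int(pa * n_nests)]:
            nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    best = min(range(n_nests), key=lambda k: fit[k])
    return nests[best], fit[best]

# Toy objective: the sphere function stands in for the model-measurement
# residual norm being minimized during diagnosis.
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = cuckoo_search(sphere)
print(f"best fitness: {f_best:.2e}")
```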

  7. Influence of fault asymmetric dislocation on the gravity changes

    Directory of Open Access Journals (Sweden)

    Duan Hurong

    2014-08-01

    Full Text Available A fault is a planar fracture or discontinuity in a volume of rock across which there has been significant displacement as a result of earth movement. Large faults within the Earth's crust result from the action of plate tectonic forces, with the largest forming the boundaries between the plates; energy release associated with rapid movement on active faults is the cause of most earthquakes. The relationship between non-uniform dislocation and gravity changes was studied using the theoretical framework of a differential fault. Simulated observation values were adopted to deduce the gravity changes with the asymmetric fault model and the Okada model, respectively. In the asymmetric model, the fault slip increases continuously from zero at the two end points towards the middle of the fault, whereas in the Okada model the slip is constant over the fault length. Numerical simulation experiments for the activities of a strike-slip fault, a dip-slip fault, and an extension fault were carried out, finding that both the gravity contours and the gravity variation values are consistent whichever of the two models is adopted. The apparent difference lies in the values at the end points: 17.97% for the strike-slip fault, 25.58% for the dip-slip fault, and 24.73% for the extension fault.
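
    The contrast between the two slip distributions can be sketched with a simple elliptical taper standing in for the continuous distribution (the abstract does not specify the exact taper function, so the elliptical form here is an assumption):

```python
import math

def uniform_slip(x, L, d_max):
    """Okada-style constant dislocation over the fault length L
    (x measured from the fault centre)."""
    return d_max if abs(x) <= L / 2 else 0.0

def tapered_slip(x, L, d_max):
    """Slip growing continuously from zero at both fault tips to d_max
    at the centre; an elliptical taper is one common smooth choice."""
    if abs(x) > L / 2:
        return 0.0
    return d_max * math.sqrt(1.0 - (2.0 * x / L) ** 2)

# The two models agree at the fault centre and diverge most strongly
# near the fault ends, which is where the abstract reports the largest
# differences in the computed gravity changes.
L, d_max = 10.0, 2.0
for x in (0.0, 2.5, 4.9):
    print(x, uniform_slip(x, L, d_max), round(tapered_slip(x, L, d_max), 3))
```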

  8. Fault diagnosis for engine air path with neural models and classifier ...

    African Journals Online (AJOL)

    A new FDI scheme is developed for automotive engines in this paper. The method uses an independent radial basis function (RBF) neural ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults considered are 10-20% changes ...

  9. Using an Earthquake Simulator to Model Tremor Along a Strike Slip Fault

    Science.gov (United States)

    Cochran, E. S.; Richards-Dinger, K. B.; Kroll, K.; Harrington, R. M.; Dieterich, J. H.

    2013-12-01

    We employ the earthquake simulator, RSQSim, to investigate the conditions under which tremor occurs in the transition zone of the San Andreas fault. RSQSim is a computationally efficient method that uses rate- and state- dependent friction to simulate a wide range of event sizes for long time histories of slip [Dieterich and Richards-Dinger, 2010; Richards-Dinger and Dieterich, 2012]. RSQSim has been previously used to investigate slow slip events in Cascadia [Colella et al., 2011; 2012]. Earthquakes, tremor, slow slip, and creep occurrence are primarily controlled by the rate and state constants a and b and slip speed. We will report the preliminary results of using RSQSim to vary fault frictional properties in order to better understand rupture dynamics in the transition zone using observed characteristics of tremor along the San Andreas fault. Recent studies of tremor along the San Andreas fault provide information on tremor characteristics including precise locations, peak amplitudes, duration of tremor episodes, and tremor migration. We use these observations to constrain numerical simulations that examine the slip conditions in the transition zone of the San Andreas Fault. Here, we use the earthquake simulator, RSQSim, to conduct multi-event simulations of tremor for a strike slip fault modeled on Cholame section of the San Andreas fault. Tremor was first observed on the San Andreas fault near Cholame, California near the southern edge of the 2004 Parkfield rupture [Nadeau and Dolenc, 2005]. Since then, tremor has been observed across a 150 km section of the San Andreas with depths between 16-28 km and peak amplitudes that vary by a factor of 7 [Shelly and Hardebeck, 2010]. 
    Tremor episodes, composed of multiple low frequency earthquakes (LFEs), tend to be relatively short, lasting tens of seconds to as long as 1-2 hours [Horstmann et al., in review, 2013]; tremor occurs regularly with some tremor observed almost daily [Shelly and Hardebeck, 2010; Horstmann
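
    The rate- and state-dependent friction law underlying RSQSim has the standard form below; the parameter values are illustrative, not those used in the simulations:

```python
import math

def rsf_friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=0.01):
    """Rate-and-state friction coefficient,
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),
    for slip speed v (m/s) and state variable theta (s)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def steady_state_friction(v, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=0.01):
    """At steady state theta = Dc/V, so mu_ss = mu0 + (a - b)*ln(V/V0).
    With a - b < 0 (as here) the fault is velocity-weakening, the regime
    usually invoked for stick-slip and tremor sources."""
    return mu0 + (a - b) * math.log(v / v0)

# Velocity-weakening: steady-state friction drops as slip speed rises.
print(round(steady_state_friction(1e-6), 4), round(steady_state_friction(1e-4), 4))
```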

  10. Faults, fluids and friction : Effect of pressure solution and phyllosilicates on fault slip behaviour, with implications for crustal rheology

    NARCIS (Netherlands)

    Bos, B.

    2000-01-01

    In order to model the mechanics of motion and earthquake generation on large crustal fault zones, a quantitative description of the rheology of fault zones is a prerequisite. In the past decades, crustal strength has been modeled using a brittle or frictional failure law to represent fault slip at

  11. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

  12. Phantom investigation of 3D motion-dependent volume aliasing during CT simulation for radiation therapy planning

    International Nuclear Information System (INIS)

    Tanyi, James A; Fuss, Martin; Varchena, Vladimir; Lancaster, Jack L; Salter, Bill J

    2007-01-01

    To quantify volumetric and positional aliasing during non-gated fast- and slow-scan CT acquisition in the presence of 3D target motion, single-slice fast, single-slice slow, and multi-slice fast helical CT scans were acquired of dynamic spherical targets (1 and 3.15 cm in diameter) embedded in an anthropomorphic phantom. 3D target motions typical of clinically observed tumor motion parameters were investigated. Motion excursions included ± 5, ± 10, and ± 15 mm displacements in the S-I direction synchronized with constant displacements of ± 5 and ± 2 mm in the A-P and lateral directions, respectively. For each target, scan technique, and motion excursion, eight different initial motion-to-scan phase relationships were investigated. An anticipated general trend of target volume overestimation was observed. The mean percentage overestimation of the true physical target volume typically increased with target motion amplitude and decreasing target diameter. Slow-scan percentage overestimations were larger, and better approximated the time-averaged motion envelope, than those of fast scans. Motion-induced centroid misrepresentation was greater in the S-I direction for the fast-scan techniques and in the transaxial direction for the slow-scan technique. Overestimation is fairly uniform for slice widths < 5 mm, beyond which there is gross overestimation. Non-gated CT imaging of targets describing clinically relevant 3D motion results in aliased overestimation of the target volume and misrepresentation of the centroid location, with little or no correlation between the physical target geometry and the CT-generated target geometry. Slow-scan techniques are a practical method for characterizing time-averaged target position. Fast-scan techniques provide a more reliable, albeit still distorted, target margin

  13. Thermal-hydraulic modeling of deaerator and fault detection and diagnosis of measurement sensor

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Park, Jae Chang; Kim, Jung Taek; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2003-05-01

    It is important to note that an effective means to assure the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop a model-based fault detection and diagnosis (FDD) algorithm for the deaerator and to evaluate its performance. The scope of the work falls into two categories: a state-space model-based FDD algorithm using an Adaptive Estimator (AE), and an input-output model-based FDD algorithm using an ART neural network. Extensive computer simulations with real data obtained from the Younggwang 3 and 4 FSAR are carried out to evaluate the performance in terms of speed and accuracy

  14. An architecture for fault tolerant controllers

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2005-01-01

    A general architecture for fault tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system. The approach suggested can be applied ... degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modeling point of view. The method is illustrated on a servo example including ...

  15. Resistivity structure of Sumatran Fault (Aceh segment) derived from 1-D magnetotelluric modeling

    Science.gov (United States)

    Nurhasan, Sutarno, D.; Bachtiar, H.; Sugiyanto, D.; Ogawa, Y.; Kimata, F.; Fitriani, D.

    2012-06-01

    The Sumatran Fault Zone is the most active fault in Indonesia, a result of the strike-slip component of Indo-Australian oblique convergence. With a length of 1900 km, the Sumatran fault is divided into 20 segments, starting from the southernmost Sumatra Island with small slip rates that increase towards the northern end of the island. Several geophysical methods can be used to analyze fault structure, depending on the physical parameter involved, such as seismology, geodesy, and electromagnetics. The magnetotelluric method has been widely used in mapping and sounding resistivity distributions because it not only detects resistivity contrasts but also has a penetration range of up to hundreds of kilometers. A magnetotelluric survey was carried out in the Aceh region with 12 sites in total crossing the Sumatran Fault on the Aceh and Seulimeum segments. Two components of the electric and magnetic fields were recorded for 10 hours on average, with a frequency range from 320 Hz to 0.01 Hz. Pseudosections of phase and apparent resistivity exhibit a vertical low-phase zone flanked on the west and east by high phases, indicating a resistivity contrast in this region. Having rotated the data to the N45°E direction, interpretation was performed using three different 1D MT modeling methods: Bostick inversion, 1D inversion of the TM data, and 1D inversion of the impedance determinant. By comparison, we conclude that using the TM data only and the impedance determinant in 1D inversion yields a more reliable resistivity structure of the fault than the other method. Based on this result, it is clearly shown that the Sumatran Fault is characterized by a vertical resistivity contrast indicating the existence of the Aceh and Seulimeum faults, in good agreement with the geological data.
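
    The Bostick inversion mentioned is essentially a closed-form depth transform of the apparent-resistivity curve; a minimal sketch of the Niblett-Bostick approximation, checked against a synthetic half-space (a simplification of what a production 1D MT code would do):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def bostick_transform(periods, rho_a):
    """Approximate Niblett-Bostick depth transform: for each interior
    sample, depth D = sqrt(rho_a * T / (2*pi*mu0)) and resistivity
    rho_B = rho_a * (1 + m) / (1 - m), where m = dlog(rho_a)/dlog(T)
    is estimated by a centred finite difference."""
    depths, rho_b = [], []
    for i in range(1, len(periods) - 1):
        m = (math.log(rho_a[i + 1]) - math.log(rho_a[i - 1])) / (
            math.log(periods[i + 1]) - math.log(periods[i - 1]))
        depths.append(math.sqrt(rho_a[i] * periods[i] / (2.0 * math.pi * MU0)))
        rho_b.append(rho_a[i] * (1.0 + m) / (1.0 - m))
    return depths, rho_b

# Synthetic half-space check (rho_a = 100 ohm-m at all periods): the
# slope m is zero, so the transform should return 100 ohm-m everywhere.
T = [10.0 ** k for k in range(-2, 3)]   # periods 0.01 s .. 100 s
rho = [100.0] * len(T)
_, rb = bostick_transform(T, rho)
print(rb)
```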

  16. Data-driven fault mechanics: Inferring fault hydro-mechanical properties from in situ observations of injection-induced aseismic slip

    Science.gov (United States)

    Bhattacharya, P.; Viesca, R. C.

    2017-12-01

    In the absence of in situ field-scale observations of quantities such as fault slip, shear stress, and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior; e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault, bounded by a narrow damage zone about the principal slip surface; (2) the pore-pressure increase activates slip on a pre-stressed planar fault due to the reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity).
Therefore, besides producing meaningful and internally consistent
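
    For a constant injection rate, radial pore-pressure diffusion has the classical line-source (Theis) form; a sketch under that simplification (the study's model imposes the actual injection history, which this constant-rate solution does not):

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, n_terms=40):
    """Theis well function W(u) = E1(u) via its convergent series,
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!);
    numerically adequate for u below roughly 5."""
    s = -EULER_GAMMA - math.log(u)
    sign, fact = 1.0, 1.0
    for n in range(1, n_terms + 1):
        fact *= n
        s += sign * u ** n / (n * fact)
        sign = -sign
    return s

def injection_pressure(r, t, q, k, h, mu, alpha):
    """Pressure increase at radius r and time t for a constant-rate line
    source in a homogeneous layer of thickness h and diffusivity alpha:
    dp = q * mu / (4*pi*k*h) * W(r^2 / (4*alpha*t))."""
    u = r * r / (4.0 * alpha * t)
    return q * mu / (4.0 * math.pi * k * h) * well_function(u)

# Example: pressure rise 10 m from the well after one hour of injection
# (all parameter values purely illustrative, SI units).
dp = injection_pressure(r=10.0, t=3600.0, q=1e-3, k=1e-14, h=10.0,
                        mu=1e-3, alpha=0.1)
print(f"dp = {dp:.3e} Pa")
```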

  17. Faults architecture and growth in clay-limestone alternation. Examples in the S-E Basin alternations (France) and numerical modeling

    International Nuclear Information System (INIS)

    Roche, Vincent

    2011-01-01

    The following work has been carried out in the framework of the studies conducted by IRSN in support of its safety evaluation of the geological disposal programme for high- and intermediate-level, long-lived radioactive waste. Such a disposal facility is planned to be hosted by the Callovian-Oxfordian indurated clay formation between two limestone formations in the eastern Paris basin, France. Hypothetical faults may cross-cut this layered section, decreasing the clay's containment ability by creating preferential pathways for radioactive solutes towards the limestones. This study aims at characterising the fault architecture and normal fault growth in clay/limestone layered sections. Structural analyses and displacement profiles have been carried out on normal faults crossing several decimetre- to metre-thick sedimentary alternations in the South-Eastern Basin (France), and petrophysical properties have been determined for each layer. The studied faults are simple fault planes or complex fault zones whose architecture is significantly controlled by the layering. The analysis of the fault characteristics and the results obtained with numerical models highlight several processes, such as fault nucleation, fault restriction, and fault growth through the layered section. Some studied faults nucleated in the limestone layers without exploiting pre-existing fractures such as joints, and according to our numerical analysis, a high stiffness, a low strength contrast between the limestone and the clay layer, and/or a greater thickness of the clay layer are conditions that favour nucleation of faults in limestone. The range of mechanical properties leading to fault nucleation in one layer type or another was investigated using a 3D modelling approach. After its nucleation, the fault propagates within a homogeneous medium with a constant displacement gradient until its vertical propagation is stopped by a restrictor. The evidenced restrictors are limestone-clay interfaces or faults in clays, sub

  18. Implementation of fuzzy modeling system for faults detection and diagnosis in three phase induction motor drive system

    Directory of Open Access Journals (Sweden)

    Shorouk Ossama Ibrahim

    2015-05-01

    Full Text Available Induction motors have been intensively utilized in industrial applications, mainly due to their efficiency and reliability. These machines must work at all times with high performance and reliability, so it is necessary to monitor, detect, and diagnose the different faults they face. In this paper, an intelligent fault detection and diagnosis scheme for different faults of an induction motor drive system is introduced. The stator currents and time are the inputs to the proposed fuzzy detection and diagnosis system. The direct torque control (DTC) technique is adopted as a suitable control technique for the drive system, especially in traction applications, such as electric vehicles and subway metros, which use such machines. An intelligent modeling technique is adopted as an identifier for the different faults; the proposed model introduces time as an important variable that plays a key role both in fault detection and in deciding on a suitable corrective action according to the type of fault. Experimental results verify the efficiency of the proposed intelligent detector and identifier; good agreement between the simulated and experimental results was observed.

  19. Statistical fault detection in photovoltaic systems

    KAUST Repository

    Garoudja, Elyes

    2017-05-08

    Faults in photovoltaic (PV) systems, which can result in energy loss, system shutdown or even serious safety breaches, are often difficult to avoid. Fault detection in such systems is imperative to improve their reliability, productivity, safety and efficiency. Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility and simplicity of a one-diode model with the extended capacity of an exponentially weighted moving average (EWMA) control chart to detect incipient changes in a PV system. The one-diode model, which is easily calibrated due to its limited calibration parameters, is used to predict the healthy PV array's maximum power coordinates of current, voltage and power using measured temperatures and irradiances. Residuals, which capture the difference between the measurements and the predictions of the one-diode model, are generated and used as fault indicators. Then, the EWMA monitoring chart is applied on the uncorrelated residuals obtained from the one-diode model to detect and identify the type of fault. Actual data from the grid-connected PV system installed at the Renewable Energy Development Center, Algeria, are used to assess the performance of the proposed approach. Results show that the proposed approach successfully monitors the DC side of PV systems and detects temporary shading.
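
    The one-diode model's implicit current equation and the residual that feeds the EWMA chart can be sketched as follows. The module parameters here are illustrative placeholders; the deployed model is calibrated from measured temperatures and irradiances:

```python
import math

def one_diode_current(v, i_ph, i_0=1e-6, r_s=0.3, r_sh=250.0,
                      a_mod=2.0, n_iter=200):
    """Terminal current of the one-diode PV model,
        I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh,
    solved by damped fixed-point iteration. a_mod is the modified
    ideality factor n*Ns*Vt; all parameter values are illustrative."""
    i = i_ph
    for _ in range(n_iter):
        f = (i_ph - i_0 * (math.exp((v + i * r_s) / a_mod) - 1.0)
             - (v + i * r_s) / r_sh)
        i = 0.5 * i + 0.5 * f  # damping keeps the iteration stable
    return i

def shading_residual(measured_i, v, i_ph_model):
    """Residual = measurement - one-diode prediction; persistently
    negative residuals are the kind of signal the EWMA chart flags
    as shading or a DC-side fault."""
    return measured_i - one_diode_current(v, i_ph_model)

# Near short circuit the model current approximates the photocurrent.
print(round(one_diode_current(0.0, 8.0), 2))  # close to Iph = 8 A
```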

  20. Late quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    International Nuclear Information System (INIS)

    Brogan, G.E.; Kellogg, K.S.; Terhune, C.L.; Slemmons, D.B.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin

  1. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  2. Seismic Hazard Analysis on a Complex, Interconnected Fault Network

    Science.gov (United States)

    Page, M. T.; Field, E. H.; Milner, K. R.

    2017-12-01

    In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example, the Kaikoura earthquake - as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al. 2017) - have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well-mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.

  3. Active Fault-Tolerant Control for Wind Turbine with Simultaneous Actuator and Sensor Faults

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Within the fault-tolerant control framework, the challenge is to develop an FTC design method for wind turbines that tolerates simultaneous pitch actuator and pitch sensor faults having bounded first time derivatives. The paper's key contribution is a descriptor sliding mode method: an auxiliary descriptor state vector, composed of the system state vector, the actuator fault vector and the sensor fault vector, is introduced to establish a novel augmented descriptor system, for which a descriptor sliding mode observer is designed to estimate the system state and reconstruct the faults. Stability conditions for the estimation error dynamics are formulated as linear matrix inequalities (LMIs), which facilitate the determination of the design parameters. With this estimation, a fault-tolerant controller is designed that maintains the system's stability. The effectiveness of the design strategy is verified by implementing the controller in the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

  4. An integrated model for the assessment of unmitigated fault events in ITER's superconducting magnets

    Energy Technology Data Exchange (ETDEWEB)

    McIntosh, S., E-mail: simon.mcintosh@ccfe.ac.uk [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Holmes, A. [Marcham Scientific Ltd., Sarum House, 10 Salisbury Rd., Hungerford RG17 0LH, Berkshire (United Kingdom); Cave-Ayland, K.; Ash, A.; Domptail, F.; Zheng, S.; Surrey, E.; Taylor, N. [Culham Centre for Fusion Energy, Culham Science Center, Abingdon OX14 3DB, Oxfordshire (United Kingdom); Hamada, K.; Mitchell, N. [ITER Organization, Magnet Division, St Paul Lez Durance Cedex (France)

    2016-11-01

    A large amount of energy is stored in the ITER superconducting magnet system. Faults which initiate a discharge are typically mitigated by quickly transferring the stored magnetic energy away for dissipation through a bank of resistors. In an extremely unlikely occurrence, an unmitigated fault event represents a potentially severe discharge of energy into the coils and the surrounding structure. A new simulation tool has been developed for the detailed study of these unmitigated fault events. The tool integrates: the propagation of multiple quench fronts initiated by an initial fault or by subsequent coil heating; the 3D convection and conduction of heat through the magnet structure; the 3D conduction of current and Ohmic heating both along the conductor and via alternate pathways generated by arcing or material melt. Arcs linking broken sections of conductor or separate turns are simulated with a new unconstrained arc model to balance electrical current paths and heat generation within the arc column in the multi-physics model. The influence of the high Lorentz forces present is taken into account. Simulation results for an unmitigated fault in a poloidal field coil are presented.

  5. Thermo-Hydro-Micro-Mechanical 3D Modeling of a Fault Gouge During Co-seismic Slip

    Science.gov (United States)

    Papachristos, E.; Stefanou, I.; Sulem, J.; Donze, F. V.

    2017-12-01

    A coupled Thermo-Hydro-Micro-Mechanical (THMM) model based on the discrete element method (DEM) is presented for studying the evolving fault gouge properties during pre- and co-seismic slip. Modeling the behavior of the fault gouge at the microscale is expected to improve our understanding of the various mechanisms that lead to slip weakening and finally control the transition from aseismic to seismic slip. The gouge is considered as a granular material of spherical particles [1]. Upon loading, the interactions between particles follow a frictional behavior and explicit dynamics. Using regular triangulation, a pore network is defined by the physical pore space between the particles. The network is saturated by a compressible fluid, and flow takes place following Stokes equations. Particles' movement leads to pore deformation and thus to local pore pressure increase. Forces exerted from the fluid onto the particles are calculated using mid-step velocities. The fluid forces are then added to the contact forces resulting from the mechanical interactions before the next step. The same semi-implicit, two-way iterative coupling is used for the heat exchange through conduction. Simple tests have been performed to verify the model against analytical solutions and experimental results. Furthermore, the model was used to study the effect of temperature on the evolution of effective stress in the system and to highlight the role of thermal pressurization during seismic slip [2, 3]. The analyses are expected to give grounds for enhancing the current state-of-the-art constitutive models regarding fault friction and shed light on the evolution of fault zone properties during seismic slip. [1] Omid Dorostkar, Robert A Guyer, Paul A Johnson, Chris Marone, and Jan Carmeliet. On the role of fluids in stick-slip dynamics of saturated granular fault gouge using a coupled computational fluid dynamics-discrete element approach. Journal of Geophysical Research: Solid Earth, 122

  6. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    Science.gov (United States)

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  7. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.; Jonsson, Sigurjon; Sudhaus, H.; Baumann, C.

    2012-01-01

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due

  8. Three-Dimensional Growth of Flexural Slip Fault-Bend and Fault-Propagation Folds and Their Geomorphic Expression

    Directory of Open Access Journals (Sweden)

    Asdrúbal Bernal

    2018-03-01

    The three-dimensional growth of fault-related folds is known to be an important process during the development of compressive mountain belts. However, comparatively little is known concerning the manner in which fold growth is expressed in topographic relief and local drainage networks. Here we report results from a coupled kinematic and surface process model of fault-related folding. We consider flexural slip fault-bend and fault-propagation folds that grow in both the transport and strike directions, linked to a surface process model that includes bedrock channel development and hillslope diffusion. We investigate various modes of fold growth under identical surface process conditions and critically analyse their geomorphic expression. Fold growth results in the development of steep forelimbs and gentler, wider backlimbs resulting in asymmetric drainage basin development (smaller basins on forelimbs, larger basins on backlimbs). However, topographies developed above fault-propagation folds are more symmetric than those developed above fault-bend folds as a result of their different forelimb kinematics. In addition, the surface expression of fault-bend and fault-propagation folds depends both on the slip distribution along the fault and on the style of fold growth. When along-strike plunge is a result of slip events with gently decreasing slip towards the fault tips (with or without lateral propagation, large plunge-panel drainage networks are developed at the expense of backpanel (transport-opposing and forepanel (transport-facing drainage basins. In contrast, if the fold grows as a result of slip events with similar displacements along strike, plunge-panel drainage networks are poorly developed (or are transient features of early fold growth and restricted to lateral fold terminations, particularly when the number of propagation events is small. The absence of large-scale plunge-panel drainage networks in natural examples suggests that the

  9. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    Science.gov (United States)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power circuit protection strategies, which have inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that limits the DC-link voltage rise, whose dynamics in turn directly influence the characteristics of the rotor voltage, especially during grid faults. The model results compare favorably with simulation results obtained in a MATLAB/Simulink environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current control bandwidth and the filter inductance.

  10. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip.

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.
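    The stability check invoked in this abstract combines the effective stress principle with the Coulomb failure criterion. The sketch below is a minimal grain-averaged illustration with assumed stress, pressure and friction values, not the authors' coupled fluid-DEM model:

```python
# Minimal sketch (assumed parameter values): Coulomb failure under the
# effective-stress principle, tau >= c + mu * (sigma_n - p). With a pore-pressure
# discontinuity across the gouge, the higher-pressure side reaches failure first.
def coulomb_margin(tau, sigma_n, p, mu=0.6, cohesion=0.0):
    """Positive margin = stable; zero or negative = at/past Coulomb failure."""
    return cohesion + mu * (sigma_n - p) - tau

tau, sigma_n = 25e6, 60e6          # shear and normal stress on the fault, Pa
p_low, p_high = 10e6, 25e6         # pore pressures on the two sides of the gouge, Pa
m_low = coulomb_margin(tau, sigma_n, p_low)
m_high = coulomb_margin(tau, sigma_n, p_high)
print(m_low > 0, m_high > 0)       # the high-pressure side fails first
```

Under these illustrative numbers the low-pressure side retains a positive stability margin while the high-pressure side is past failure, which is the qualitative behavior the abstract reports for a pressure discontinuity across the gouge layer.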

  12. The pulsed migration of hydrocarbons across inactive faults

    Directory of Open Access Journals (Sweden)

    S. D. Harris

    1999-01-01

    Geological fault zones are usually assumed to influence hydrocarbon migration either as high permeability zones which allow enhanced along- or across-fault flow or as barriers to the flow. An additional important migration process inducing along- or across-fault migration can be associated with dynamic pressure gradients. Such pressure gradients can be created by earthquake activity and are suggested here to allow migration along or across inactive faults which 'feel' the quake-related pressure changes; i.e. the migration barriers can be removed on inactive faults when activity takes place on an adjacent fault. In other words, a seal is viewed as a temporary retardation barrier which leaks when a fault related fluid pressure event enhances the buoyancy force and allows the entry pressure to be exceeded. This is in contrast to the usual model where a seal leaks because an increase in hydrocarbon column height raises the buoyancy force above the entry pressure of the fault rock. Under the new model hydrocarbons may migrate across the inactive fault zone for some time period during the earthquake cycle. Numerical models of this process are presented to demonstrate the impact of this mechanism and its role in filling traps bounded by sealed faults.
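    The seal-leakage threshold described above reduces to a simple pressure balance: a fault seal leaks once the buoyancy pressure of the hydrocarbon column, plus any transient earthquake-related pressure pulse, exceeds the fault rock's capillary entry pressure. The sketch below illustrates this with assumed densities and entry pressure (none of the numbers come from the paper):

```python
# Back-of-the-envelope sketch (illustrative values): a fault seal leaks when the
# buoyancy pressure of the hydrocarbon column exceeds the fault rock's capillary
# entry pressure. A transient, earthquake-related pressure pulse dP can push an
# otherwise-sealing column over that threshold, as in the pulsed-migration model.
g = 9.81                        # m/s^2
rho_w, rho_h = 1030.0, 750.0    # brine and oil densities, kg/m^3 (assumed)
entry_pressure = 1.0e6          # fault-rock capillary entry pressure, Pa (assumed)

def leaks(column_height_m, dP=0.0):
    buoyancy = (rho_w - rho_h) * g * column_height_m
    return buoyancy + dP > entry_pressure

print(leaks(300.0))             # static column alone: seal holds
print(leaks(300.0, dP=0.3e6))   # same column plus a seismic pressure pulse: seal leaks
```

With these numbers a 300 m column generates about 0.82 MPa of buoyancy, below the 1 MPa entry pressure, so it sits trapped until a pressure event of a few tenths of a megapascal tips it over the threshold.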

  13. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on detailed topographic maps. The results of this paper are summarized as follows; 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  14. Statistical fault detection in photovoltaic systems

    KAUST Repository

    Garoudja, Elyes; Harrou, Fouzi; Sun, Ying; Kara, Kamel; Chouder, Aissa; Silvestre, Santiago

    2017-01-01

    and efficiency. Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility, and simplicity of a one-diode model

  15. Rupture Complexity Promoted by Damaged Fault Zones in Earthquake Cycle Models

    Science.gov (United States)

    Idini, B.; Ampuero, J. P.

    2017-12-01

    Pulse-like ruptures tend to be more sensitive to stress heterogeneity than crack-like ones. For instance, a stress-barrier can more easily stop the propagation of a pulse than that of a crack. While crack-like ruptures tend to homogenize the stress field within their rupture area, pulse-like ruptures develop heterogeneous stress fields. This feature of pulse-like ruptures can potentially lead to complex seismicity with a wide range of magnitudes akin to the Gutenberg-Richter law. Previous models required a friction law with severe velocity-weakening to develop pulses and complex seismicity. Recent dynamic rupture simulations show that the presence of a damaged zone around a fault can induce pulse-like rupture, even under a simple slip-weakening friction law, although the mechanism depends strongly on initial stress conditions. Here we aim at testing if fault zone damage is a sufficient ingredient to generate complex seismicity. In particular, we investigate the effects of damaged fault zones on the emergence and sustainability of pulse-like ruptures throughout multiple earthquake cycles, regardless of initial conditions. We consider a fault bisecting a homogeneous low-rigidity layer (the damaged zone) embedded in an intact medium. We conduct a series of earthquake cycle simulations to investigate the effects of two fault zone properties: damage level D and thickness H. The simulations are based on classical rate-and-state friction, the quasi-dynamic approximation and the software QDYN (https://github.com/ydluo/qdyn). Selected fully-dynamic simulations are also performed with a spectral element method. Our numerical results show the development of complex rupture patterns in some damaged fault configurations, including events of different sizes, as well as pulse-like, multi-pulse and hybrid pulse-crack ruptures. We further apply elasto-static theory to assess how D and H affect ruptures with constant stress drop, in particular the flatness of their slip profile

  16. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    Science.gov (United States)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
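    The first of the point-source models mentioned above rests on peak-ground-displacement (PGD) magnitude scaling. The sketch below uses the published form of such regressions, log10(PGD) = A + B·Mw + C·Mw·log10(R); the coefficient values are illustrative stand-ins, not the ones inside G-FAST:

```python
# Sketch of a PGD magnitude estimate of the kind used in GNSS early-warning
# modules such as G-FAST. Coefficients below are illustrative (PGD in cm,
# hypocentral distance R in km), not G-FAST's operational values.
import math

A, B, C = -4.434, 1.047, -0.138

def mw_from_pgd(pgd_cm, r_km):
    """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) for Mw at one station."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# A station 100 km from the hypocenter observing 20 cm of peak displacement:
print(round(mw_from_pgd(20.0, 100.0), 2))  # → 7.44
```

In practice the per-station estimates are averaged, and the resulting magnitude and a scaling-law fault length feed the "simple finite fault" ground-motion prediction the abstract describes.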

  17. Rich Interfaces for Dependability: Compositional Methods for Dynamic Fault Trees and Arcade models

    NARCIS (Netherlands)

    Boudali, H.; Crouzen, Pepijn; Haverkort, Boudewijn R.H.M.; Kuntz, G.W.M.; Stoelinga, Mariëlle Ida Antoinette

    This paper discusses two behavioural interfaces for reliability analysis: dynamic fault trees, which model the system reliability in terms of the reliability of its components and Arcade, which models the system reliability at an architectural level. For both formalisms, the reliability is analyzed

  18. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is increasingly compounded by the growing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emissions on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes that execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault-tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction and is better suited to compute-intensive execution models, where up to 30% more profit increase can be achieved due to reduced energy consumption.

  19. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    The ‘fault-block’ dilemma is stated for the recent geodynamics of faults in view of interpretations of monitoring results. The matter is that either a block is an active element generating anomalous recent deformation and a fault is a ‘passive’ element, or a fault zone itself is a source of anomalous displacements and blocks are passive elements, i.e. host medium. ‘Paradoxes’ of high and low strain velocities are explainable under the concept that the anomalous recent geodynamics is caused by parametric excitation of deformation processes in fault zones in conditions of a quasi-static regime of loading. Based on empirical data, it is revealed that recent deformation processes migrate in fault zones both in space and time. Two types of waves, ‘inter-fault’ and ‘intra-fault’, are described. A phenomenological model of auto-wave deformation processes is proposed; the model is consistent with monitoring data. A definition of ‘pseudo-wave’ is introduced. Arrangements to establish a system for monitoring deformation auto-waves are described. When applied to geological deformation monitoring, new measurement technologies are associated with result identification problems, including ‘ratios of uncertainty’ such as ‘anomaly’s dimensions – density of monitoring stations’ and ‘anomaly’s duration – details of measurements in time’. It is shown that the SAR interferometry method does not provide for an unambiguous determination of ground surface displacement vectors.

  20. Testing Pixel Translation Digital Elevation Models to Reconstruct Slip Histories: An Example from the Agua Blanca Fault, Baja California, Mexico

    Science.gov (United States)

    Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.

    2012-12-01

    We use recently collected slip vector and total offset data from the Agua Blanca fault (ABF) to constrain a pixel translation digital elevation model (DEM) to reconstruct the slip history of this fault. This model was constructed using a Perl script that reads a DEM file (Easting, Northing, Elevation) and a configuration file with coordinates that define the boundary of each fault segment. A pixel translation vector is defined as a magnitude of lateral offset in an azimuthal direction. The program translates pixels north of the fault and prints their pre-faulting position to a new DEM file that can be gridded and displayed. This analysis, where multiple DEMs are created with different translation vectors, allows us to identify areas of transtension or transpression while seeing the topographic expression in these areas. The benefit of this technique, in contrast to a simple block model, is that the DEM gives us a valuable graphic which can be used to pose new research questions. We have found that many topographic features correlate across the fault, i.e. valleys and ridges, which likely have implications for the age of the ABF, long term landscape evolution rates, and potentially provide confirmation of total slip assessments. The ABF of northern Baja California, Mexico is an active, dextral strike slip fault that transfers Pacific-North American plate boundary strain out of the Gulf of California and around the "Big Bend" of the San Andreas Fault. Total displacement on the ABF in the central and eastern parts of the fault is 10 +/- 2 km based on offset Early-Cretaceous features such as terrane boundaries and intrusive bodies (plutons and dike swarms). Where the fault bifurcates to the west, the northern strand (northern Agua Blanca fault or NABF) is constrained to 7 +/- 1 km.
We have not yet identified piercing points on the southern strand, the Santo Tomas fault (STF), but displacement is inferred to be ~4 km assuming that the sum of slip on the NABF and STF is
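    The pixel translation idea described above can be sketched in a few lines, here in Python rather than the authors' Perl. The fault geometry is simplified to a single east-west trace, and the back-slip sign convention is an assumption for illustration:

```python
# Conceptual sketch (not the authors' Perl script): translate the DEM pixels on
# one side of a fault trace back along a slip azimuth to reconstruct their
# pre-faulting positions. Assumes a simple east-west fault at a given northing.
import numpy as np

def restore_dem(points, fault_northing, offset_m, azimuth_deg):
    """points: (N, 3) array of Easting, Northing, Elevation.
    Moves points north of the fault back along `azimuth_deg` by `offset_m`."""
    pts = points.copy()
    az = np.radians(azimuth_deg)
    dx, dy = offset_m * np.sin(az), offset_m * np.cos(az)  # azimuth from north
    north_side = pts[:, 1] > fault_northing
    pts[north_side, 0] -= dx
    pts[north_side, 1] -= dy
    return pts

# Undo 10 km of slip toward an assumed azimuth of 290 deg on the north block:
dem = np.array([[500000.0, 3500000.0, 120.0],    # south of the fault: unchanged
                [500000.0, 3520000.0, 340.0]])   # north of the fault: translated
print(restore_dem(dem, fault_northing=3510000.0, offset_m=10000.0, azimuth_deg=290.0))
```

Regridding the translated point cloud for a range of trial vectors, as the abstract describes, is then what lets correlated valleys and ridges across the fault be checked visually against each candidate offset.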

  1. Data Driven Fault Tolerant Control : A Subspace Approach

    NARCIS (Netherlands)

    Dong, J.

    2009-01-01

    The main stream research on fault detection and fault tolerant control has been focused on model based methods. As far as a model is concerned, changes therein due to faults have to be extracted from measured data. Generally speaking, existing approaches process measured inputs and outputs either by

  2. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    Science.gov (United States)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions of high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (≤ 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch fault zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation-by-parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, which is potentially important for seismic hazard estimation.
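    Self-similar rough-fault profiles of the kind mentioned above are commonly generated spectrally. The sketch below is an illustrative construction, not the authors' code; the spectral exponent (self-similar, PSD ~ k^-3 in 1-D) and the roughness-to-length ratio `alpha` are assumed values:

```python
# Illustrative spectral synthesis of a 1-D self-similar rough-fault profile:
# random phases with amplitude |H(k)| ~ k^-1.5, i.e. power spectrum P(k) ~ k^-3,
# scaled so the RMS height is alpha times the profile length (alpha assumed).
import numpy as np

def rough_profile(n=1024, length_m=40000.0, alpha=1e-2, seed=1):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=length_m / n)          # spatial wavenumbers
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** -1.5                          # self-similar amplitude decay
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)    # random phases
    h = np.fft.irfft(amp * np.exp(1j * phase), n)
    h *= alpha * length_m / np.sqrt(np.mean(h**2))   # enforce RMS = alpha * L
    return h

h = rough_profile()
print(h.size, round(float(np.sqrt(np.mean(h**2))), 1))  # → 1024 400.0
```

Adding such a profile (band-limited to the resolvable wavelengths, here 100 m and up) to a planar fault is what produces the heterogeneous normal-stress perturbations that dynamic rupture simulations of rough faults exploit.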

  3. A comparison between rate-and-state friction and microphysical models, based on numerical simulations of fault slip

    Science.gov (United States)

    van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.

    2018-05-01

    Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
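The velocity-step behaviour that RSF is fitted to can be sketched in a few lines. This is a minimal illustration of the classical RSF law with aging-type state evolution (one common choice; the abstract does not say which evolution law the authors adopt), with purely illustrative parameter values:

```python
import math

def rsf_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-5):
    """Friction coefficient: mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def evolve_state(v, theta, dt, dc=1e-5):
    """Aging law d(theta)/dt = 1 - v*theta/dc, explicit Euler step."""
    return theta + dt * (1.0 - v * theta / dc)

# Velocity-step test: jump from v0 to 10*v0 and relax to the new steady state.
v0, dc = 1e-6, 1e-5
theta = dc / v0                        # steady state at the initial velocity
v = 10 * v0
mu_direct = rsf_mu(v, theta)           # direct effect: jump of +a*ln(10)
for _ in range(20000):
    theta = evolve_state(v, theta, dt=0.01)
mu_ss = rsf_mu(v, theta)               # evolved state: net change (a-b)*ln(10)
print(mu_direct > 0.6, mu_ss < 0.6)    # True True: velocity-weakening since b > a
```

With b > a the steady-state friction decreases with sliding velocity, the velocity-weakening condition required for the stick-slip instabilities both modeling approaches in the abstract aim to reproduce.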

  4. A 3D resistivity model derived from the transient electromagnetic data observed on the Araba fault, Jordan

    Science.gov (United States)

    Rödder, A.; Tezkan, B.

    2013-01-01

72 in-loop transient electromagnetic soundings were carried out on two 2 km long profiles perpendicular and two 1 km and two 500 m long profiles parallel to the strike direction of the Araba fault in Jordan, which is the southern part of the Dead Sea transform fault marking the boundary between the African and Arabian continental plates. The distance between the stations was on average 50 m. The late time apparent resistivities derived from the induced voltages show clear differences between the stations located at the eastern and at the western part of the Araba fault. The fault appears as a boundary between the resistive western (ca. 100 Ωm) and the conductive eastern part (ca. 10 Ωm) of the survey area. On profiles parallel to the strike, late time apparent resistivities were almost constant, in both time dependence and lateral extension at different stations, indicating a 2D resistivity structure of the investigated area. After processing, the data were interpreted by conventional 1D Occam and Marquardt inversion. A study using 2D synthetic model data showed, however, that 1D inversions of stations close to the fault resulted in fictitious layers in the subsurface, thus producing large interpretation errors. Therefore, the data were interpreted by 2D forward resistivity modeling, which was then extended to a 3D resistivity model. This 3D model satisfactorily explains the time dependences of the observed transients at nearly all stations.

  5. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis.

    Science.gov (United States)

    Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang; Hu, Jianjun

    2017-07-28

Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster-Shafer theory based evidence fusion. The convolutional neural networks take the root mean square (RMS) maps from the FFT (Fast Fourier Transformation) features of the vibration signals from two sensors as inputs. The improved D-S evidence theory is implemented via a distance matrix computed from the evidence and a modified Gini Index. Extensive evaluations of the IDSCNN on the Case Western Reserve Dataset showed that our IDSCNN algorithm can achieve better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidence from different models and sensors and adapting to different load conditions.
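For reference, the classical Dempster-Shafer combination rule that the paper's improved fusion builds on can be sketched as follows. The distance-matrix and Gini-index modifications from the paper are not reproduced here, and the frame of discernment and mass values are illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb           # mass landing on the empty set
    k = 1.0 - conflict                    # normalization constant
    return {s: w / k for s, w in combined.items()}

N, F = frozenset({"normal"}), frozenset({"fault"})
NF = N | F                                # ignorance: "either state"
m_sensor1 = {N: 0.6, F: 0.3, NF: 0.1}     # illustrative per-sensor beliefs
m_sensor2 = {N: 0.7, F: 0.2, NF: 0.1}
fused = dempster_combine(m_sensor1, m_sensor2)
print(fused[N] > m_sensor1[N])            # True: agreeing evidence reinforces "normal"
```

The classical rule is known to behave badly under highly conflicting evidence (large `conflict`), which is exactly the situation the paper's distance-based weighting is designed to handle.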

  6. A Fault Diagnosis Model Based on LCD-SVD-ANN-MIV and VPMCD for Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Songrong Luo

    2016-01-01

The fault diagnosis process is essentially a class discrimination problem. However, traditional class discrimination methods such as SVM and ANN fail to capitalize on the interactions among the feature variables. Variable predictive model-based class discrimination (VPMCD) can adequately use these interactions, but feature extraction and selection greatly affect the accuracy and stability of the VPMCD classifier. Aiming at the nonstationary characteristics of vibration signals from rotating machinery with local faults, a singular value decomposition (SVD) technique based on local characteristic-scale decomposition (LCD) was developed to extract the feature variables. Subsequently, ANN-MIV, a feature selection approach combining an artificial neural network (ANN) with mean impact value (MIV), was proposed to select more suitable feature variables as the input vector of the VPMCD classifier. Finally, a novel fault diagnosis model based on LCD-SVD-ANN-MIV and VPMCD is proposed and validated through an experimental application to roller bearing fault diagnosis. The results show that the proposed method is effective and noise tolerant, and the comparative results demonstrate that it is superior to the other methods in diagnosis speed, diagnosis success rate, and diagnosis stability.

  7. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

Digitalization of instruments and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. However, it also makes the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several drawbacks. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computational tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and we use model checking to automate the reasoning process of FTA.

  8. Density of oxidation-induced stacking faults in damaged silicon

    NARCIS (Netherlands)

    Kuper, F.G.; Hosson, J.Th.M. De; Verwey, J.F.

    1986-01-01

    A model for the relation between density and length of oxidation-induced stacking faults on damaged silicon surfaces is proposed, based on interactions of stacking faults with dislocations and neighboring stacking faults. The model agrees with experiments.

  9. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab...

  10. Radial basis function neural network in fault detection of automotive ...

    African Journals Online (AJOL)

    Radial basis function neural network in fault detection of automotive engines. ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults ... Keywords: Automotive engine, independent RBFNN model, RBF neural network, fault detection

  11. Insights into the 3D architecture of an active caldera ring-fault at Tendürek volcano through modeling of geodetic data

    KAUST Repository

    Vasyura-Bathke, Hannes

    2015-04-28

The three-dimensional assessment of ring-fault geometries and kinematics at active caldera volcanoes is typically limited by sparse field, geodetic or seismological data, or by only partial ring-fault rupture or slip. Here we use a novel combination of spatially dense InSAR time-series data, numerical models and sand-box experiments to determine the three-dimensional geometry and kinematics of a sub-surface ring-fault at Tendürek volcano in Turkey. The InSAR data reveal that the area within the ring-fault not only subsides, but also shows substantial westward-directed lateral movement. The models and experiments explain this as a consequence of a 'sliding-trapdoor' ring-fault architecture that is mostly composed of outward-inclined reverse segments, most markedly so on the volcano's western flanks, but includes inward-inclined normal segments on its eastern flanks. Furthermore, the model ring-fault exhibits dextral and sinistral strike-slip components that are roughly bilaterally distributed onto its northern and southern segments, respectively. Our more complex numerical model describes the deformation at Tendürek better than an analytical solution for a single rectangular dislocation in a half-space. Comparison to ring-faults defined at Glen Coe, Fernandina and Bárðarbunga calderas suggests that 'sliding-trapdoor' ring-fault geometries may be common in nature and should therefore be considered in geological and geophysical interpretations of ring-faults at different scales worldwide.

  12. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    Science.gov (United States)

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
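The stress argument above can be summarized with the standard Coulomb failure criterion, ΔCFS = Δτ + μ(Δσn + Δp): unclamping (reduced compressive normal stress) and increased pore pressure both move a fault toward failure. The sketch below uses illustrative numbers, not values from the study:

```python
def coulomb_stress_change(d_shear, d_normal, d_pore=0.0, mu=0.4):
    """Change in Coulomb failure stress (MPa); d_normal > 0 means unclamping."""
    return d_shear + mu * (d_normal + d_pore)

# Lake loading + stepover slip + pore-pressure rise, each small on its own,
# combine additively (all magnitudes below are illustrative):
dcfs = coulomb_stress_change(d_shear=0.05, d_normal=0.02, d_pore=0.1)
print(round(dcfs, 3))   # 0.098
```

Even sub-MPa positive changes of this order are commonly cited as sufficient to advance the failure of a fault that is already near critically stressed, which is the essence of the triggering argument in the abstract.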

  13. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  14. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2018-01-01

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a

  15. Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2

    Science.gov (United States)

    2016-06-01

specification of fault propagation in EMV2 corresponds to the Fault Propagation and Transformation Calculus (FPTC) [Paige 2009]. The following concepts...definition of security includes accidental malicious indication of anomalous behavior either from outside a system or by unauthorized crossing of a

  16. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.
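The Takagi–Sugeno machinery underlying such an estimator blends local linear models by fuzzy membership weights. The sketch below shows only a basic type-1 blend (interval type-II, as used in the paper, additionally carries upper and lower membership bounds to encode uncertainty); the rules and membership functions are illustrative:

```python
def ts_output(x, rules):
    """Takagi-Sugeno blend: rules are (membership_fn, a, b) with local
    linear models y_i = a*x + b, mixed by normalized membership weights."""
    ws = [mu(x) for mu, _, _ in rules]
    total = sum(ws)
    return sum(w * (a * x + b) for w, (_, a, b) in zip(ws, rules)) / total

lo = lambda x: max(0.0, 1.0 - x)        # membership of "x is small"
hi = lambda x: min(1.0, max(0.0, x))    # membership of "x is large"
rules = [(lo, 1.0, 0.0), (hi, 3.0, 0.0)]
print(ts_output(0.5, rules))            # 1.0: halfway blend of slopes 1 and 3
```

At the extremes the blend reduces to the corresponding local model (slope 1 at x=0, slope 3 at x=1), which is how a T-S model interpolates a nonlinear system from a small set of linear ones.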

  17. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG) has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, diagonal crack along with the non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for the HOG feature extraction technique towards non-destructive quality inspection with an appreciably low false-alarm rate compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring supporting financially and commercially viable industrial growth.
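The best-performing HOG feature works by binning gradient orientations weighted by gradient magnitude. Below is a toy single-cell version in pure Python; a full HOG pipeline (as presumably used in the paper) adds cell grids, block normalization, and interpolation, none of which are reproduced here:

```python
import math

def hog_histogram(img, n_bins=9):
    """Unnormalized single-cell histogram of oriented gradients (0-180 deg)
    using central differences over the interior pixels of a 2-D list."""
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[min(int(ang / (180.0 / n_bins)), n_bins - 1)] += mag
    return hist

# A vertical edge (e.g. a crack) concentrates all gradient energy in one bin:
img = [[0, 0, 1, 1]] * 4
print(hog_histogram(img))   # all energy lands in bin 0
```

Crack orientation (vertical, horizontal, diagonal) maps directly to which bins carry the energy, which is why this descriptor separates the tile fault classes listed above.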

  18. Fault tolerance of the NIF power conditioning system

    International Nuclear Information System (INIS)

    Larson, D.W.; Anderson, R.; Boyes, J.

    1995-01-01

The tolerance of the circuit topology proposed for the National Ignition Facility (NIF) power conditioning system to specific fault conditions is investigated. A new pulsed power circuit is proposed for the NIF which is simpler and less expensive than previous ICF systems. The inherent fault modes of the new circuit are different from the conventional approach, and must be understood to ensure adequate NIF system reliability. A test-bed which simulates the NIF capacitor module design was constructed to study the circuit design. Measurements from test-bed experiments with induced faults are compared with results from a detailed circuit model. The model is validated by the measurements and used to predict the behavior of the actual NIF module during faults. The model can be used to optimize fault tolerance of the NIF module through an appropriate distribution of circuit inductance and resistance. The experimental and modeling results are presented, and fault performance is compared with the ratings of pulsed power components. Areas are identified which require additional investigation.
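The kind of lumped-circuit fault study described above can be sketched as a charged capacitor module discharging through its series resistance and inductance into a short-circuit fault. The component values below are illustrative placeholders, not NIF parameters:

```python
import math

def peak_fault_current(C, L, R, V0, dt=1e-7, t_end=5e-3):
    """Integrate L*di/dt + R*i + q/C = 0 from q = C*V0, i = 0
    (semi-implicit Euler); return the peak fault-current magnitude."""
    q, i, peak = C * V0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        di = (-(R * i) - q / C) / L      # Kirchhoff voltage law around the loop
        i += di * dt
        q += i * dt
        peak = max(peak, abs(i))
    return peak

# Underdamped module: the peak must fall below the lossless limit V0*sqrt(C/L).
peak = peak_fault_current(C=300e-6, L=50e-6, R=0.05, V0=24e3)
lossless = 24e3 * math.sqrt(300e-6 / 50e-6)
print(peak < lossless)    # True: series resistance limits the fault current
```

This is the sense in which the abstract's "appropriate distribution of circuit inductance and resistance" matters: both parameters directly set the peak and rise time of the fault current that components must survive.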

  19. Toward a Model-Based Approach for Flight System Fault Protection

    Science.gov (United States)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).

  20. Evidence for chaotic fault interactions in the seismicity of the San Andreas fault and Nankai trough

    Science.gov (United States)

    Huang, Jie; Turcotte, D. L.

    1990-01-01

The dynamical behavior introduced by fault interactions is examined here using a simple spring-loaded, slider-block model with velocity-weakening friction. The model consists of two slider blocks coupled to each other and to a constant-velocity driver by elastic springs. For an asymmetric system in which the frictional forces on the two blocks are not equal, the solutions exhibit chaotic behavior. The system's behavior over a range of parameter values seems to be generally analogous to that of weakly coupled segments of an active fault. Similarities are noted between the model simulations and observed patterns of seismicity on the south central San Andreas fault in California and in the Nankai trough along the coast of southwestern Japan.
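The spring-loaded slider-block system described above can be sketched numerically. The code below implements two driven blocks with a velocity-weakening kinetic friction law and a simple stick-slip rule; the particular friction form, the asymmetry factor `beta`, and all parameter values are illustrative choices, not the authors' exact model:

```python
import math

def simulate(beta=1.5, kp=1.0, kc=0.5, m=1.0, f0=1.0, vc=0.1,
             vd=0.05, dt=1e-3, t_end=300.0):
    """Two driven slider blocks with velocity-weakening friction.
    beta scales block 2's friction (beta != 1 -> asymmetric system).
    Returns per-block slip-event counts and peak spring stretch."""
    x = [0.0, 0.0]; v = [0.0, 0.0]; stuck = [True, True]
    events = [0, 0]; peak_stretch = 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        coup = kc * (x[1] - x[0])                      # inter-block spring
        force = [kp * (vd * t - x[0]) + coup,          # driver + coupling
                 kp * (vd * t - x[1]) - coup]
        fmax = [f0, beta * f0]
        for i in (0, 1):
            if stuck[i] and abs(force[i]) > fmax[i]:   # static threshold exceeded
                stuck[i] = False
                events[i] += 1
            if not stuck[i]:
                fr = fmax[i] / (1.0 + abs(v[i]) / vc)  # velocity-weakening friction
                sgn = v[i] if v[i] != 0 else force[i]  # friction opposes motion
                a = (force[i] - math.copysign(fr, sgn)) / m
                vnew = v[i] + a * dt
                if v[i] != 0 and vnew * v[i] < 0:      # velocity reversal -> re-stick
                    v[i] = 0.0; stuck[i] = True
                else:
                    v[i] = vnew
                x[i] += v[i] * dt
        peak_stretch = max(peak_stretch, abs(vd * t - x[0]), abs(vd * t - x[1]))
    return events, peak_stretch

ev, stretch = simulate()
print(ev, round(stretch, 2))   # repeated stick-slip events on both blocks
```

Each unsticking is one "earthquake" on that block; varying `beta` away from 1 is the asymmetry that the abstract identifies as the source of chaotic solutions.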

  1. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data to ...

  2. Mechanical evolution of transpression zones affected by fault interactions: Insights from 3D elasto-plastic finite element models

    Science.gov (United States)

    Nabavi, Seyed Tohid; Alavi, Seyed Ahmad; Mohammadi, Soheil; Ghassemi, Mohammad Reza

    2018-01-01

The mechanical evolution of transpression zones affected by fault interactions is investigated by a 3D elasto-plastic mechanical model solved with the finite-element method. Ductile transpression between non-rigid walls implies an upward and lateral extrusion. The model results demonstrate that a transpression zone evolves in a 3D strain field along non-coaxial strain paths. Distributed plastic strain, slip transfer, and maximum plastic strain occur within the transpression zone. Outside the transpression zone, fault slip is reduced because deformation is accommodated by distributed plastic shear. With progressive deformation, the σ3 axis (the minimum compressive stress) rotates within the transpression zone to form an oblique angle to the regional transport direction (∼9°-10°). The magnitude of displacement increases faster within the transpression zone than outside it. Rotation of the displacement vectors of oblique convergence with time suggests that the transpression zone evolves toward an overall non-plane strain deformation. The slip decreases along fault segments and with increasing depth. This can be attributed to the accommodation of bulk shortening over adjacent fault segments. The model results show an almost symmetrical domal uplift due to off-fault deformation, generating a doubly plunging fold and a 'positive flower' structure. Outside the overlap zone, expanding asymmetric basins subside into 'negative flower' structures on both sides of the transpression zone and are called 'transpressional basins'. Deflection at fault segments causes the fault dip to fall below 90° (∼86-89°) near the surface (∼1.5 km). This results in a pure-shear-dominated, triclinic, and discontinuous heterogeneous flow of the transpression zone.

  3. Experimental testing and modelling of a resistive type superconducting fault current limiter using MgB2 wire

    International Nuclear Information System (INIS)

    Smith, A C; Pei, X; Oliver, A; Husband, M; Rindfleisch, M

    2012-01-01

A prototype resistive superconducting fault current limiter (SFCL) was developed using single-strand round magnesium diboride (MgB2) wire. The MgB2 wire was wound with an interleaved arrangement to minimize coil inductance and provide adequate inter-turn voltage withstand capability. The temperature profile from 30 to 40 K and the frequency profile from 10 to 100 Hz at 25 K were tested and reported. The quench properties of the prototype coil were tested using a high current test circuit. The fault current was limited by the prototype coil within the first quarter-cycle. The prototype coil demonstrated reliable and repeatable current limiting properties and was able to withstand a potential peak current of 372 A for one second without any degradation of performance. A three-strand SFCL coil was investigated and demonstrated scaled-up current capacity. An analytical model to predict the behaviour of the prototype single-strand SFCL coil was developed using an adiabatic boundary condition on the outer surface of the wire. The predicted fault current using the analytical model showed very good correlation with the experimental test results. The analytical model and a finite element thermal model were used to predict the temperature rise of the wire during a fault.
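The adiabatic boundary condition in the analytical model means all Joule heat stays in the wire. If one further assumes constant material properties (a simplification; the paper's models track temperature-dependent behavior), the hot-spot temperature rise reduces to dT = J²·ρ·t / (d·c_p), with current density J, normal-state resistivity ρ, mass density d and specific heat c_p, and is independent of wire length. The 372 A value is the withstand current quoted above; every material number below is an illustrative placeholder, not measured MgB2 data:

```python
def adiabatic_temp_rise(current, rho_n, area, density, c_p, t_fault):
    """dT = J^2 * rho_n * t / (density * c_p), with J = current / area.
    Wire length cancels: resistance and mass both scale linearly with it."""
    j = current / area
    return j * j * rho_n * t_fault / (density * c_p)

dT = adiabatic_temp_rise(current=372.0, rho_n=1e-6, area=2e-5,
                         density=2600.0, c_p=500.0, t_fault=1.0)
print(round(dT))   # about 266 K for these illustrative values
```

The quadratic dependence on current density is why quench-time limiting within the first quarter-cycle, as reported for the prototype, matters so much for wire survival.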

  4. Off-fault seismicity suggests creep below 10 km on the northern San Jacinto Fault

    Science.gov (United States)

    Cooke, M. L.; Beyer, J. L.

    2017-12-01

Within the San Bernardino basin, CA, south of the juncture of the San Jacinto (SJF) and San Andreas faults (SAF), focal mechanisms show normal slip events that are inconsistent with the interseismic strike-slip loading of the region. High-quality (nodal plane uncertainty faults [Anderson et al., 2004]. However, the loading of these normal slip events remains enigmatic because the region is expected to have dextral loading between large earthquake events. These enigmatic normal slip events may be loaded by deep (> 10 km depth) spatially non-uniform creep along the northern SJF. Steady state models show that over many earthquake cycles, the dextral slip rate on the northern SJF increases southward, placing the San Bernardino basin in extension. In the absence of recent large seismic events that could produce off-fault normal focal mechanisms in the San Bernardino basin, non-uniform deep aseismic slip on the SJF could account for this seismicity. We develop interseismic models that incorporate spatially non-uniform creep below 10 km on the SJF based on steady-state slip distribution. These model results match the pattern of deep normal slip events within the San Bernardino basin. Such deep creep on the SJF may not be detectable from the geodetic signal due to the close proximity of the SAF, whose lack of seismicity suggests that it is locked to 20 km. Interseismic models with 15 km locking depth on both faults are indistinguishable from models with 10 km locking depth on the SJF and 20 km locking depth on the SAF. This analysis suggests that the microseismicity in our multi-decadal catalog may record both the interseismic dextral loading of the region as well as off-fault deformation associated with deep aseismic creep on the northern SJF. If the enigmatic normal slip events of the San Bernardino basin are included in stress inversions from the seismic catalog used to assess seismic hazard, the results may provide inaccurate information about fault loading in this region.

  5. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...

  6. Bond Graph Modelling for Fault Detection and Isolation of an Ultrasonic Linear Motor

    Directory of Open Access Journals (Sweden)

    Mabrouk KHEMLICHE

    2010-12-01

In this paper, Bond Graph modeling, simulation and monitoring of ultrasonic linear motors are presented. Only the vibration of the piezoelectric ceramics and stator is taken into account; contact problems between the stator and rotor are not treated here. Standing and travelling waves are briefly presented first, since the majority of these motors use one of these wave types to generate the stator vibration and thus obtain the elliptic trajectory of the points on the surface of the stator. Then, an electric equivalent circuit is presented to give a general idea of another way of graphically modelling the vibrator. Simulations of an ultrasonic linear motor are then performed, and experimental results on a prototype built at the laboratory are presented. Finally, the Bond Graph modelling method is validated by comparing simulation and experimental results. This paper describes the application of the FDI approach to an electrical system. We demonstrate the effectiveness of FDI with real data collected from our automotive test. We analyse the problem of fault localization in this process, propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and show the possibilities of applying the new approaches to complex system control.

  7. Multi-Fault Rupture Scenarios in the Brawley Seismic Zone

    Science.gov (United States)

    Kyriakopoulos, C.; Oglesby, D. D.; Rockwell, T. K.; Meltzner, A. J.; Barall, M.

    2017-12-01

Dynamic rupture complexity is strongly affected by both the geometric configuration of a network of faults and pre-stress conditions. Between those two, the geometric configuration is more likely to be anticipated prior to an event. An important factor in the unpredictability of the final rupture pattern of a group of faults is the time-dependent interaction between them. Dynamic rupture models provide a means to investigate this otherwise inscrutable processes. The Brawley Seismic Zone in Southern California is an area in which this approach might be important for inferring potential earthquake sizes and rupture patterns. Dynamic modeling can illuminate how the main faults in this area, the Southern San Andreas (SSAF) and Imperial faults, might interact with the intersecting cross faults, and how the cross faults may modulate rupture on the main faults. We perform 3D finite element modeling of potential earthquakes in this zone assuming an extended array of faults (Figure). Our results include a wide range of ruptures and fault behaviors depending on assumptions about nucleation location, geometric setup, pre-stress conditions, and locking depth. For example, in the majority of our models the cross faults do not strongly participate in the rupture process, giving the impression that they are not typically an aid or an obstacle to the rupture propagation. However, in some cases, particularly when rupture proceeds slowly on the main faults, the cross faults indeed can participate with significant slip, and can even cause rupture termination on one of the main faults. Furthermore, in a complex network of faults we should not preclude the possibility of a large event nucleating on a smaller fault (e.g. a cross fault) and eventually promoting rupture on the main structure. Recent examples include the 2010 Mw 7.1 Darfield (New Zealand) and Mw 7.2 El Mayor-Cucapah (Mexico) earthquakes, where rupture started on a smaller adjacent segment and later cascaded into a larger

  8. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    Science.gov (United States)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

To assess the potential of catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N and 19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behaviors. The strong along-strike curvature at the transitional segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments, respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of the seismogenic zone (W). At a constant plate convergence rate, a larger W indicates a lower average interseismic stress loading rate and a longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state that was inferred from the interplate seismicity distribution, suggesting the Manila trench could potentially rupture in a M8+ earthquake.
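The rate-and-state framework invoked above has a compact steady-state form in which the sign of a − b separates velocity-weakening (stick-slip, seismogenic) from velocity-strengthening (stable-sliding) behavior. A minimal sketch, with illustrative parameter values rather than those used in the simulations:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) ln(v / v0).

    With a - b < 0 (velocity weakening), friction drops as slip rate
    rises, the precondition for stick-slip earthquake cycles.  All
    parameter values here are illustrative assumptions.
    """
    return mu0 + (a - b) * math.log(v / v0)

# Friction at the reference velocity equals mu0; at 1000x the reference
# velocity it is lower, i.e. the fault weakens as it accelerates.
print(steady_state_friction(1e-6), steady_state_friction(1e-3))
```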

  9. Sensor Fault Diagnosis Observer for an Electric Vehicle Modeled as a Takagi-Sugeno System

    Directory of Open Access Journals (Sweden)

    S. Gómez-Peñate

    2018-01-01

Full Text Available A sensor fault diagnosis of an electric vehicle (EV) modeled as a Takagi-Sugeno (TS) system is proposed. The proposed TS model considers the nonlinearity of the longitudinal velocity of the vehicle and parametric variation induced by the slope of the road; these considerations make it possible to obtain a mathematical model that represents the vehicle for a wide range of speeds and different terrain conditions. First, a virtual sensor represented by a TS state observer is developed. Sufficient conditions are given by a set of linear matrix inequalities (LMIs) that guarantee asymptotic convergence of the TS observer. Second, the work is extended to perform fault detection and isolation based on a generalized observer scheme (GOS). Numerical simulations are presented to show the performance and applicability of the proposed method.
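The virtual-sensor idea can be sketched with an ordinary Luenberger observer on a small linear system, a stand-in for the paper's TS observer and GOS bank; the matrices, the observer gain, and the sensor bias fault injected at step 50 are all invented for illustration:

```python
# Plant: x+ = A x (no input); sensor: y = x[0] + bias.
# Observer ("virtual sensor"): xh+ = A xh + L r, with residual r = y - xh[0].
# While the sensor is healthy the residual decays toward zero; a sensor
# fault (additive bias, injected at k = 50) makes it jump.
A = ((0.9, 0.1), (0.0, 0.8))     # illustrative stable dynamics
L = (0.5, 0.1)                   # observer gain (A - L*C is stable)
x, xh = [1.0, 0.5], [0.0, 0.0]   # true state and observer state
residuals = []
for k in range(100):
    bias = 0.5 if k >= 50 else 0.0            # sensor fault
    y = x[0] + bias                           # C = [1 0]
    r = y - xh[0]                             # residual generation
    residuals.append(abs(r))
    xh = [A[0][0] * xh[0] + A[0][1] * xh[1] + L[0] * r,
          A[1][0] * xh[0] + A[1][1] * xh[1] + L[1] * r]
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
print(max(residuals[30:50]), residuals[50])
```

The residual is near zero just before the fault and jumps to roughly the bias magnitude at step 50, which is the signature a detection threshold picks up.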

  10. Landslide susceptibility mapping for a part of North Anatolian Fault Zone (Northeast Turkey) using logistic regression model

    Science.gov (United States)

    Demir, Gökhan; aytekin, mustafa; banu ikizler, sabriye; angın, zekai

    2013-04-01

The North Anatolian Fault is known as one of the most active and destructive fault zones, having produced many high-magnitude earthquakes. Along this fault zone, the morphology and the lithological features are prone to landsliding. Indeed, several studies have recorded earthquake-induced landslides along this fault zone, and these landslides caused both injuries and loss of life. Therefore, a detailed landslide susceptibility assessment for this area is indispensable. In this context, this study presents a landslide susceptibility assessment for a 1445 km2 area in the Kelkit River valley, part of the North Anatolian Fault Zone (Eastern Black Sea region of Turkey), and its results are summarized here. For this purpose, a geographical information system (GIS) and a statistical model were used. Initially, landslide inventory maps were prepared using landslide data determined by field surveys and landslide data taken from the General Directorate of Mineral Research and Exploration. The landslide conditioning factors considered are lithology, slope gradient, slope aspect, topographical elevation, distance to streams, distance to roads, distance to faults, drainage density and fault density. The ArcGIS package was used to manipulate and analyze all the collected data. The logistic regression method was applied to create a landslide susceptibility map. The landslide susceptibility map was divided into five susceptibility classes: very low, low, moderate, high and very high. The result of the analysis was verified using the inventoried landslide locations and compared with the produced probability model. For this purpose, the Area Under the Curve (AUC) approach was applied and an AUC value was obtained. Based on this AUC value, the obtained landslide susceptibility map was judged satisfactory. Keywords: North Anatolian Fault Zone, Landslide susceptibility map, Geographical Information Systems, Logistic Regression Analysis.
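The scoring-and-validation pipeline described above can be sketched end to end: a logistic model maps conditioning factors to a susceptibility probability, and the AUC measures how well that probability separates landslide cells from stable cells. The weights and factor ranges below are invented for illustration, not fitted to the study's inventory:

```python
import math, random

def susceptibility(slope_deg, dist_to_fault_m, w=(-4.0, 0.08, -0.002)):
    """Logistic susceptibility model: p = sigmoid(w0 + w1*slope + w2*dist).
    The weights are illustrative, not the fitted coefficients of the study."""
    z = w[0] + w[1] * slope_deg + w[2] * dist_to_fault_m
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores_pos, scores_neg):
    """Probability that a landslide cell outscores a stable cell (ties = 0.5),
    which equals the area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

random.seed(0)
# Synthetic "landslide" cells: steeper slopes, closer to the fault.
pos = [susceptibility(random.uniform(15, 45), random.uniform(0, 500))
       for _ in range(200)]
# Synthetic "stable" cells: gentler slopes, farther away.
neg = [susceptibility(random.uniform(0, 30), random.uniform(500, 3000))
       for _ in range(200)]
print(round(auc(pos, neg), 3))
```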

  11. FAULT TOLERANCE IN MOBILE GRID COMPUTING

    OpenAIRE

    Aghila Rajagopal; M.A. Maluk Mohamed

    2014-01-01

    This paper proposes a novel model for Surrogate Object based paradigm in mobile grid environment for achieving a Fault Tolerance. Basically Mobile Grid Computing Model focuses on Service Composition and Resource Sharing Process. In order to increase the performance of the system, Fault Recovery plays a vital role. In our Proposed System for Recovery point, Surrogate Object Based Checkpoint Recovery Model is introduced. This Checkpoint Recovery model depends on the Surrogate Object and the Fau...

  12. Computer aided construction of fault tree

    International Nuclear Information System (INIS)

    Kovacs, Z.

    1982-01-01

Computer code CAT for the automatic construction of the fault tree is briefly described. Code CAT makes possible simple modelling of components using decision tables, it accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with programs PREP and KITT 1,2 for fault tree analysis. The efficiency of program CAT and thus the accuracy and completeness of fault trees constructed significantly depends on the compilation and sophistication of decision tables. Currently, program CAT is used in co-operation with programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)
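At its core, evaluating a constructed fault tree means combining basic-event probabilities through AND/OR gates, as the downstream analysis programs do. A minimal sketch with an invented two-gate tree, assuming independent basic events:

```python
# Top event: pump fails OR (valve A fails AND valve B fails).
# The gate structure and probabilities are invented for illustration,
# not a CAT-generated tree.
def p_or(*ps):
    """P(at least one of several independent events occurs)."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """P(all of several independent events occur)."""
    q = 1.0
    for p in ps:
        q *= p
    return q

p_top = p_or(0.01, p_and(0.05, 0.05))   # pump OR (valve A AND valve B)
print(round(p_top, 6))
```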

  13. Seismic variability of subduction thrust faults: Insights from laboratory models

    Science.gov (United States)

    Corbi, F.; Funiciello, F.; Faccenna, C.; Ranalli, G.; Heuret, A.

    2011-06-01

    Laboratory models are realized to investigate the role of interface roughness, driving rate, and pressure on friction dynamics. The setup consists of a gelatin block driven at constant velocity over sand paper. The interface roughness is quantified in terms of amplitude and wavelength of protrusions, jointly expressed by a reference roughness parameter obtained by their product. Frictional behavior shows a systematic dependence on system parameters. Both stick slip and stable sliding occur, depending on driving rate and interface roughness. Stress drop and frequency of slip episodes vary directly and inversely, respectively, with the reference roughness parameter, reflecting the fundamental role for the amplitude of protrusions. An increase in pressure tends to favor stick slip. Static friction is a steeply decreasing function of the reference roughness parameter. The velocity strengthening/weakening parameter in the state- and rate-dependent dynamic friction law becomes negative for specific values of the reference roughness parameter which are intermediate with respect to the explored range. Despite the simplifications of the adopted setup, which does not address the problem of off-fault fracturing, a comparison of the experimental results with the depth distribution of seismic energy release along subduction thrust faults leads to the hypothesis that their behavior is primarily controlled by the depth- and time-dependent distribution of protrusions. A rough subduction fault at shallow depths, unable to produce significant seismicity because of low lithostatic pressure, evolves into a moderately rough, velocity-weakening fault at intermediate depths. The magnitude of events in this range is calibrated by the interplay between surface roughness and subduction rate. At larger depths, the roughness further decreases and stable sliding becomes gradually more predominant. 
Thus, although interplate seismicity is ultimately controlled by tectonic parameters (velocity of

  14. LQCD workflow execution framework: Models, provenance and fault-tolerance

    International Nuclear Information System (INIS)

    Piccoli, Luciano; Simone, James N; Kowalkowlski, James B; Dubey, Abhishek

    2010-01-01

Large computing clusters used for scientific processing suffer from systemic failures when operated over long continuous periods for executing workflows. Diagnosing job problems and faults leading to eventual failures in this complex environment is difficult, specifically when the success of an entire workflow might be affected by a single job failure. In this paper, we introduce a model-based, hierarchical, reliable execution framework that encompasses workflow specification, data provenance, execution tracking and online monitoring of each workflow task (also referred to as a participant). The sequence of participants is described in an abstract parameterized view, which is translated into a concrete data dependency based sequence of participants with defined arguments. As participants belonging to a workflow are mapped onto machines and executed, periodic and on-demand monitoring of vital health parameters on allocated nodes is enabled according to pre-specified rules. These rules specify conditions that must be true pre-execution, during execution and post-execution. Monitoring information for each participant is propagated upwards through the reflex and healing architecture, which consists of a hierarchical network of decentralized fault management entities, called reflex engines. They are instantiated as state machines or timed automata that change state and initiate reflexive mitigation action(s) upon occurrence of certain faults. We describe how this cluster reliability framework is combined with the workflow execution framework using formal rules and actions specified within a structure of first order predicate logic that enables a dynamic management design that reduces the manual administrative workload and increases cluster productivity.

  15. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work, a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, and the fault distribution over locations is chosen randomly from a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of an application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant. We considered four software operational profiles. From the results it was found that the software masking effect on hardware faults should be properly considered to predict the system dependability accurately in the operational phase, because the masking effect was found to take different values according to the operational profile.
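The injection experiment described above can be sketched in miniature: flip one uniformly chosen register bit and check whether the application's output changes; if it does not, the software has masked the hardware fault. The 16-bit register width and the toy application are illustrative assumptions:

```python
import random

def inject_bit_flip(value, bit, width=16):
    """Single bit-flip fault in a register of the given width."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def application(register_value):
    # Toy "software": only the high byte influences the decision, so
    # faults in the low byte are always masked.
    return (register_value >> 8) > 10

random.seed(42)
masked, trials = 0, 1000
for _ in range(trials):
    reg = random.randrange(1 << 16)
    bit = random.randrange(16)          # uniform fault location
    if application(inject_bit_flip(reg, bit)) == application(reg):
        masked += 1
print(masked / trials)
```

Because only some bits propagate to the toy application's decision, a large fraction of injected flips is masked, mirroring the finding that the masking effect depends on the software operational profile.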

  16. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  17. A dynamic integrated fault diagnosis method for power transformers.

    Science.gov (United States)

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most likely failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
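The multistep evidence idea can be illustrated with a tiny posterior update over failure modes: each new piece of evidence (a symptom observed present or absent) is folded in with Bayes' rule. The modes, symptoms, and probabilities below are invented for illustration, not transformer data from the paper:

```python
# Prior belief over failure modes (invented numbers).
priors = {"winding_fault": 0.2, "overheating": 0.5, "partial_discharge": 0.3}
# P(symptom present | failure mode), also invented.
likelihood = {
    "high_C2H2": {"winding_fault": 0.7, "overheating": 0.1, "partial_discharge": 0.6},
    "high_temp": {"winding_fault": 0.3, "overheating": 0.9, "partial_discharge": 0.2},
}

def update(posterior, symptom, present=True):
    """One diagnostic step: fold a symptom observation in with Bayes' rule."""
    post = {}
    for mode, p in posterior.items():
        l = likelihood[symptom][mode]
        post[mode] = p * (l if present else 1.0 - l)
    total = sum(post.values())
    return {m: v / total for m, v in post.items()}

post = update(priors, "high_C2H2", present=True)   # first diagnostic test
post = update(post, "high_temp", present=False)    # second diagnostic test
print(max(post, key=post.get))
```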

  18. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    Science.gov (United States)

    Gao, Wensheng; Liu, Tong

    2015-01-01

In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most likely failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  19. Process plant alarm diagnosis using synthesised fault tree knowledge

    International Nuclear Information System (INIS)

    Trenchard, A.J.

    1990-01-01

    The development of computer based tools, to assist process plant operators in their task of fault/alarm diagnosis, has received much attention over the last twenty five years. More recently, with the emergence of Artificial Intelligence (AI) technology, the research activity in this subject area has heightened. As a result, there are a great variety of fault diagnosis methodologies, using many different approaches to represent the fault propagation behaviour of process plant. These range in complexity from steady state quantitative models to more abstract definitions of the relationships between process alarms. Unfortunately, very few of the techniques have been tried and tested on process plant and even fewer have been judged to be commercial successes. One of the outstanding problems still remains the time and effort required to understand and model the fault propagation behaviour of each considered process. This thesis describes the development of an experimental knowledge based system (KBS) to diagnose process plant faults, as indicated by process variable alarms. In an attempt to minimise the modelling effort, the KBS has been designed to infer diagnoses using a fault tree representation of the process behaviour, generated using an existing fault tree synthesis package (FAULTFINDER). The process is described to FAULTFINDER as a configuration of unit models, derived from a standard model library or by tailoring existing models. The resultant alarm diagnosis methodology appears to work well for hard (non-rectifying) faults, but is likely to be less robust when attempting to diagnose intermittent faults and transient behaviour. The synthesised fault trees were found to contain the bulk of the information required for the diagnostic task, however, this needed to be augmented with extra information in certain circumstances. (author)

  20. Coulomb Stress Accumulation along the San Andreas Fault System

    Science.gov (United States)

    Smith, Bridget; Sandwell, David

    2003-01-01

    Stress accumulation rates along the primary segments of the San Andreas Fault system are computed using a three-dimensional (3-D) elastic half-space model with realistic fault geometry. The model is developed in the Fourier domain by solving for the response of an elastic half-space due to a point vector body force and analytically integrating the force from a locking depth to infinite depth. This approach is then applied to the San Andreas Fault system using published slip rates along 18 major fault strands of the fault zone. GPS-derived horizontal velocity measurements spanning the entire 1700 x 200 km region are then used to solve for apparent locking depth along each primary fault segment. This simple model fits remarkably well (2.43 mm/yr RMS misfit), although some discrepancies occur in the Eastern California Shear Zone. The model also predicts vertical uplift and subsidence rates that are in agreement with independent geologic and geodetic estimates. In addition, shear and normal stresses along the major fault strands are used to compute Coulomb stress accumulation rate. As a result, we find earthquake recurrence intervals along the San Andreas Fault system to be inversely proportional to Coulomb stress accumulation rate, in agreement with typical coseismic stress drops of 1 - 10 MPa. This 3-D deformation model can ultimately be extended to include both time-dependent forcing and viscoelastic response.
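The quantities at the end of the abstract are related by two short formulas: the Coulomb stress rate dCFS/dt = dτ/dt + μ′ dσn/dt (with normal stress positive in extension, so unclamping promotes failure), and a recurrence interval proportional to the coseismic stress drop divided by that rate. A sketch with illustrative numbers:

```python
def coulomb_stress_rate(shear_rate, normal_rate, mu_eff=0.6):
    """Coulomb stress accumulation rate dCFS/dt = dtau/dt + mu' * dsigma_n/dt.
    mu_eff is an assumed effective friction coefficient; normal stress is
    taken positive in extension (unclamping)."""
    return shear_rate + mu_eff * normal_rate

def recurrence_interval(stress_drop_mpa, cfs_rate_mpa_per_yr):
    """Recurrence is inversely proportional to the accumulation rate."""
    return stress_drop_mpa / cfs_rate_mpa_per_yr

# Illustrative rates in MPa/yr: shear loading plus mild clamping.
rate = coulomb_stress_rate(0.025, -0.005)
# A 5 MPa stress drop (in the typical 1-10 MPa range quoted above).
print(round(rate, 4), round(recurrence_interval(5.0, rate), 1))
```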

  1. EKF-based fault detection for guided missiles flight control system

    Science.gov (United States)

    Feng, Gang; Yang, Zhiyong; Liu, Yongjin

    2017-03-01

The flight control system of a guided missile is essential for guidance accuracy and kill probability, yet it is complicated and fragile. Since actuator faults and sensor faults can seriously affect the security and reliability of the system, fault detection for the missile flight control system is of great significance. This paper deals with the problem of fault detection for the closed-loop nonlinear model of a guided missile flight control system in the presence of disturbance. First, the fault model of the flight control system is set up, and residual generation based on the extended Kalman filter (EKF) is designed for the Eulerian-discrete fault model. After that, the chi-square test is selected for residual evaluation, accomplishing the fault detection task for the guided missile closed-loop system. Finally, simulation results are provided to illustrate the effectiveness of the proposed approach for the case of an elevator fault.
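The residual-generation and residual-evaluation steps can be sketched with a scalar linear Kalman filter standing in for the EKF: the normalized innovation squared is compared against a chi-square threshold (6.63 is the 99th percentile for one degree of freedom). The plant, noise levels, and the bias fault injected at step 120 are illustrative assumptions, not the missile model of the paper:

```python
import random

random.seed(1)
x_hat, P = 0.0, 1.0          # state estimate and its variance
q, r_var = 0.01, 0.04        # process and measurement noise variances
alarms = []
for k in range(200):
    fault = 3.0 if k >= 120 else 0.0          # additive sensor bias fault
    z = fault + random.gauss(0.0, 0.2)        # measurement of a zero state
    P = P + q                                 # prediction step
    S = P + r_var                             # innovation covariance
    innov = z - x_hat                         # residual generation
    alarms.append(innov * innov / S > 6.63)   # chi-square residual test
    K = P / S                                 # Kalman update
    x_hat += K * innov
    P *= (1.0 - K)
print(sum(alarms[:120]), sum(alarms[120:125]))
```

Before the fault the alarm fires at roughly the design false-alarm rate; the first post-fault innovation exceeds the threshold immediately.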

  2. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential for the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of "not capable" reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coast Fault Province and the related association of the "not capable" conclusion.

  3. Estimation of the statistical distribution of faulting in selected areas and the design of an exploration model to detect these faults. Final research report

    International Nuclear Information System (INIS)

    Brooke, J.P.

    1977-11-01

Selected sites in the United States have been analyzed geomathematically as a part of the technical support program to develop site suitability criteria for High Level Nuclear Waste (HLW) repositories. Using published geological maps and other information, statistical evaluations of the fault patterns and other significant geological features have been completed for 16 selected localities. The observed frequency patterns were compared to theoretical patterns in order to obtain a predictive model for faults at each location. In general, the patterns approximate an exponential distribution function, with the exception of Edinburgh, Scotland, the control area. The fault pattern of rocks at Edinburgh closely approximates a negative binomial frequency distribution. The range of fault occurrences encountered during the investigation varied from a low of 0.15 to a high of 10 faults per square mile. Faulting is only one factor in the overall geological evaluation of HLW sites. A general exploration program plan to aid in investigating HLW repository sites has been completed using standard mineral exploration techniques. For the preliminary examination of the suitability of potential sites, present economic conditions indicate the scanning and reconnaissance exploration stages will cost approximately $1,000,000. These would proceed in a logical sequence so that the site selected optimizes the geological factors. The reconnaissance stage of mineral exploration normally utilizes "saturation geophysics" to obtain complete geological information. This approach is recommended in the preliminary HLW site investigation process as the most economical and rewarding. Exploration games have been designed for potential sites in the eastern and the western U.S. The game matrix approach is recommended as a suitable technique for the allocation of resources in a search problem during this preliminary phase.
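One way to use the quoted fault densities is as intensities of a Poisson count model, under which the chance that a candidate block of area A contains no mapped fault is exp(−λA). This is a screening sketch under that assumption, spanning the reported range of 0.15 to 10 faults per square mile:

```python
import math

def prob_no_faults(lam_per_sq_mile, area_sq_miles):
    """Poisson-model probability that a block of the given area is fault-free.
    The Poisson reading of the fitted exponential pattern is an assumption
    made here for illustration."""
    return math.exp(-lam_per_sq_mile * area_sq_miles)

# A 2-square-mile candidate block at the low and high observed densities.
for lam in (0.15, 10.0):
    print(lam, round(prob_no_faults(lam, 2.0), 4))
```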

  4. Fault-Tree Modeling of Safety-Critical Network Communication in a Digitalized Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hun; Kang, Hyun Gook [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve technical self-reliance for nuclear I and C systems in Korea, the Advanced Power Reactor 1400 (APR-1400) man-machine interface system (MMIS) architecture was developed by the Korea Atomic Energy Research Institute (KAERI). As one of the systems in the developed MMIS architecture, the Engineered Safety Feature-Component Control System (ESF-CCS) employs a network communication system for the transmission of safety-critical information from group controllers (GCs) to loop controllers (LCs) to effectively accommodate the vast number of field controllers. The developed fault-tree model was then applied to several case studies. As an example of the development of a fault-tree model for ESF-CCS signal failure, the fault-tree model of ESF-CCS signal failure for CS pump PP01A in the CSAS condition was designed by considering the identified hazardous states of network failure that would result in a failure to provide input signals to the corresponding LC. The quantitative results for four case studies demonstrated that the probability of overall network communication failure, which was calculated as the sum of the failure probability associated with each failure cause, contributes up to 1.88% of the probability of ESF-CCS signal failure for the CS pump considered in the case studies.

  5. The Hanford Site's Gable Mountain structure: A comparison of the recurrence of design earthquakes based on fault slip rates and a probabilistic exposure model

    International Nuclear Information System (INIS)

    Rohay, A.C.

    1991-01-01

Gable Mountain is a segment of the Umtanum Ridge-Gable Mountain structural trend, an east-west trending series of anticlines, one of the major geologic structures on the Hanford Site. A probabilistic seismic exposure model indicates that Gable Mountain and two adjacent segments contribute significantly to the seismic hazard at the Hanford Site. Geologic measurements of the uplift of initially horizontal (11-12 Ma) basalt flows indicate that a broad, continuous, primary anticline grew at an average rate of 0.009-0.011 mm/a, and narrow, segmented, secondary anticlines grew at rates of 0.009 mm/a at Gable Butte and 0.018 mm/a at Gable Mountain. The buried Southeast Anticline appears to have a different geometry, consisting of a single, intermediate-width anticline with an estimated growth rate of 0.007 mm/a. The recurrence rate and maximum magnitude of earthquakes for the fault models were used to estimate the fault slip rate for each of the fault models and to determine the implied structural growth rate of the segments. The current model for Gable Mountain-Gable Butte predicts 0.004 mm/a of vertical uplift due to primary faulting and 0.008 mm/a due to secondary faulting. These rates are roughly half the structurally estimated rates for Gable Mountain, but the model does not account for the smaller secondary fold at Gable Butte. The model predicted an uplift rate for the Southeast Anticline of 0.006 mm/a, caused by the low "fault capability" weighting rather than a different fault geometry. The effects of previous modifications to the fault models are examined and potential future modifications are suggested. For example, the earthquake recurrence relationship used in the current exposure model has a b-value of 1.15, compared to a previous value of 0.85. This increases the implied deformation rates due to secondary fault models, and therefore supports the use of this regionally determined b-value to this fault/fold system
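The comparison between model-implied and structurally estimated rates is, at bottom, a rate balance: the average uplift rate equals the uplift per characteristic event divided by the recurrence interval. A one-line sketch with an invented per-event uplift:

```python
def implied_uplift_rate(uplift_per_event_mm, recurrence_yr):
    """Long-term uplift rate in mm/a implied by characteristic events.
    The per-event uplift below is an illustrative assumption, not a value
    from the exposure model."""
    return uplift_per_event_mm / recurrence_yr

# e.g. 0.5 m of uplift per event every 50,000 years gives 0.01 mm/a,
# the order of the 0.009-0.018 mm/a structural rates quoted above.
print(implied_uplift_rate(500.0, 50_000))
```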

  6. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
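The current-limiting argument can be made concrete with a one-line string model: the worst-case fault current is capped near the short-circuit current and scales with irradiance, so a series fuse rated above the string's Isc never sees enough current to clear. The module numbers and fuse rating below are illustrative assumptions, not values from the thesis:

```python
def string_fault_current(irradiance_frac, isc_stc=9.0):
    """Worst-case (short-circuit) string current in amps, taken as
    proportional to irradiance; isc_stc is an assumed module Isc at
    standard test conditions."""
    return isc_stc * irradiance_frac

FUSE_RATING = 15.0   # A, an assumed series-fuse rating above Isc

for g in (1.0, 0.3):                       # full sun vs. low irradiance
    i_fault = string_fault_current(g)
    print(f"irradiance {g:.0%}: fault current {i_fault:.1f} A, "
          f"fuse clears: {i_fault > FUSE_RATING}")
```

Even at full irradiance the fault current stays below the fuse rating, and at low irradiance the margin widens, which is why such faults can remain hidden.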

  7. Observer-Based and Regression Model-Based Detection of Emerging Faults in Coal Mills

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Lin, Bao; Jørgensen, Sten Bay

    2006-01-01

In order to improve the reliability of power plants it is important to detect faults as fast as possible, which makes it interesting to find the most efficient detection method. Since modeling of large-scale systems is time consuming, it is interesting to compare a model-based method with data-driven ones....

  8. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary-based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where
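A standard building block in this resilient-algorithms setting (used here as an illustration, not necessarily the paper's exact construction) is to store each "reliably kept" value in 2k + 1 copies and read it back by majority vote, tolerating up to k adversarial corruptions:

```python
def reliable_write(value, k=2):
    """Replicate a value 2k+1 times so up to k corruptions survive."""
    return [value] * (2 * k + 1)

def reliable_read(copies):
    """Majority vote over the replicas; correct as long as at most k of
    the 2k+1 copies were corrupted."""
    return max(set(copies), key=copies.count)

copies = reliable_write(42, k=2)
copies[0] = 7      # the adversary corrupts
copies[3] = 99     # two of the five copies
print(reliable_read(copies))
```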

  9. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    Science.gov (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. 
In some portions of the simulated earthquake history, events would
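The partial-release idea behind LTFM can be illustrated with a toy simulation. The hazard law, rates and parameter names below are invented for illustration only, not the authors' calibration:

```python
import random

def simulate_ltfm(years=20000, load=1.0, threshold=100.0,
                  release_fraction=0.7, seed=1):
    """Toy LTFM sketch: strain accumulates at a constant rate, event
    probability grows with stored strain, and an event releases only a
    fraction of the strain, so the probability does not reset to zero."""
    rng = random.Random(seed)
    strain, events = 0.0, []
    for year in range(years):
        strain += load
        # illustrative hazard law: steep rise as strain approaches threshold
        p = 0.05 * min(1.0, (strain / threshold) ** 4)
        if rng.random() < p:
            events.append(year)
            strain *= (1.0 - release_fraction)  # partial strain release
    return events

events = simulate_ltfm()
gaps = [b - a for a, b in zip(events, events[1:])]
```

Because release is partial, the simulated record alternates between closely spaced events (while residual strain stays high) and longer quiet gaps, qualitatively like the clustered paleoseismic records described above.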

  10. Fault Severity Evaluation and Improvement Design for Mechanical Systems Using the Fault Injection Technique and Gini Concordance Measure

    Directory of Open Access Journals (Sweden)

    Jianing Wu

    2014-01-01

    Full Text Available A new fault injection and Gini concordance based method has been developed for fault severity analysis of multibody mechanical systems with respect to their dynamic properties. Fault tree analysis (FTA) is employed to identify the faults that need to be considered. Given the constitution of the mechanical system, the dynamic properties can be obtained by solving equations into which many types of faults are injected using the fault injection technique. The Gini concordance is then used to measure the correspondence between performance under faults and under normal operation, providing useful hints for severity ranking of subsystems in reliability design. One numerical example and a series of experiments illustrate the application of the new method. The results indicate that the proposed method can accurately model the faults and yields correct fault-severity information. Some strategies are also proposed for reliability improvement of the spacecraft solar array.

  11. Fault slip and earthquake recurrence along strike-slip faults — Contributions of high-resolution geomorphic data

    KAUST Repository

    Zielke, Olaf

    2015-01-01

    Understanding earthquake (EQ) recurrence relies on information about the timing and size of past EQ ruptures along a given fault. Knowledge of a fault's rupture history provides valuable information on its potential future behavior, enabling seismic hazard estimates and loss mitigation. Stratigraphic and geomorphic evidence of faulting is used to constrain the recurrence of surface rupturing EQs. Analysis of the latter data sets culminated during the mid-1980s in the formulation of now-classical EQ recurrence models that are routinely used to assess seismic hazard. Within the last decade, Light Detection and Ranging (lidar) surveying technology and other high-resolution data sets became increasingly available to tectono-geomorphic studies, promising to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline requirements to successfully reconstruct a fault's offset accumulation pattern from geomorphic evidence. We address sources of uncertainty affecting offset measurement and advocate approaches to minimize them. A number of recent studies focus on single-EQ slip distributions and along-fault slip accumulation patterns. We put them in context with paleoseismic studies along the respective faults by comparing coefficients of variation CV for EQ inter-event time and slip-per-event and find that a) single-event offsets vary over a wide range of length-scales and the sources for offset variability differ with length-scale, b) at fault-segment length-scales, single-event offsets are essentially constant, c) along-fault offset accumulation as resolved in the geomorphic record is dominated by essentially same-size, large offset increments, and d) there is generally no one-to-one correlation between the offset accumulation pattern constrained in the geomorphic record and EQ occurrence as identified in the stratigraphic record, revealing the higher resolution and preservation potential of

  12. A rate-state model for aftershocks triggered by dislocation on a rectangular fault: a review and new insights

    Directory of Open Access Journals (Sweden)

    F. Catalli

    2006-06-01

    Full Text Available We compute the static displacement, stress, strain and the Coulomb failure stress produced in an elastic medium by a finite-size rectangular fault after its dislocation, with uniform stress drop but a non-uniform dislocation on the source. The time-dependent rate of triggered earthquakes is estimated by a rate-state model applied to a uniformly distributed population of faults whose equilibrium is perturbed by a stress change caused only by the first dislocation. The rate of triggered events in our simulations is exponentially proportional to the shear stress change, but the time at which the maximum rate begins to decrease varies from fractions of an hour for positive stress changes of the order of some MPa, up to more than a year for smaller stress changes. As a consequence, the final number of triggered events is proportional to the shear stress change. The model predicts that the total number of events triggered on a plane containing the fault is proportional to the 2/3 power of the seismic moment. Indeed, the total number of aftershocks produced on the fault plane scales with magnitude, M, as 10^M. Including the negative contribution of the stress drop inside the source, we observe that the number of events inhibited on the fault is, in the long term, nearly identical to the number of those induced outside, representing a sort of conservative natural rule. Considering its behavior in time, our model does not completely match the popular Omori law; in fact, the seismicity induced close to the fault edges is intense but of short duration, while that expected at large distances (up to some tens of times the fault dimensions) exhibits a much slower decay.
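The rate-state response to a sudden stress step that this class of models builds on has the well-known Dieterich (1994) closed form. The parameter values below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def dieterich_rate(t, dtau, a_sigma=0.5, t_a=1.0, r_background=1.0):
    """Dieterich (1994) seismicity-rate response to a shear stress step
    dtau (MPa): R(t) = r / [(exp(-dtau/(A*sigma)) - 1) * exp(-t/t_a) + 1].
    a_sigma is the product A*sigma (MPa); t_a is the aftershock duration."""
    gamma = (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r_background / gamma

t = np.linspace(0.0, 10.0, 1000)    # time in units of t_a
rate = dieterich_rate(t, dtau=2.0)  # +2 MPa step: rate jumps, then decays
```

A positive stress change produces an immediate rate increase of exp(dtau/a_sigma) over background, which relaxes back to the background rate over a few aftershock durations, matching the duration behavior described in the abstract.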

  13. Identification of active fault using analysis of derivatives with vertical second based on gravity anomaly data (Case study: Seulimeum fault in Sumatera fault system)

    Science.gov (United States)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

    Gravity surveying is a non-destructive geophysical technique with numerous applications in engineering and environmental fields, such as locating fault zones. The purpose of this study is to map the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using a gravity technique, to correlate the result with the geologic map, and to establish the trend pattern of the fault system. An estimate of the subsurface geological structure of the Seulimeum fault was obtained from gravity field anomaly data. The gravity anomaly data used in this study are from Topex and were processed up to the Free Air Correction. Bouguer and terrain corrections were then applied to obtain the complete Bouguer anomaly, which depends on topography. Subsurface modeling was done using the Gav2DC for Windows software. The results showed a low residual gravity value in the northern half compared to the southern part of the study area, indicating the pattern of a fault zone. The gravity residual correlates well with the geologic map, confirming the existence of the Seulimeum fault in the study area. Earthquake records can be used to differentiate active from inactive fault elements; this gives an indication that the delineated fault elements are active.
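The reduction chain described above (free-air correction, Bouguer slab, terrain correction) can be sketched as follows. The constants are the standard textbook values; the station values in the usage note are hypothetical:

```python
def complete_bouguer_anomaly(g_obs, g_normal, h, rho=2.67, terrain_corr=0.0):
    """Complete Bouguer anomaly in mGal. Uses the standard free-air
    gradient (0.3086 mGal/m) and the infinite Bouguer slab term
    2*pi*G*rho*h ~= 0.04193 * rho * h mGal, with rho in g/cm^3 and the
    station elevation h in m. The terrain correction is added back
    because the infinite slab over-corrects in rough topography."""
    free_air = 0.3086 * h           # free-air correction
    slab = 0.04193 * rho * h        # Bouguer slab correction
    return g_obs - g_normal + free_air - slab + terrain_corr
```

For a hypothetical station 100 m above the datum with a 20 mGal raw excess, `complete_bouguer_anomaly(978100.0, 978080.0, 100.0)` returns about 39.7 mGal.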

  14. Fault Tree Analysis with Temporal Gates and Model Checking Technique for Qualitative System Safety Analysis

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2010-01-01

    Fault tree analysis (FTA) has been one of the most widely used safety analysis techniques in the nuclear industry, but it suffers from several drawbacks: it uses only static gates and hence cannot capture dynamic behaviors of complex systems precisely; it lacks rigorous semantics; and the reasoning process of checking whether basic events really cause top events is done manually, making it very labor-intensive and time-consuming for complex systems. Although several attempts have been made to overcome these problems, they still cannot do absolute (actual) time modeling because they adopt a relative time concept and can capture only sequential behaviors of the system. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and use model checking to automate the reasoning process of FTA

  15. Fault tolerant control schemes using integral sliding modes

    CERN Document Server

    Hamayun, Mirza Tariq; Alwi, Halim

    2016-01-01

    The key attribute of a Fault Tolerant Control (FTC) system is its ability to maintain overall system stability and acceptable performance in the face of faults and failures within the feedback system. In this book Integral Sliding Mode (ISM) Control Allocation (CA) schemes for FTC are described, which have the potential to maintain close to nominal fault-free performance (for the entire system response) in the face of actuator faults and even complete failures of certain actuators. Broadly, the first approach designs an ISM controller around a model of the plant, with the aim of creating a nonlinear fault tolerant feedback controller whose closed-loop performance is established during the design process. The second approach involves retro-fitting an ISM scheme to an existing feedback controller to introduce fault tolerance. This may be advantageous from an industrial perspective, because fault tolerance can be introduced without changing the existing control loops. A high fidelity benchmark model of a large transport aircraft is u...

  16. Stability of faults with heterogeneous friction properties and effective normal stress

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul

    2018-05-01

    Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. 
Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework
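The linear stability reasoning for rate-and-state patches can be sketched with the classic spring-slider criterion. The numbers in the usage note are illustrative assumptions, not values from the study:

```python
def patch_stability(a, b, sigma_eff, d_c, k):
    """Spring-slider stability test for a rate-and-state fault patch.
    Velocity-strengthening patches (a - b > 0) creep stably; a
    velocity-weakening patch is unstable when the loading stiffness k
    (Pa/m) falls below the critical stiffness
    k_c = (b - a) * sigma_eff / d_c, with effective normal stress
    sigma_eff in Pa and characteristic slip distance d_c in m. A lower
    sigma_eff (e.g. a sealed, overpressured asperity) lowers k_c and
    hence stabilizes the patch."""
    if a - b > 0:
        return "stable (velocity-strengthening)"
    k_c = (b - a) * sigma_eff / d_c
    return "unstable" if k < k_c else "conditionally stable"
```

For example, a velocity-weakening patch (a = 0.010, b = 0.015) at 50 MPa effective normal stress with d_c = 1 cm has k_c = 2.5e7 Pa/m, so it is unstable under soft loading (k = 1e7) but conditionally stable under stiff loading (k = 1e8).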

  17. Synthetic seismicity for the San Andreas fault

    Directory of Open Access Journals (Sweden)

    S. N. Ward

    1994-06-01

    Full Text Available Because historical catalogs generally span only a few repetition intervals of major earthquakes, they do not provide much constraint on how regularly earthquakes recur. In order to obtain better recurrence statistics and long-term probability estimates for events M ≥ 6 on the San Andreas fault, we apply a seismicity model to this fault. The model is based on the concept of fault segmentation and the physics of static dislocations which allow for stress transfer between segments. Constraints are provided by geological and seismological observations of segment lengths, characteristic magnitudes and long-term slip rates. Segment parameters slightly modified from the Working Group on California Earthquake Probabilities allow us to reproduce observed seismicity over four orders of magnitude. The model yields quite irregular earthquake recurrence patterns. Only the largest events (M ≥ 7.5) are quasi-periodic; small events cluster. Both the average recurrence time and the aperiodicity are also a function of position along the fault. The model results are consistent with paleoseismic data for the San Andreas fault as well as a global set of historical and paleoseismic recurrence data. Thus irregular earthquake recurrence resulting from segment interaction is consistent with a large range of observations.

  18. Thermodynamic modeling of the stacking fault energy of austenitic steels

    International Nuclear Information System (INIS)

    Curtze, S.; Kuokkala, V.-T.; Oikari, A.; Talonen, J.; Haenninen, H.

    2011-01-01

    The stacking fault energies (SFE) of 10 austenitic steels were determined in the temperature range 50 ≤ T ≤ 600 K by thermodynamic modeling of the Fe-Cr-Ni-Mn-Al-Si-Cu-C-N system using a modified Olson and Cohen modeling approach (Olson GB, Cohen M. Metall Trans 1976;7A:1897). The applied model accounts for each element's contribution to the Gibbs energy, the first-order excess free energies, magnetic contributions and the effect of interstitial nitrogen. Experimental SFE values from X-ray diffraction measurements were used for comparison. The effect of SFE on deformation mechanisms was also studied by electron backscatter diffraction.
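A minimal sketch of the Olson-Cohen form of the SFE model follows; the input values (lattice parameter, transformation Gibbs energy, interfacial energy) are illustrative assumptions, not fitted alloy data:

```python
import math

def stacking_fault_energy(delta_g_gamma_eps, sigma_interface, a_fcc=3.59e-10):
    """Olson-Cohen type estimate of the stacking fault energy (J/m^2):
    SFE = 2 * rho * dG(gamma->eps) + 2 * sigma, where rho is the molar
    surface density of a {111} plane, rho = 4 / (sqrt(3) * a^2 * N_A),
    dG is the molar Gibbs energy of the fcc->hcp transformation (J/mol)
    and sigma is the fcc/hcp interfacial energy (J/m^2)."""
    n_avogadro = 6.022e23
    rho = 4.0 / (math.sqrt(3.0) * a_fcc ** 2 * n_avogadro)  # mol/m^2
    return 2.0 * rho * delta_g_gamma_eps + 2.0 * sigma_interface

sfe = stacking_fault_energy(delta_g_gamma_eps=120.0, sigma_interface=0.008)
sfe_mj_per_m2 = sfe * 1e3   # convert J/m^2 to the customary mJ/m^2
```

With these illustrative inputs the estimate lands in the low-SFE range (roughly 23 mJ/m^2) typical of TWIP/TRIP austenitic steels.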

  19. Design of a fault diagnosis system for next generation nuclear power plants

    International Nuclear Information System (INIS)

    Zhao, K.; Upadhyaya, B.R.; Wood, R.T.

    2004-01-01

    A new design approach for fault diagnosis is developed for next generation nuclear power plants. In the nuclear reactor design phase, data reconciliation is used as an efficient tool to determine the measurement requirements needed to achieve the specified fault diagnosis goals. In the reactor operation phase, plant measurements are collected to estimate uncertain model parameters so that a high fidelity model can be obtained for fault diagnosis. The proposed fault detection and isolation algorithm combines the strengths of first-principles model based and historical data based fault diagnosis. Principal component analysis of the reconciled data is used to develop a statistical model for fault detection. The principal component model, updated with the most recent reconciled data, is a locally linearized model around the current plant measurements and is therefore applicable to generic nonlinear systems. Sensor fault diagnosis and process fault diagnosis are decoupled by treating process fault diagnosis as a parameter estimation problem. The developed approach has been applied to the IRIS helical coil steam generator system to monitor the operational performance of individual steam generators, and is general enough for designing fault diagnosis systems for next generation nuclear power plants. (authors)
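PCA-based fault detection of the kind described above is usually monitored through the T² and SPE (Q) statistics. A minimal sketch on synthetic data standing in for reconciled plant measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal (reconciled) operating data: 3 correlated variables, 500 samples.
latent = rng.normal(size=(500, 1))
X = np.hstack([latent, 2 * latent, -latent]) + 0.05 * rng.normal(size=(500, 3))

mean, std = X.mean(axis=0), X.std(axis=0)
Xs = (X - mean) / std

# Principal component model retaining k components (here k = 1).
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 1
P = Vt[:k].T                          # loadings
lam = (S[:k] ** 2) / (len(Xs) - 1)    # retained eigenvalues

def detect(x):
    """Return (T2, SPE) statistics for one new measurement vector."""
    xs = (x - mean) / std
    scores = xs @ P
    t2 = float(np.sum(scores ** 2 / lam))   # variation inside the model
    resid = xs - scores @ P.T
    spe = float(resid @ resid)              # variation outside the model
    return t2, spe

t2_ok, spe_ok = detect(np.array([1.0, 2.0, -1.0]))   # respects correlations
t2_bad, spe_bad = detect(np.array([1.0, 2.0, 3.0]))  # correlation broken
```

A sample that violates the learned correlation structure shows up as a large SPE even when each individual variable is within its normal range, which is the mechanism behind this type of statistical fault detection.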

  20. Micromechanics and statistics of slipping events in a granular seismic fault model

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, L de [Department of Information Engineering and CNISM, Second University of Naples, Aversa (Italy); Ciamarra, M Pica [CNR-SPIN, Dipartimento di Scienze Fisiche, Universita di Napoli Federico II (Italy); Lippiello, E; Godano, C, E-mail: dearcangelis@na.infn.it [Department of Environmental Sciences and CNISM, Second University of Naples, Caserta (Italy)

    2011-09-15

    The stick-slip is investigated in a seismic fault model made of a confined granular system under shear stress via three-dimensional Molecular Dynamics simulations. We study the statistics of slipping events and, in particular, the dependence of the distribution on model parameters. The distribution consistently exhibits two regimes: an initial power law and a bump at large slips. The initial power law decay is in agreement with the Gutenberg-Richter law characterizing real seismic occurrence. The exponent of the initial regime is quite independent of model parameters and its value is in agreement with experimental results. Conversely, the position of the bump is solely controlled by the ratio of the drive elastic constant and the system size. Large slips also become less probable in the absence of fault gouge and tend to disappear for stiff drives. A two-time force-force correlation function, and a susceptibility related to the system response to pressure changes, characterize the micromechanics of slipping events. The correlation function unveils the micromechanical changes occurring both during microslips and slips. The mechanical susceptibility encodes the magnitude of the incoming microslip. Numerical results for the cellular-automaton version of the spring block model confirm the parameter dependence observed for size distribution in the granular model.
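Agreement with the Gutenberg-Richter law is typically quantified through the b-value. A minimal sketch using inverse-transform sampling of a Gutenberg-Richter catalog and the standard Aki (1965) maximum-likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample magnitudes from a Gutenberg-Richter law, N(>=M) ~ 10^(-b*(M - m_min)),
# via inverse-transform sampling above a completeness magnitude m_min.
b_true, m_min, n = 1.0, 2.0, 50000
mags = m_min - np.log10(rng.random(n)) / b_true

# Aki (1965) maximum-likelihood estimate of the b-value.
b_hat = np.log10(np.e) / (mags.mean() - m_min)
```

With a large synthetic catalog the estimate recovers the input b-value closely; the same estimator can be applied to the simulated slip-size catalogs discussed above.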

  1. Distributed bearing fault diagnosis based on vibration analysis

    Science.gov (United States)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
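Envelope-spectrum analysis of the kind used here can be sketched as follows, with a synthetic amplitude-modulated signal standing in for a measured bearing vibration (all frequencies are assumed values):

```python
import numpy as np

fs, n = 20000, 20000                    # 1 s of vibration sampled at 20 kHz
time = np.arange(n) / fs
f_fault, f_res = 87.0, 3000.0           # assumed defect and resonance frequencies

# Defect impacts amplitude-modulate a high-frequency structural resonance.
x = (1.0 + 0.8 * np.cos(2 * np.pi * f_fault * time)) * np.cos(2 * np.pi * f_res * time)
x += 0.1 * np.random.default_rng(0).normal(size=n)

# Envelope via the analytic signal (FFT-based Hilbert transform).
X = np.fft.fft(x)
h = np.zeros(n)
h[0] = h[n // 2] = 1.0
h[1:n // 2] = 2.0
envelope = np.abs(np.fft.ifft(X * h))

# The envelope spectrum concentrates energy at the defect frequency.
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
peak_freq = freqs[np.argmax(spec)]
```

The raw spectrum peaks at the carrier (resonance) frequency, but demodulating first makes the fault-characteristic frequency the dominant envelope-spectrum peak; comparing the shape and spread of such peaks is what allows localized and distributed faults to be distinguished.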

  2. HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    Science.gov (United States)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    2D fault model, where we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault and thus its capability of generating a large (system-wide) event assuming likely heterogeneous distributions of hypocenters and stresses, respectively.

  3. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    Science.gov (United States)

    Malekpour, Mahyar, R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F+1 nodes provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Messages algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
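The underlying Oral Messages recursion OM(m) can be sketched as below. This is the classic Lamport-Shostak-Pease baseline the paper builds on, not the paper's three-round variant, and the traitor behavior shown is one arbitrary choice among many:

```python
from collections import Counter

def corrupt(v, i):
    """A traitor may send arbitrary, conflicting values (one such choice)."""
    return v if i % 2 == 0 else 1 - v

def om(commander, value, generals, m, traitors):
    """Classic Oral Messages algorithm OM(m). Returns the value each
    lieutenant decides; loyal lieutenants agree when the number of
    generals exceeds 3m with at most m traitors."""
    lieutenants = [g for g in generals if g != commander]
    received = {lt: corrupt(value, i) if commander in traitors else value
                for i, lt in enumerate(lieutenants)}
    if m == 0:
        return received
    # Each lieutenant relays its received value to the others via OM(m-1).
    relayed = {lt: om(lt, received[lt], lieutenants, m - 1, traitors)
               for lt in lieutenants}
    decision = {}
    for lt in lieutenants:
        votes = [received[lt]] + [relayed[src][lt]
                                  for src in lieutenants if src != lt]
        decision[lt] = Counter(votes).most_common(1)[0][0]
    return decision

# K = 4 nodes, F = 1 traitor: loyal lieutenants reach agreement either way.
loyal_cmd = om(0, 1, [0, 1, 2, 3], 1, traitors={3})
traitor_cmd = om(0, 1, [0, 1, 2, 3], 1, traitors={0})
```

With a loyal commander the loyal lieutenants adopt the commander's value; with a traitorous commander they still agree with each other, which are exactly the two interactive-consistency conditions.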

  4. A systematic fault tree analysis based on multi-level flow modeling

    International Nuclear Information System (INIS)

    Gofuku, Akio; Ohara, Ai

    2010-01-01

    The fault tree analysis (FTA) is widely applied for the safety evaluation of large-scale, mission-critical systems. Because the effectiveness of FTA strongly depends on the skill of the analyst, however, problems arise in (1) education and training, (2) unreliable quality, (3) the need for expert knowledge, and (4) updating FTA results after reconstruction of a target system. To eliminate these problems, many techniques that systematize FTA activities using computer technologies have been proposed. However, these techniques use only structural information of a target system and do not use functional information, one of the important properties of an artifact. The principle of FTA is to comprehensively trace cause-effect relations from a top undesirable effect to anomalous causes. This tracing is similar to the causality estimation technique that the authors previously proposed to find plausible counter-actions that prevent or mitigate undesirable plant behavior, based on models built with a functional modeling technique, Multilevel Flow Modeling (MFM). The authors have extended this systematic technique to construct fault trees (FTs). This paper presents an algorithm for the systematic construction of FTs based on MFM models and demonstrates the applicability of the extended technique through the FT construction of a nitric acid cooling plant. (author)

  5. Fault locator of an allyl chloride plant

    Directory of Open Access Journals (Sweden)

    Savković-Stevanović Jelenka B.

    2004-01-01

    Full Text Available Process safety analysis, which includes qualitative fault event identification, relative frequency and event probability functions, as well as consequence analysis, was performed on an allyl chloride plant. An event tree for fault diagnosis and cognitive reliability analysis, as well as a troubleshooting system, were developed. Fuzzy inductive reasoning illustrated advantages compared to crisp inductive reasoning. A qualitative model forecast the future behavior of the system in the case of accident detection and then compared it with the actual measured data. A cognitive model of the incident scenario, combining qualitative and quantitative information by fuzzy logic, was derived as a fault locator for the allyl chloride plant. The obtained results showed the successful application of cognitive dispersion modeling to process safety analysis. A fuzzy inductive reasoner showed good performance in discriminating between different types of malfunctions. This fault locator allowed risk analysis and the construction of a fault-tolerant system. This study is the first report in the literature showing the cognitive reliability analysis method.

  6. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    Science.gov (United States)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

    The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that has some common characteristics to a NASA testbed at Stennis Space Center was used to verify our proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI has been derived for the tank system. Third, a new and general FDI procedure has been designed to distinguish process faults and sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.

  7. The constant failure rate model for fault tree evaluation as a tool for unit protection reliability assessment

    International Nuclear Information System (INIS)

    Vichev, S.; Bogdanov, D.

    2000-01-01

    The purpose of this paper is to introduce the fault tree analysis method as a tool for unit protection reliability estimation. The constant failure rate model is applied for reliability assessment, and especially availability assessment. For that purpose, an example of a unit primary equipment structure and a fault tree for a simplified unit protection system are presented (author)
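Constant-failure-rate unavailabilities feeding AND/OR fault tree gates can be sketched as follows. The protection-scheme structure and the failure/repair rates are hypothetical, chosen only to illustrate the evaluation:

```python
def steady_state_unavailability(lam, mu):
    """Steady-state unavailability of a repairable component with
    constant failure rate lam and constant repair rate mu (per hour):
    q = lam / (lam + mu)."""
    return lam / (lam + mu)

def and_gate(qs):
    """Gate output fails only if all independent inputs fail."""
    prod = 1.0
    for q in qs:
        prod *= q
    return prod

def or_gate(qs):
    """Gate output fails if any independent input fails."""
    prod = 1.0
    for q in qs:
        prod *= (1.0 - q)
    return 1.0 - prod

# Hypothetical protection scheme: two redundant relays (AND) combined
# with a single trip circuit (OR) into the top event "protection fails".
q_relay = steady_state_unavailability(1e-4, 1e-1)
q_trip = steady_state_unavailability(1e-5, 1e-1)
q_top = or_gate([and_gate([q_relay, q_relay]), q_trip])
```

The redundant relay pair contributes only the product of the single-relay unavailabilities, so the non-redundant trip circuit dominates the top-event unavailability, which is the kind of insight such an evaluation is meant to expose.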

  8. Active fault tolerance control of a wind turbine system using an unknown input observer with an actuator fault

    Directory of Open Access Journals (Sweden)

    Li Shanzhi

    2018-03-01

    Full Text Available This paper proposes a fault tolerant control scheme based on an unknown input observer for a wind turbine system subject to an actuator fault and disturbance. Firstly, an unknown input observer for state estimation and fault detection using a linear parameter varying model is developed. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the gains of the unknown input observer are obtained. The convergence of the unknown input observer is also analysed with Lyapunov theory. Secondly, using fault estimation, an active fault tolerant controller is applied to a wind turbine system. Finally, the proposed method is tested in a simulation of a wind turbine benchmark with an actuator fault. The simulation results indicate that the proposed FTC scheme is efficient.

  9. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  10. Solving fault diagnosis problems linear synthesis techniques

    CERN Document Server

    Varga, Andreas

    2017-01-01

    This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

  11. Fault diagnostics of dynamic system operation using a fault tree based method

    International Nuclear Information System (INIS)

    Hurdle, E.E.; Bartlett, L.M.; Andrews, J.D.

    2009-01-01

    For conventional systems, availability can be considerably improved by reducing the time taken to restore the system to the working state when faults occur. Fault identification can be a significant proportion of the time taken in the repair process. Having diagnosed the problem, the restoration of the system back to its fully functioning condition can then take place. This paper expands the capability of previous approaches to fault detection and identification using fault trees for application to dynamically changing systems. The technique has two phases. The first phase is modelling and preparation carried out offline. This gathers information on the effects that sub-system failure will have on the system performance. Causes of the sub-system failures are developed in the form of fault trees. The second phase is application. Sensors are installed on the system to provide information about current system performance from which the potential causes can be deduced. A simple system example is used to demonstrate the features of the method. To illustrate the potential for the method to deal with additional system complexity and redundancy, a section from an aircraft fuel system is used. A discussion of the results is provided.

  12. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that the formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  13. Investigating the ancient landscape and Cenozoic drainage development of southern Yukon (Canada), through restoration modeling of the Cordilleran-scale Tintina Fault.

    Science.gov (United States)

    Hayward, N.; Jackson, L. E.; Ryan, J. J.

    2017-12-01

    This study of southern Yukon (Canada) challenges the notion that the landscape in the long-lived, tectonically active, northern Canadian Cordillera is implicitly young. The impact of Cenozoic displacement along the continental-scale Tintina Fault on the development of the Yukon River and drainage basins of central Yukon is investigated through geophysical and hydrological modeling of digital terrain model data. Regional geological evidence suggests that the age of the planation of the Yukon plateaus is at least Late Cretaceous, rather than Neogene as previously concluded, and that there has been little penetrative deformation or net incision in the region since the late Mesozoic. The Tintina Fault has been interpreted as having experienced 430 km of dextral displacement, primarily during the Eocene. However, the alignment of river channels across the fault at specific displacements, coupled with recent seismic events and related fault activity, indicate that the fault may have moved in stages over a longer time span. Topographic restoration and hydrological models show that the drainage of the Yukon River northwestward into Alaska via the ancestral Kwikhpak River was only possible at restored displacements of up to 50-55 km on the Tintina Fault. We interpret the published drainage reversals convincingly attributed to the effects of Pliocene glaciation as an overprint on earlier Yukon River reversals or diversions attributed to tectonic displacements along the Tintina Fault. At restored fault displacements of between 230 and 430 km, our models illustrate that paleo Yukon River drainage conceivably may have flowed eastward into the Atlantic Ocean via an ancestral Liard River, which was a tributary of the paleo Bell River system. 
    The revised drainage evolution, if correct, requires wide-reaching reconsideration of surficial geology deposits, the flow direction and channel geometries of the region's ancient rivers, and, importantly, exploration strategies for placer gold

  14. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults...

  15. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    Science.gov (United States)

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these harsh environments, where staff generally lack professional knowledge and pay little attention to the equipment. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
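The rule-evaluation step described above (fault-tree-derived rules mapping fuzzy symptom severities to fault likelihoods) can be sketched with Mamdani-style min/max inference. This is a minimal illustration only: the rule set, symptom names, and membership values below are invented for the example, and the paper's method additionally trains the mapping with a fuzzy neural network.

```python
def infer_faults(symptoms, rules):
    """symptoms: {name: membership in [0, 1]}.
    rules: {fault: list of clauses}, where each clause is an AND of
    symptom names (combined with min) and clauses are OR-ed (max)."""
    likelihood = {}
    for fault, clauses in rules.items():
        likelihood[fault] = max(
            min(symptoms.get(s, 0.0) for s in clause) for clause in clauses
        )
    return likelihood

# Hypothetical rules extracted from a fault tree for a pond sensor node.
rules = {
    "sensor_failure": [["no_reading"], ["erratic_reading", "power_ok"]],
    "cable_break":    [["no_reading", "power_ok"]],
}
symptoms = {"no_reading": 0.9, "erratic_reading": 0.2, "power_ok": 0.8}
result = infer_faults(symptoms, rules)
# sensor_failure: max(0.9, min(0.2, 0.8)) = 0.9; cable_break: min(0.9, 0.8) = 0.8
```

Because one symptom can appear in several rules, a single observation naturally raises the likelihood of multiple faults, which is the one-symptom-to-two-faults case the abstract flags as hardest.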

  16. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    Directory of Open Access Journals (Sweden)

    Yingyi Chen

    2017-01-01

    Full Text Available In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these harsh environments, where staff generally lack professional knowledge and pay little attention to the equipment. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  17. Asperity-Type Potential Foreshock Sources Driven by Nucleation-Induced Creep within a Rate-and-State Fault Model

    Science.gov (United States)

    Higgins, N.; Lapusta, N.

    2016-12-01

    What physical mechanism drives the occurrence of foreshocks? Many studies have suggested that slow slip from the mainshock nucleation is a necessary ingredient for explaining foreshock observations. We explore this view, investigating asperity-type foreshock sources driven by nucleation-induced creep using rate-and-state fault models, and numerically simulate their behavior over many rupture cycles. Inspired by the unique laboratory experiments of earthquake nucleation and rupture conducted on a meter-scale slab of granite by McLaskey and colleagues, we model potential foreshock sources as "bumps" on the fault interface by assigning a significantly higher normal compression and, in some cases, increased smoothness (lower characteristic slip) over small patches within a seismogenic fault. In order to study the mechanics of isolated patch-induced seismic events preceding the mainshock, we separate these patches sufficiently in space. The simulation results show that our rate-and-state fault model with patches of locally different properties driven by the slow nucleation of the mainshock is indeed able to produce isolated microseismicity before the mainshock. Remarkably, the stress drops of these precursory events are compatible with observations and approximately independent of the patch compression, despite the wide range of elevated patch compression used in different simulations. We find that this unexpected property of stress drops for this type of model is due to two factors. Firstly, failure of stronger patches results in rupture further into the surrounding fault, keeping the average stress drop down. Secondly, patches close to their local nucleation size relieve a significant amount of stress via aseismic pre-slip, which also helps to keep the stress drop down. Our current work is directed towards investigating the seismic signature of such events and the potential differences with other types of microseismicity.

  18. Fault detection and fault tolerant control of a smart base isolation system with magneto-rheological damper

    International Nuclear Information System (INIS)

    Wang, Han; Song, Gangbing

    2011-01-01

    Fault detection and isolation (FDI) in real-time systems can provide early warnings for faulty sensor and actuator signals to prevent events that lead to catastrophic failures. The main objective of this paper is to develop FDI and fault tolerant control techniques for base isolation systems with magneto-rheological (MR) dampers. Thus, this paper presents a fixed-order FDI filter design procedure based on linear matrix inequalities (LMI). The necessary and sufficient conditions for the existence of a solution for detecting and isolating faults using the H∞ formulation are provided in the proposed filter design. Furthermore, an FDI-filter-based fuzzy fault tolerant controller (FFTC) for a base isolation structure model was designed to preserve the pre-specified performance of the system in the presence of various unknown faults. Simulation and experimental results demonstrated that the designed filter can successfully detect and isolate faults from displacement sensors and accelerometers while maintaining excellent performance of the base isolation technology under faulty conditions.

  19. Model-Based Sensor Placement for Component Condition Monitoring and Fault Diagnosis in Fossil Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mobed, Parham [Texas Tech Univ., Lubbock, TX (United States); Pednekar, Pratik [West Virginia Univ., Morgantown, WV (United States); Bhattacharyya, Debangsu [West Virginia Univ., Morgantown, WV (United States); Turton, Richard [West Virginia Univ., Morgantown, WV (United States); Rengaswamy, Raghunathan [Texas Tech Univ., Lubbock, TX (United States)

    2016-01-29

    Design and operation of energy-producing, near “zero-emission” coal plants has become a national imperative. This report on model-based sensor placement describes a transformative two-tier approach to identify the optimum placement, number, and type of sensors for condition monitoring and fault diagnosis in fossil energy system operations. The algorithms are tested on a high fidelity model of the integrated gasification combined cycle (IGCC) plant. For a condition monitoring network, whether equipment should be considered at a unit level or a systems level depends upon the criticality of the process equipment, its likelihood of failure, and the level of resolution desired for any specific failure. Because of the presence of a high fidelity model at the unit level, a sensor network can be designed to monitor the spatial profile of the states and estimate fault severity levels. In an IGCC plant, besides the gasifier, the sour water gas shift (WGS) reactor plays an important role. In view of this, condition monitoring of the sour WGS reactor is considered at the unit level, while a detailed plant-wide model of the gasification island, including the sour WGS reactor and the Selexol process, is considered for fault diagnosis at the system level. Finally, the developed algorithms unify the two levels and identify an optimal sensor network that maximizes the effectiveness of the overall system-level fault diagnosis and component-level condition monitoring. This work could have a major impact on the design and operation of future fossil energy plants, particularly at the grassroots level where the sensor network is yet to be identified. In addition, the same algorithms developed in this report can be further enhanced for use in retrofits, where the objectives could be upgrading (adding more sensors) or relocating existing sensors.
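One small ingredient of sensor-network design of this kind can be sketched as a greedy coverage problem: choose sensors until every modeled fault is detectable by at least one of them. The coverage table below is purely illustrative (the sensor and fault names are invented), and the report's actual two-tier method additionally optimizes spatial resolution and severity estimation, not just detectability.

```python
def greedy_placement(coverage, faults):
    """coverage: {sensor: set of faults it can detect}.
    Greedily pick the sensor covering the most still-uncovered faults."""
    chosen, uncovered = [], set(faults)
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            break                     # remaining faults are undetectable
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, uncovered

# Hypothetical candidate sensors for a gasification island.
coverage = {
    "T_gasifier":  {"refractory_wear", "burner_drift"},
    "P_wgs_inlet": {"catalyst_decay"},
    "T_wgs_bed":   {"catalyst_decay", "heat_exchanger_fouling"},
}
faults = ["refractory_wear", "burner_drift",
          "catalyst_decay", "heat_exchanger_fouling"]
chosen, missed = greedy_placement(coverage, faults)
```

Greedy set cover is a standard approximation here; an exact formulation over detectability plus resolution objectives would be an integer program, which is closer to what the report's unified algorithm solves.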

  20. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and the complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
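The prior-sampling idea can be illustrated with an unmarked Strauss process, whose density penalizes point pairs closer than a range r by a factor gamma < 1 (producing the spatial inhibition seen between parallel faults). The sketch below is a standard birth-death Metropolis-Hastings sampler on the unit square; it is a simplification under stated assumptions, since the paper's marked version also samples fault geometries and conditions on observations.

```python
import math, random

def strauss_sample(beta=50.0, gamma=0.2, r=0.1, iters=20000, seed=1):
    """Birth-death MH sampler targeting the Strauss density
    f(x) ~ beta^n * gamma^s(x), s(x) = number of pairs closer than r."""
    rng = random.Random(seed)
    pts = []
    for _ in range(iters):
        if rng.random() < 0.5:                        # propose a birth
            new = (rng.random(), rng.random())
            close = sum(1 for p in pts if math.dist(p, new) < r)
            ratio = beta * gamma**close / (len(pts) + 1)
            if rng.random() < min(1.0, ratio):
                pts.append(new)
        elif pts:                                     # propose a death
            i = rng.randrange(len(pts))
            close = sum(1 for j, p in enumerate(pts)
                        if j != i and math.dist(p, pts[i]) < r)
            ratio = len(pts) / (beta * gamma**close)
            if rng.random() < min(1.0, ratio):
                pts.pop(i)
    return pts

pts = strauss_sample()
# gamma < 1 inhibits close pairs, mimicking regular spacing of faults
```

In the paper's setting each point would additionally carry marks (orientation, length), and the posterior sampler would accept or reject moves against both this prior and the fault-surface observations.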

  1. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    Science.gov (United States)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
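The convolution-theorem idea can be shown in miniature: surface deformation is the 2-D convolution of a force distribution with a point-source response, which becomes a single multiplication in the Fourier domain. The Gaussian "Green's function" below is a placeholder, not the layered viscoelastic kernel of the actual code, and the grid and source positions are invented for the example.

```python
import numpy as np

n = 128
x = np.fft.fftfreq(n) * n                   # grid coords in FFT (wrap-around) order
X, Y = np.meshgrid(x, x)
green = np.exp(-(X**2 + Y**2) / 50.0)       # placeholder point-source response

forces = np.zeros((n, n))
forces[40, 40] = 1.0                        # two force couples on a model fault
forces[90, 70] = -0.5

# One Fourier-domain product replaces an O(n^4) direct spatial summation.
deform = np.real(np.fft.ifft2(np.fft.fft2(forces) * np.fft.fft2(green)))
# deform[40, 40] is ~1.0: the local source dominates, the distant one decays.
```

For a time-dependent Maxwell rheology, the same transform-domain product would be evaluated per time step (or with an analytic time dependence folded into the kernel), which is what makes the semi-analytic approach fast for arbitrarily many force couples.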

  2. Mesoscale models for stacking faults, deformation twins and martensitic transformations: Linking atomistics to continuum

    Science.gov (United States)

    Kibey, Sandeep A.

    We present a hierarchical approach that spans multiple length scales to describe defect formation---in particular, formation of stacking faults (SFs) and deformation twins---in fcc crystals. We link the energy pathways (calculated here via ab initio density functional theory, DFT) associated with formation of stacking faults and twins to corresponding heterogeneous defect nucleation models (described through mesoscale dislocation mechanics). Through the generalized Peierls-Nabarro model, we first correlate the width of intrinsic SFs in fcc alloy systems to their nucleation pathways called generalized stacking fault energies (GSFE). We then establish a qualitative dependence of twinning tendency in fcc metals and alloys---specifically, in pure Cu and dilute Cu-xAl (x = 5.0 and 8.3 at.%)---on their twin-energy pathways called the generalized planar fault energies (GPFE). We also link the twinning behavior of Cu-Al alloys to their electronic structure by determining the effect of solute Al on the valence charge density redistribution at the SF through ab initio DFT. Further, while several efforts have been undertaken to incorporate twinning for predicting the stress-strain response of fcc materials, a fundamental law for critical twinning stress has not yet emerged. We resolve this long-standing issue by linking quantitatively the twin-energy pathways (GPFE) obtained via ab initio DFT to heterogeneous, dislocation-based twin nucleation models. We establish an analytical expression that quantitatively predicts the critical twinning stress in fcc metals in agreement with experiments without requiring any empiricism at any length scale. Our theory connects twinning stress to twin-energy pathways and predicts a monotonic relation between stress and unstable twin stacking fault energy, revealing the physics of twinning. We further demonstrate that the theory holds for fcc alloys as well. Our theory inherently accounts for the directional nature of twinning which available

  3. A Weighted Deep Representation Learning Model for Imbalanced Fault Diagnosis in Cyber-Physical Systems

    Science.gov (United States)

    Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang

    2018-01-01

    Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expert knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-layer outer LSTMs, with an under-sampling policy and a weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods. PMID:29621131
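The cost-sensitive ingredient of such a model can be isolated in a few lines: a class-weighted cross-entropy that upweights the rare faulty class so its errors dominate the loss. The weights and toy predictions below are illustrative; the full wLRCL-D model stacks CNN and LSTM layers and adds under-sampling on top of this idea.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """probs: (N, C) predicted class probabilities; labels: (N,) int ids.
    Returns the weight-normalized negative log-likelihood."""
    eps = 1e-12
    w = class_weights[labels]                       # per-sample weight
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return np.sum(w * nll) / np.sum(w)

probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.6, 0.4]])
labels = np.array([0, 0, 1])                        # class 1 = rare fault
uniform  = weighted_cross_entropy(probs, labels, np.array([1.0, 1.0]))
weighted = weighted_cross_entropy(probs, labels, np.array([1.0, 10.0]))
# The poorly predicted fault sample (p = 0.4) dominates the weighted loss.
```

Upweighting the minority class shifts the gradient toward fixing fault misclassifications, which is why the weighted loss exceeds the uniform one whenever the rare class is predicted worse than average.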

  4. A Weighted Deep Representation Learning Model for Imbalanced Fault Diagnosis in Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Zhenyu Wu

    2018-04-01

    Full Text Available Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expert knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-layer outer LSTMs, with an under-sampling policy and a weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods.

  5. Fault Detection and Load Distribution for the Wind Farm Challenge

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Larsen, Jesper Abildgaard; Stoustrup, Jakob

    2014-01-01

    In this paper a fault detection system and a fault tolerant controller for a wind farm model are designed and tested. The wind farm model is taken from the wind farm challenge, a publicly available challenge in which a wind farm consisting of nine turbines is proposed. The goal of the challenge...... normal and faulty conditions. Thus a fault detection system and a fault tolerant controller have been designed and combined. The fault tolerant control system has then been tested and compared to the reference system and shows improvement on all measures....

  6. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip

    Science.gov (United States)

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-01-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290

  7. Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip.

    Science.gov (United States)

    Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph

    2014-06-01

    We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.

  8. Model-based fault detection for generator cooling system in wind turbines using SCADA data

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Kinnaert, Michel

    2016-01-01

    In this work, an early fault detection system for the generator cooling of wind turbines is presented and tested. It relies on a hybrid model of the cooling system. The parameters of the generator model are estimated by an extended Kalman filter. The estimated parameters are then processed by an ...

  9. Estimation of Faults in DC Electrical Power System

    Science.gov (United States)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
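The convex-relaxation idea can be sketched as ℓ1-regularized least squares: estimate a sparse fault vector f from linear measurements y ≈ B f + noise, solved here with plain ISTA (iterative soft thresholding) in NumPy. The matrix, sizes, and fault magnitudes are invented for the example; the paper's formulation also includes circuit-topology variables and runs a specialized real-time solver.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 60                         # measurements, candidate fault states
B = rng.standard_normal((m, n)) / np.sqrt(m)
f_true = np.zeros(n)
f_true[[5, 42]] = [1.5, -2.0]         # two active faults
y = B @ f_true + 0.01 * rng.standard_normal(m)

lam = 0.05                            # l1 regularization weight
step = 1.0 / np.linalg.norm(B, 2) ** 2
f = np.zeros(n)
for _ in range(500):                  # ISTA: gradient step + soft threshold
    g = f - step * B.T @ (B @ f - y)
    f = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

support = np.nonzero(np.abs(f) > 0.1)[0]   # indices flagged as faulty
```

The soft-thresholding step is what makes the estimate sparse: most entries are driven exactly to zero, so the surviving support directly names the suspected faults, which is the diagnostic output the abstract describes.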

  10. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    Directory of Open Access Journals (Sweden)

    Kaijuan Yuan

    2016-01-01

    Full Text Available Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence highly conflicts, it may produce counterintuitive results. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis due to the fact that the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods.
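The fusion step itself, Dempster's rule of combination, can be sketched directly; the paper's contribution is the reliability-weighted averaging of evidence applied *before* this combination, which is not shown here. The frame of discernment (two candidate faults) and the mass values are illustrative.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    k = 1.0 - conflict                       # normalization constant
    return {h: v / k for h, v in combined.items()}

F1, F2 = frozenset({"F1"}), frozenset({"F2"})
both = F1 | F2                               # "F1 or F2" (ignorance)
m1 = {F1: 0.6, F2: 0.3, both: 0.1}          # report from sensor 1
m2 = {F1: 0.5, F2: 0.4, both: 0.1}          # report from sensor 2
fused = dempster_combine(m1, m2)
```

The counterintuitive behavior the abstract mentions arises when the conflict term approaches 1: normalization then amplifies whatever little agreement remains, which is exactly what the reliability-weighted pre-averaging is designed to dampen.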

  11. Detection and Identification of Loss of Efficiency Faults of Flight Actuators

    Directory of Open Access Journals (Sweden)

    Ossmann Daniel

    2015-03-01

    Full Text Available We propose linear parameter-varying (LPV) model-based approaches to the synthesis of robust fault detection and diagnosis (FDD) systems for loss of efficiency (LOE) faults of flight actuators. The proposed methods are applicable to several types of parametric (or multiplicative) LOE faults such as actuator disconnection, surface damage, actuator power loss or stall loads. For the detection of these parametric faults, advanced LPV-model detection techniques are proposed, which implicitly provide fault identification information. Fast detection of intermittent stall loads (seen as nuisances rather than faults) is important in enhancing the performance of various fault detection schemes dealing with large input signals. For this case, a dedicated fast identification algorithm is devised. The developed FDD systems are tested on a nonlinear actuator model which is implemented in a full nonlinear aircraft simulation model. This enables the validation of the FDD system’s detection and identification characteristics under realistic conditions.

  12. How do horizontal, frictional discontinuities affect reverse fault-propagation folding?

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-09-01

    The development of new reverse faults and related folds is strongly controlled by the mechanical characteristics of the host rocks. In this study we analyze the impact of a specific kind of anisotropy, i.e. thin mechanical and frictional discontinuities, on the development of reverse faults and of the associated folds using scaled physical models. We perform analog modeling introducing one or two initially horizontal, thin discontinuities above an initially blind fault dipping at 30° in one case, and 45° in another, and then compare the results with those obtained from a fully isotropic model. The experimental results show that the occurrence of thin discontinuities affects both the development and propagation of new faults and the shape of the associated folds. New faults 1) accelerate or decelerate their propagation depending on the location of the tips with respect to the discontinuities, 2) cross the discontinuities at a characteristic angle (∼90°), and 3) produce folds with different shapes, resulting not only from the dip of the new faults but also from their non-linear propagation history. Our results may have a direct impact on future kinematic models, especially those aimed at reconstructing the tectonic history of faults that developed in layered rocks or in regions affected by pre-existing faults.

  13. Performance of grid connected DFIG during recurring symmetrical faults using Internal Model Controller based Enhanced Field Oriented Control

    Directory of Open Access Journals (Sweden)

    D.V.N.Ananth

    2016-06-01

    Full Text Available Modern grid rules require a DFIG to withstand and remain in operation during single as well as multiple low-voltage grid faults; the system must not lose synchronism during any type of fault for a given time period. This withstand capability is called low voltage ride through (LVRT). To improve performance during LVRT, an enhanced field oriented control (EFOC) method is adopted in the rotor side converter. This method improves power transfer capability during steady state and provides better dynamic and transient stability during abnormal conditions. In this technique, the rotor flux reference is changed from synchronous speed to some smaller speed or zero during the fault, so that current is injected at the rotor slip frequency. In this process, the DC-offset component of the flux is prevented from decaying to a lower value during faults and is maintained. This offset decay is oscillatory in conventional FOC, whereas in EFOC with an internal model controller the flux is damped quickly, not only for a single fault but also during multiple faults. This strategy keeps the stator and rotor current waveforms sinusoidal and free of distortion during and after the fault, and gives better-damped torque oscillations and improved control of rotor speed and generator flux during and after the fault. The fluctuations in the DC bus voltage across the capacitor are also controlled using the proposed EFOC technique. The system performance is analyzed in simulation studies with under-voltage grid faults of 30% and 60% of the rated voltage occurring at the point of common coupling between 1 and 1.25 s, and another fault between 1.6 and 1.85 s.

  14. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults...... is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation....

  15. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault, or cause of beam shutdown, while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required.

  16. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  17. Predictive fault-tolerant control of an all-thruster satellite in 6-DOF motion via neural network model updating

    Science.gov (United States)

    Tavakoli, M. M.; Assadian, N.

    2018-03-01

    The problem of controlling an all-thruster spacecraft in the coupled translational-rotational motion in the presence of actuator faults and/or failures is investigated in this paper. The nonlinear model predictive control approach is used because of its ability to predict the future behavior of the system. The fault/failure of the thrusters changes the mapping between the commanded forces to the thrusters and the actual force/torque generated by the thruster system. Thus, the basic six degree-of-freedom kinetic equations are separated from this mapping and a set of neural networks are trained off-line to learn the kinetic equations. Then, two neural networks are attached to these trained networks in order to learn the thruster commands to force/torque mappings on-line. Different off-nominal conditions are modeled so that neural networks can detect any failure and fault, including scale factor and misalignment of thrusters. A simple model of the spacecraft relative motion is used in MPC to decrease the computational burden. However, a precise model by means of orbit propagation including different types of perturbation is utilized to evaluate the usefulness of the proposed approach in actual conditions. The numerical simulation shows that this method can successfully control the all-thruster spacecraft with ON-OFF thrusters in different combinations of thruster fault and/or failure.

  18. Integrating cyber attacks within fault trees

    International Nuclear Information System (INIS)

    Nai Fovino, Igor; Masera, Marcelo; De Cian, Alessio

    2009-01-01

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
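
    As a rough illustration of the combined model, the sketch below (with a hypothetical gate structure and illustrative probabilities, not taken from the paper) evaluates a top event that occurs if either a random component failure or a successful two-step attack occurs, assuming independent events:

    ```python
    # Minimal sketch of combining fault-tree leaves (random failures) with
    # attack-tree leaves (malicious acts). Gate structure and numeric
    # values are illustrative assumptions, not from the paper.

    def or_gate(probs):
        """P(at least one input event occurs), assuming independence."""
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p

    def and_gate(probs):
        """P(all input events occur), assuming independence."""
        p = 1.0
        for q in probs:
            p *= q
        return p

    # Leaves: random hardware failures (from reliability data) and
    # attack steps (estimated success likelihoods).
    p_sensor_fail = 1e-3
    p_pump_fail = 5e-4
    p_gain_access = 1e-2      # attacker breaches the control network
    p_spoof_command = 0.3     # attacker then forges a control command

    p_attack = and_gate([p_gain_access, p_spoof_command])  # attack sub-tree
    p_top = or_gate([p_sensor_fail, p_pump_fail, p_attack])
    print(f"P(top event) = {p_top:.6f}")
    ```

    The attack sub-tree enters the top-level OR gate exactly like an ordinary basic event, which is what lets deliberate acts and random failures be assessed in one model.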

  19. Integrating cyber attacks within fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Nai Fovino, Igor [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy)], E-mail: igor.nai@jrc.it; Masera, Marcelo [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy); De Cian, Alessio [Department of Electrical Engineering, University di Genova, Genoa (Italy)

    2009-09-15

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.

  20. Unknown input observer based detection of sensor faults in a wind turbine

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2010-01-01

    In this paper an unknown input observer is designed to detect three different sensor fault scenarios in a specified benchmark model for fault detection and accommodation of wind turbines. A subset of faults is dealt with: faults in the rotor and generator speed sensors as well...... as a converter sensor fault. The proposed scheme detects the speed sensor faults in question within the specified requirements given in the benchmark model, while the converter fault is detected but not within the required time to detect....
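
    A minimal sketch of the residual-based detection idea behind such observers, using a toy scalar system rather than the paper's wind turbine benchmark (all dynamics, gains and thresholds here are illustrative assumptions):

    ```python
    # Toy residual-based sensor-fault detection: an observer tracks a
    # scalar plant, and a sensor bias fault injected at k = 50 drives
    # the output residual past a fixed threshold. Not the benchmark
    # model from the paper; values are illustrative.

    a, c = 0.95, 1.0      # plant: x[k+1] = a*x[k], sensor: y[k] = c*x[k] (+ fault)
    L = 0.5               # observer gain
    threshold = 0.2       # residual alarm threshold

    x, xhat = 1.0, 0.0    # true state and observer estimate
    alarm_at = None
    for k in range(100):
        fault = 0.5 if k >= 50 else 0.0        # sensor bias fault
        y = c * x + fault
        residual = y - c * xhat                # innovation / residual
        if alarm_at is None and k > 5 and abs(residual) > threshold:
            alarm_at = k                       # k > 5: skip initial transient
        xhat = a * xhat + L * residual         # observer update
        x = a * x                              # plant update
    print(f"fault detected at k = {alarm_at}")
    ```

    Once the estimation error has decayed, the residual stays near zero under fault-free operation, so a fixed threshold separates the faulty from the nominal case.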

  1. Fault Detection and Isolation and Fault Tolerant Control of Wind Turbines Using Set-Valued Observers

    DEFF Research Database (Denmark)

    Casau, Pedro; Rosa, Paulo Andre Nobre; Tabatabaeipour, Seyed Mojtaba

    2012-01-01

    Research on wind turbine Operations & Maintenance (O&M) procedures is critical to the expansion of Wind Energy Conversion systems (WEC). In order to reduce O&M costs and increase the lifespan of the turbine, we study the application of Set-Valued Observers (SVO) to the problem of Fault Detection...... and Isolation (FDI) and Fault Tolerant Control (FTC) of wind turbines, by taking advantage of the recent advances in SVO theory for model invalidation. A simple wind turbine model is presented along with possible fault scenarios. The FDI algorithm is built on top of the described model, taking into account...

  2. Design and Verification of Fault-Tolerant Components

    DEFF Research Database (Denmark)

    Zhang, Miaomiao; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    We present a systematic approach to design and verification of fault-tolerant components with real-time properties as found in embedded systems. A state machine model of the correct component is augmented with internal transitions that represent hypothesized faults. Also, constraints...... to model and check this design. Model checking uses concrete parameters, so we extend the result with parametric analysis using abstractions of the automata in a rigorous verification....... relatively detailed such that they can serve directly as blueprints for engineering, and yet be amenable to exhaustive verification. The approach is illustrated with a design of a triple modular fault-tolerant system that is a real case we received from our collaborators in the aerospace field. We use UPPAAL

  3. Modelling the Small Throw Fault Effect on the Stability of a Mining Roadway and Its Verification by In Situ Investigation

    Directory of Open Access Journals (Sweden)

    Małkowski Piotr

    2017-12-01

    Full Text Available The small throw fault zones cause serious problems for mining engineers. Knowledge of the extent of the fractured zone around the roadway and of the roadway’s contour deformations greatly aids proper support design or its reinforcement. The paper presents the results of numerical analysis of the effect of a small throw fault zone on the convergence of the mining roadway and the extent of the fracturing induced around the roadway. The computations were performed on a dozen physical models featuring various parameters of rock mass and support, in order to select the settings that most suitably reflect the behavior of tectonically disturbed and undisturbed rocks around the roadway. Finally, the results of the calculations were verified by comparing them with in situ convergence measurements carried out in the maingate D-2 in the “Borynia-Zofiówka-Jastrzębie” coal mine. Based on the results of measurements it may be concluded that the rock mass displacements around a roadway section within a fault zone during a year were on average four times greater than in the section tectonically unaffected. The results of numerical calculations show that the extent of the yielding zone in the roof reaches two times the throw of the fault, in the floor three times the throw, and horizontally approx. 1.5 to 1.8 times the width of the modelled fault zone. Only a few elasto-plastic models or models with joints between the rock beds can be recommended for predicting the performance of a roadway which is within a fault zone. It is possible, using these models, to design the roadway support of sufficient load bearing capacity at the tectonically disturbed section.

  4. Fault geometry, rupture dynamics and ground motion from potential earthquakes on the North Anatolian Fault under the Sea of Marmara

    KAUST Repository

    Oglesby, David D.

    2012-03-01

    Using the 3-D finite-element method, we develop dynamic spontaneous rupture models of earthquakes on the North Anatolian Fault system in the Sea of Marmara, Turkey, considering the geometrical complexity of the fault system in this region. We find that the earthquake size, rupture propagation pattern and ground motion all strongly depend on the interplay between the initial (static) regional pre-stress field and the dynamic stress field radiated by the propagating rupture. By testing several nucleation locations, we observe that those far from an oblique normal fault stepover segment (near Istanbul) lead to large through-going rupture on the entire fault system, whereas nucleation locations closer to the stepover segment tend to produce ruptures that die out in the stepover. However, this pattern can change drastically with only a 10° rotation of the regional stress field. Our simulations also reveal that while dynamic unclamping near fault bends can produce a new mode of supershear rupture propagation, this unclamping has a much smaller effect on the speed of the peak in slip velocity along the fault. Finally, we find that the complex fault geometry leads to a very complex and asymmetric pattern of near-fault ground motion, including greatly amplified ground motion on the insides of fault bends. The ground-motion pattern can change significantly with different hypocentres, even beyond the typical effects of directivity. The results of this study may have implications for seismic hazard in this region, for the dynamics and ground motion of geometrically complex faults, and for the interpretation of kinematic inverse rupture models.

  5. Fault geometry, rupture dynamics and ground motion from potential earthquakes on the North Anatolian Fault under the Sea of Marmara

    KAUST Repository

    Oglesby, David D.; Mai, Paul Martin

    2012-01-01

    Using the 3-D finite-element method, we develop dynamic spontaneous rupture models of earthquakes on the North Anatolian Fault system in the Sea of Marmara, Turkey, considering the geometrical complexity of the fault system in this region. We find that the earthquake size, rupture propagation pattern and ground motion all strongly depend on the interplay between the initial (static) regional pre-stress field and the dynamic stress field radiated by the propagating rupture. By testing several nucleation locations, we observe that those far from an oblique normal fault stepover segment (near Istanbul) lead to large through-going rupture on the entire fault system, whereas nucleation locations closer to the stepover segment tend to produce ruptures that die out in the stepover. However, this pattern can change drastically with only a 10° rotation of the regional stress field. Our simulations also reveal that while dynamic unclamping near fault bends can produce a new mode of supershear rupture propagation, this unclamping has a much smaller effect on the speed of the peak in slip velocity along the fault. Finally, we find that the complex fault geometry leads to a very complex and asymmetric pattern of near-fault ground motion, including greatly amplified ground motion on the insides of fault bends. The ground-motion pattern can change significantly with different hypocentres, even beyond the typical effects of directivity. The results of this study may have implications for seismic hazard in this region, for the dynamics and ground motion of geometrically complex faults, and for the interpretation of kinematic inverse rupture models.

  6. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the

  7. A dependability modeling of software under hardware faults digitized system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, Jong Gyun

    1996-02-01

    An analytic approach to the dependability evaluation of software in the operational phase is suggested in this work with special attention to the physical fault effects on the software dependability: the physical faults considered are memory faults and the dependability measure in question is the reliability. The model is based on the simple reliability theory and the graph theory with the path decomposition micro model. The model represents an application software with a graph consisting of nodes and arcs that probabilistically determine the flow from node to node. Through proper transformation of nodes and arcs, the graph can be reduced to a simple two-node graph and the software failure probability is derived from this graph. This model can be extended to a software system which consists of several complete modules without modification. The derived model is validated by computer simulation, where the software is transformed to a probabilistic control flow graph. Simulation also shows a different viewpoint of software failure behavior. Using this model, we predict the reliability of an application software and a software system in a digitized system (ILS system) in the nuclear power plant and show the sensitivity of the software reliability to the major physical parameters which affect the software failure in the normal operation phase. 
This modeling method is particularly attractive for medium size programs such as software used in digitized systems of
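
    The path-decomposition idea can be illustrated with a toy example (hypothetical node reliabilities and branch probabilities, not the paper's model): the program succeeds only if every node executed along the taken control-flow path is fault-free, so overall reliability is the path-probability-weighted product of node reliabilities.

    ```python
    # Toy path-decomposition reliability for a control-flow graph:
    # A branches to B (prob 0.7) or C (prob 0.3), both rejoin at D.
    # Node reliabilities and branch probabilities are illustrative.

    def path_reliability(node_rel, paths):
        """Expected success probability over probabilistic paths.

        node_rel: dict node -> P(node executes correctly)
        paths: list of (path_probability, [nodes on path])
        """
        total = 0.0
        for p_path, nodes in paths:
            r = p_path
            for n in nodes:
                r *= node_rel[n]   # all nodes on the path must be fault-free
            total += r
        return total

    rel = {"A": 0.999, "B": 0.995, "C": 0.990, "D": 0.999}
    paths = [(0.7, ["A", "B", "D"]), (0.3, ["A", "C", "D"])]
    print(f"software reliability = {path_reliability(rel, paths):.6f}")
    ```

    Reducing the graph to an equivalent two-node form, as the abstract describes, amounts to pre-summing these weighted path products.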

  8. Effects of deglaciation on the crustal stress field and implications for endglacial faulting: A parametric study of simple Earth and ice models

    International Nuclear Information System (INIS)

    Lund, Bjoern

    2005-03-01

    The large faults of northern Scandinavia, hundreds of kilometres long and with offsets of more than 10 m, are inferred to be the result of major earthquakes triggered by the retreating ice sheet some 9,000 years ago. In this report we have studied a number of parameters involved in quantitative modelling of glacial isostatic adjustment (GIA) in order to illustrate how they affect stress, displacement and fault stability during deglaciation. Using a variety of reference models, we have verified that our modelling approach, a finite element analysis scheme with proper adjustments for the requirements of GIA modelling, performs satisfactorily. The size of the model and the density of the grid have been investigated in order to be able to perform high resolution modelling in reasonable time. This report includes studies of both the ice and earth models. We have seen that the steeper the ice edge is, the more concentrated is the deformation around the edge and consequently shear stress localizes with high magnitudes around the ice edge. The temporal evolution of height and basal extent of the ice is very important for the response of the earth model, and we have shown that the last stages of ice retreat can cause fault instability over a large lateral region. The effect on shear stress and vertical displacement by variations in Earth model parameters such as stiffness, viscosity, density, compressibility and layer thickness was investigated. More complicated geometries, such as multiple layers and lateral layer thickness variations, were also studied. We generally find that these variations have more effect on the shear stress distributions than on the vertical displacement distributions. We also note that shear stress magnitude is affected more than the spatial shape of the shear stress distribution. Fault stability during glaciation/deglaciation was investigated by two different variations on the Mohr-Coulomb failure criterion. 
The stability of a fault in a stress field

  9. Effects of deglaciation on the crustal stress field and implications for endglacial faulting: A parametric study of simple Earth and ice models

    Energy Technology Data Exchange (ETDEWEB)

    Lund, Bjoern [Uppsala Univ. (Sweden). Dept. of Earth Sciences

    2005-03-01

    The large faults of northern Scandinavia, hundreds of kilometres long and with offsets of more than 10 m, are inferred to be the result of major earthquakes triggered by the retreating ice sheet some 9,000 years ago. In this report we have studied a number of parameters involved in quantitative modelling of glacial isostatic adjustment (GIA) in order to illustrate how they affect stress, displacement and fault stability during deglaciation. Using a variety of reference models, we have verified that our modelling approach, a finite element analysis scheme with proper adjustments for the requirements of GIA modelling, performs satisfactorily. The size of the model and the density of the grid have been investigated in order to be able to perform high resolution modelling in reasonable time. This report includes studies of both the ice and earth models. We have seen that the steeper the ice edge is, the more concentrated is the deformation around the edge and consequently shear stress localizes with high magnitudes around the ice edge. The temporal evolution of height and basal extent of the ice is very important for the response of the earth model, and we have shown that the last stages of ice retreat can cause fault instability over a large lateral region. The effect on shear stress and vertical displacement by variations in Earth model parameters such as stiffness, viscosity, density, compressibility and layer thickness was investigated. More complicated geometries, such as multiple layers and lateral layer thickness variations, were also studied. We generally find that these variations have more effect on the shear stress distributions than on the vertical displacement distributions. We also note that shear stress magnitude is affected more than the spatial shape of the shear stress distribution. Fault stability during glaciation/deglaciation was investigated by two different variations on the Mohr-Coulomb failure criterion. 
The stability of a fault in a stress field

  10. Diesel Engine Actuator Fault Isolation using Multiple Models Hypothesis Tests

    DEFF Research Database (Denmark)

    Bøgh, S.A.

    1994-01-01

    Detection of current faults in a D.C. motor with unknown load torques is not feasible with linear methods and threshold logic...

  11. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    Directory of Open Access Journals (Sweden)

    Junjie Ren

    2013-01-01

    Full Text Available Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  12. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    Science.gov (United States)

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
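
    A back-of-envelope version of the arithmetic: the recurrence interval is the time needed to re-accumulate the moment of a characteristic earthquake at the stated moment rate. The sketch below uses the standard Hanks-Kanamori Mw-to-moment conversion rather than the paper's own seismogenic-model moment, so it reproduces only the order of magnitude of the 3900 ± 400 yr estimate:

    ```python
    # Back-of-envelope recurrence estimate: time to re-accumulate the
    # seismic moment of a characteristic earthquake at a constant
    # moment rate. Uses the standard Hanks-Kanamori Mw -> M0 relation;
    # the paper's 3900 +/- 400 yr figure comes from its own seismogenic
    # model, so this sketch only matches the order of magnitude.

    def moment_from_mw(mw):
        """Seismic moment in N·m (Hanks & Kanamori relation)."""
        return 10 ** (1.5 * mw + 9.05)

    moment_rate = 2.7e17            # N·m/yr, from the abstract
    m0_2008 = moment_from_mw(7.9)   # ~8e20 N·m for the Wenchuan event
    recurrence_yr = m0_2008 / moment_rate
    print(f"~{recurrence_yr:.0f} years")
    ```

    This yields roughly three thousand years, consistent in magnitude with the paper's model-based interval.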

  13. Model-Based Off-Nominal State Isolation and Detection System for Autonomous Fault Management, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed model-based Fault Management system addresses the need for cost-effective solutions that enable higher levels of onboard spacecraft autonomy to reliably...

  14. Model-Based Off-Nominal State Isolation and Detection System for Autonomous Fault Management, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed model-based Fault Management system addresses the need for cost-effective solutions that enable higher levels of onboard spacecraft autonomy to reliably...

  15. Heterogeneity in the Fault Damage Zone: a Field Study on the Borrego Fault, B.C., Mexico

    Science.gov (United States)

    Ostermeijer, G.; Mitchell, T. M.; Dorsey, M. T.; Browning, J.; Rockwell, T. K.; Aben, F. M.; Fletcher, J. M.; Brantut, N.

    2017-12-01

    understanding the evolution of fault damage, its feedback into the seismic cycle, and impact on fluid migration in fault zones. The dataset from the Borrego Fault offers a unique opportunity to study the distribution of fault damage in-situ, and provide field observations towards improving fault zone models.

  16. Modeling of fault reactivation and induced seismicity during hydraulic fracturing of shale-gas reservoirs

    Science.gov (United States)

    We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned toward conditions usually encountered in the Marce...

  17. Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System

    Science.gov (United States)

    Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.

    2010-01-01

    The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.

  18. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
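
    The comparison the article makes can be sketched numerically (with a hypothetical two-event OR gate and made-up lognormal parameters, not the article's case study): for rare events the top-event probability is approximately the sum of the basic-event probabilities, and each lognormal term has the closed-form mean E[p] = exp(mu + sigma^2/2), which a Monte Carlo estimate should reproduce.

    ```python
    import math
    import random

    # Illustrative sketch (hypothetical numbers, not from the article):
    # top event = OR of two independent basic events whose probabilities
    # are uncertain and lognormally distributed. For rare events the
    # mean top probability is ~ sum of the lognormal means
    # exp(mu + sigma^2 / 2), which Monte Carlo should confirm.

    random.seed(1)
    params = [(-9.0, 1.0), (-8.0, 0.8)]   # (mu, sigma) of ln(p_i)

    def sample_top():
        p = [math.exp(random.gauss(mu, sigma)) for mu, sigma in params]
        return 1.0 - (1.0 - p[0]) * (1.0 - p[1])   # OR gate

    n = 200_000
    mc_mean = sum(sample_top() for _ in range(n)) / n
    closed_form = sum(math.exp(mu + sigma ** 2 / 2) for mu, sigma in params)

    print(f"Monte Carlo mean: {mc_mean:.3e}")
    print(f"Closed-form mean: {closed_form:.3e}")
    ```

    The closed form costs one evaluation per basic event, which is the computational advantage over full Monte Carlo that the article quantifies.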

  19. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    Science.gov (United States)

    Bird, P.

    2010-12-01

    [Bird, 2009, JGR] to model neotectonics of the active fault network in the western United States found that only 2/3 of Pacific-North America relative motion in California occurs by slip on faults included in seismic hazard models by the 2007 Working Group on California Earthquake Probabilities [2008; USGS OFR 2007-1437]. (Whether the missing distributed permanent deformation is seismogenic has not yet been determined.) 5. Even outside of broad orogens, dangerous intraplate faulting is evident in catalogs: (a) About 3% of shallow earthquakes in the Global CMT catalog are Intraplate [Bird et al., 2010, SRL]; (b) Intraplate earthquakes have higher stress-drops by about a factor-of-two [Kanamori & Anderson, 1975, BSSA; Allmann & Shearer, 2009, JGR]; (c) The corner magnitude of intraplate earthquakes is >7.6, and unconstrained from above, on the moment magnitude scale [Bird & Kagan, 2004, BSSA]. For some intraplate earthquakes, the causative fault is mapped only (if at all) by its aftershocks.

  20. Fault Mechanics and Post-seismic Deformation at Bam, SE Iran

    Science.gov (United States)

    Wimpenny, S. E.; Copley, A.

    2017-12-01

    The extent to which aseismic deformation relaxes co-seismic stress changes on a fault zone is fundamental to assessing the future seismic hazard following any earthquake, and in understanding the mechanical behaviour of faults. We used models of stress-driven afterslip and visco-elastic relaxation, in conjunction with a dense time series of post-seismic InSAR measurements, to show that there has been minimal release of co-seismic stress changes through post-seismic deformation following the 2003 Mw 6.6 Bam earthquake. Our modelling indicates that the faults at Bam may remain predominantly locked, and that the co- plus inter-seismically accumulated elastic strain stored down-dip of the 2003 rupture patch may be released in a future Mw 6 earthquake. Modelling also suggests parts of the fault that experienced post-seismic creep between 2003-2009 overlapped with areas that also slipped co-seismically. Our observations and models also provide an opportunity to probe how aseismic fault slip leads to the growth of topography at Bam. We find that, for our modelled afterslip distribution to be consistent with forming the sharp step in the local topography at Bam over repeated earthquake cycles, and also to be consistent with the geodetic observations, requires either (1) far-field tectonic loading equivalent to a 2-10 MPa deviatoric stress acting across the fault system, which suggests it supports stresses 60-100 times less than classical views of static fault strength, or (2) that the fault surface has some form of mechanical anisotropy, potentially related to corrugations on the fault plane, that controls the sense of slip.

  1. Post Fire Safe Shutdown Analysis Using a Fault Tree Logic Model

    International Nuclear Information System (INIS)

    Yim, Hyun Tae; Park, Jun Hyun

    2005-01-01

    Every nuclear power plant should have its own fire hazard analysis including the fire safe shutdown analysis. A safe shutdown (SSD) analysis is performed to demonstrate the capability of the plant to safely shut down for a fire in any given area. The basic assumption is that there will be fire damage to all cables and equipment located within a common fire area. When evaluating the SSD capabilities of the plant, based on a review of the systems, equipment and cables within each fire area, it should be determined which shutdown paths are either unaffected or least impacted by a postulated fire within the fire area. Instead of seeking a success path for safe shutdown given all cables and equipment damaged by a fire, there is an alternative approach to determining the SSD capability: fault tree analysis. This paper introduces a methodology for fire SSD analysis using a fault tree logic model.
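
    The fault-tree alternative described above can be sketched in a few lines: the postulated fire fails every cable routed through a fire area, and evaluating the tree's top event tells us whether safe shutdown is lost. A minimal illustration, with hypothetical gate and cable names that are not from the paper:

```python
# Sketch of SSD capability checked with a fault tree instead of a success-path
# search. Gate structure and event names below are invented for illustration.

def evaluate(node, failed):
    """Recursively evaluate a fault tree node against a set of failed basic events."""
    if node[0] == "basic":
        return node[1] in failed
    _, op, children = node
    results = [evaluate(c, failed) for c in children]
    return all(results) if op == "AND" else any(results)

# Top event: loss of safe shutdown. Redundant trains A and B must both fail.
TREE = ("gate", "AND", [
    ("gate", "OR", [("basic", "pump_A"), ("basic", "cable_A")]),
    ("gate", "OR", [("basic", "pump_B"), ("basic", "cable_B")]),
])

# A postulated fire damages every cable located in the fire area.
fire_area_1 = {"cable_A"}             # only train A cabling in this area
fire_area_2 = {"cable_A", "cable_B"}  # both trains routed through this area

print(evaluate(TREE, fire_area_1))  # False: train B still available
print(evaluate(TREE, fire_area_2))  # True: safe shutdown capability lost
```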

  2. PV Systems Reliability Final Technical Report: Ground Fault Detection

    Energy Technology Data Exchange (ETDEWEB)

    Lavrova, Olga [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flicker, Jack David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground-fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.
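
    As one concrete illustration of the detection schemes compared above, residual current detection trips when the positive- and negative-conductor currents stop cancelling, since the missing current must be returning through a ground-fault path. A minimal sketch with invented current readings and trip threshold (not Sandia's circuit model):

```python
# Residual-current (RCD-style) ground-fault detection sketch. In a healthy PV
# string the conductor currents cancel; a nonzero residual implies leakage to
# ground. Current values and the trip threshold below are illustrative only.

def residual_current(i_pos, i_neg):
    """Net current unaccounted for between the two conductors, in amps."""
    return i_pos - i_neg

def is_ground_fault(i_pos, i_neg, trip_amps=0.3):
    """Flag a fault when the residual magnitude exceeds the trip threshold."""
    return abs(residual_current(i_pos, i_neg)) > trip_amps

print(is_ground_fault(8.00, 8.00))  # healthy string -> False
print(is_ground_fault(8.00, 7.20))  # 0.8 A leaking to ground -> True
```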

  3. User's manual of a computer code for seismic hazard evaluation for assessing the threat to a facility by fault model. SHEAT-FM

    International Nuclear Information System (INIS)

    Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide

    2005-09-01

    To establish a reliability evaluation method for aged structural components, we developed a probabilistic seismic hazard evaluation code, SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), using a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, this code takes the latest knowledge in the field of earthquake engineering into account. For example, the code incorporates the group delay time of observed records and an updating-process model of active faults. This report describes the user's guide of SHEAT-FM, including an outline of the seismic hazard evaluation, the specification of input data, a sample problem for a model site, system information and the execution method. (author)
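
    SHEAT-FM's fault-model ground-motion simulation is not reproduced here, but the last step of any probabilistic seismic hazard evaluation, converting an annual exceedance rate into a probability over a design life via a Poisson occurrence model, can be sketched briefly. The rate below is illustrative, not from the code:

```python
import math

# Final step of a probabilistic seismic hazard calculation (sketch): given the
# annual rate of exceeding some ground-motion level, a Poisson occurrence model
# gives the probability of at least one exceedance over a design life.

def exceedance_probability(annual_rate, years):
    """P(at least one exceedance in `years`) under a Poisson model."""
    return 1.0 - math.exp(-annual_rate * years)

rate = 1.0 / 475.0  # an illustrative once-in-475-year shaking level
print(round(exceedance_probability(rate, 50), 3))  # ~0.1 over a 50-year life
```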

  4. Seismotectonics and fault structure of the California Central Coast

    Science.gov (United States)

    Hardebeck, Jeanne L.

    2010-01-01

    I present and interpret new earthquake relocations and focal mechanisms for the California Central Coast. The relocations improve upon catalog locations by using 3D seismic velocity models to account for lateral variations in structure and by using relative arrival times from waveform cross-correlation and double-difference methods to image seismicity features more sharply. Focal mechanisms are computed using ray tracing in the 3D velocity models. Seismicity alignments on the Hosgri fault confirm that it is vertical down to at least 12 km depth, and the focal mechanisms are consistent with right-lateral strike-slip motion on a vertical fault. A prominent, newly observed feature is an ~25 km long linear trend of seismicity running just offshore and parallel to the coastline in the region of Point Buchon, informally named the Shoreline fault. This seismicity trend is accompanied by a linear magnetic anomaly, and both the seismicity and the magnetic anomaly end where they obliquely meet the Hosgri fault. Focal mechanisms indicate that the Shoreline fault is a vertical strike-slip fault. Several seismicity lineations with vertical strike-slip mechanisms are observed in Estero Bay. Events greater than about 10 km depth in Estero Bay, however, exhibit reverse-faulting mechanisms, perhaps reflecting slip at the top of the remnant subducted slab. Strike-slip mechanisms are observed offshore along the Hosgri–San Simeon fault system and onshore along the West Huasna and Rinconada faults, while reverse mechanisms are generally confined to the region between these two systems. This suggests a model in which the reverse faulting is primarily due to restraining left-transfer of right-lateral slip.

  5. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
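
    The quantitative evaluation step the handbook covers can be illustrated with minimal cut sets: for independent basic events, the top-event probability is bounded from above by the rare-event (first-order) approximation and computed exactly by inclusion-exclusion. The cut sets and probabilities below are invented for illustration, not taken from the handbook:

```python
from itertools import combinations

# Quantitative fault tree evaluation sketch: top-event probability from
# minimal cut sets with independent basic events. Numbers are illustrative.

P = {"A": 1e-3, "B": 2e-3, "C": 5e-4}        # basic-event probabilities
cut_sets = [{"A", "B"}, {"A", "C"}]           # top event = any cut set fails

def cut_prob(events):
    """Probability that every basic event in the set occurs (independence)."""
    p = 1.0
    for e in events:
        p *= P[e]
    return p

# Rare-event approximation: sum of cut-set probabilities (an upper bound here).
rare = sum(cut_prob(cs) for cs in cut_sets)

# Exact value via inclusion-exclusion over unions of cut sets.
exact = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        union = set().union(*combo)
        exact += (-1) ** (k + 1) * cut_prob(union)

print(rare, exact)
```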

  6. Fault Detection, Isolation, and Accommodation for LTI Systems Based on GIMC Structure

    Directory of Open Access Journals (Sweden)

    D. U. Campos-Delgado

    2008-01-01

    Full Text Available In this contribution, an active fault-tolerant scheme that achieves fault detection, isolation, and accommodation is developed for LTI systems. Faults and perturbations are considered as additive signals that modify the state or output equations. The accommodation scheme is based on the generalized internal model control architecture recently proposed for fault-tolerant control. In order to improve performance after a fault, the compensation proceeds in two steps according to a fault detection and isolation algorithm: after a fault scenario is detected, a general fault compensator is activated; once the fault is isolated, a specific compensator is introduced. In this setup, multiple faults can be treated simultaneously since their effects are additive. Design strategies for the nominal condition and under model uncertainty are presented in the paper. In addition, performance indices are introduced to evaluate the resulting fault-tolerant scheme for detection, isolation, and accommodation. Hard thresholds are suggested for detection and isolation purposes, while adaptive ones are considered under model uncertainty to reduce conservativeness. A complete simulation evaluation is carried out for a DC motor setup.
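
    The hard-threshold detection step described above reduces to comparing a residual against a fixed bound. A minimal sketch with a synthetic residual sequence and an invented threshold (the paper's GIMC residual generator is not reproduced here):

```python
import math

# Hard-threshold fault detection sketch: a residual from the model mismatch is
# compared against a fixed bound; an adaptive threshold would instead scale
# with an uncertainty bound. Signals and threshold values below are invented.

def detect(residuals, threshold):
    """Return the first sample index where |r| exceeds the hard threshold."""
    for k, r in enumerate(residuals):
        if abs(r) > threshold:
            return k
    return None

# Healthy measurement noise, then an additive fault appearing at sample 50.
residuals = [0.02 * math.sin(0.3 * k) for k in range(50)]
residuals += [0.5 + 0.02 * math.sin(0.3 * k) for k in range(50, 100)]

print(detect(residuals, threshold=0.1))  # fault declared at sample 50
```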

  7. "3D_Fault_Offsets," a Matlab Code to Automatically Measure Lateral and Vertical Fault Offsets in Topographic Data: Application to San Andreas, Owens Valley, and Hope Faults

    Science.gov (United States)

    Stewart, N.; Gaudemer, Y.; Manighetti, I.; Serreau, L.; Vincendeau, A.; Dominguez, S.; Mattéo, L.; Malavieille, J.

    2018-01-01

    Measuring fault offsets preserved at the ground surface is of primary importance to recover earthquake and long-term slip distributions and understand fault mechanics. The recent explosion of high-resolution topographic data, such as Lidar and photogrammetric digital elevation models, offers an unprecedented opportunity to measure dense collections of fault offsets. We have developed a new Matlab code, 3D_Fault_Offsets, to automate these measurements. In topographic data, 3D_Fault_Offsets mathematically identifies and represents nine of the most prominent geometric characteristics of common sublinear markers along faults (especially strike slip) in 3-D, such as the streambed (minimum elevation), top, free face and base of channel banks or scarps (minimum Laplacian, maximum gradient, and maximum Laplacian), and ridges (maximum elevation). By calculating best fit lines through the nine point clouds on either side of the fault, the code computes the lateral and vertical offsets between the piercing points of these lines onto the fault plane, providing nine lateral and nine vertical offset measures per marker. Through a Monte Carlo approach, the code calculates the total uncertainty on each offset. It then provides tools to statistically analyze the dense collection of measures and to reconstruct the prefaulted marker geometry in the horizontal and vertical planes. We applied 3D_Fault_Offsets to remeasure previously published offsets across 88 markers on the San Andreas, Owens Valley, and Hope faults. We obtained 5,454 lateral and vertical offset measures. These automatic measures compare well to prior ones, field and remote, while their rich record provides new insights on the preservation of fault displacements in the morphology.
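
    The core offset computation can be sketched compactly: fit a best-fit line through the marker's point cloud on each side of the fault, project both lines onto the fault plane, and difference the piercing points. A simplified 2D Python analogue follows (the published 3D_Fault_Offsets code is in Matlab and works on nine 3-D geometric attributes per marker; the synthetic points below are invented):

```python
# Simplified analogue of the offset measurement in 3D_Fault_Offsets: fit a
# line to the marker trace on each side of the fault, then take the separation
# of the piercing points on the fault plane (here the line x = 0) as the
# lateral offset. Data are synthetic channel-bank points, not real topography.

def fit_line(pts):
    """Least-squares fit y = a*x + b through (x, y) points; returns (a, b)."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return a, (sy - a * sx) / n

# Synthetic marker offset 12 m right-laterally across a fault at x = 0.
west = [(-30.0, 100.0), (-20.0, 100.0), (-10.0, 100.0)]
east = [(10.0, 112.0), (20.0, 112.0), (30.0, 112.0)]

_, y_west = fit_line(west)   # piercing point of the west-side trend at x = 0
_, y_east = fit_line(east)   # piercing point of the east-side trend at x = 0
print(y_east - y_west)       # lateral offset: 12.0
```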

  8. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulation, closed-form expressions, Markov chains) used for the representation of the fault occurrence process with the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features and the utilization of the experimental results obtained - both as design feedback and for dependability measure evaluation - are used to illustrate the relevance of the method. (author) [fr

  9. Fault Tolerant Position-mooring Control for Offshore Vessels

    DEFF Research Database (Denmark)

    Blanke, Mogens; Nguyen, Trong Dong

    2018-01-01

    Fault-tolerance is crucial to maintain safety in offshore operations. The objective of this paper is to show how systematic analysis and design of fault-tolerance is conducted for a complex automation system, exemplified by thruster assisted Position-mooring. Using redundancy as required....... Functional faults that are only detectable, are rendered isolable through an active isolation approach. Once functional faults are isolated, they are handled by fault accommodation techniques to meet overall control objectives specified by class requirements. The paper illustrates the generic methodology...... by a system to handle faults in mooring lines, sensors or thrusters. Simulations and model basin experiments are carried out to validate the concept for scenarios with single or multiple faults. The results demonstrate that enhanced availability and safety are obtainable with this design approach. While...

  10. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  11. Fault-tolerant Control of a Cyber-physical System

    Science.gov (United States)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large scale chemical processes can lead to loss of production, energy, raw materials and even environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault tolerant control system while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and to prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  12. Communication Characteristics of Faulted Overhead High Voltage Power Lines at Low Radio Frequencies

    Directory of Open Access Journals (Sweden)

    Nermin Suljanović

    2017-11-01

    Full Text Available This paper derives a model of a high-voltage overhead power line under fault conditions at low radio frequencies. The derived model is essential for the design of communication systems that reliably transfer information over high voltage power lines. In addition, the model can benefit advanced systems for power-line fault detection and classification, which exploit the fact that a fault changes the conditions on the line and hence the propagation of low radio frequency signals. The methodology used in the paper is based on multiconductor system analysis and the propagation of electromagnetic waves over power lines. The model of the high voltage power line under normal operation is validated using actual measurements obtained on a 400 kV power line. The proposed model of faulted power lines extends the validated normal-operation model. Simulation results are provided for typical power line faults and typical fault locations. The results clearly indicate the sensitivity of the power-line frequency response to different fault types.
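
    The building block behind such a multiconductor model is the per-unit-length transmission-line description, whose propagation constant gamma = sqrt((R + jwL)(G + jwC)) sets the attenuation and phase that a fault would perturb. A sketch with illustrative parameter values (assumed rough overhead-line figures, not the paper's measured parameters):

```python
import cmath
import math

# Single-conductor transmission-line sketch: per-unit-length parameters give
# the propagation constant gamma = sqrt((R + jwL)(G + jwC)); its real part is
# the attenuation (Np/m), its imaginary part the phase constant (rad/m).
# Parameter values below are rough assumed figures, not from the paper.

def propagation_constant(f_hz, R, L, G, C):
    """gamma for per-metre parameters R (ohm), L (H), G (S), C (F)."""
    w = 2 * math.pi * f_hz
    return cmath.sqrt((R + 1j * w * L) * (G + 1j * w * C))

# Evaluated at a low radio frequency of 100 kHz.
gamma = propagation_constant(100e3, R=1e-3, L=1.6e-6, G=1e-11, C=1e-11)
print(gamma.real > 0, gamma.imag > 0)  # attenuation and phase both positive
```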

  13. What does fault tolerant Deep Learning need from MPI?

    Energy Technology Data Exchange (ETDEWEB)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.; Daily, Jeffrey A.

    2017-09-25

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithm for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: "What is needed from MPI for designing fault tolerant DL implementations?" In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); a need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration for several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe for using ULFM-based implementation. Our evaluation using the ImageNet dataset and AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI based ULFM.
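
    The "shrink and continue" behaviour discussed for ULFM-style data parallelism can be mimicked without MPI: after a rank failure is detected, the survivors simply average gradients over the ranks still alive. A toy sketch, where plain Python dictionaries stand in for MPI ranks (this is not the MaTEx-Caffe implementation):

```python
# Toy sketch of fault-tolerant data-parallel gradient averaging: when a rank
# fails, the surviving ranks continue the allreduce over the shrunken group.
# "Ranks" are dictionary entries here; no real MPI communicator is involved.

def allreduce_mean(gradients, alive):
    """Average gradients over surviving ranks only (shrunken communicator)."""
    survivors = [gradients[r] for r in alive]
    return sum(survivors) / len(survivors)

gradients = {0: 1.0, 1: 3.0, 2: 5.0, 3: 100.0}  # rank 3 has failed/corrupted
alive = {0, 1, 2}                                # fault detection excluded rank 3

print(allreduce_mean(gradients, alive))          # 3.0, unaffected by rank 3
```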

  14. Differential Fault Analysis on CLEFIA

    Science.gov (United States)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

    CLEFIA is a new 128-bit block cipher proposed recently by SONY Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against the differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. By randomly inducing a one-byte fault in one round, four bytes of faults can be obtained simultaneously in the next round, which efficiently reduces the total number of fault inductions needed in the attack. After attacking the last several rounds' encryptions, the original secret key can be recovered based on analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.

  15. Earthquake rupture process recreated from a natural fault surface

    Science.gov (United States)

    Parsons, Thomas E.; Minasian, Diane L.

    2015-01-01

    What exactly happens on the rupture surface as an earthquake nucleates, spreads, and stops? We cannot observe this directly, and models depend on assumptions about physical conditions and geometry at depth. We thus measure a natural fault surface and use its 3D coordinates to construct a replica at 0.1 m resolution to obviate geometry uncertainty. We can recreate stick-slip behavior on the resulting finite element model that depends solely on observed fault geometry. We clamp the fault together and apply steady state tectonic stress until seismic slip initiates and terminates. Our recreated M~1 earthquake initiates at contact points where there are steep surface gradients because infinitesimal lateral displacements reduce clamping stress most efficiently there. Unclamping enables accelerating slip to spread across the surface, but the fault soon jams up because its uneven, anisotropic shape begins to juxtapose new high-relief sticking points. These contacts would ultimately need to be sheared off or strongly deformed before another similar earthquake could occur. Our model shows that an important role is played by fault-wall geometry, though we do not include effects of varying fluid pressure or exotic rheologies on the fault surfaces. We extrapolate our results to large fault systems using observed self-similarity properties, and suggest that larger ruptures might begin and end in a similar way, though the scale of geometrical variation in fault shape that can arrest a rupture necessarily scales with magnitude. In other words, fault segmentation may be a magnitude dependent phenomenon and could vary with each subsequent rupture.

  16. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, with a fault defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of the fault in the fermentation process, but also the end of the fault when the fermentation conditions were back to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
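
    The diagnosis logic is simple once a model and its bootstrap interval exist: flag a fault whenever an observation falls outside the 95 % band. The sketch below substitutes a plain linear fit for the paper's GAM (a deliberate simplification) and uses entirely synthetic data, but the bootstrap step is the same idea:

```python
import random

# Bootstrap fault-diagnosis sketch. A plain linear fit stands in for the
# paper's GAM; the data, noise level, and "abnormal" reading are synthetic.

random.seed(1)
x = [i / 10 for i in range(40)]                    # e.g. fermentation time
y = [2.0 * xi + random.gauss(0, 0.2) for xi in x]  # e.g. glutamate yield

def fit(xs, ys):
    """Ordinary least-squares line y = slope*x + intercept."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(v * v for v in xs)
    sxy = sum(a * b for a, b in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

preds = []
for _ in range(500):                               # bootstrap resamples
    idx = [random.randrange(len(x)) for _ in x]
    s, b = fit([x[i] for i in idx], [y[i] for i in idx])
    preds.append(s * 2.0 + b)                      # model estimate at x = 2.0

preds.sort()
lo, hi = preds[12], preds[487]                     # ~95 % percentile interval
observed = 6.5                                     # reading from an abnormal run
print(lo <= observed <= hi)                        # False => declare a fault
```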

  17. Fault diagnosis and fault-tolerant control based on adaptive control approach

    CERN Document Server

    Shen, Qikun; Shi, Peng

    2017-01-01

    This book provides recent theoretical developments in and practical applications of fault diagnosis and fault tolerant control for complex dynamical systems, including uncertain systems, linear and nonlinear systems. Combining adaptive control techniques with other control methodologies, it investigates the problems of fault diagnosis and fault tolerant control for uncertain dynamic systems with or without time delay. As such, the book provides readers a solid understanding of fault diagnosis and fault tolerant control based on adaptive control technology. Given its depth and breadth, it is well suited for undergraduate and graduate courses on linear system theory, nonlinear system theory, fault diagnosis and fault tolerant control techniques. Further, it can be used as a reference source for academic research on fault diagnosis and fault tolerant control, and for postgraduates in the field of control theory and engineering.

  18. Mine-hoist active fault tolerant control system and strategy

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z.; Wang, Y.; Meng, J.; Zhao, P.; Chang, Y. [China University of Mining and Technology, Xuzhou (China)] wzjsdstu@163.com

    2005-06-01

    Based on fault diagnosis and fault tolerant technologies, the mine-hoist active fault-tolerant control system (MAFCS) is presented with corresponding strategies, which includes the fault diagnosis module (FDM), the dynamic library (DL) and the fault-tolerant control model (FCM). When a fault is identified at some sensor by the FDM, the FCM reconfigures the state of the MAFCS by calling the parameters from all sub-libraries in the DL, in order to ensure the reliability and safety of the mine hoist. The simulation results show that the MAFCS has a degree of intelligence: it can adopt the corresponding control strategies according to different fault modes, even when there is a considerable difference between the real data and the prior fault modes. 7 refs., 5 figs., 1 tab.

  19. A nonlinear least-squares inverse analysis of strike-slip faulting with application to the San Andreas fault

    Science.gov (United States)

    Williams, Charles A.; Richardson, Randall M.

    1988-01-01

    A nonlinear weighted least-squares analysis was performed for a synthetic elastic layer over a viscoelastic half-space model of strike-slip faulting. Also, an inversion of strain rate data was attempted for the locked portions of the San Andreas fault in California. Based on an eigenvector analysis of synthetic data, it is found that the only parameter which can be resolved is the average shear modulus of the elastic layer and viscoelastic half-space. The other parameters were obtained by performing a suite of inversions for the fault. The inversions on data from the northern San Andreas resulted in predicted parameter ranges similar to those produced by inversions on data from the whole fault.
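
    The forward model underlying such strike-slip inversions is commonly the screw-dislocation expression v(x) = (V/pi)*arctan(x/D), with deep slip rate V and locking depth D. A sketch that recovers both parameters from synthetic noisy velocities by a coarse grid search, standing in for the paper's nonlinear weighted least-squares machinery (all data and parameter values below are invented):

```python
import math
import random

# Screw-dislocation inversion sketch: fit v(x) = (V/pi) * atan(x/D) to
# synthetic fault-parallel velocities by grid search over slip rate V (mm/yr)
# and locking depth D (km). True values and noise level are invented.

random.seed(0)
V_true, D_true = 34.0, 15.0
xs = [-100, -50, -20, -10, -5, 5, 10, 20, 50, 100]   # km from the fault trace
obs = [V_true / math.pi * math.atan(x / D_true) + random.gauss(0, 0.5)
       for x in xs]

def misfit(V, D):
    """Sum of squared residuals between model and observed velocities."""
    return sum((o - V / math.pi * math.atan(x / D)) ** 2
               for x, o in zip(xs, obs))

best = min(((misfit(V, D), V, D)
            for V in range(20, 51)
            for D in range(5, 31)), key=lambda t: t[0])
print(best[1], best[2])   # recovered slip rate and locking depth
```

The trade-off between V and D when data are confined near the fault is one reason the paper finds most parameters poorly resolved.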

  20. A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol

    Science.gov (United States)

    Malekpour, Mahyar R.

    2015-01-01

    This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault tolerant algorithm to solve the general case of the problem. A protocol (algorithm) is also presented that tolerates symmetric faults, provided that there are more good nodes than faulty ones. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.

  1. A summary of the active fault investigation in the extension sea area of Kikugawa fault and the Nishiyama fault , N-S direction fault in south west Japan

    Science.gov (United States)

    Abe, S.

    2010-12-01

    In this study, we carried out two sets of active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the sea areas extending from the Kikugawa fault and the Nishiyama fault. Based on those results, we aim to clarify the following matters for both active faults: (1) fault continuity between land and sea; (2) the length of the active fault; (3) the division into segments; (4) activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to image the detailed structure of the shallow strata. Furthermore, vibrocore sampling was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of strike-slip faulting, such as flower structures and the dispersion of the active fault, were recognized. In addition, age analysis of the strata showed that the Holocene sediment cover on the continental shelf in this sea area is extremely thin. This investigation confirmed that the Kikugawa fault extends farther offshore than shown by existing research. In addition, the width of the active fault zone appears to broaden seaward while dispersing. At present, we think the Kikugawa fault can be divided into several segments based on the distribution of its strands. Regarding the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographic studies on Ooshima, it is thought that the Nishiyama fault and the Ooshima offshore active fault form a continuous structure. Along the Ooshima offshore active fault, the uplifted side changes, and the strike changes too.
Therefore, we

  2. Automatic fault tree generation in the EPR PSA project

    International Nuclear Information System (INIS)

    Villatte, N; Nonclercq, P.; Taupy, S.

    2012-01-01

    Tools (KB3 and Atelier EPS) have been developed at EDF to assist analysts in building fault trees for PSA (Probabilistic Safety Assessment) and importing them into RiskSpectrum (a Swedish code used at EDF for PSA). System modelling is performed using the KB3 software with a knowledge base describing generic classes of components with their behaviour and failure modes. Using these classes of components, the analyst can describe (using a graphical system editor): a simplified system diagram from the mechanical system drawings and functional descriptions, the missions of the studied system (in the form of high-level fault trees) and its different configurations for the missions. He can also add specific knowledge about the system. Then, the analyst chooses missions and configurations to specify and launch fault tree generation. From the system description, KB3 produces detailed system fault trees by backward-chaining on rules. These fault trees are finally imported into RiskSpectrum (converted by Atelier EPS into a format readable by RiskSpectrum). KB3 and Atelier EPS were used to create the majority of the fault trees for the EDF EPR Probabilistic Safety Analysis conducted from November 2009 to March 2010. 25 systems were modelled, and 127 fault trees were automatically generated in a rather short time by different analysts with the help of these tools. Feedback shows many advantages of using KB3 and Atelier EPS: homogeneity and consistency between the different generated fault trees, traceability and control of modelling, and, last but not least, the automation of detailed fault tree creation, which relieves the human analyst of this tedious task so that he can focus on more important tasks: modelling the failure of a function. This industrial application has also provided valuable feedback from the analysts that should help us improve the handling of the tools. We propose in this paper indeed some

  3. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals are used to identify the occurrence of a fault. A fault is detected at both a master and a remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data are received at the master unit for processing to determine an accurate fault distance calculation.
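
    The fault-distance calculation enabled by the synchronized master and remote clocks is a standard two-ended time-of-arrival computation: the fault-generated surge reaches whichever unit is closer first, and the arrival-time difference places the fault along the line. A sketch with an invented line length and timestamps (not the patent's actual signal processing):

```python
# Two-ended fault location sketch: with clocks synchronized at the master and
# remote units, the arrival-time difference of the fault surge gives distance.
# Line length, propagation speed, and fault position below are illustrative.

LINE_KM = 120.0
V_KM_PER_US = 0.29979  # surge propagation speed, roughly the speed of light

def locate(t_master_us, t_remote_us):
    """Distance of the fault from the master unit, in km.

    d_master + d_remote = L and dt = (d_master - d_remote) / v,
    so d_master = (L + v * dt) / 2.
    """
    dt = t_master_us - t_remote_us
    return (LINE_KM + V_KM_PER_US * dt) / 2.0

# Fault at 40 km from the master: the surge reaches the master unit first.
t_m = 40.0 / V_KM_PER_US
t_r = (LINE_KM - 40.0) / V_KM_PER_US
print(locate(t_m, t_r))  # ~40.0 km from the master unit
```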

  4. Characterization of the San Andreas Fault near Parkfield, California by fault-zone trapped waves

    Science.gov (United States)

    Li, Y.; Vidale, J.; Cochran, E.

    2003-04-01

    by the M6 earthquake episode at Parkfield, although it probably represents the accumulated wear from many previous great earthquakes and other kinematic processes. The width of the low-velocity waveguide likely represents the extent of damage in dynamic rupture, consistent with the scaling of process-zone size to rupture length predicted by existing models. The variation in velocity reduction along the fault zone suggests changes in on-fault stress, fine-scale fault geometry, and fluid content at depth. On the other hand, a less developed and narrower low-velocity waveguide exists on the north strand, which experienced minor surface breaks in the 1966 M6 event, probably due to energy partitioning, strong shaking and dynamic strain from the earthquake on the main fault.

  5. Modeling of fluid injection and withdrawal induced fault activation using discrete element based hydro-mechanical and dynamic coupled simulator

    Science.gov (United States)

    Yoon, Jeoung Seok; Zang, Arno; Zimmermann, Günter; Stephansson, Ove

    2016-04-01

    Operation of fluid injection into and withdrawal from the subsurface for various purposes has been known to induce earthquakes. Such operations include hydraulic fracturing for shale gas extraction, hydraulic stimulation for Enhanced Geothermal System development, and wastewater disposal. Several damaging earthquakes have been reported in the USA, in particular in areas of high-rate, large-volume wastewater injection [1], mostly involving natural fault systems. Oil and gas production has also been known to induce earthquakes where pore fluid pressure decreases, in some cases by several tens of megapascals. One recent seismic event occurred in November 2013 near Azle, Texas, where a series of earthquakes began along a mapped ancient fault system [2]. That study found that a combination of brine production and wastewater injection near the fault generated subsurface pressures sufficient to induce earthquakes on near-critically stressed faults. This numerical study aims at investigating the occurrence mechanisms of such earthquakes induced by fluid injection [3] and withdrawal, using a hydro-geomechanically coupled dynamic simulator (Itasca's Particle Flow Code 2D). Generic models are set up to investigate the sensitivity of several parameters, including fault orientation, frictional properties, distance from the injection well to the fault, and the amount of fluid withdrawal around the injection well, to the response of the fault system and the activation magnitude. Fault slip movement over time in relation to the diffusion of pore pressure is analyzed in detail. Moreover, correlations between the spatial distribution of pore pressure change, the locations of induced seismic events, and the fault slip rate are investigated. References [1] Keranen KM, Weingarten M, Albers GA, Bekins BA, Ge S, 2014. Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection, Science 345, 448, DOI: 10.1126/science.1255802. 
[2] Hornbach MJ, DeShon HR

  6. Numerical modeling of fracking fluid and methane migration through fault zones in shale gas reservoirs

    Science.gov (United States)

    Taherdangkoo, Reza; Tatomir, Alexandru; Sauter, Martin

    2017-04-01

    Hydraulic fracturing in shale gas reservoirs has gained growing interest over the last few years. Groundwater contamination is one of the most important environmental concerns that have emerged surrounding shale gas development (Reagan et al., 2015). The potential impacts of hydraulic fracturing can be studied through the possible pathways for subsurface migration of contaminants towards overlying aquifers (Kissinger et al., 2013; Myers, 2012). The intent of this study is to investigate, by means of numerical simulation, two failure scenarios based on the presence of a fault zone that penetrates the full thickness of the overburden and connects the shale gas reservoir to an aquifer. Scenario 1 addresses the potential transport of fracturing fluid from the shale into the subsurface; it was modeled with the COMSOL Multiphysics software. Scenario 2 deals with the leakage of methane from the reservoir into the overburden; its numerical modeling was implemented in DuMux (free and open-source software), using a discrete fracture model (DFM) simulator (Tatomir, 2012). The modeling results are used to evaluate the influence of several important parameters (reservoir pressure, aquifer-reservoir separation thickness, fault zone inclination, porosity, permeability, etc.) that could affect fluid transport through the fault zone. Furthermore, we determined the main transport mechanisms and the circumstances that would allow fracking fluid or methane to migrate through the fault zone into overlying geological layers. The results show that the presence of a conductive fault can reduce the contaminant travel time, and that significant contaminant leakage is most likely to occur under certain hydraulic conditions. Bibliography Kissinger, A., Helmig, R., Ebigbo, A., Class, H., Lange, T., Sauter, M., Heitfeld, M., Klünker, J., Jahnke, W., 2013. Hydraulic fracturing in unconventional gas reservoirs: risks in the geological system, part 2. Environ Earth Sci 70, 3855

  7. Slip Potential of Faults in the Fort Worth Basin

    Science.gov (United States)

    Hennings, P.; Osmond, J.; Lund Snee, J. E.; Zoback, M. D.

    2017-12-01

    Similar to other areas of the south-central United States, the Fort Worth Basin of NE Texas has experienced an increase in the rate of seismicity, which has been attributed to injection of wastewater into deep saline aquifers. To assess the hazard of induced seismicity in the basin we have integrated new data on the location and character of previously known and unknown faults, stress state, and pore pressure to produce an assessment of fault slip potential, which can be used to investigate prior and ongoing earthquake sequences and to develop mitigation strategies. We have assembled data on faults in the basin from published sources, 2D and 3D seismic data, and interpretations provided by petroleum operators to yield a 3D fault model with 292 faults ranging in strike-length from 0.4 to 116 km. The faults have mostly normal geometries, all cut the disposal intervals, and most are presumed to cut into the underlying crystalline and metamorphic basement. Analysis of outcrops along the SW flank of the basin assists with geometric characterization of the fault systems. The interpretation of stress state comes from integration of wellbore image and sonic data, reservoir stimulation data, and earthquake focal mechanisms. The orientation of SHmax is generally uniform across the basin, but the stress style changes from more strike-slip in the NE part of the basin to normal faulting in the SW part. Estimates of pore pressure come from a basin-scale hydrogeologic model history-matched to injection test data. With these deterministic inputs and appropriate ranges of uncertainty we assess the conditional probability that faults in our 3D model might slip via Mohr-Coulomb reactivation in response to injection-related pore pressure increases. A key component of the analysis is constraining the uncertainties associated with each of the principal parameters. Many of the faults in the model are interpreted to be critically stressed within reasonable ranges of uncertainty.
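
    The Mohr-Coulomb reactivation criterion behind this kind of fault-slip-potential screening can be sketched deterministically. The friction coefficient, principal stresses, pore pressure and fault orientation below are illustrative values, not the basin model's calibrated inputs, and the probabilistic treatment of uncertainties is omitted:

```python
import math

def resolved_stresses(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal makes angle theta
    with the sigma1 direction (2-D Mohr circle construction)."""
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    two_theta = math.radians(2.0 * theta_deg)
    sigma_n = mean + dev * math.cos(two_theta)
    tau = dev * math.sin(two_theta)
    return sigma_n, tau

def critical_pressure_increase(sigma1, sigma3, theta_deg, p0, mu=0.6):
    """Pore-pressure increase that brings the fault to Mohr-Coulomb failure,
    tau = mu * (sigma_n - p), with zero cohesion.  A small or negative result
    means the fault is near-critically stressed already."""
    sigma_n, tau = resolved_stresses(sigma1, sigma3, theta_deg)
    p_crit = sigma_n - tau / mu   # pressure at which tau = mu * (sigma_n - p)
    return p_crit - p0

# 60 / 35 MPa principal stresses, 20 MPa ambient pore pressure, fault normal
# at 60 degrees to sigma1 (a well-oriented fault): all values illustrative.
dp = critical_pressure_increase(60.0, 35.0, 60.0, 20.0)
print(round(dp, 2))   # ~3.21 MPa of added pressure would trigger slip
```

    The probabilistic assessment in the abstract essentially repeats this calculation over distributions of stress orientation, friction and pressure for each mapped fault, reporting the probability that the critical pressure increase has been exceeded.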

  8. Re-evaluating fault zone evolution, geometry, and slip rate along the restraining bend of the southern San Andreas Fault Zone

    Science.gov (United States)

    Blisniuk, K.; Fosdick, J. C.; Balco, G.; Stone, J. O.

    2017-12-01

    This study presents new multi-proxy data to provide an alternative interpretation of the late-to-mid Quaternary evolution, geometry, and slip rate of the southern San Andreas Fault Zone, comprising the Garnet Hill, Banning, and Mission Creek fault strands, along its restraining bend near the San Bernardino Mountains and San Gorgonio Pass. Present geologic and geomorphic studies in the region indicate that as the Mission Creek and Banning faults diverge from one another in the southern Indio Hills, the Banning Fault Strand accommodates the majority of lateral displacement across the San Andreas Fault Zone. In this currently favored kinematic model of the southern San Andreas Fault Zone, slip along the Mission Creek Fault Strand decreases significantly northwestward toward the San Gorgonio Pass. Along this restraining bend, the Mission Creek Fault Strand has been considered inactive since the late-to-mid Quaternary (ca. 500-150 kya) due to the transfer of plate-boundary strain westward, to the Banning and Garnet Hill Fault Strands and the San Jacinto Fault Zone, and northeastward, to the Eastern California Shear Zone. Here, we present a revised geomorphic interpretation of fault displacement, initial 36Cl/10Be burial ages, sediment provenance data, and detrital geochronology from modern catchments and displaced Quaternary deposits that improve across-fault correlations. We hypothesize that continuous large-scale translation along this structure has occurred throughout its history into the present. Accordingly, the Mission Creek Fault Strand is active and likely a primary plate-boundary fault at this latitude.

  9. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  10. Cafts: computer aided fault tree analysis

    International Nuclear Information System (INIS)

    Poucet, A.

    1985-01-01

    The fault tree technique has become a standard tool for the analysis of the safety and reliability of complex systems. In spite of the costs, which may be high for a complete and detailed analysis of a complex plant, the fault tree technique is popular and its benefits are fully recognized. Several codes for automated fault tree construction have been proposed, but their applications have mostly been restricted to simple academic examples and rarely concern complex, real-world systems. In this paper an interactive approach to fault tree construction is presented. The aim is not to replace the analyst, but to offer him an intelligent tool which can assist him in modeling complex systems. Using the CAFTS method, the analyst interactively constructs a fault tree in two phases: (1) in the first phase he generates an overall failure logic structure of the system, the macrofault tree; in this phase, CAFTS features an expert-system approach to assist the analyst, making use of a knowledge base containing generic rules on the behavior of subsystems and components; (2) in the second phase the macrofault tree is further refined and transformed into a fully detailed and quantified fault tree; in this phase a library of plant-specific component failure models is used

  11. The use of outcrop data in fault prediction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Steen, Oeystein

    1997-12-31

    This thesis begins by describing deformation structures formed by gravitational sliding in partially lithified sediments. By studying the spatial variation in the frequency of deformation structures, as well as their geometries and kinematics, the sequential development of an ancient slide is outlined. This study brings to light a complex deformation history associated with block gliding, involving folding, listric faulting, small-scale boudinage and clastic dyke injection. The collapse deformation documented in the basal part of a gliding sheet is described for the first time. Further, rift-related normal faults formed in a continental sequence of normal beds are described, with a focus on the scaling behaviour of faults in variably cemented sandstones. It is shown that the displacement population coefficients of faults are influenced by the local lithology; hence the scaling of faults is not uniform on all scales and varies in different parts of a rock volume. The scaling behaviour of small faults is linked to mechanical heterogeneities in the rock and to the deformation style. It is shown that small faults occur in an aureole around larger faults. Strain and scaling of the small faults were measured in different structural positions relative to the major faults. The local strain field is found to be variable and can be correlated with drag folding along the master faults. A modeling approach is presented for the prediction of small faults in a hydrocarbon reservoir. By modeling an outcrop bedding surface on a seismic workstation, outcrop data could be compared with seismic data. Further, well data were used to test the relationships inferred from the analogue outcrops. The study shows that seismic ductile strain can be correlated with the distribution of small faults. Moreover, horizontal structural well data are shown to calibrate the structural interpretation of faulted seismic horizons. 133 refs., 64 figs., 3 tabs.

  12. Southern San Andreas Fault evaluation field activity: approaches to measuring small geomorphic offsets--challenges and recommendations for active fault studies

    Science.gov (United States)

    Scharer, Katherine M.; Salisbury, J. Barrett; Arrowsmith, J. Ramon; Rockwell, Thomas K.

    2014-01-01

    In southern California, where fast slip rates and sparse vegetation contribute to crisp expression of faults and microtopography, field and high-resolution topographic data allow geologists to measure small geomorphic offsets along a fault, analyze the offset values for concentrations or trends along strike, and infer that the common magnitudes reflect successive surface-rupturing earthquakes along that fault section. Wallace (1968) introduced the use of such offsets, and the challenges in interpreting their "unique complex history", with offsets on the Carrizo section of the San Andreas fault; these were more fully mapped by Sieh (1978) and followed by similar field studies along other faults (e.g., Lindvall et al., 1989; McGill and Sieh, 1991). Results from such compilations spurred the development of classic fault behavior models, notably the characteristic earthquake and slip-patch models, and thus constitute an important component of the long-standing contrast between magnitude-frequency models (Schwartz and Coppersmith, 1984; Sieh, 1996; Hecker et al., 2013). The proliferation of offset datasets has led earthquake geologists to examine the methods and approaches for measuring these offsets, the uncertainties associated with measurement of such features, and quality ranking schemes (Arrowsmith and Rockwell, 2012; Salisbury, Arrowsmith, et al., 2012; Gold et al., 2013; Madden et al., 2013). In light of this, the Southern San Andreas Fault Evaluation (SoSAFE) project at the Southern California Earthquake Center (SCEC) organized a combined field activity and workshop (the "Fieldshop") to measure offsets, compare techniques, and explore differences in interpretation. A thorough analysis of the measurements from the field activity will be provided separately; this paper discusses the complications presented by such offset measurements using two channels from the San Andreas fault as illustrative cases. We conclude with best approaches for future data collection efforts based on input from the Fieldshop.

  13. A Fault Diagnosis Approach for the Hydraulic System by Artificial Neural Networks

    OpenAIRE

    Xiangyu He; Shanghong He

    2014-01-01

    Based on artificial neural networks, a fault diagnosis approach for the hydraulic system was proposed in this paper. Normal-state samples were used as training data to develop a dynamic general regression neural network (DGRNN) model. The trained DGRNN model then served as the fault determinant to diagnose test faults, and the working condition of the hydraulic system was identified. Several typical faults of the hydraulic system were used to verify the fault diagnosis approach. Experiment re...
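
    The general-regression idea behind such a fault determinant, kernel-weighted averaging over stored normal-state samples, then flagging measurements that deviate from the normal-state prediction, can be sketched in a few lines. The training data, smoothing width and threshold below are made up for illustration, and the dynamic extensions of the paper's DGRNN are omitted:

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.1):
    """General regression neural network: a Nadaraya-Watson kernel average
    of the stored outputs, weighted by Gaussian closeness of the input."""
    w = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

def is_faulty(train_x, train_y, x, y_measured, threshold=0.2):
    """Flag a fault when the measurement deviates from the normal-state model."""
    return abs(y_measured - grnn_predict(train_x, train_y, x)) > threshold

# Normal-state samples of a hypothetical pressure -> flow characteristic:
xs = [i / 10.0 for i in range(11)]
ys = [2.0 * xi for xi in xs]      # healthy behaviour: flow = 2 * pressure

print(is_faulty(xs, ys, 0.5, 1.02))   # healthy reading  -> False
print(is_faulty(xs, ys, 0.5, 0.40))   # degraded reading -> True
```

    GRNN training is just storage of the normal samples, which is why it suits this "train on normal, detect the abnormal" scheme: no iterative fitting is needed when the normal-state dataset changes.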

  14. Faults in Linux

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

    In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number ... of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still ... a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been ...

  15. Fault Diagnosis in Deaerator Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    S Srinivasan

    2007-01-01

    Full Text Available In this paper a fuzzy logic based fault diagnosis system for a deaerator in a power plant unit is presented. The system parameters are obtained using the linearised state-space deaerator model. The fuzzy inference system is created and a rule base is evaluated relating the parameters to the type and severity of the faults. These rules are fired for specific changes in system parameters and the faults are diagnosed.
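
    The rule-firing mechanism described can be sketched as a tiny Sugeno-style inference over parameter deviations. The membership shapes, rules, fault labels and severities below are invented for illustration, not the paper's deaerator rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Memberships for a normalized parameter deviation in [0, 1] (illustrative):
def small(x): return tri(x, -0.4, 0.0, 0.4)
def large(x): return tri(x, 0.3, 1.0, 1.7)

def diagnose(pressure_dev, level_dev):
    """Fire the rules (AND = min), then combine the crisp severities of the
    fired rules by their weighted average (zero-order Sugeno inference)."""
    rules = [
        (min(small(pressure_dev), small(level_dev)), "no fault",        0.0),
        (min(large(pressure_dev), small(level_dev)), "pressure fault",  0.8),
        (min(small(pressure_dev), large(level_dev)), "level fault",     0.8),
        (min(large(pressure_dev), large(level_dev)), "severe combined", 1.0),
    ]
    total = sum(w for w, _, _ in rules)
    severity = sum(w * s for w, _, s in rules) / total if total else 0.0
    cause = max(rules, key=lambda r: r[0])[1]   # most strongly fired rule
    return cause, round(severity, 2)

print(diagnose(0.05, 0.05))   # small deviations  -> ('no fault', 0.0)
print(diagnose(0.9, 0.1))     # pressure deviates -> ('pressure fault', 0.8)
```

    A Mamdani system, as typically built with a fuzzy-inference toolbox, would defuzzify output membership functions instead of averaging crisp severities, but the rule-firing step is the same.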

  16. Fault classification method for the driving safety of electrified vehicles

    Science.gov (United States)

    Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika

    2014-05-01

    A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence on the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that controllability decreased in particular with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.

  17. Dynamical instability produces transform faults at mid-ocean ridges.

    Science.gov (United States)

    Gerya, Taras

    2010-08-27

    Transform faults at mid-ocean ridges--one of the most striking, yet enigmatic features of terrestrial plate tectonics--are considered to be the inherited product of preexisting fault structures. Ridge offsets along these faults therefore should remain constant with time. Here, numerical models suggest that transform faults are actively developing and result from dynamical instability of constructive plate boundaries, irrespective of previous structure. Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become transform faults within a few million years. Fracture-related rheological weakening stabilizes ridge-parallel detachment faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps.

  18. Fault Detection and Isolation using Eigenstructure Assignment

    DEFF Research Database (Denmark)

    Jørgensen, R. B.; Patton, R.; Chen, J.

    1994-01-01

    The purpose of this article is to investigate the robustness to model uncertainties of observer-based fault detection and isolation. The approach is designed with a straightforward dynamic and the observer...

  19. Model predictive and reallocation problem for CubeSat fault recovery and attitude control

    Science.gov (United States)

    Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina

    2018-01-01

    In recent years, thanks to growing know-how in machine-learning techniques and advances in on-board computational capabilities, computationally expensive algorithms such as Model Predictive Control have begun to spread in space applications, even on small on-board processors. This paper presents an algorithm for optimal fault recovery of a 3U CubeSat, developed in the MathWorks Matlab & Simulink environment. The algorithm involves optimization techniques aimed at obtaining the optimal recovery solution, and a Model Predictive Control approach for attitude control. The simulated system is a CubeSat in Low Earth Orbit whose attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects errors in the attitude determination of the satellite, and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows the control action to be redistributed when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of implementing the Model Predictive approach to control the attitude of the satellite.
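
    The redistribution property of magnetic actuation, reallocating a commanded torque over the remaining actuators when one torquer fails, can be sketched as a minimum-norm allocation problem. The actuator geometry, field vector and torque demand below are illustrative, and the MPC layer the paper couples this with is omitted:

```python
def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    A = e * i - f * h; B = -(d * i - f * g); C = d * h - e * g
    D = -(b * i - c * h); E = a * i - c * g; F = -(a * h - b * g)
    G = b * f - c * e; H = -(a * f - c * d); I = a * e - b * d
    det = a * A + b * B + c * C
    return [[A / det, D / det, G / det],
            [B / det, E / det, H / det],
            [C / det, F / det, I / det]]

def effectiveness(B_field, wheel_axis, healthy):
    """3x4 actuator effectiveness matrix: torque per unit command for three
    orthogonal magnetic torquers (tau = m x B) plus one reaction wheel.
    A faulty actuator's column is zeroed out."""
    Bx, By, Bz = B_field
    cols = [(0.0, -Bz, By), (Bz, 0.0, -Bx), (-By, Bx, 0.0), wheel_axis]
    cols = [c if ok else (0.0, 0.0, 0.0) for c, ok in zip(cols, healthy)]
    return [[cols[j][i] for j in range(4)] for i in range(3)]

def allocate(A, tau):
    """Minimum-norm command u with A u = tau:  u = A^T (A A^T)^-1 tau."""
    AAT = [[sum(A[i][k] * A[j][k] for k in range(4)) for j in range(3)]
           for i in range(3)]
    y = [sum(row[j] * tau[j] for j in range(3)) for row in inv3(AAT)]
    return [sum(A[i][j] * y[i] for i in range(3)) for j in range(4)]

B_field = (2e-5, 0.0, 4e-5)   # local geomagnetic field, tesla (illustrative)
tau_d = (1e-6, 2e-6, -1e-6)   # demanded torque, N*m (illustrative)
wheel = (1.0, 0.0, 0.0)       # reaction wheel spin axis

for healthy in ([True] * 4, [False, True, True, True]):  # nominal / torquer-1 fault
    A = effectiveness(B_field, wheel, healthy)
    u = allocate(A, tau_d)
    achieved = [sum(A[i][j] * u[j] for j in range(4)) for i in range(3)]
    print([round(t, 9) for t in achieved])   # matches tau_d in both cases
```

    The allocation only works while the surviving columns still span all three axes; since m x B can never produce torque along the field direction, the wheel axis and the field geometry determine which single-torquer faults remain recoverable at a given orbital position.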

  20. 3D Modelling of Seismically Active Parts of Underground Faults via Seismic Data Mining

    Science.gov (United States)

    Frantzeskakis, Theofanis; Konstantaras, Anthony

    2015-04-01

    During the last few years rapid steps have been taken towards drilling for oil in the western Mediterranean Sea. Since most countries in the region benefit mainly from tourism, and considering that the Mediterranean is a closed sea that replenishes its water only once every ninety years, careful measures are being taken to ensure safe drilling. In that context this research work attempts to derive a three-dimensional model of the seismically active parts of the underlying underground faults in areas of petroleum interest. For that purpose seismic spatio-temporal clustering has been applied to seismic data to identify potentially distinct seismic regions in the area of interest. Results have been coalesced with two-dimensional maps of underground faults from past surveys, and seismic epicentres, after careful relocation processing, have been used to provide information on the vertical extent of multiple underground faults in the region of interest. The end product is a three-dimensional map of the possible underground location and extent of the seismically active parts of underground faults. Indexing terms: underground faults modelling, seismic data mining, 3D visualisation, active seismic source mapping, seismic hazard evaluation, dangerous phenomena modelling Acknowledgment This research work is supported by the ESPA Operational Programme, Education and Life Long Learning, Students Practical Placement Initiative. References [1] Alves, T.M., Kokinou, E. and Zodiatis, G.: 'A three-step model to assess shoreline and offshore susceptibility to oil spills: The South Aegean (Crete) as an analogue for confined marine basins', Marine Pollution Bulletin, In Press, 2014 [2] Ciappa, A., Costabile, S.: 'Oil spill hazard assessment using a reverse trajectory method for the Egadi marine protected area (Central Mediterranean Sea)', Marine Pollution Bulletin, vol. 84 (1-2), pp. 
44-55, 2014 [3] Ganas, A., Karastathis, V., Moshou, A., Valkaniotis, S., Mouzakiotis