WorldWideScience

Sample records for monte aquila fault

  1. New paleoseismic data across the Mt. Marine Fault between the 2016 Amatrice and 2009 L’Aquila seismic sequences (central Apennines)

    Directory of Open Access Journals (Sweden)

    Marco Moro

    2016-11-01

    Full Text Available Paleoseismological investigations have been carried out along the Mt. Marine normal fault, a probable source of the February 2, 1703 (Me = 6.7) earthquake. The fault affects the area between the 2016 Amatrice and 2009 L’Aquila seismic sequences. Paleoseismological analysis provides data which corroborate previous studies, highlighting the occurrence of 5 events of surface faulting after the 6th–5th millennium B.C., the most recent of which is probably the 2 February 1703 earthquake. A minimum displacement per event of about 0.35 m has been measured. The occurrence of a minimum of four faulting events within the last 7,000 years suggests a maximum recurrence interval of about 1,700 years.
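
    The recurrence figure quoted above follows from simple interval arithmetic; a minimal sketch (Python) using only the numbers given in the abstract:

      # Maximum mean recurrence interval implied by a minimum number of
      # surface-faulting events within a known time window (values from the abstract).
      time_window_yr = 7000   # time elapsed since the oldest dated horizon (6th-5th millennium B.C.)
      min_events = 4          # minimum number of recognized faulting events

      max_mean_recurrence = time_window_yr / min_events
      print(f"maximum mean recurrence interval ~ {max_mean_recurrence:.0f} yr")  # ~1750 yr, of the order of the quoted ~1,700 yr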

  2. The 2009 MW 6.1 L'Aquila fault system imaged by 64k earthquake locations

    International Nuclear Information System (INIS)

    Valoroso, Luisa

    2016-01-01

    On April 6, 2009, a MW 6.1 normal-faulting earthquake struck the axial area of the Abruzzo region in central Italy. We investigate the complex architecture and mechanics of the activated fault system by using 64k high-resolution foreshock and aftershock locations. The fault system is composed of two major SW-dipping segments forming an en-echelon NW-trending system about 50 km long: the high-angle L’Aquila fault and the listric Campotosto fault, located within the first 10 km of depth. From the beginning of 2009, foreshocks activated the deepest portion of the main-shock fault. A week before the MW 6.1 event, the largest (MW 4.0) foreshock triggered seismicity migration along a minor off-fault segment. Seismicity jumped back to the main plane a few hours before the main shock. High-precision locations allowed us to peer into the fault zone, revealing complex geological structures from the metre to the kilometre scale, analogous to those observed in field studies and seismic profiles. We were also able to investigate important aspects of earthquake nucleation and propagation through the upper crust in carbonate-bearing rocks, such as the role of fluids in normal-faulting earthquakes, how crustal faults terminate at depth, and the key role of fault zone structure in earthquake rupture evolution.

  3. Physical and Transport Property Variations Within Carbonate-Bearing Fault Zones: Insights From the Monte Maggio Fault (Central Italy)

    Science.gov (United States)

    Trippetta, F.; Carpenter, B. M.; Mollo, S.; Scuderi, M. M.; Scarlato, P.; Collettini, C.

    2017-11-01

    The physical characterization of carbonate-bearing normal faults is fundamental for resource development and seismic hazard assessment. Here we report laboratory measurements of density, porosity, Vp, Vs, elastic moduli, and permeability for a range of effective confining pressures (0.1-100 MPa), conducted on samples representing different structural domains of a carbonate-bearing fault. We find a reduction in porosity from the fault breccia (11.7% total and 6.2% connected) to the main fault plane (9% total and 3.5% connected), with both domains showing higher porosity compared to the protolith (6.8% total and 1.1% connected). With increasing confining pressure, P wave velocity evolves from 4.5 to 5.9 km/s in the fault breccia, is constant at 5.9 km/s approaching the fault plane, and is low (4.9 km/s) in clay-rich fault domains. We find that while the fault breccia shows pressure-sensitive behavior (a reduction in permeability from 2 × 10⁻¹⁶ to 2 × 10⁻¹⁷ m²), the cemented cataclasite close to the fault plane is characterized by pressure-independent behavior (permeability 4 × 10⁻¹⁷ m²). Our results indicate that the deformation processes occurring within the different fault structural domains influence the physical and transport properties of the fault zone. In situ Vp profiles match the laboratory measurements well, demonstrating that laboratory data are valuable at larger scales. Combining the experimental values of elastic moduli and frictional properties, it follows that at shallow crustal levels M ≤ 1 earthquakes are less favored, in agreement with the earthquake-depth distribution of the 2009 L'Aquila seismic sequence, which occurred in carbonates.
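
    The laboratory velocities and elastic moduli reported in such studies are linked through the standard isotropic relations; a minimal sketch (Python) with illustrative, hypothetical values rather than the paper's data:

      import math

      def vp_vs_from_moduli(K, G, rho):
          """P- and S-wave velocities (m/s) from bulk modulus K (Pa),
          shear modulus G (Pa) and density rho (kg/m^3), isotropic case."""
          vp = math.sqrt((K + 4.0 * G / 3.0) / rho)
          vs = math.sqrt(G / rho)
          return vp, vs

      # Hypothetical values broadly representative of a tight carbonate (not the paper's data)
      vp, vs = vp_vs_from_moduli(K=55e9, G=28e9, rho=2700.0)
      print(f"Vp ~ {vp / 1000:.1f} km/s, Vs ~ {vs / 1000:.1f} km/s")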

  4. Stacking fault growth of FCC crystal: The Monte-Carlo simulation approach

    International Nuclear Information System (INIS)

    Jian Jianmin; Ming Naiben

    1988-03-01

    The Monte-Carlo method has been used to simulate the growth of the FCC (111) crystal surface on which the outcrop of a stacking fault is present. Growth rates were compared between the stacking-fault-containing surface and the perfect surface. The successive growth stages have been simulated. It is concluded that the outcrop of a stacking fault on the crystal surface can act as a self-perpetuating step-generating source. (author). 7 refs, 3 figs
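
    A toy illustration of why a permanent step source lets a surface grow when two-dimensional nucleation is suppressed; this is a deliberately crude solid-on-solid sketch (Python), not the authors' simulation scheme:

      import random

      def grow(L=40, steps=200_000, seed_step=False, seed=0):
          """Toy solid-on-solid growth: a site may only be raised if a lateral
          neighbour is already at (or above) the target level, i.e. growth
          proceeds by step advance only (2D nucleation switched off)."""
          rng = random.Random(seed)
          h = [[0] * L for _ in range(L)]
          if seed_step:
              for j in range(L):
                  h[0][j] = 1  # permanent step row, crudely mimicking the stacking-fault outcrop
          for _ in range(steps):
              i, j = rng.randrange(L), rng.randrange(L)
              target = h[i][j] + 1
              nbrs = (h[(i - 1) % L][j], h[(i + 1) % L][j], h[i][(j - 1) % L], h[i][(j + 1) % L])
              if any(n >= target for n in nbrs):
                  h[i][j] = target
              if seed_step:
                  for jj in range(L):  # keep the source one layer ahead of its surroundings
                      h[0][jj] = max(h[0][jj], max(h[1][jj], h[-1][jj]) + 1)
          return sum(map(sum, h)) / (L * L)  # mean height ~ amount of material grown

      print("perfect surface, mean height:", grow(seed_step=False))   # stays at 0: nothing to grow from
      print("step-source surface, mean height:", grow(seed_step=True))  # grows steadily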

  5. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of 6 fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for fault mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A centimetre-thin layer of gouge, which is known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes, even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives on the potential for

  6. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Full Text Available Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systematic safety field that studies faults in the steering system itself, the driver’s human reliability (HR) and various environmental conditions. This paper proposed a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historic fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF) risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME system.
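
    A minimal sketch (Python) of a Monte Carlo risk indicator of the kind described, combining a mechanical fault probability, a human-reliability term and an environmental condition; all distributions and thresholds below are hypothetical assumptions, and the consequence model merely stands in for the virtual-prototype fault simulation:

      import random

      def stuck_rudder_risk(n=1_000_000, seed=1):
          """Crude HME Monte Carlo: a hazardous outcome requires the fault to occur,
          the operator to fail to recover, and a harsh environment. All numbers
          are illustrative assumptions, not the paper's data."""
          rng = random.Random(seed)
          hazardous = 0
          for _ in range(n):
              fault_occurs = rng.random() < 1e-2     # stuck-rudder fault per mission (assumed)
              operator_fails = rng.random() < 0.1    # human reliability term (assumed)
              sea_state = rng.gauss(3.0, 1.0)        # environmental severity (assumed scale)
              if fault_occurs and operator_fails and sea_state > 4.0:
                  hazardous += 1
          return hazardous / n

      print(f"probabilistic risk indicator ~ {stuck_rudder_risk():.1e}")  # ~1.6e-4 under these assumptions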

  7. How Might Draining Lake Campotosto Affect Stress and Seismicity on the Monte Gorzano Normal Fault, Central Italy?

    Science.gov (United States)

    Verdecchia, A.; Deng, K.; Harrington, R. M.; Liu, Y.

    2017-12-01

    It is broadly accepted that large variations of the water level in reservoirs may affect the stress state on nearby faults. While most studies consider the relationship between lake impoundment and the occurrence of large earthquakes or seismicity rate increases in the surrounding region, very few examples focus on the effects of lake drainage. The second largest reservoir in Europe, Lake Campotosto, is located on the hanging wall of the Monte Gorzano fault, an active normal fault responsible for at least two M ≥ 6 earthquakes in historical times. The northern part of this fault ruptured during the August 24, 2016, Mw 6.0 Amatrice earthquake, increasing the probability of a future large event on the southern section, where an aftershock sequence is still ongoing. The proximity of the Campotosto reservoir to the active fault has aroused general concern with respect to the stability of the three dams bounding the reservoir should the southern part of the Monte Gorzano fault produce a moderate earthquake. Local officials have proposed draining the reservoir as a hazard mitigation strategy to avoid possible future catastrophes. In an effort to assess how draining the reservoir might affect earthquake nucleation on the fault, we use a finite-element poroelastic model to calculate the evolution of stress and pore pressure, in terms of Coulomb stress changes, that would be induced on the Monte Gorzano fault by emptying the Lake Campotosto reservoir. Preliminary results show that an instantaneous drainage of the lake would produce positive Coulomb stress changes mostly on the shallower part of the fault (0 to 2 km), while a stress decrease of the order of 0.2 bar is expected on the Monte Gorzano fault between 0 and 8 km depth. Earthquake hypocenters on the southern portion of the fault currently nucleate between 5 and 13 km depth, with activity distributed near the reservoir. Upcoming work will model the effects of varying fault geometry and elastic parameters, including geological
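
    For reference, the Coulomb stress change mentioned here is commonly written as ΔCFS = Δτ + μ(Δσn + Δp); a minimal sketch (Python) with assumed sign conventions and friction, not values from the study:

      def coulomb_stress_change(d_tau, d_sigma_n, d_pore_pressure, mu=0.6):
          """Delta CFS = d_tau + mu * (d_sigma_n + d_p): shear stress change resolved
          in the slip direction plus friction times the change in effective normal
          stress (tension/unclamping positive). mu = 0.6 is an assumed friction."""
          return d_tau + mu * (d_sigma_n + d_pore_pressure)

      # Illustrative numbers only (bar): a small shear decrease outweighed by unclamping
      print(coulomb_stress_change(d_tau=-0.05, d_sigma_n=0.10, d_pore_pressure=0.05))  # +0.04 bar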

  8. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The hazard curve of PGA has been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate is the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have then been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
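
    A minimal sketch (Python) of propagating slip-rate uncertainty to a hazard estimate and summarizing it with a coefficient of variation; the hazard function and the slip-rate distribution below are purely hypothetical placeholders, not a PSHA code:

      import random
      import statistics

      def hazard_pga(slip_rate_mm_yr):
          """Hypothetical monotone mapping from fault slip rate to PGA (g) at a site;
          it stands in for a full PSHA calculation, only to show the propagation."""
          return 0.5 + 0.05 * slip_rate_mm_yr

      rng = random.Random(42)
      slip_rates = [max(rng.gauss(4.0, 1.0), 0.0) for _ in range(10_000)]  # assumed slip-rate distribution (mm/yr)
      pga = [hazard_pga(s) for s in slip_rates]

      mean, sd = statistics.mean(pga), statistics.stdev(pga)
      print(f"PGA ~ {mean:.3f} g, uncertainty ~ {sd:.3f} g, COV ~ {100 * sd / mean:.1f}%")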

  9. Non-inductive components of electromagnetic signals associated with L'Aquila earthquake sequences estimated by means of inter-station impulse response functions

    Directory of Open Access Journals (Sweden)

    C. Di Lorenzo

    2011-04-01

    Full Text Available On 6 April 2009 at 01:32:39 UT a strong earthquake occurred west of L'Aquila at the very shallow depth of 9 km. The local magnitude of the main shock was Ml = 5.8 (Mw = 6.3). Several powerful aftershocks occurred in the following days. The epicentre of the main shock was located 6 km from the Geomagnetic Observatory of L'Aquila, on a fault 15 km long with a NW-SE strike of about 140° and a SW dip of about 42°. For this reason, the L'Aquila seismic events offered very favourable conditions for detecting possible electromagnetic emissions related to the earthquake. The data used in this work come from the permanent geomagnetic observatories of L'Aquila and Duronia. Here we show the results of the analysis of the residual magnetic field, estimated by means of inter-station impulse response functions in the frequency band from 0.3 Hz to 3 Hz.
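
    A minimal sketch (Python/SciPy) of estimating an inter-station transfer function and forming a residual signal from it; the synthetic records and every parameter are assumptions, not the observatory processing chain:

      import numpy as np
      from scipy import signal

      fs = 10.0                                     # sampling rate (Hz), assumed
      rng = np.random.default_rng(0)
      remote = rng.standard_normal(200_000)                             # reference station, synthetic
      local = 0.8 * remote + 0.05 * rng.standard_normal(remote.size)    # target station, synthetic

      # Frequency-domain transfer function between the two stations
      f, Pxy = signal.csd(remote, local, fs=fs, nperseg=4096)
      _, Pxx = signal.welch(remote, fs=fs, nperseg=4096)
      H = Pxy / Pxx

      # Predict the local field from the remote one and keep the residual
      H_full = np.interp(np.fft.rfftfreq(local.size, d=1 / fs), f, H.real)  # crude real-valued interpolation
      predicted = np.fft.irfft(np.fft.rfft(remote) * H_full, n=local.size)
      residual = local - predicted

      band = (f >= 0.3) & (f <= 3.0)
      print("mean |H| in the 0.3-3 Hz band:", np.abs(H[band]).mean())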

  10. Precursory slow-slip loaded the 2009 L'Aquila earthquake sequence

    Science.gov (United States)

    Borghi, A.; Aoudia, A.; Javed, F.; Barzaghi, R.

    2016-05-01

    Slow-slip events (SSEs) are common at subduction zone faults where large mega earthquakes occur. We report here that one of the best-recorded moderate-size continental earthquakes, the 2009 April 6 moment magnitude (Mw) 6.3 L'Aquila (Italy) earthquake, was preceded by an Mw 5.9 SSE that originated from the decollement beneath the reactivated normal faulting system. The SSE is identified from a rigorous analysis of continuous GPS stations; it occurred on 12 February and lasted for almost two weeks. It coincided with a burst in the foreshock activity, with small repeating earthquakes migrating towards the main-shock hypocentre, as well as with a change in the elastic properties of rocks in the fault region. The SSE caused substantial stress loading at seismogenic depths where the magnitude 4.0 foreshock and the Mw 6.3 main shock nucleated. This stress loading is also spatially correlated with the lateral extent of the aftershock sequence.

  11. Aquila

    Science.gov (United States)

    Murdin, P.

    2000-11-01

    (the Eagle; abbrev. Aql, gen. Aquilae; area 652 sq. deg.) An equatorial constellation that lies between Sagitta and Sagittarius, and culminates at midnight in mid-July. Its origin dates back to Babylonian times and it is said to represent the eagle of Zeus in Greek mythology, which carried the thunderbolts that Zeus hurled at his enemies and which snatched up Ganymede to become cup-bearer to the g...

  12. SAFTAC, Monte-Carlo Fault Tree Simulation for System Design Performance and Optimization

    International Nuclear Information System (INIS)

    Crosetti, P.A.; Garcia de Viedma, L.

    1976-01-01

    1 - Description of problem or function: SAFTAC is a Monte Carlo fault tree simulation program that provides a systematic approach for analyzing system design, performing trade-off studies, and optimizing system changes or additions. 2 - Method of solution: SAFTAC assumes an exponential failure distribution for basic input events and a choice of either Gaussian distributed or constant repair times. The program views the system represented by the fault tree as a statistical assembly of independent basic input events, each characterized by an exponential failure distribution and, if used, a constant or normal repair distribution. 3 - Restrictions on the complexity of the problem: The program is dimensioned to handle 1100 basic input events and 1100 logical gates. It can be re-dimensioned to handle up to 2000 basic input events and 2000 logical gates within the existing core memory
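
    A minimal sketch (Python) of the kind of Monte Carlo fault-tree simulation described, for a small hypothetical tree (TOP = (A OR B) AND (C OR D)) with exponential failure distributions and no repair; the rates and mission time are illustrative assumptions:

      import random

      def top_event_probability(rates, mission_time, n=100_000, seed=0):
          """Estimate P(top event before mission_time) for TOP = (A OR B) AND (C OR D)
          by sampling exponential failure times of the basic events."""
          rng = random.Random(seed)
          failures = 0
          for _ in range(n):
              t = {e: rng.expovariate(lam) for e, lam in rates.items()}
              sub1 = min(t["A"], t["B"]) <= mission_time   # OR gate: earliest of A, B
              sub2 = min(t["C"], t["D"]) <= mission_time   # OR gate: earliest of C, D
              if sub1 and sub2:                            # AND gate at the top
                  failures += 1
          return failures / n

      rates = {"A": 1e-4, "B": 2e-4, "C": 5e-5, "D": 1e-4}   # assumed failure rates (1/h)
      print("estimated top-event probability:", top_event_probability(rates, mission_time=1000.0))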

  13. The LVD signals during the early-mid stages of the L'Aquila seismic sequence and the radon signature of some aftershocks of moderate magnitude

    International Nuclear Information System (INIS)

    Cigolini, C.; Laiolo, M.; Coppola, D.

    2015-01-01

    The L'Aquila seismic swarm culminated in the mainshock of April 6, 2009 (ML = 5.9). Here, we report and analyze the Large Volume Detector (LVD, used in neutrino research) low-energy traces (∼0.8 MeV) collected during the early-mid stages of the seismic sequence, together with the data of a radon monitoring experiment. The peaks of the LVD traces do not correlate with the evolution and magnitude of the earthquakes, including major aftershocks. Conversely, our radon measurements, obtained using three automatic stations deployed along the regional NW–SE faulting system, seem, in one case, to be more effective. In fact, the time series collected on the NW–SE Paganica fracture recorded marked variations and peaks during and prior to moderate aftershocks (ML > 3). The Paganica monitoring station (PGN) seems to respond better to active seismicity because the radon detector was placed directly within the bedrock of an active fault. It is suggested that future networks for radon monitoring of active seismicity should preferentially implement this setting. - Highlights: • The April 6, 2009 L'Aquila earthquake (ML 5.9) had a remarkable echo in the media. • We report LVD traces together with the data of a radon monitoring experiment. • Radon emissions were measured by 3 automatic stations along the main NW–SE fault. • The one that responds better to seismicity was placed in the fault's bedrock. • Future networks for earthquake radon monitoring should implement this setting

  14. Micro-textures of Deformed Gouges by Friction Experiments of Mont Terri Main Fault, Switzerland

    Science.gov (United States)

    Aoki, K.; Seshimo, K.; Sakai, T.; Komine, Y.; Kametaka, M.; Watanabe, T.; Nussbaum, C.; Guglielmi, Y.

    2017-12-01

    Friction experiments were conducted on samples from the Main Fault of the Mont Terri Rock Laboratory, Switzerland, and the micro-textures of the deformed gouges were then observed using scanning electron microscopes (JCM-6000 and JXA-8530F). Samples were taken at depths of 47.2 m and 37.3 m in borehole BFS-1, and at 36.7 m, 37.1 m, 41.4 m and 44.6 m in borehole BFS-2, which were drilled from the drift floor at 260 m depth below the surface. Friction experiments were conducted on the above 6 samples using a rotary-shear low- to high-velocity friction apparatus at the Institute of Geology, China Earthquake Administration, in Beijing, at a normal stress of 3.95 to 4.0 MPa and at slip rates ranging from 0.2 µm/s to 2.1 mm/s. Cylindrical specimens of Ti-Al-V alloy, exhibiting behaviour similar to the host rock specimen, were used as rotary and stationary pistons of 40 mm diameter. A Teflon sleeve was used around the pistons to confine the sample during the test. The main results are summarized as follows. 1) Mudrocks in the Mont Terri drill holes (BFS-1, BFS-2) had steady-state or nearly steady-state friction coefficients μss in the range of 0.1–0.3 for wet gouges and 0.5–0.7 for dry gouges. Friction coefficients of dry gouges were approximately twice as large as those of wet gouges. However, the fault rock (37.3 m, BFS-1) with scaly fabric showed no difference between wet and dry conditions: μss (wet) 0.50–0.77, μss (dry) 0.45–0.78. This is probably because the clay content of this rock is lower (about 33%) than in the other rocks (67–73%) (Shimamoto, 2017). 2) Deformed gouges are characterized by well-developed slip zones adjacent to the rotary and stationary pistons, accompanied by slickenside surfaces with clear striations. Such slickenside surfaces are similar to those developed in the drill core samples used in our experiments. 3) Multiple slip zones were observed in the 37.3 m sample of BFS-1 and the 36.7 m sample of BFS-2 under dry conditions, suggesting that slip occurred in the interior of the gouge

  15. Lessons of L'Aquila for Operational Earthquake Forecasting

    Science.gov (United States)

    Jordan, T. H.

    2012-12-01

    The L'Aquila earthquake of 6 Apr 2009 (magnitude 6.3) killed 309 people and left tens of thousands homeless. The mainshock was preceded by a vigorous seismic sequence that prompted informal earthquake predictions and evacuations. In an attempt to calm the population, the Italian Department of Civil Protection (DPC) convened its Commission on the Forecasting and Prevention of Major Risk (MRC) in L'Aquila on 31 March 2009 and issued statements about the hazard that were widely received as an "anti-alarm"; i.e., a deterministic prediction that there would not be a major earthquake. On October 23, 2012, a court in L'Aquila convicted the vice-director of DPC and six scientists and engineers who attended the MRC meeting on charges of criminal manslaughter, and it sentenced each to six years in prison. A few weeks after the L'Aquila disaster, the Italian government convened an International Commission on Earthquake Forecasting for Civil Protection (ICEF) with the mandate to assess the status of short-term forecasting methods and to recommend how they should be used in civil protection. The ICEF, which I chaired, issued its findings and recommendations on 2 Oct 2009 and published its final report, "Operational Earthquake Forecasting: Status of Knowledge and Guidelines for Implementation," in Aug 2011 (www.annalsofgeophysics.eu/index.php/annals/article/view/5350). As defined by the Commission, operational earthquake forecasting (OEF) involves two key activities: the continual updating of authoritative information about the future occurrence of potentially damaging earthquakes, and the officially sanctioned dissemination of this information to enhance earthquake preparedness in threatened communities. Among the main lessons of L'Aquila is the need to separate the role of science advisors, whose job is to provide objective information about natural hazards, from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms

  16. Stress and Strain Rates from Faults Reconstructed by Earthquakes Relocalization

    Science.gov (United States)

    Morra, G.; Chiaraluce, L.; Di Stefano, R.; Michele, M.; Cambiotti, G.; Yuen, D. A.; Brunsvik, B.

    2017-12-01

    Recurrence of main earthquakes on the same fault depends on the kinematic setting, the hosting lithologies, and the fault geometry and population. Northern and central Italy transitioned from convergence to post-orogenic extension. This has produced a unique and very complex tectonic setting, characterized by superimposed normal faults crossing different geologic domains, that allows a variety of seismic manifestations to be investigated. In the past twenty years, three seismic sequences (1997 Colfiorito, 2009 L'Aquila and 2016-17 Amatrice-Norcia-Visso) activated a 150 km long normal fault system located between the central and northern Apennines, allowing the recording of thousands of seismic events. Both the 1997 and the 2009 main shocks were preceded by a series of small foreshocks occurring in proximity to the future largest events. It has been proposed and modelled that the seismicity pattern of the two foreshock sequences was caused by an active dilatancy phenomenon, due to fluid flow in the source area. Seismic activity has continued intensively until three events with 6.0

  17. Calibration and performance testing of the IAEA Aquila Active Well Coincidence Counter (Unit 1)

    International Nuclear Information System (INIS)

    Menlove, H.O.; Siebelist, R.; Wenz, T.R.

    1996-01-01

    An Active Well Coincidence Counter (AWCC) and a portable shift register (PSR-B) produced by Aquila Technologies Group, Inc., have been tested and cross-calibrated with existing AWCCs used by the International Atomic Energy Agency (IAEA). This report summarizes the results of these tests and the cross-calibration of the detector. In addition, updated tables summarizing the cross-calibration of existing AWCCs and AmLi sources are also included. Using the Aquila PSR-B with existing IAEA software requires secondary software also supplied by Aquila to set up the PSR-B with the appropriate measurement parameters

  18. Testimonies to the L'Aquila earthquake (2009) and to the L'Aquila process

    Science.gov (United States)

    Kalenda, Pavel; Nemec, Vaclav

    2014-05-01

    A lot of confusion, misinformation, false solidarity, efforts to misuse geoethics and other unethical activities in favour of the top Italian seismologists responsible for a bad and superficial evaluation of the situation 6 days prior to the earthquake - that is a general characteristic of the whole period of 5 years separating us from the horrible morning of April 6, 2009 in L'Aquila, with its 309 human victims. The first author of this presentation, as a seismologist, had the unusual opportunity to visit the unfortunate city in April 2009. He obtained "first-hand" information that a real, scientifically based prediction already existed for some shocks in the area on March 29 and 30, 2009. The author of the prediction, Gianpaolo Giuliani, was obliged to stop releasing any public information via the internet. A new prediction was known to him on March 31 - the day when the "Commission of Great Risks" offered a public assurance that an immediate earthquake could be practically excluded. In reality, the members of the commission completely ignored this prediction, declaring it a false alarm by "somebody" (without even using Giuliani's name). Giuliani's observations were of high scientific quality. G. Giuliani predicted the L'Aquila earthquake in a professional way - for the first time in many years of observations. The anomalies which preceded the L'Aquila earthquake were detected in many places in Europe at the same time. The question is what locality would have been designated as the potential focal area if G. Giuliani had known of the other observations in Europe. Deformation (and other) anomalies are observable before almost all global M8 earthquakes. Earthquakes are preceded by deformation and are predictable. The testimony of the second author is based on many unfortunate personal experiences with representatives of the INGV Rome and their supporters from India and even Australia. In July 2010, prosecutor Fabio Picuti charged the Commission

  19. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    Science.gov (United States)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, similarly to other countries (e.g., the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, the review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, the detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations exclusively based on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or are even unreliable, to define the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area for which active and capable faults can first be mapped based on such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  20. Geochemical signature of paleofluids in microstructures from Main Fault in the Opalinus Clay of the Mont Terri rock laboratory, Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Clauer, N. [Laboratoire d’Hydrologie et de Géochimie de Strasbourg (CNRS-UdS), Strasbourg (France); Techer, I. [Equipe Associée, Chrome, Université de Nîmes, Nîmes (France); Nussbaum, Ch. [Swiss Geological Survey, Federal Office of Topography Swisstopo, Wabern (Switzerland); Laurich, B. [Structural Geology, Tectonics and Geomechanics, RWTH Aachen University, Aachen (Germany); Laurich, B. [Federal Institute for Geosciences and Natural Resources BGR, Hannover (Germany)

    2017-04-15

    The present study reports on elemental and Sr isotopic analyses of calcite and associated celestite infillings of various microtectonic features collected mostly in the Main Fault of the Opalinus Clay from the Mont Terri rock laboratory. Based on a detailed microstructural description of veins, slickensides, scaly clay aggregates and gouges, the geochemical signatures of the infillings were compared to those of the leachates from undeformed Opalinus Clay, and to the calcite from veins crosscutting the Hauptrogenstein, Passwang and Staffelegg Formations above and below the Opalinus Clay. Vein calcite and celestite from the Main Fault yield identical ⁸⁷Sr/⁸⁶Sr ratios that are also close to those recorded in the Opalinus Clay matrix inside the Main Fault, but different from those of the diffuse Opalinus Clay calcite outside the fault. These varied ⁸⁷Sr/⁸⁶Sr ratios of the diffuse calcite evidence a lack of interaction between the associated connate waters and the flowing fluids characterized by a homogeneous Sr signature. The ⁸⁷Sr/⁸⁶Sr homogeneity at 0.70774 ± 0.00001 (2σ) for the infillings of most microstructures in the Main Fault, as well as for veins from a nearby limestone layer and sediments around the Opalinus Clay, points to an 'infinite' homogeneous marine supply, whereas the gouge infillings apparently interacted with a chemically more complex fluid. According to the known regional paleogeographic evolution, two seawater supplies were inferred and documented in the Delémont Basin: either during the Priabonian (38-34 Ma ago) from the western Bresse graben, and/or during the Rupelian (34-28 Ma ago) from the northern Rhine Graben. The Rupelian seawater, which yields a mean ⁸⁷Sr/⁸⁶Sr signature significantly higher than those of the microstructural infillings, seems not to be the appropriate source. Alternatively, the Priabonian seawater yields a mean ⁸⁷Sr/⁸⁶Sr ratio precisely matching that of the leachates from diffuse

  1. Un San Sebastiano di Silvestro dell’Aquila e un San Vito di Saturnino Gatti / A St. Sebastian by Silvestro dell’Aquila and a St. Vitus by Saturnino Gatti

    Directory of Open Access Journals (Sweden)

    Lorenzo Principi

    2015-06-01

    Full Text Available The article focuses on the attribution of two unpublished wooden statues to two masters of Renaissance sculpture in Abruzzo: Silvestro di Giacomo da Sulmona, better known as Silvestro dell’Aquila (documented from 1471 to 1504), and Saturnino Gatti (c. 1463-1518). The first proposal concerns a slightly under life-size wooden St. Sebastian preserved in the church of Santa Maria ad Nives at Rocca di Mezzo, the main village of the Altipiano delle Rocche and the birthplace of the celebrated Cardinal Amico Agnifili, a patron of Silvestro di Giacomo. The second concerns a life-size wooden sculpture of St. Vitus, identified in the church of the same name at Colle San Vito, in the municipality of Tornimparte, a short distance from the frescoes painted by Saturnino Gatti between 1490 and 1494 in San Panfilo at Villagrande. Thanks to an analysis of the contexts in which the sculptures were produced and, above all, to close comparisons with documented works in the catalogues of the two artists, the first statue can be assigned to the late production of Silvestro dell’Aquila and the second to the mature period of Saturnino Gatti.

  2. L'Aquila's reconstruction challenges: has Italy learned from its previous earthquake disasters?

    Science.gov (United States)

    Ozerdem, Alpaslan; Rufini, Gianni

    2013-01-01

    Italy is an earthquake-prone country and its disaster emergency response experiences over the past few decades have varied greatly, with some being much more successful than others. Overall, however, its reconstruction efforts have been criticised for being ad hoc, delayed, ineffective, and untargeted. In addition, while the emergency relief response to the L'Aquila earthquake of 6 April 2009 (the primary case study in this evaluation) seems to have been successful, the reconstruction initiative got off to a very problematic start. To explore the root causes of this phenomenon, the paper argues that, owing to the way in which Italian Prime Minister Silvio Berlusconi politicised the process, the L'Aquila reconstruction endeavour is likely to suffer problems with local ownership, national/regional/municipal coordination, and corruption. It concludes with a set of recommendations aimed at addressing the pitfalls that may confront the L'Aquila reconstruction process over the next few years. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  3. Mental health in L'Aquila after the earthquake

    Directory of Open Access Journals (Sweden)

    Paolo Stratta

    2012-06-01

    Full Text Available INTRODUCTION: In the present work we describe the mental health condition of the L'Aquila population in the aftermath of the earthquake from structural, process and outcome perspectives. METHOD: A literature review of the published reports on the L'Aquila earthquake was performed. RESULTS: Although significant psychological distress has been reported by the population, a capacity for resilience can be observed. However, while resilient mechanisms intervened in the immediate aftermath of the earthquake, important dangers are conceivable in the current medium- to long-term perspective, due to the long-lasting alterations of day-to-day life and the disruption of social networks, which can be associated with mental health problems. CONCLUSIONS: In a condition such as an earthquake, the immediate physical, medical, and emergency rescue needs must be addressed first. However, training first responders to identify psychological distress symptoms would be important for mental health triage in the field.

  4. Survey for hemoparasites in imperial eagles (Aquila heliaca), steppe eagles (Aquila nipalensis), and white-tailed sea eagles (Haliaeetus albicilla) from Kazakhstan.

    Science.gov (United States)

    Leppert, Lynda L; Layman, Seth; Bragin, Evgeny A; Katzner, Todd

    2004-04-01

    Prevalence of hemoparasites has been investigated in many avian species throughout Europe and North America. Basic hematologic surveys are the first step toward evaluating whether the host-parasite prevalences observed in North America and Europe occur elsewhere in the world. We collected blood smears from 94 nestling imperial eagles (Aquila heliaca), five nestling steppe eagles (Aquila nipalensis), and 14 nestling white-tailed sea eagles (Haliaeetus albicilla) at Naurzum Zapovednik (Naurzum National Nature Reserve) in Kazakhstan during the summers of 1999 and 2000. In 1999, six of 29 imperial eagles were infected with Leucocytozoon toddi. Five of 65 imperial eagles and one of 14 white-tailed sea eagles were infected with L. toddi in 2000. Furthermore, in 2000, one of 65 imperial eagles was infected with Haemoproteus sp. We found no parasites in steppe eagles in either year, and no bird had multiple-species infections. These data are important because few hematologic studies of these eagle species have been conducted.

  5. Latest Pleistocene to Holocene thrust faulting paleoearthquakes at Monte Netto (Brescia, Italy): lessons learned from the Middle Ages seismic events in the Po Plain

    Science.gov (United States)

    Michetti, Alessandro Maria; Berlusconi, Andrea; Livio, Franz; Sileo, Giancanio; Zerboni, Andrea; Serva, Leonello; Vittori, Eutizio; Rodnight, Helena; Spötl, Christoph

    2010-05-01

    The seismicity of the Po Plain in Northern Italy is characterized by two strong Middle Ages earthquakes: the 1117 (intensity X MCS) Verona event and the December 25, 1222 (intensity IX-X MCS) Brescia event. Historical reports from these events describe relevant coseismic environmental effects, such as drainage changes, ground rupture and landslides. Due to the difficult interpretation of intensity data from such old seismic events, considerable uncertainty exists about their source parameters, and therefore about their causative tectonic structures. In a recent review, Stucchi et al. (2008) concluded that 'the historical data do not significantly help to constrain the assessment of the seismogenic potential of the area, which remains one of the most unknown, although potentially dangerous, seismic areas of the Italian region'. This issue therefore needs to be addressed using the archaeological and geological evidence of past earthquakes, that is, archeoseismology and paleoseismology. Earthquake damage to archaeological sites in the study area has been the subject of several recent papers. Here we focus on new paleoseismological evidence, and in particular on the first observation of Holocene paleoseismic surface faulting in the Po Plain, identified at the Monte Netto site, located ca. 10 km S of Brescia, in the area where the highest damage from the Christmas 1222 earthquake has been recorded. Monte Netto is a small hill, ca. 30 m higher than the surrounding piedmont plain, which represents the top of a growing fault-related fold belonging to the Quaternary frontal sector of the Southern Alps; the causative deep structure is an N-verging back thrust, well imaged in the industrial seismic reflection profiles kindly made available by ENI E&P. New trenching investigations were conducted at the Cava Danesi of Monte Netto in October 2009, focused on the 1:10 scale analysis of the upper part of the 7 m high mid-Pleistocene to Holocene stratigraphic section exposed along the quarry

  6. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
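
    A minimal sketch (Python/NumPy) contrasting Monte Carlo sampling of a toy top-event distribution with a moment-matched lognormal approximation; the basic-event medians and error factors are assumptions, and the OR-gate rare-event sum is a simplification, not the article's model:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200_000

      def lognormal_from_median_ef(median, error_factor, size):
          """Lognormal samples parameterized by median and error factor EF = p95 / median."""
          sigma = np.log(error_factor) / 1.645
          return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

      # Two basic events with lognormally distributed probabilities (assumed values)
      p1 = lognormal_from_median_ef(1e-3, 3.0, n)
      p2 = lognormal_from_median_ef(5e-4, 10.0, n)
      p_top = p1 + p2            # toy OR gate, rare-event approximation

      # Moment-matched lognormal approximation of the top-event distribution
      m, v = p_top.mean(), p_top.var()
      sigma2 = np.log(1.0 + v / m**2)
      mu = np.log(m) - 0.5 * sigma2
      print("MC 95th percentile:           ", np.quantile(p_top, 0.95))
      print("lognormal-fit 95th percentile:", np.exp(mu + 1.645 * np.sqrt(sigma2)))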

  7. Measuring a truncated disk in Aquila X-1

    DEFF Research Database (Denmark)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner r...

  8. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program, PATREC-MC, has been written to solve the problem, with the system components given in the fault tree representation. The second program, MONARC 2, has been written to solve the problem of complex system reliability by Monte Carlo simulation; here again the system (a residual heat removal system) is in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels

  9. The August 24th 2016 Accumoli earthquake: surface faulting and Deep-Seated Gravitational Slope Deformation (DSGSD) in the Monte Vettore area

    Directory of Open Access Journals (Sweden)

    Domenico Aringoli

    2016-11-01

    Full Text Available On August 24th 2016 a Mw=6.0 earthquake hit central Italy, with the epicenter located at the boundary between the Lazio, Marche, Abruzzi and Umbria regions, near the village of Accumoli (Rieti, Lazio). Immediately after the mainshock, this geological survey focused on the earthquake environmental effects related to the tectonic reactivation of the previously mapped active fault (i.e. primary effects), as well as on secondary effects mostly related to the seismic shaking (e.g. landslides and fracturing in soil and rock). This paper presents data on surface effects and some preliminary considerations about the interaction and possible relationship between surface faulting and the occurrence of Deep-Seated Gravitational Slope Deformation (DSGSD) along the southern and western slopes of Monte Vettore.

  10. Rare event simulation for dynamic fault trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of
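
    One standard remedy for that drawback is importance sampling; a minimal sketch (Python) for a toy two-component parallel system with exponential lifetimes, where joint failure before the mission time is rare (a generic illustration, not necessarily the authors' technique):

      import math
      import random

      lam, lam_is = 1e-4, 1e-2     # true and tilted (importance) failure rates, per hour (assumed)
      t_mission = 10.0             # short mission time makes joint failure a rare event
      n = 100_000
      rng = random.Random(3)

      hits_mc, weighted_is = 0.0, 0.0
      for _ in range(n):
          # Crude Monte Carlo: both components must fail before t_mission
          if rng.expovariate(lam) < t_mission and rng.expovariate(lam) < t_mission:
              hits_mc += 1.0
          # Importance sampling: draw from the tilted rate, reweight by the likelihood ratio
          t1, t2 = rng.expovariate(lam_is), rng.expovariate(lam_is)
          if t1 < t_mission and t2 < t_mission:
              weighted_is += ((lam / lam_is) ** 2) * math.exp(-(lam - lam_is) * (t1 + t2))

      exact = (1.0 - math.exp(-lam * t_mission)) ** 2
      print("exact probability:   ", exact)            # ~1e-6
      print("crude MC estimate:   ", hits_mc / n)      # usually 0 at this sample size
      print("importance sampling: ", weighted_is / n)  # close to the exact value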

  11. Rare Event Simulation for Dynamic Fault Trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette; Tonetta, Stefano; Schoitsch, Erwin; Bitsch, Friedemann

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of

  12. Assessment of lead exposure in Spanish imperial eagle (Aquila adalberti) from spent ammunition in central Spain

    Science.gov (United States)

    Fernandez, Julia Rodriguez-Ramos; Hofle, Ursula; Mateo, Rafael; de Francisco, Olga Nicolas; Abbott, Rachel; Acevedo, Pelayo; Blanco, Juan-Manuel

    2011-01-01

    The Spanish imperial eagle (Aquila adalberti) is found only in the Iberian Peninsula and is considered one of the most threatened birds of prey in Europe. Here we analyze lead concentrations in bones (n = 84), livers (n = 15), primary feathers (n = 69), secondary feathers (n = 71) and blood feathers (n = 14) of 85 individuals collected between 1997 and 2008 in central Spain. Three birds (3.6%) had bone lead concentrations > 20 µg/g and all livers were within background lead concentrations. Bone lead concentrations increased with the age of the birds and were correlated with lead concentrations in the rachis of secondary feathers. Spatial aggregation of elevated bone lead concentrations was found in some areas of the Montes de Toledo. Lead concentrations in feathers were positively associated with the density of large game animals in the area where the birds were found dead or injured. Discontinuous lead exposure in eagles was evidenced by differences in lead concentration in longitudinal portions of the rachis of feathers.

  13. Microstructural investigations on carbonate fault core rocks in active extensional fault zones from the central Apennines (Italy)

    Science.gov (United States)

    Cortinovis, Silvia; Balsamo, Fabrizio; Storti, Fabrizio

    2017-04-01

    The study of the microstructural and petrophysical evolution of cataclasites and gouges has a fundamental impact on both the hydraulic and frictional properties of fault zones. In the last decades, growing attention has been paid to the characterization of carbonate fault core rocks due to the nucleation and propagation of coseismic ruptures in carbonate successions (e.g., the Umbria-Marche 1997, L'Aquila 2009 and Amatrice 2016 earthquakes in the Central Apennines, Italy). Among several physical parameters, grain size and shape in fault core rocks are expected to control the mode of sliding along slip surfaces in active fault zones, thus influencing the propagation of coseismic ruptures during earthquakes. Nevertheless, the role of grain size and shape distribution evolution in controlling the weakening or strengthening behavior of seismogenic fault zones is still not fully understood, also because a comprehensive database from natural fault cores is still missing. In this contribution, we present a preliminary study of seismogenic extensional fault zones in the Central Apennines, combining detailed field mapping with grain size and microstructural analysis of fault core rocks. Field mapping was aimed at describing the structural architecture of the fault systems and the along-strike variations in fault rock distribution and fracturing. In the laboratory we used a Malvern Mastersizer 3000 granulometer to obtain a precise grain size characterization of loose fault rocks, combined with sieving for coarser size classes. In addition, we employed image analysis on thin sections to quantify the grain shape and size in cemented fault core rocks. The studied fault zones consist of an up to 5-10 m-thick fault core, where most of the slip is accommodated, surrounded by a tens-of-meters-wide fractured damage zone. Fault core rocks consist of (1) loose to partially cemented breccias characterized by different grain sizes (from several cm down to mm) and variable grain shapes (from very angular to sub

  14. Nova Aquilae 1982: the nature of its dust

    International Nuclear Information System (INIS)

    Longmore, A.J.; Williams, P.M.

    1984-01-01

    Infrared photometric measurements of Nova Aquilae 1982, covering a period from 37 to 261 days after its discovery, have been obtained. Thermal emission was present even from the first observation. The observations show that the conventional picture of dust forming in the nova ejecta does not apply in this case, and suggest that a re-examination of the infrared modelling of earlier novae would be worthwhile. (U.K.)

  15. Menopon gallinae lice in the golden eagle (Aquila chrysaetos) and marsh harrier (Circus aeruginosus) in Najaf province, Iraq

    Directory of Open Access Journals (Sweden)

    Al-Fatlawi M. A. A

    2017-07-01

    Full Text Available Our study is considered the first work on ectoparasites of the golden eagle (Aquila chrysaetos) and marsh harrier (Circus aeruginosus) in Iraq. Overall, we examined 17 eagles in the period from 1 November 2016 to 25 February 2017, of which 4 were found infected (23.5%). All infected birds were female. The eagles were hunted in the Najaf sea area. The areas under the wings and between the feathers were grossly examined to detect any parasites. Lice of the genus Menopon gallinae were isolated from 4 eagles, from under the wing area. Infected eagles suffered from skin redness. In all, 38 parasites were isolated from the infected eagles, and slides were prepared from these lice for species classification. This study is the first record of the shaft louse (M. gallinae) in the golden eagle and marsh harrier in Iraq

  16. SEISMIC SITE RESPONSE ESTIMATION IN THE NEAR SOURCE REGION OF THE 2009 L’AQUILA, ITALY, EARTHQUAKE

    Science.gov (United States)

    Bertrand, E.; Azzara, R.; Bergamashi, F.; Bordoni, P.; Cara, F.; Cogliano, R.; Cultrera, G.; di Giulio, G.; Duval, A.; Fodarella, A.; Milana, G.; Pucillo, S.; Régnier, J.; Riccio, G.; Salichon, J.

    2009-12-01

    On 6 April 2009, at 3:32 local time, an Mw 6.3 earthquake hit the Abruzzo region (central Italy), causing more than 300 casualties. The epicenter of the earthquake was 95 km NE of Rome and 10 km from the center of the city of L’Aquila, the administrative capital of the Abruzzo region. This city has a population of about 70,000 and was severely damaged by the earthquake, the total cost of the building damage being estimated at around 3 Bn €. Historical masonry buildings particularly suffered from the seismic shaking, but some more modern reinforced concrete structures were also heavily damaged. To better estimate the seismic solicitation of these structures during the earthquake, we deployed temporary arrays in the near-source region. Downtown L’Aquila, as well as a rural quarter composed of ancient dwelling-centers located west of L’Aquila (Roio area), was instrumented. The array set up downtown consisted of nearly 25 stations including velocimetric and accelerometric sensors. In the Roio area, 6 stations operated for almost one month. The data have been processed in order to study the spectral ratios of the horizontal components of ground motion at the soil site and at a reference site, as well as the spectral ratio of the horizontal and vertical motion at a single recording site. Downtown L’Aquila is set on a Quaternary fluvial terrace (breccias with limestone boulders and clasts in a marly matrix), which forms the left bank of the Aterno River and slopes down to the southwest towards the river. The alluvial deposits lie on lacustrine sediments reaching their maximum thickness (about 250 m) in the center of L’Aquila. According to De Luca et al. (2005), these Quaternary deposits seem to lead to an important amplification factor in the low-frequency range (0.5-0.6 Hz). However, the level of amplification varies strongly from one point to another in the center of the city. This new experiment allows new and more
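
    A minimal sketch (Python/SciPy) of the two spectral-ratio estimates mentioned (soil-to-reference ratio and horizontal-to-vertical ratio at a single station), run on synthetic records; the window length, smoothing and signals are all assumptions:

      import numpy as np
      from scipy import signal

      fs = 100.0                              # sampling rate (Hz), assumed
      rng = np.random.default_rng(1)
      soil_h, ref_h, soil_v = (rng.standard_normal(200_000) for _ in range(3))  # synthetic components

      def amplitude_spectrum(x):
          f, Pxx = signal.welch(x, fs=fs, nperseg=8192)
          return f, np.sqrt(Pxx)

      f, A_soil_h = amplitude_spectrum(soil_h)
      _, A_ref_h = amplitude_spectrum(ref_h)
      _, A_soil_v = amplitude_spectrum(soil_v)

      ssr = A_soil_h / A_ref_h    # standard spectral ratio (soil site / reference site)
      hv = A_soil_h / A_soil_v    # H/V ratio at the soil site

      band = (f >= 0.5) & (f <= 10.0)
      print("mean SSR in 0.5-10 Hz:", ssr[band].mean())
      print("mean H/V in 0.5-10 Hz:", hv[band].mean())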

  17. Structural damages of the L'Aquila (Italy) earthquake

    Directory of Open Access Journals (Sweden)

    H. Kaplan

    2010-03-01

    Full Text Available On 6 April 2009 an earthquake of magnitude 6.3 occurred in L'Aquila city, Italy. In the city center and surrounding villages many masonry and reinforced concrete (RC) buildings were heavily damaged or collapsed. After the earthquake, the inspection carried out in the region provided relevant results concerning the quality of the materials, the methods of construction and the performance of the structures. The region was initially inhabited in the 13th century and has many historic structures. The main structural material is unreinforced masonry (URM) composed of rubble stone, brick, and hollow clay tile. Masonry units suffered the worst damage. Wood flooring systems and corrugated steel roofs are common in URM buildings. Moreover, unconfined gable walls and excessively thick walls without connection to each other are among the most common deficiencies of poorly constructed masonry structures. These walls caused an increase in earthquake loads. The quality of the materials and of the construction was not in accordance with the standards. On the other hand, several modern, non-ductile concrete frame buildings collapsed. Poor concrete quality and poor reinforcement detailing caused damage in reinforced concrete structures. Furthermore, many structural deficiencies, such as non-ductile detailing and strong beams-weak columns, were commonly observed. In this paper, the reasons why buildings were damaged in the 6 April 2009 earthquake in L'Aquila, Italy, are given. Some suggestions are made to prevent such disasters in the future.

  18. The L'Aquila trial

    Science.gov (United States)

    Amato, Alessandro; Cocco, Massimo; Cultrera, Giovanna; Galadini, Fabrizio; Margheriti, Lucia; Nostro, Concetta; Pantosti, Daniela

    2013-04-01

    The first step of the trial in L'Aquila (Italy) ended with the conviction of a group of seven experts, sentenced to 6 years in jail and a refund of several million euros to the families of the people who died during the Mw 6.3 earthquake on April 6, 2009. This verdict has a tremendous impact on the scientific community, as well as on the way in which scientists deliver their expert opinions to decision makers and society. In this presentation, we describe the role of the scientists in charge of releasing authoritative information concerning earthquakes and seismic hazard and the conditions that led to the verdict, in order to discuss whether this trial represented a prosecution of science, and whether errors were made in communicating the risk. Documents, articles and comments about the trial are collected on the web site http://processoaquila.wordpress.com/. We will first summarize what was known about the seismic hazard of the region and the vulnerability of L'Aquila before the meeting of the National Commission for Forecasting and Predicting Great Risks (CGR) held 6 days before the main shock. The basic point of the accusation is that the CGR suggested that no strong earthquake would occur (which, of course, was never stated by any seismologist participating in the meeting). This message would have convinced the victims to stay at home, instead of moving outside after the M3.9 and M3.5 earthquakes a few hours before the mainshock. We will describe how the available scientific information was passed to the national and local authorities and, in general, how the Italian scientific institution in charge of seismic monitoring and research (INGV), the Civil Protection Department (DPC) and the CGR should interact according to the law. As far as communication and outreach to the public are concerned, scientific institutions such as INGV have the duty to communicate scientific information. Instead, risk management and the definition of actions for risk reduction are in charge of Civil

  19. Science, Right and Communication of Risk in L'Aquila trial

    Science.gov (United States)

    Altamura, Marco; Miozzo, Davide; Boni, Giorgio; Amato, Davide; Ferraris, Luca; Siccardi, Franco

    2013-04-01

    CIMA Research Foundation has had access to all the information of the criminal trial held in L'Aquila against some of the members of the Commissione Nazionale Grandi Rischi (National Commission for Forecasting and Preventing Major Risks) and some directors of the Italian Civil Protection Department. This information constitutes the basis of a study that has examined: - the initiation of investigations by the families of the victims; - the public prosecutor's indictment; - the testimonies; - the liaison between experts in seismology, social sciences and communication; - the statement of the defence; - the first-instance decision of condemnation. The study reveals the paramount importance of risk communication as an element of prevention. Particular attention is given to the Judicial Authority's ex-post control of the evaluations and decisions of persons acting as decision makers within the Civil Protection system. In the judgment just published by the Court of L'Aquila, the reassuring information from scientists and Civil Protection operators appears to be treated as a negative factor.

  20. The dynamic behaviour of the mammoth in the Spanish fortress, L’Aquila, Italy

    Directory of Open Access Journals (Sweden)

    Casarin Filippo

    2015-01-01

    Full Text Available The fossil remains of a "Mammuthus Meridionalis" were found on 25 March 1954 in a lime quarry close to the city of L'Aquila. The mammoth skeleton was soon "reconstructed" on a forged-iron frame and placed in one of the main halls of the Spanish fortress in L'Aquila. A comprehensive restoration was recently completed (2013-2015), which also included a study of the adequacy of the supporting frame, which proved able to survive the strong 2009 L'Aquila earthquake. After a laser-scanner survey, which allowed a very detailed finite element model to be built, Operational Modal Analysis was employed to obtain the dynamic identification of the structure. The results of the experimental activities explain the capacity of the structure to bear the 2009 main shock, since its natural frequencies proved to be quite low. The structure acted as a "natural" seismic device, avoiding its Ultimate Limit State but paying the toll of large displacements. The seismic motion caused several cracks at the edges of the bones, indicating the non-fulfilment of the damage Limit State of Artistic contents (ALS). A proposal for seismic isolation and redesign of the supporting frame is then discussed. The paper illustrates the scientific activities assisting the restoration intervention, entailing a multidisciplinary approach in the fields of restoration, palaeontology and seismic engineering.

  1. Ultraviolet Spectroscopic Study of BY Circini and V 1425 Aquilae ...

    Indian Academy of Sciences (India)

    different authors (Woudt & Warner 2003; Bateson & McIntosh 1998; Johnson et al. 1997; Greeley et al. 1995; Evans & Yudin 1995; Cooper et al. 1995; Gilmore et al. 1995; Liller et al. 1995). Nova Aquilae 1995 was discovered on 1995 February 7 by Nakano et al. (1995), with an orbital period of 6.14 h (Retter et al. 1998a) at a ...

  2. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2015-08-01

    Full Text Available The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were generated by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. The generated CT secondary currents are used to test the REF protection algorithm based on phase comparison in the time domain. On the basis of the obtained results, a method for adjusting the triggering threshold is proposed in order to ensure safe operation during transients and thereby improve the security of the algorithm. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed threshold adjustment in the algorithm based on phase comparison in the time domain.
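
As a rough illustration of the Monte Carlo treatment of uncertain CT inputs described above, the sketch below samples the remanent flux and the primary X/R ratio and counts how often a coarse rule-of-thumb saturation criterion is violated. The criterion and every number in it are assumptions made for illustration only; this is not the paper's CT model or its REF algorithm.

```python
import numpy as np

# Toy Monte Carlo over uncertain CT switch-on conditions: remanent flux (per unit
# of saturation flux) and primary-circuit X/R ratio are sampled, and the required
# overdimensioning factor is compared against the CT's accuracy-limit factor.
# K_required ~ (1 + X/R) / (1 - remanence) is a coarse textbook approximation;
# only remanence of the same polarity as the flux excursion is assumed to reduce
# the available margin.

rng = np.random.default_rng(5)

def saturation_fraction(k_alf=20.0, n=100_000):
    """Fraction of sampled switch-on cases in which the CT is expected to saturate."""
    remanence = rng.uniform(-0.8, 0.8, n)     # remanent flux, per unit
    x_over_r = rng.uniform(5.0, 25.0, n)      # primary circuit X/R
    k_required = (1.0 + x_over_r) / (1.0 - np.maximum(remanence, 0.0))
    return np.mean(k_required > k_alf)

if __name__ == "__main__":
    for k in (10.0, 20.0, 40.0):
        print(f"accuracy-limit factor {k:>4.0f}: saturation in "
              f"{100 * saturation_fraction(k_alf=k):.1f}% of sampled cases")
```

A realistic study would replace the rule-of-thumb criterion with a full CT magnetization model and feed the resulting distorted secondary currents into the protection algorithm, as the paper does.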

  3. FiSH: put fault data in a seismic hazard basket

    Science.gov (United States)

    Pace, Bruno; Visini, Francesco; Peruzza, Laura

    2016-04-01

    The practice of using fault sources in seismic hazard studies is growing in popularity, including in regions with moderate seismic activity, such as many European countries. In these areas, fault identification may be affected by uncertainties as large as those in the historical and instrumental seismic histories of more active areas that have not been inhabited for long periods of time. Certain studies have effectively applied a time-dependent perspective to combine historical and instrumental seismic data with geological and paleoseismological information, partially compensating for the lack of information. We present a package of Matlab® tools (called FiSH), published in Seismological Research Letters, designed to help seismic hazard modellers analyse fault data. These tools enable the derivation of expected earthquake rates from common fault data and allow users to test the consistency between the magnitude-frequency distributions assigned to a fault and the available observations. The basic assumption of FiSH is that the geometric and kinematic features of a fault are the expression of its seismogenic potential. Three tools have been designed to integrate the variable levels of information available: (a) the first tool allows users to convert fault geometry and slip rates into a global budget of the seismic moment released in a given time frame, taking uncertainties into account; (b) the second tool computes the recurrence parameters and associated uncertainties from historical and/or paleoseismological data; (c) the third tool outputs time-independent or time-dependent earthquake rates for different magnitude-frequency distribution models. Moreover, we present a test case to illustrate the capabilities of FiSH, on the Paganica normal fault in Central Italy, which ruptured during the L'Aquila 2009 earthquake sequence (mainshock Mw 6.3). FiSH is available at http://fish-code.com, and the source codes are open. We encourage users to handle the scripts
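
As a rough illustration of what the first of these tools does, the sketch below (written in Python rather than the published Matlab, with illustrative parameter values that are not FiSH defaults) converts an assumed fault geometry and slip rate into a moment budget and a mean recurrence time for a characteristic magnitude.

```python
# Sketch (not the FiSH code) of the conversion performed by the first FiSH tool:
# fault geometry and slip rate -> seismic moment budget -> mean recurrence time
# for a characteristic magnitude. All parameter values below are illustrative.

MU = 3.0e10  # shear modulus (Pa), a standard crustal value

def moment_rate(length_km, width_km, slip_rate_mm_yr):
    """Seismic moment accumulation rate (N*m per year) for a planar fault."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    return MU * area_m2 * (slip_rate_mm_yr * 1e-3)

def moment_from_magnitude(mw):
    """Hanks & Kanamori (1979): scalar moment M0 (N*m) from moment magnitude."""
    return 10 ** (1.5 * mw + 9.05)

def mean_recurrence_yr(length_km, width_km, slip_rate_mm_yr, char_mw):
    """Mean recurrence time if the whole budget is released in Mw = char_mw events."""
    return moment_from_magnitude(char_mw) / moment_rate(length_km, width_km, slip_rate_mm_yr)

if __name__ == "__main__":
    # Values loosely inspired by a Paganica-type normal fault; the spread in slip
    # rate mimics how input uncertainties propagate into the recurrence estimate.
    for slip_rate in (0.3, 0.5, 1.0):  # mm/yr
        tr = mean_recurrence_yr(20.0, 12.0, slip_rate, char_mw=6.3)
        print(f"slip rate {slip_rate:.1f} mm/yr -> mean recurrence ~ {tr:,.0f} yr")
```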

  4. Lessons Learned from L'Aquila Trial for Scientists' Communication

    Science.gov (United States)

    Koketsu, K.; Cerase, A.; Amato, A.; Oki, S.

    2017-12-01

    The Appeal and Supreme Courts of Italy concluded that there was no bad communication by the defendants except for the "glass of wine interview", which was given by a government official before the scientists' meeting. This meeting was held 6 days before the 2009 L'Aquila earthquake to discuss the outlook for seismic activity in the L'Aquila area. However, at least two TV stations and a newspaper reported the content of the "glass of wine interview" the next morning as if it had been announced by the defendant scientists. The reports triggered a domino effect of misinterpretations, which may be well understood in the light of the social amplification of risk framework. These TV stations and this newspaper should therefore also be considered responsible for the bad communication. This point was missing from the sentencing documents of the Appeal and Supreme Courts. Therefore, for scientists, a lesson in communication, especially during a seismic hazard crisis, is that they must carefully craft their messages and the way they circulate, both in broadcast and digital media, and follow the reports released by the media on their activities. As another lesson, scientists must be aware that key concepts of safety such as "no danger" and "favorable situation", which were used in the "glass of wine interview", and the idea of probability can have different meanings for scientists, the media, and citizens.

  5. The Academic Impact of Natural Disasters: Evidence from L'Aquila Earthquake

    Science.gov (United States)

    Di Pietro, Giorgio

    2018-01-01

    This paper uses a standard difference-in-differences approach to examine the effect of the L'Aquila earthquake on the academic performance of the students of the local university. The empirical results indicate that this natural disaster reduced students' probability of graduating on-time and slightly increased students' probability of dropping…

  6. [Medium- and long-term health effects of the L'Aquila earthquake (Central Italy, 2009) and of other earthquakes in high-income Countries: a systematic review].

    Science.gov (United States)

    Ripoll Gallardo, Alba; Alesina, Marta; Pacelli, Barbara; Serrone, Dario; Iacutone, Giovanni; Faggiano, Fabrizio; Della Corte, Francesco; Allara, Elias

    2016-01-01

    to compare the methodological characteristics of the studies investigating the medium- and long-term health effects of the L'Aquila earthquake with the features of studies conducted after other earthquakes that occurred in high-income countries. a systematic comparison between the studies which evaluated the health effects of the L'Aquila earthquake (Central Italy, 6th April 2009) and those conducted after other earthquakes that occurred in comparable settings. Medline, Scopus, and 6 sources of grey literature were systematically searched. Inclusion criteria comprised measurement of health outcomes at least one month after the earthquake, investigation of earthquakes that occurred in high-income countries, and presence of at least one temporal or geographical control group. out of 2,976 titles, 13 studies regarding the L'Aquila earthquake and 51 studies concerning other earthquakes were included. The L'Aquila and the Kobe/Hanshin-Awaji (Japan, 17th January 1995) earthquakes were the most investigated. Studies on the L'Aquila earthquake had a median sample size of 1,240 subjects, a median duration of 24 months, and most frequently used a cross-sectional design (7/13). Studies on other earthquakes had a median sample size of 320 subjects, a median duration of 15 months, and most frequently used a time-series design (19/51). the L'Aquila studies often focussed on mental health, while the earthquake effects on mortality, cardiovascular outcomes, and health systems were less frequently evaluated. A more intensive use of routine data could benefit future epidemiological surveillance in the aftermath of earthquakes.

  7. Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro

    2015-01-01

    Highlights: • Fuzzy probability based fault tree analysis is proposed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis that overcomes this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to those of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is able to propagate and quantify epistemic uncertainties in fault tree analysis
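
To make the gate rules named in the highlights concrete, here is a minimal sketch of fuzzy-probability arithmetic with triangular fuzzy numbers. The bound-wise (low, mode, high) treatment is a common textbook simplification and the event values are hypothetical, so this is an illustration of the general idea, not the authors' FPFTA implementation.

```python
from dataclasses import dataclass

# Sketch of fuzzy-probability gate arithmetic with triangular fuzzy numbers, in the
# spirit of the rules described in the abstract: a multiplication rule for AND gates
# (minimal cut sets) and a complement rule for OR gates (the top event).

@dataclass
class TriFuzzy:
    low: float   # smallest plausible probability
    mode: float  # most likely probability
    high: float  # largest plausible probability

def fuzzy_and(*events: TriFuzzy) -> TriFuzzy:
    """AND gate / minimal cut set: multiply bound-wise."""
    low = mode = high = 1.0
    for e in events:
        low, mode, high = low * e.low, mode * e.mode, high * e.high
    return TriFuzzy(low, mode, high)

def fuzzy_or(*events: TriFuzzy) -> TriFuzzy:
    """OR gate / top event via the complement rule: 1 - prod(1 - p), bound-wise."""
    lo = mo = hi = 1.0
    for e in events:
        lo, mo, hi = lo * (1 - e.low), mo * (1 - e.mode), hi * (1 - e.high)
    return TriFuzzy(1 - lo, 1 - mo, 1 - hi)

if __name__ == "__main__":
    # Hypothetical basic-event probabilities elicited from experts (illustrative).
    a = TriFuzzy(1e-4, 2e-4, 5e-4)
    b = TriFuzzy(5e-4, 1e-3, 2e-3)
    c = TriFuzzy(1e-3, 3e-3, 6e-3)
    top = fuzzy_or(fuzzy_and(a, b), fuzzy_and(a, c))  # two minimal cut sets: {A,B}, {A,C}
    print("top event fuzzy probability:", top)
```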

  8. PREP KITT, System Reliability by Fault Tree Analysis. PREP, Min Path Set and Min Cut Set for Fault Tree Analysis, Monte-Carlo Method. KITT, Component and System Reliability Information from Kinetic Fault Tree Theory

    International Nuclear Information System (INIS)

    Vesely, W.E.; Narum, R.E.

    1997-01-01

    1 - Description of problem or function: The PREP/KITT computer program package obtains system reliability information from a system fault tree. The PREP program finds the minimal cut sets and/or the minimal path sets of the system fault tree. (A minimal cut set is a smallest set of components such that if all the components are simultaneously failed the system is failed. A minimal path set is a smallest set of components such that if all of the components are simultaneously functioning the system is functioning.) The KITT programs determine reliability information for the components of each minimal cut or path set, for each minimal cut or path set, and for the system. Exact, time-dependent reliability information is determined for each component and for each minimal cut set or path set. For the system, reliability results are obtained by upper bound approximations or by a bracketing procedure in which various upper and lower bounds may be obtained as close to one another as desired. The KITT programs can handle independent components which are non-repairable or which have a constant repair time. Any assortment of non-repairable components and components having constant repair times can be considered. Any inhibit conditions having constant probabilities of occurrence can be handled. The failure intensity of each component is assumed to be constant with respect to time. The KITT2 program can also handle components which during different time intervals, called phases, may have different reliability properties. 2 - Method of solution: The PREP program obtains minimal cut sets by either direct deterministic testing or by an efficient Monte Carlo algorithm. The minimal path sets are obtained using the Monte Carlo algorithm. The reliability information is obtained by the KITT programs from numerical solution of the simple integral balance equations of kinetic tree theory. 3 - Restrictions on the complexity of the problem: The PREP program will obtain the minimal cut and
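
The Monte Carlo idea behind PREP's cut-set search can be illustrated with a toy sketch: randomly fail subsets of components, evaluate the top gate, and shrink each failing set to a minimal one. The tree, the components, and the sampling scheme below are made up for illustration and do not reproduce PREP's actual algorithm or data structures.

```python
import random

# Toy illustration of Monte Carlo minimal-cut-set search for a coherent fault tree.

COMPONENTS = ["A", "B", "C", "D"]

def top_event(failed: set) -> bool:
    """Example tree: TOP = A OR (B AND C) OR (B AND D)."""
    return "A" in failed or ("B" in failed and ("C" in failed or "D" in failed))

def shrink_to_minimal(failed: set) -> frozenset:
    """Greedily remove components whose failure is not needed for the top event."""
    cut = set(failed)
    for comp in sorted(failed):
        if top_event(cut - {comp}):
            cut.discard(comp)
    return frozenset(cut)

def monte_carlo_cut_sets(n_trials=2000, p_fail=0.5, seed=1):
    rng = random.Random(seed)
    found = set()
    for _ in range(n_trials):
        failed = {c for c in COMPONENTS if rng.random() < p_fail}
        if top_event(failed):
            found.add(shrink_to_minimal(failed))
    return found

if __name__ == "__main__":
    for cs in sorted(monte_carlo_cut_sets(), key=lambda s: (len(s), sorted(s))):
        print("minimal cut set:", sorted(cs))
    # Expected for this toy tree: {A}, {B, C}, {B, D}
```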

  9. The earthquake lights (EQL of the 6 April 2009 Aquila earthquake, in Central Italy

    Directory of Open Access Journals (Sweden)

    C. Fidani

    2010-05-01

    Full Text Available A seven-month collection of testimonials about the 6 April 2009 earthquake in Aquila, Abruzzo region, Italy, was compiled into a catalogue of non-seismic phenomena. Luminous phenomena were often reported, starting about nine months before the strong shock and continuing until about five months after it. A summary and list of the characteristics of these sightings was made according to 20th-century classifications, and a comparison was made with the Galli outcomes. The sightings were distributed over a large area around the city of Aquila, with a major extension to the north, up to 50 km. Various earthquake lights were correlated with several landscape characteristics and with the source and dynamics of the earthquake. Some preliminary considerations on the location of the sightings suggest a correlation between electrical discharges and asperities, while flames were mostly seen along the Aterno Valley.

  10. Geomechanical analysis of excavation-induced rock mass behavior of faulted Opalinus clay at the Mont Terri underground rock laboratory (Switzerland)

    International Nuclear Information System (INIS)

    Thoeny, R.

    2014-01-01

    Clay rock formations are potential host rocks for deep geological disposal of nuclear waste. However, they exhibit relatively low strength and brittle failure behaviour. Construction of underground openings in clay rocks may lead to the formation of an excavation damage zone (EDZ) in the near-field area of the tunnel. This has to be taken into account during risk assessment for waste-disposal facilities. To investigate the geomechanical processes associated with the rock mass response of faulted Opalinus Clay during tunnelling, a full-scale ‘mine-by’ experiment was carried out at the Mont Terri Underground Rock Laboratory (URL) in Switzerland. In the ‘mine-by’ experiment, fracture network characteristics within the experimental section were characterized prior to and after excavation by integrating structural data from geological mapping of the excavation surfaces and from four pre- and post-excavation boreholes. The displacements and deformations in the surrounding rock mass were measured using geo-technical instrumentation including borehole inclinometers, extensometers and deflectometers, together with high-resolution geodetic displacement measurements and laser scanning measurements on the excavation surfaces. Complementary data was gathered from structural and geophysical characterization of the surrounding rock mass. Geological and geophysical techniques were used to analyse the structural and kinematic relationships between the natural and excavation-induced fracture network surrounding the ‘mine-by’ experiment. Integrating the results from seismic refraction tomography, borehole logging, and tunnel surface mapping revealed that spatial variations in fault frequency along the tunnel axis alter the rock mass deformability and strength. Failure mechanisms, orientation and frequency of excavation-induced fractures are significantly influenced by tectonic faults. On the side walls, extensional fracturing tangential to the tunnel circumference was the

  11. Geomechanical analysis of excavation-induced rock mass behavior of faulted Opalinus clay at the Mont Terri underground rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Thoeny, R.

    2014-07-01

    Clay rock formations are potential host rocks for deep geological disposal of nuclear waste. However, they exhibit relatively low strength and brittle failure behaviour. Construction of underground openings in clay rocks may lead to the formation of an excavation damage zone (EDZ) in the near-field area of the tunnel. This has to be taken into account during risk assessment for waste-disposal facilities. To investigate the geomechanical processes associated with the rock mass response of faulted Opalinus Clay during tunnelling, a full-scale ‘mine-by’ experiment was carried out at the Mont Terri Underground Rock Laboratory (URL) in Switzerland. In the ‘mine-by’ experiment, fracture network characteristics within the experimental section were characterized prior to and after excavation by integrating structural data from geological mapping of the excavation surfaces and from four pre- and post-excavation boreholes. The displacements and deformations in the surrounding rock mass were measured using geo-technical instrumentation including borehole inclinometers, extensometers and deflectometers, together with high-resolution geodetic displacement measurements and laser scanning measurements on the excavation surfaces. Complementary data was gathered from structural and geophysical characterization of the surrounding rock mass. Geological and geophysical techniques were used to analyse the structural and kinematic relationships between the natural and excavation-induced fracture network surrounding the ‘mine-by’ experiment. Integrating the results from seismic refraction tomography, borehole logging, and tunnel surface mapping revealed that spatial variations in fault frequency along the tunnel axis alter the rock mass deformability and strength. Failure mechanisms, orientation and frequency of excavation-induced fractures are significantly influenced by tectonic faults. On the side walls, extensional fracturing tangential to the tunnel circumference was the

  12. [Between perception and reality: towards an assessment of socio-territorial discomfort in L'Aquila (Central Italy) after the earthquak].

    Science.gov (United States)

    Calandra, Lina Maria

    2016-01-01

    to consider the perceptions and narratives of the inhabitants of L'Aquila about their context of life, in order to point out what kind of relationship exists in L'Aquila between the territory and its inhabitants after the earthquake, and to evaluate how and where symptomatic attitudes of a widespread discomfort in social interactions have become generalized. since 2010, the joint work of the research team of the "Cartolab" laboratory and of the pedagogy area (Department of Human Studies, University of L'Aquila) has developed and applied a participatory research methodology. This methodology is both an inquiry used by experts to increase the participation of people who experience everyday life in L'Aquila, and a tool to draw moral, ethical and political considerations in order to activate change in social and political dynamics at the urban scale. During 2013, the methodology of Participatory-Participating Research Action (PPRA) was implemented through cycles of territorial meetings involving citizens and municipal administrators. These meetings were promoted and organized with the Office of Participation of the Municipality of L'Aquila. the PPRA aimed to assess: 1. the social, political and economic quality of the territory as evaluated by the people involved in the survey, with reference to life conditions, living context, and future projections of self and of the territory; 2. the perception of security. Through a qualitative/quantitative approach, the data were collected through a questionnaire and public meetings involving 309 young people (16-30 years old) and 227 adults (31-85 years old) for the first aspect, and 314 citizens (16-80 years old) for the second aspect, respectively. the results highlight a socio-territorial discomfort emerging in L'Aquila for a relevant part of the population. This discomfort is shaped by a negative rating of life conditions and context: adults give poor quality evaluations of the present and cannot figure out some kind of vision for

  13. "3D_Fault_Offsets," a Matlab Code to Automatically Measure Lateral and Vertical Fault Offsets in Topographic Data: Application to San Andreas, Owens Valley, and Hope Faults

    Science.gov (United States)

    Stewart, N.; Gaudemer, Y.; Manighetti, I.; Serreau, L.; Vincendeau, A.; Dominguez, S.; Mattéo, L.; Malavieille, J.

    2018-01-01

    Measuring fault offsets preserved at the ground surface is of primary importance to recover earthquake and long-term slip distributions and understand fault mechanics. The recent explosion of high-resolution topographic data, such as Lidar and photogrammetric digital elevation models, offers an unprecedented opportunity to measure dense collections of fault offsets. We have developed a new Matlab code, 3D_Fault_Offsets, to automate these measurements. In topographic data, 3D_Fault_Offsets mathematically identifies and represents nine of the most prominent geometric characteristics of common sublinear markers along faults (especially strike slip) in 3-D, such as the streambed (minimum elevation), top, free face and base of channel banks or scarps (minimum Laplacian, maximum gradient, and maximum Laplacian), and ridges (maximum elevation). By calculating best fit lines through the nine point clouds on either side of the fault, the code computes the lateral and vertical offsets between the piercing points of these lines onto the fault plane, providing nine lateral and nine vertical offset measures per marker. Through a Monte Carlo approach, the code calculates the total uncertainty on each offset. It then provides tools to statistically analyze the dense collection of measures and to reconstruct the prefaulted marker geometry in the horizontal and vertical planes. We applied 3D_Fault_Offsets to remeasure previously published offsets across 88 markers on the San Andreas, Owens Valley, and Hope faults. We obtained 5,454 lateral and vertical offset measures. These automatic measures compare well to prior ones, field and remote, while their rich record provides new insights on the preservation of fault displacements in the morphology.
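
A minimal sketch of the core geometric operation described here may help (the published tool is Matlab and fits nine markers per channel, not the single marker used below): fit straight lines to a marker's points on either side of the fault, intersect both lines with the fault plane, difference the piercing points to get lateral and vertical offsets, and propagate point-picking noise with a small Monte Carlo loop. The synthetic geometry and noise level are assumptions, not values from the paper.

```python
import numpy as np

# Sketch of line fitting, fault-plane piercing, and Monte Carlo uncertainty
# propagation for a single offset marker. The fault plane is taken as x = 0.

rng = np.random.default_rng(0)

def fit_line(points):
    """Least-squares 3-D line through points: returns (centroid, unit direction)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def pierce_fault(centroid, direction, fault_x=0.0):
    """Intersection of the line with the vertical fault plane x = fault_x."""
    t = (fault_x - centroid[0]) / direction[0]
    return centroid + t * direction

def offsets(side_a, side_b):
    pa = pierce_fault(*fit_line(side_a))
    pb = pierce_fault(*fit_line(side_b))
    return pb[1] - pa[1], pb[2] - pa[2]   # lateral (along-strike) and vertical offsets

if __name__ == "__main__":
    # Synthetic channel thalweg: ~4 m lateral and ~1 m vertical offset at x = 0.
    x_a = np.linspace(-30, -1, 15); x_b = np.linspace(1, 30, 15)
    side_a = np.c_[x_a, 0.2 * x_a + 0.0, 0.05 * x_a + 0.0]
    side_b = np.c_[x_b, 0.2 * x_b + 4.0, 0.05 * x_b + 1.0]
    trials = [offsets(side_a + rng.normal(0, 0.3, side_a.shape),
                      side_b + rng.normal(0, 0.3, side_b.shape)) for _ in range(500)]
    lat, ver = np.array(trials).T
    print(f"lateral offset  {lat.mean():.2f} +/- {lat.std():.2f} m")
    print(f"vertical offset {ver.mean():.2f} +/- {ver.std():.2f} m")
```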

  14. Time-dependent methodology for fault tree evaluation

    International Nuclear Information System (INIS)

    Vesely, W.B.

    1976-01-01

    Any fault tree may be evaluated by applying the method called the kinetic theory of fault trees. The basic feature of this method, as presented here, is that any information on a primary failure, a type failure or the top failure is derived from three characteristics: the probability of existence, the failure intensity and the failure density. The determination of these three characteristics for a given phenomenon yields the remaining probabilistic information on the individual aspects of the failure and on their totality over the whole observed period. The probabilistic characteristics are determined by applying the analysis of phenomenon probability. The total time-dependent information on the top failure is obtained by using the type failures (critical paths) of the fault tree. By applying this process, the total time-dependent information is obtained for every primary failure and type failure of the fault tree. In the application of the kinetic theory of fault trees, represented by the PREP and KITT programmes, the type failures are first obtained using the deterministic testing method or the Monte Carlo simulation (PREP programme). The respective characteristics are then determined using the kinetic theory of fault trees (KITT programmes). (Oy)
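
The three characteristics can be written down compactly for the simple case of non-repairable components with constant failure rates. The sketch below is a generic textbook illustration of kinetic-tree-theory quantities (component unavailability, cut-set unavailability and occurrence rate, and the min-cut upper bound for the top event); it is not the KITT code, and the failure rates are made up.

```python
import numpy as np

# Kinetic-tree-theory style quantities for non-repairable components with constant
# failure rate lambda: existence probability q(t), failure intensity w(t), and their
# combination into cut-set and top-event measures.

def q(lam, t):          # probability the component is failed at time t
    return 1.0 - np.exp(-lam * t)

def w(lam, t):          # unconditional failure intensity (= failure density here)
    return lam * np.exp(-lam * t)

def cut_set_q(lams, t):
    return np.prod([q(l, t) for l in lams], axis=0)

def cut_set_w(lams, t):
    """Rate of cut-set occurrence: one component fails while the others are failed."""
    total = 0.0
    for i, li in enumerate(lams):
        others = np.prod([q(l, t) for j, l in enumerate(lams) if j != i], axis=0)
        total = total + w(li, t) * others
    return total

def top_q_upper_bound(cut_sets, t):
    """Min-cut upper bound: 1 - prod(1 - Q_cs)."""
    return 1.0 - np.prod([1.0 - cut_set_q(cs, t) for cs in cut_sets], axis=0)

if __name__ == "__main__":
    t = np.array([100.0, 1000.0, 10000.0])          # hours
    cut_sets = [[1e-4, 5e-4], [1e-4, 2e-4, 3e-4]]   # illustrative failure rates (1/h)
    print("Q_top(t) ~", top_q_upper_bound(cut_sets, t))
    print("w_cs1(t) ~", cut_set_w(cut_sets[0], t))
```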

  15. Imaging the Crust in the Northern Sector of the 2009 L'Aquila Seismic Sequence through Oil Exploration Data Interpretation

    Science.gov (United States)

    Grazia Ciaccio, Maria; Improta, Luigi; Patacca, Etta; Scandone, Paolo; Villani, Fabio

    2010-05-01

    The 2009 L'Aquila seismic sequence activated a complex, about 40 km long, NW-trending and SW-dipping normal fault system, consisting of three main faults arranged in a right-lateral en-echelon geometry. While the northern sector of the epicentral area was extensively investigated by oil companies, only a few scattered, poor-quality commercial seismic profiles are available in the central and southern sectors. In this study we interpret subsurface commercial data from the northern sector, the area where the source of the strong Mw 5.4 aftershock of 9 April 2009 is located. Our primary goals are: (1) to define a reliable framework of the upper crustal structure; (2) to investigate how the intense aftershock activity, the bulk of which is clustered in the 5-10 km depth range, relates to the Quaternary extensional faults present in the area. The investigated area lies between the western termination of the W-E trending Gran Sasso thrust system to the south, the SW-NE trending Mt. Sibillini thrust front (Ancona-Anzio Line Auctt.) to the north and west, and the NNW-SSE trending, SW-dipping Mt. Gorzano normal fault to the east. In this area only middle-upper Miocene deposits are exposed (Laga Flysch and the underlying Cerrogna Marl), but commercial wells have revealed the presence of a Triassic-Miocene sedimentary succession identical to the well-known Umbria-Marche stratigraphic sequence. We have analyzed several confidential seismic reflection profiles, mostly provided by the ENI oil company. The seismic lines are tied to two public wells, 5766 m and 2541 m deep. The quality of the reflection imaging is highly variable. A few good-quality stack sections contain interpretable signal down to 4.5-5.5 s TWT, corresponding to depths exceeding 10-12 km and thus allowing crustal imaging at seismogenic depths. Key reflectors for the interpretation correspond to: (1) the top of the Miocene Cerrogna marls, (2) the top of the Upper Albian-Oligocene Scaglia Group, (3) the

  16. Purires and Picagres faults and its relationship with the 1990 Puriscal seismic sequence

    International Nuclear Information System (INIS)

    Montero, Walter; Rojas, Wilfredo

    2014-01-01

    The system of active faults in the region between the southern flank of the Montes del Aguacate and the northwestern flank of the Talamanca mountain range was re-evaluated and defined in relation to the seismic activity that occurred between the end of March 1990 and the beginning of 1991. Aerial photographs of different scales from the Instituto Geografico Nacional de Costa Rica, aerial photographs at scale 1:40000 from the TERRA project of the Centro Nacional Geoambiental, and infrared photographs at scale 1:40000 from the CARTA 2003 mission of the Programa Nacional de Investigaciones Aerotransportadas y Sensores Remotos (PRIAS) were reviewed. Morphotectonic, structural and geological information related to the various faults was obtained through field work. A set of faults within the study area was identified by the neotectonic investigation. Several of these faults continue outside the zone, both to the northwest within the Montes del Aguacate and to the southeast toward the NW foothills of the Cordillera de Talamanca. The shallow-focus seismicity (<20 km) that occurred in the Puriscal area during 1990 was revised from previous studies, whose base information comes from the Red Sismologica Nacional (RSN, UCR-ICE). The relationship between the shallow seismic sequence and the defined faults was determined, allowing the conclusion that the main seismic sources that originated the seismicity were the Purires and Picagres faults. A minor amount of seismicity was related to the Jaris, Bajos de Jorco, Zapote and Junquillo faults.

  17. Deformation mechanisms and evolution of the microstructure of gouge in the Main Fault in Opalinus Clay in the Mont Terri rock laboratory (CH)

    Science.gov (United States)

    Laurich, Ben; Urai, Janos L.; Vollmer, Christian; Nussbaum, Christophe

    2018-01-01

    We studied gouge from an upper-crustal, low-offset reverse fault in slightly overconsolidated claystone in the Mont Terri rock laboratory (Switzerland). The laboratory is designed to evaluate the suitability of the Opalinus Clay formation (OPA) to host a repository for radioactive waste. The gouge occurs in thin bands and lenses in the fault zone; it is darker in color and less fissile than the surrounding rock. It shows a matrix-based, P-foliated microfabric bordered and truncated by micrometer-thin shear zones consisting of aligned clay grains, as shown with broad-ion-beam scanning electron microscopy (BIB-SEM) and optical microscopy. Selected area electron diffraction based on transmission electron microscopy (TEM) shows evidence for randomly oriented nanometer-sized clay particles in the gouge matrix, surrounding larger elongated phyllosilicates with a strict P foliation. For the first time for the OPA, we report the occurrence of amorphous SiO2 grains within the gouge. Gouge has lower SEM-visible porosity and almost no calcite grains compared to the undeformed OPA. We present two hypotheses to explain the origin of gouge in the Main Fault: (i) authigenic generation consisting of fluid-mediated removal of calcite from the deforming OPA during shearing and (ii) clay smear consisting of mechanical smearing of calcite-poor (yet to be identified) source layers into the fault zone. Based on our data we prefer the first or a combination of both, but more work is needed to resolve this. Microstructures indicate a range of deformation mechanisms including solution-precipitation processes and a gouge that is weaker than the OPA because of the lower fraction of hard grains. For gouge, we infer a more rate-dependent frictional rheology than suggested from laboratory experiments on the undeformed OPA.

  18. [Resilience, social relations, and pedagogic intervention five years after the earthquake occurred in L'Aquila (Central Italy) in 2009: an action-research in the primary schools].

    Science.gov (United States)

    Vaccarelli, Alessandro; Ciccozzi, Chiara; Fiorenza, Arianna

    2016-01-01

    the action-research "Outdoor training and citizenship between children from L'Aquila", carried out from 2014 to 2015 in some schools in the municipality of L'Aquila, aimed to respond to the needs that emerged with regard to the social and psychological problems among children in the period after the 2009 L'Aquila earthquake. In particular, the article documents the results regarding the study of resilience (cognitive objective) and of social relations (objective tied to the educational intervention), five years after the earthquake. the pedagogical research team, in close cooperation with the Cartography Laboratory of the University of L'Aquila and with the Grupo de Innovación Educativa Areté de la Universidad Politécnica de Madrid, worked according to the action-research methodology, collecting secondary data and data useful for checking the effectiveness of the educational actions put in place to promote resilient behaviours and to activate positive group dynamics. the study was developed in 4 primary schools of L'Aquila and involved 83 children from 8 to 12 years of age. A control group of 55 subjects, homogeneous for sex and age, was identified in the primary schools of Borgorose, a small town near Rieti (Central Italy). data about the ability of resilience and about the response to stress were collected in the first phase of the study with the purpose of outlining the initial situation and developing an appropriate educational intervention. The comparison with the control group of 55 subjects who were not from L'Aquila showed that, 5 years after the disaster, the context of life produces a meaningful difference in terms of responses to stress and ability of resilience, and this difference is definitely negative for the children from L'Aquila. On the other hand, data related to social relations allowed us to verify how the educational intervention

  19. L'Aquila 1962. "Alternative Attuali" e l'idea di "mostra-saggio"

    Directory of Open Access Journals (Sweden)

    Nicoletti, Luca Pietro

    2015-10-01

    Full Text Available In 1962 Enrico Crispolti inaugurated, at the Forte Cinquecentesco (the sixteenth-century fortress) of L'Aquila, the first edition of the exhibition "Alternative Attuali", proposing a new model of group exhibition: not a simple survey of different artists, but the proposal of a dialogue between different positions (the "alternatives") aimed at moving beyond Art Informel. With the model of the "mostra-saggio" (exhibition-essay), enriched by a debate in the catalogue, an exhibition concept was applied for the first time that placed the present situation in critical perspective (and in historical projection).

  20. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA

    Directory of Open Access Journals (Sweden)

    Wonhee Lee

    2014-02-01

    Full Text Available A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. This model-free hybrid method combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition using a discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults, but the performance of fault detection and isolation is influenced by the relative size of noise and fault. In this paper, the DWT helps to cancel the high-frequency sensor noise. The proposed technique can improve the probability of detecting and isolating small faults by utilizing the EPSA with the DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU).

  1. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA

    Science.gov (United States)

    Lee, Wonhee; Park, Chan Gook

    2014-01-01

    A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. This model-free hybrid method combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition using a discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults, but the performance of fault detection and isolation is influenced by the relative size of noise and fault. In this paper, the DWT helps to cancel the high-frequency sensor noise. The proposed technique can improve the probability of detecting and isolating small faults by utilizing the EPSA with the DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU). PMID:24556675
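
A toy end-to-end sketch of the idea (not the authors' EPSA implementation): a one-level Haar decomposition stands in for the DWT pre-filter, and a single parity-space residual on a hypothetical 4-gyro cone configuration detects an injected bias fault. The geometry, signals, and threshold are assumptions, and fault isolation (which EPSA also provides) is omitted.

```python
import numpy as np

# (1) Haar wavelet denoising as a stand-in for the DWT pre-filter, then
# (2) a parity-space residual on a redundant 4-gyro cone configuration.

rng = np.random.default_rng(42)

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail coefficients, reconstruct."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# Cone-shaped arrangement: 4 sensing axes tilted 54.7 deg from z, 90 deg apart in azimuth.
alpha = np.deg2rad(54.7)
az = np.deg2rad([0, 90, 180, 270])
H = np.c_[np.sin(alpha) * np.cos(az), np.sin(alpha) * np.sin(az),
          np.full(4, np.cos(alpha))]                      # 4x3 measurement matrix

u, s, vt = np.linalg.svd(H)
v = u[:, 3]                                               # parity vector: v @ H = 0

n = 512
omega = np.c_[0.1 * np.sin(0.02 * np.arange(n)), np.zeros(n), 0.05 * np.ones(n)]  # true rates
z = omega @ H.T + 0.02 * rng.standard_normal((n, 4))      # 4 noisy gyro channels
z[300:, 1] += 0.3                                         # bias fault on gyro 2 at k = 300

z_filt = np.column_stack([haar_denoise(z[:, i], thresh=0.03) for i in range(4)])
parity = z_filt @ v                                       # parity residual per sample
threshold = 5 * np.std(parity[:250])                      # set from the fault-free window
alarms = np.nonzero(np.abs(parity) > threshold)[0]
print("first alarm at sample:", alarms[0] if alarms.size else "none")
```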

  2. Geosphere coupling and hydrothermal anomalies before the 2009 Mw 6.3 L'Aquila earthquake in Italy

    Directory of Open Access Journals (Sweden)

    L. Wu

    2016-08-01

    LCAC mode was proposed to interpret the possible mechanisms of the multiple quasi-synchronous anomalies preceding the L'Aquila earthquake. Results indicate that CO2-rich fluids in deep crust might have played a significant role in the local LCAC process.

  3. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify the top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainty exist in MCS-based fault tree analysis. The paper focuses on the quantification of the following two sources of uncertainty: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs and on the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and to estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on two example fault trees
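
The gap that such corrections are designed to close can be illustrated by comparing cheap MCS-based estimates against a direct Monte Carlo simulation on a toy tree. The sketch below uses made-up basic events and cut sets and is not the CUTREE implementation.

```python
import random

# Compare the rare-event approximation and the min-cut upper bound computed from a
# minimal cut set (MCS) list with a direct Monte Carlo estimate of the top event.

P = {"A": 0.05, "B": 0.10, "C": 0.08, "D": 0.12, "E": 0.06}   # basic event probabilities
MCS = [{"A", "B"}, {"A", "C"}, {"B", "C", "D"}, {"D", "E"}]   # minimal cut sets

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def rare_event_approx():
    """Sum of cut-set probabilities (first-order approximation)."""
    return sum(prod(P[c] for c in cs) for cs in MCS)

def min_cut_upper_bound():
    """1 - prod(1 - Q_cs): always >= the exact value for coherent trees."""
    return 1.0 - prod(1.0 - prod(P[c] for c in cs) for cs in MCS)

def monte_carlo(n=200_000, seed=7):
    """Direct simulation of component states; exact in the limit of many trials."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        failed = {c for c, p in P.items() if rng.random() < p}
        if any(cs <= failed for cs in MCS):
            hits += 1
    return hits / n

if __name__ == "__main__":
    print("rare-event approximation:", round(rare_event_approx(), 5))
    print("min-cut upper bound     :", round(min_cut_upper_bound(), 5))
    print("Monte Carlo estimate    :", round(monte_carlo(), 5))
```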

  4. Measuring a Truncated Disk in Aquila X-1

    Science.gov (United States)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 R_G. The disk is likely truncated by the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < 5 ± 2 × 10^8 G. Although the disk is truncated far from the stellar surface, material is still reaching the neutron star surface, as evidenced by the X-ray burst present in the NuSTAR observation.

  5. Burnout among healthcare workers at L'Aquila: its prevalence and associated factors.

    Science.gov (United States)

    Mattei, Antonella; Fiasca, Fabiana; Mazzei, Mariachiara; Abbossida, Vincenzo; Bianchini, Valeria

    2017-12-01

    Burnout, which is now recognized as a real problem in terms of its negative impact on healthcare efficiency, is a stress condition that can be aggravated by exposure to natural disasters, such as the 2009 L'Aquila earthquake. This study aims to evaluate burnout syndrome, its associated risk factors and stress levels, and the individual coping strategies among healthcare professionals at L'Aquila General Hospital. A cross-sectional study of 190 healthcare workers was conducted. A questionnaire was used to collect socio-demographic, occupational and anamnestic data, and the Maslach Burnout Inventory, the 12-item General Health Questionnaire (GHQ-12) and the Brief COPE were administered. The burnout dimensions showed high scores in Emotional Exhaustion (38.95%), in Depersonalization (23.68%) and in lack of Personal Accomplishment (23.16%), along with the presence of moderate to high levels of distress (54.21%). In addition to factors already known to be associated with burnout (job perception and high levels of distress), exposure to an earthquake emerged as a factor independently associated with the syndrome. Adaptive coping strategies such as religiosity showed a significant negative relationship with burnout. Our research highlights the need for interventions directed at a reduction in workload and work stressors and at an improvement of adaptive coping strategies, especially in a post-disaster workplace.

  6. Blog, social network e strategie narrative di resistenza nel post-terremoto dell’Aquila

    Directory of Open Access Journals (Sweden)

    Massimo Giuliani

    2013-06-01

    Full Text Available An earthquake, when it strikes a territory and its history, also strikes the collective memory of a population. The reconstruction that followed the L'Aquila earthquake contained elements that were equally problematic for the cohesion of the social and relational network, and therefore for the mental health of the stricken citizens and territories. Blogs, Facebook and online communication offered the population (not only the age group that usually makes use of digital technologies) a way to share information, chronicles and narrations about the post-earthquake period and to tell a story different from the sweetened one told by the media. While trying to find a way to process loss and their own discomfort, many authors and bloggers became points of reference for a community of readers thanks to their stories and chronicles. In addition, for the active citizenry the virtual setting of the Web became a substitute for the physical agora that had been wiped out by the earthquake and by the dispersion of the community all over the national territory. The experience of L'Aquila, also reconstructed through conversations with bloggers and the observation of Facebook status updates, offers interesting elements for understanding the role of new media in everyday life and in an exceptional context such as a collective trauma

  7. Seismic Supercycles of Normal Faults in Central Italy over Various Time Scales Revealed by 36Cl Cosmogenic Dating

    Science.gov (United States)

    Benedetti, L. C.; Tesson, J.; Perouse, E.; Puliti, I.; Fleury, J.; Rizza, M.; Billant, J.; Pace, B.

    2017-12-01

    The use of the 36Cl cosmogenic nuclide as a paleoseismological tool for normal faults in the Mediterranean has revolutionized our understanding of their seismic cycle (Gran Mitchell et al. 2001, Benedetti et al. 2002). Here we synthesize results obtained on 13 faults in Central Italy. These records cover periods of 8 to 45 ka. The mean recurrence time of the retrieved seismic events is 5.5 ± 6 ka, with a mean slip per event of 2.5 ± 1.8 m and mean slip rates from 0.1 to 2.4 mm/yr. Most retrieved events correspond to single events according to scaling relationships. This is also supported by the 2 m-high coseismic slip observed on the Mt Vettore fault after the 30 October 2016 M6.5 earthquake in Central Italy (EMERGEO working group). Our results suggest that all faults have experienced one or several periods of slip acceleration with bursts of seismic activity, associated with very high slip rates of 1.7-9 mm/yr, corresponding to 2-20 times their long-term slip rate. The duration of these bursts varies from one fault to another (from … recurrence time). This might suggest that the seismic activity of these faults is controlled by their intrinsic properties (e.g., long-term slip rate, fault length, state of structural maturity). Our results also show event clustering, with several faults rupturing in less than 500 yrs on adjacent or distant faults within the studied area. The Norcia-Amatrice seismic sequence, ≈50 km north of our study area, also evidenced this clustering behaviour, with several successive events of Mw 5 to 6.5 over the last 20 yrs (from north to south: Colfiorito 1997 Mw 6.0, Norcia 2016 Mw 6.5, L'Aquila 2009 Mw 6.3), rupturing various fault systems over a total length of ≈100 km. This sequence will allow us to better understand earthquake kinematics and the spatiotemporal slip distribution during such seismic bursts.

  8. Fault tree analysis. Implementation of the WAM-codes

    International Nuclear Information System (INIS)

    Bento, J.P.; Poern, K.

    1979-07-01

    The report describes work in progress at Studsvik on the implementation of the WAM code package for fault tree analysis. These codes, originally developed under EPRI contract by Sciences Applications Inc., allow, in contrast with other fault tree codes, all Boolean operations, thus permitting the modeling of "NOT" conditions and dependent components. To make the implementation of these codes concrete, the auxiliary feed-water system of the Swedish BWR Oskarshamn 2 was chosen for the reliability analysis. For this system, both the mean unavailability and the probability density function of the top event - the undesired event - of the system fault tree were calculated, the latter using a Monte Carlo simulation technique. The present study is the first part of a work performed under contract with the Swedish Nuclear Power Inspectorate. (author)

  9. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized, taking into account: (i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and (ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process with the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected into a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features, and the utilization of the experimental results obtained - both as design feedback and for the evaluation of dependability measures - are used to illustrate the relevance of the method. (author)

  10. Strong foreshock signal preceding the L'Aquila (Italy earthquake (Mw 6.3 of 6 April 2009

    Directory of Open Access Journals (Sweden)

    G. Minadakis

    2010-01-01

    Full Text Available We used the earthquake catalogue of INGV extending from 1 January 2006 to 30 June 2009 to detect significant changes before and after the 6 April 2009 L'Aquila mainshock (Mw=6.3) in the seismicity rate, r (events/day), and in the b-value. The statistical z-test and Utsu test were applied to identify significant changes. From the beginning of 2006 up to the end of October 2008 the activity was relatively stable and remained in the state of background seismicity (r=1.14, b=1.09). From 28 October 2008 up to 26 March 2009, r increased significantly to 2.52, indicating a weak foreshock sequence; the b-value did not change significantly. The weak foreshock sequence was spatially distributed over the entire seismogenic area. In the last 10 days before the mainshock, a strong foreshock signal became evident in space (dense epicentre concentration in the hanging wall of the Paganica fault), in time (a drastic increase of r to 21.70 events/day) and in size (the b-value dropped significantly to 0.68). The significantly high seismicity rate and the low b-value of the entire foreshock sequence differ substantially from the background seismicity. Also, the b-value of the strong foreshock stage (the last 10 days before the mainshock) was significantly lower than that of the aftershock sequence. Our results indicate the important value of foreshock sequences for the prediction of mainshocks.
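
The statistics involved are simple enough to sketch: a maximum-likelihood b-value with its standard error, a z-test for a Poisson rate change, and a plain z-test on the b-value difference, used here as a stand-in for the Utsu test. The catalogs below are synthetic, with counts chosen only to echo the rates and b-values quoted in the abstract.

```python
import numpy as np

# Synthetic illustration of b-value estimation and rate-change testing.

rng = np.random.default_rng(3)
MC = 1.5                                  # assumed completeness magnitude

def ml_b_value(mags, mc=MC):
    """Aki/Utsu maximum-likelihood b-value and its standard error."""
    b = np.log10(np.e) / (mags.mean() - mc)
    return b, b / np.sqrt(mags.size)

def rate_z(n1, days1, n2, days2):
    """z-test for a difference between two Poisson rates (events/day)."""
    r1, r2 = n1 / days1, n2 / days2
    return (r2 - r1) / np.sqrt(n1 / days1**2 + n2 / days2**2)

def synth_mags(n, b):
    """Exponential (Gutenberg-Richter) magnitudes above MC with the given b-value."""
    return MC + rng.exponential(scale=np.log10(np.e) / b, size=n)

if __name__ == "__main__":
    background = synth_mags(400, b=1.09)      # background period
    foreshocks = synth_mags(210, b=0.68)      # last days before the mainshock
    b_bg, s_bg = ml_b_value(background)
    b_fs, s_fs = ml_b_value(foreshocks)
    print(f"b background = {b_bg:.2f} +/- {s_bg:.2f}")
    print(f"b foreshocks = {b_fs:.2f} +/- {s_fs:.2f}")
    print(f"b-difference z = {(b_bg - b_fs) / np.hypot(s_bg, s_fs):.1f}")
    # Synthetic counts chosen to reproduce r ~ 1.14 ev/day (background) vs ~21.7 ev/day.
    print(f"rate-change z = {rate_z(400, 350, 217, 10):.1f}")
```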

  11. The 2016-2017 central Italy coseismic surface ruptures and their meaning with respect to foreseen active fault systems segmentation

    Science.gov (United States)

    De Martini, P. M.; Pucci, S.; Villani, F.; Civico, R.; Del Rio, L.; Cinti, F. R.; Pantosti, D.

    2017-12-01

    In 2016-2017 a series of moderate to large normal-faulting earthquakes struck central Italy, producing severe damage in many towns, including Amatrice, Norcia and Visso, and resulting in 299 casualties and >20,000 homeless. The complex seismic sequence reflects the multiple activation of the Mt. Vettore-Mt. Bove (VBFS) and the Laga Mts. fault systems, which were considered in the literature as independent segments characterizing a recent seismic gap in the region comprised between two modern seismic sequences: the 1997-1998 Colfiorito and the 2009 L'Aquila sequences. We mapped in detail the coseismic surface ruptures following the three mainshocks (Mw 6.0 on 24 August, Mw 5.9 on 26 October and Mw 6.5 on 30 October 2016). Primary surface ruptures were observed and recorded for total lengths of 5.2 km, ≅10 km and ≅25 km, respectively, along closely spaced, parallel or subparallel, overlapping or step-like synthetic and antithetic fault splays of the activated fault systems, in some cases rupturing the same location repeatedly. Some coseismic ruptures were also mapped along the Norcia Fault System, which parallels the VBFS about 10 km to the west. We recorded the geometric and kinematic characteristics of the normal-faulting ruptures in unprecedented detail thanks to almost 11,000 oblique photographs taken from helicopter flights soon after the mainshocks, verified and integrated with field data (more than 7,000 measurements). We analyze the along-strike distribution of coseismic slip and slip vectors in the context of the geomorphic expression of the disrupted slopes and their depositional and erosive processes. Moreover, we constructed 1:10,000-scale geologic cross-sections based on updated maps, and we reconstructed the net offset distribution of the activated fault system to be compared with the morphologic throws and to test a cause-effect relationship between faulting and first-order landforms. We provide a reconstruction of the 2016 coseismic rupture pattern as

  12. Orion GN&C Fault Management System Verification: Scope And Methodology

    Science.gov (United States)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
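
A toy sketch of the two verification ingredients named above, a swarm search for worst-case fault scenarios and a Monte Carlo estimate of requirement violations, is given below. The "vehicle response" is a made-up surrogate, the particle swarm is a generic implementation, and plain Monte Carlo replaces the rare-event sequential Monte Carlo used by the FDIR team, so nothing here reflects actual Orion models, parameters or numbers.

```python
import numpy as np

# (1) Particle swarm over a 2-D fault-parameter space to find the worst-case fault.
# (2) Plain Monte Carlo estimate of how often a response metric exceeds a requirement.

rng = np.random.default_rng(11)

def response_metric(fault_time, fault_magnitude):
    """Surrogate hazard metric: higher is worse. Peaks for large, late faults."""
    return fault_magnitude * np.exp(-((fault_time - 80.0) / 25.0) ** 2)

def swarm_search(n_particles=30, n_iter=60):
    """Minimal particle swarm maximizing the hazard metric over (time, magnitude)."""
    lo, hi = np.array([0.0, 0.0]), np.array([120.0, 1.0])
    x = rng.uniform(lo, hi, size=(n_particles, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([response_metric(*p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([response_metric(*p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

def violation_probability(threshold=0.9, n=100_000):
    t = rng.uniform(0.0, 120.0, n)
    m = rng.uniform(0.0, 1.0, n)
    return np.mean(response_metric(t, m) > threshold)

if __name__ == "__main__":
    worst, worst_val = swarm_search()
    print(f"worst-case fault: time={worst[0]:.1f}s magnitude={worst[1]:.2f} metric={worst_val:.3f}")
    print(f"requirement-violation probability ~ {violation_probability():.4f}")
```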

  13. Mortality in the l'aquila (central Italy) earthquake of 6 april 2009.

    Science.gov (United States)

    Alexander, David; Magni, Michele

    2013-01-07

    This paper presents the results of an analysis of data on mortality in the magnitude 6.3 earthquake that struck the central Italian city and province of L'Aquila during the night of 6 April 2009. The aim is to create a profile of the deaths in terms of age, gender, location, behaviour during the tremors, and other aspects. This could help predict the pattern of casualties and priorities for protection in future earthquakes. To establish a basis for analysis, the literature on seismic mortality is surveyed. The conclusions of previous studies are synthesised regarding patterns of mortality, entrapment, survival times, self-protective behaviour, gender and age. These factors are investigated for the data set covering the 308 fatalities in the L'Aquila earthquake, with help from interview data on behavioural factors obtained from 250 survivors. In this data set, there is a strong bias towards victimisation of young people, the elderly and women. Part of this can be explained by geographical factors regarding building performance: the rest of the explanation refers to the vulnerability of the elderly and the relationship between perception and action among female victims, who tend to be more fatalistic than men and thus did not abandon their homes between a major foreshock and the main shock of the earthquake, three hours later. In terms of casualties, earthquakes commonly discriminate against the elderly and women. Age and gender biases need further investigation and should be taken into account in seismic mitigation initiatives.

  14. Ring faults and ring dikes around the Orientale basin on the Moon.

    Science.gov (United States)

    Andrews-Hanna, Jeffrey C; Head, James W; Johnson, Brandon; Keane, James T; Kiefer, Walter S; McGovern, Patrick J; Neumann, Gregory A; Wieczorek, Mark A; Zuber, Maria T

    2018-08-01

    The Orientale basin is the youngest and best-preserved multiring impact basin on the Moon, having experienced only modest modification by subsequent impacts and volcanism. Orientale is often treated as the type example of a multiring basin, with three prominent rings outside of the inner depression: the Inner Rook Montes, the Outer Rook Montes, and the Cordillera. Here we use gravity data from NASA's Gravity Recovery and Interior Laboratory (GRAIL) mission to reveal the subsurface structure of Orientale and its ring system. Gradients of the gravity data reveal a continuous ring dike intruded into the Outer Rook along the plane of the fault associated with the ring scarp. The volume of this ring dike is ~18 times greater than the volume of all extrusive mare deposits associated with the basin. The gravity gradient signature of the Cordillera ring indicates an offset along the fault across a shallow density interface, interpreted to be the base of the low-density ejecta blanket. Both gravity gradients and crustal thickness models indicate that the edge of the central cavity is shifted inward relative to the equivalent Inner Rook ring at the surface. Models of the deep basin structure show inflections along the crust-mantle interface at both the Outer Rook and Cordillera rings, indicating that the basin ring faults extend from the surface to at least the base of the crust. Fault dips range from 13-22° for the Cordillera fault in the northeastern quadrant, to 90° for the Outer Rook in the northwestern quadrant. The fault dips for both outer rings are lowest in the northeast, possibly due to the effects of either the direction of projectile motion or regional gradients in pre-impact crustal thickness. Similar ring dikes and ring faults are observed around the majority of lunar basins.

  15. Fault-Related Sanctuaries

    Science.gov (United States)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (like ground shaking and coseismic surface ruptures, gas and flame emissions, strong underground rumbling). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  16. L’Aquila Smart Clean Air City: The Italian Pilot Project for Healthy Urban Air

    Directory of Open Access Journals (Sweden)

    Alessandro Avveduto

    2017-11-01

    Exposure to atmospheric pollution is a major concern for urban populations. Currently, no effective strategy has been adopted to tackle the problem. The paper presents the Smart Clean Air City project, a pilot experiment concerning the improvement in urban air quality. Small wet scrubber systems will be operating in a network configuration in suitable urban areas of L’Aquila city (Italy). The purpose of this work is to describe the project and show the preliminary results obtained in the characterization of two urban sites before the remediation test; the main operating principles of the wet scrubber system will be discussed, as well as the design of the mobile treatment plant for the processing of wastewater resulting from scrubber operation. Measurements of particle size distributions in the range of 0.30–25 µm took place in the two sites of interest, an urban background and a traffic area in the city of L’Aquila. The mean number concentration detected was 2.4 × 10⁷ and 4.5 × 10⁷ particles/m³, respectively. Finally, theoretical assessments, performed by Computational Fluid Dynamics (CFD) codes, will show the effects of the wet scrubber operation on air pollutants under different environmental conditions and in several urban usage patterns.

  17. Adaptive Response of Children and Adolescents with Autism to the 2009 Earthquake in L'Aquila, Italy

    Science.gov (United States)

    Valenti, Marco; Ciprietti, Tiziana; Di Egidio, Claudia; Gabrielli, Maura; Masedu, Francesco; Tomassini, Anna Rita; Sorge, Germana

    2012-01-01

    The literature offers no descriptions of the adaptive outcomes of people with autism spectrum disorder (ASD) after natural disasters. Aim of this study was to evaluate the adaptive behaviour of participants with ASD followed for 1 year after their exposure to the 2009 earthquake in L'Aquila (Italy) compared with an unexposed peer group with ASD,…

  18. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    Science.gov (United States)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
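
    As a toy illustration of the mixed analytical/Monte Carlo idea described above (not the authors' implementation: the one-parameter "geometry", the Gaussian forward model and all numbers are invented for this sketch), the linear slip parameter can be marginalised analytically under a flat prior and Gaussian noise, while the nonlinear parameter is sampled with a Metropolis random walk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "geodetic" data: one linear parameter (slip amplitude) and one
# nonlinear, geometry-like parameter (the width of the surface response).
x = np.linspace(-20.0, 20.0, 60)                 # hypothetical station positions (km)
true_width, true_slip, sigma = 6.0, 1.5, 0.05
g = lambda w: np.exp(-x**2 / (2.0 * w**2))       # toy Green's function
d = true_slip * g(true_width) + rng.normal(0.0, sigma, x.size)

def log_marginal(w):
    """Log-likelihood of the width with the linear slip integrated out
    analytically (flat prior on slip, Gaussian noise)."""
    gw = g(w)
    gtg, gtd = gw @ gw, gw @ d
    return -0.5 * (d @ d - gtd**2 / gtg) / sigma**2 - 0.5 * np.log(gtg)

# Metropolis random walk over the nonlinear parameter only
widths, w, logp = [], 3.0, log_marginal(3.0)
for _ in range(20_000):
    w_new = w + rng.normal(0.0, 0.3)
    if w_new > 0.0:
        logp_new = log_marginal(w_new)
        if np.log(rng.uniform()) < logp_new - logp:
            w, logp = w_new, logp_new
    widths.append(w)
widths = np.array(widths[5_000:])                # discard burn-in

# Conditional (analytical) posterior draw of the slip for each sampled width
gw = np.exp(-x[None, :]**2 / (2.0 * widths[:, None]**2))
gtg, gtd = (gw**2).sum(axis=1), gw @ d
slips = rng.normal(gtd / gtg, sigma / np.sqrt(gtg))

print(f"width: {widths.mean():.2f} +/- {widths.std():.2f} (true {true_width})")
print(f"slip : {slips.mean():.2f} +/- {slips.std():.2f} (true {true_slip})")
```

    The same structure extends to many slip parameters by replacing the scalar least-squares step with a matrix solve, and to multiple data sets by summing their (weighted) misfits.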

  19. Characterization of earthquake-induced ground motion from the L'Aquila seismic sequence of 2009, Italy

    Science.gov (United States)

    Malagnini, Luca; Akinci, Aybige; Mayeda, Kevin; Munafo', Irene; Herrmann, Robert B.; Mercuri, Alessia

    2011-01-01

    Based only on weak-motion data, we carried out a combined study on region-specific source scaling and crustal attenuation in the Central Apennines (Italy). Our goal was to obtain a reappraisal of the existing predictive relationships for the ground motion, and to test them against the strong-motion data [peak ground acceleration (PGA), peak ground velocity (PGV) and spectral acceleration (SA)] gathered during the Mw 6.15 L'Aquila earthquake (2009 April 6, 01:32 UTC). The L'Aquila main shock was not part of the predictive study, and the validation test was an extrapolation to one magnitude unit above the largest earthquake of the calibration data set. The regional attenuation was determined through a set of regressions on a data set of 12 777 high-quality, high-gain waveforms with excellent S/N ratios (4259 vertical and 8518 horizontal time histories). Seismograms were selected from the recordings of 170 foreshocks and aftershocks of the sequence (the complete set of all earthquakes with ML≥ 3.0, from 2008 October 1 to 2010 May 10). All waveforms were downloaded from the ISIDe web page (), a web site maintained by the Istituto Nazionale di Geofisica e Vulcanologia (INGV). Weak-motion data were used to obtain a moment tensor solution, as well as a coda-based moment-rate source spectrum, for each one of the 170 events of the L'Aquila sequence (2.8 ≤Mw≤ 6.15). Source spectra were used to verify the good agreement with the source scaling of the Colfiorito seismic sequence of 1997-1998 recently described by Malagnini (2008). Finally, results on source excitation and crustal attenuation were used to produce the absolute site terms for the 23 stations located within ˜80 km of the epicentral area. The complete set of spectral corrections (crustal attenuation and absolute site effects) was used to implement a fast and accurate tool for the automatic computation of moment magnitudes in the Central Apennines.

  20. Quantile arithmetic methodology for uncertainty propagation in fault trees

    International Nuclear Information System (INIS)

    Abdelhai, M.; Ragheb, M.

    1986-01-01

    A methodology based on quantile arithmetic, the probabilistic analog to interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdf's) are represented by equivalent discrete distributions by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology as compared with the widely used Monte Carlo method was demonstrated for the case of the summation of M normal variables through the efficiency ratio defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event.
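
    A minimal sketch of the quantile-arithmetic step described above (not the paper's code; the example fault tree TOP = (A AND B) OR C and the lognormal basic-event pdfs are invented): each pdf is condensed to N quantile points, the points are combined pairwise through the gate operations, and the result is compared against a plain Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 25                                            # number of quantile points

def quantile_points(samples, n=N):
    """Condense a continuous pdf (given here by samples) to n quantile points."""
    probs = (np.arange(n) + 0.5) / n
    return np.quantile(samples, probs)

def combine(qa, qb, gate):
    """Quantile arithmetic: apply the gate operation to every pair of quantile
    points, then condense the result back to the original number of points."""
    grid = gate(qa[:, None], qb[None, :]).ravel()
    return quantile_points(grid, len(qa))

AND = lambda p, q: p * q                          # independent basic events
OR = lambda p, q: p + q - p * q

# Invented lognormal pdfs for the basic-event probabilities A, B and C
draw = lambda mu: rng.lognormal(mu, 0.5, 200_000)
qa, qb, qc = (quantile_points(draw(mu)) for mu in (-6.0, -5.5, -7.0))

# Example tree: TOP = (A AND B) OR C
q_top = combine(combine(qa, qb, AND), qc, OR)

# Plain Monte Carlo reference for the same tree
a, b, c = (draw(mu) for mu in (-6.0, -5.5, -7.0))
mc_top = OR(AND(a, b), c)

print("median (quantile arithmetic):", np.median(q_top))
print("median (Monte Carlo)        :", np.median(mc_top))
```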

  1. Active faulting in the central Betic Cordillera (Spain): Palaeoseismological constraint of the surface-rupturing history of the Baza Fault (Central Betic Cordillera, Iberian Peninsula)

    Science.gov (United States)

    Castro, J.; Martin-Rojas, I.; Medina-Cascales, I.; García-Tortosa, F. J.; Alfaro, P.; Insua-Arévalo, J. M.

    2018-06-01

    This paper on the Baza Fault provides the first palaeoseismic data from trenches in the central sector of the Betic Cordillera (S Spain), one of the most tectonically active areas of the Iberian Peninsula. With the palaeoseismological data we constructed time-stratigraphic OxCal models that yield probability density functions (PDFs) of individual palaeoseismic event timing. We analysed PDF overlap to quantitatively correlate the walls and site events into a single earthquake chronology. We assembled a surface-rupturing history of the Baza Fault for the last ca. 45,000 years. We postulated six alternative surface rupturing histories including 8-9 fault-wide earthquakes. We calculated fault-wide earthquake recurrence intervals using Monte Carlo simulation. This analysis yielded a 4750-5150 yr recurrence interval. Finally, we compared our results with the results from empirical relationships. Our results will provide a basis for future analyses of other active normal faults in this region. Moreover, our results will be essential for improving earthquake-probability assessments in Spain, where palaeoseismic data are scarce.
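
    The Monte Carlo recurrence-interval calculation mentioned above can be illustrated with a short sketch; the event-age means and uncertainties below are invented stand-ins for the OxCal posterior PDFs, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical event-timing PDFs (calendar years BP), standing in for OxCal
# posterior densities of individual palaeo-earthquakes; numbers are invented.
event_means = np.array([44_000, 38_500, 32_000, 26_500, 21_000,
                        15_500, 10_000, 4_800])
event_sigmas = np.array([1_500, 1_200, 1_000, 900, 800, 700, 500, 300])

n_draws = 100_000
# Draw one possible age per event and keep chronologically consistent draws
ages = rng.normal(event_means, event_sigmas, size=(n_draws, event_means.size))
ordered = np.all(np.diff(ages, axis=1) < 0, axis=1)   # ages decrease toward present
intervals = -np.diff(ages[ordered], axis=1)           # inter-event times (yr)

lo, hi = np.percentile(intervals, [5, 95])
print(f"mean recurrence interval: {intervals.mean():.0f} yr (5-95%: {lo:.0f}-{hi:.0f} yr)")
```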

  2. Fault-tolerant linear optical quantum computing with small-amplitude coherent States.

    Science.gov (United States)

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads but has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that using error correction only small amplitudes (alpha>1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The first encoding level resources are orders of magnitude lower than the best single photon scheme.

  3. Geoethical implications in the L'Aquila case: scientific knowledge and communication

    Science.gov (United States)

    Di Capua, Giuseppe

    2013-04-01

    On October 22nd 2012, three and a half years after the earthquake that destroyed the city of L'Aquila (central Italy), killing more than 300 people and wounding about 1,500, a landmark judgment for scientific research established the conviction of six members of the Major Risks Committee of the Italian Government and a researcher of INGV (Istituto Nazionale di Geofisica e Vulcanologia), called to provide information about the evolution of the seismic sequence. The judge held that these geoscientists were negligent during the meeting of 31st March 2009, convened to discuss the scientific aspects of the seismic risk of this area, affected by a long seismic sequence, also in the light of repeated warnings about the imminence of a strong earthquake, on the basis of radon gas measurements by an independent Italian technician, transmitted to the population by the mass media. Without going into the legal aspects of the criminal proceedings, this judgment is striking for the harshness of the sentences imposed on the scientists (six years of imprisonment, perpetual disqualification from public office and legal disqualification during the execution of the penalty, compensation to victims of up to several hundred thousand Euros). Some of them are scientists known worldwide for their proven skills, professionalism and experience. In conclusion, these scientists were found guilty of having contributed to the death of many people, because they had not communicated in an appropriate manner all available information on the seismic hazard and vulnerability of the area of L'Aquila. This judgment represents a watershed in the way of looking at the social role of geoscientists in the defense against natural hazards and their responsibility towards the people. But what does this responsibility consist of? It consists of the commitment to conduct up-to-date and reliable scientific research, which provides for a detailed analysis of the epistemic uncertainty for a more

  4. Exploring the η Aquila System: Another Cepheid Parallax and Further Evidence for a Tertiary

    Science.gov (United States)

    Benedict, George Frederick; Barnes, Thomas G.; Evans, Nancy; Cochran, William; McArthur, Barbara E.; Harrison, Thomas E.

    2018-01-01

    We report progress towards a re-analysis of Hubble Space Telescope Fine Guidance Sensor astrometric data, originally acquired to determine a parallax for and absolute magnitudes of the classical Cepheid, η Aquila. This object was not included in past Cepheid Period-Luminosity Relation (PLR) work (Benedict et al. 2007, AJ, 133, 1810), because we had an insufficient number of epochs with which to establish a suspected and complicating companion orbit. Our new investigation is considerably aided by including a significant number of radial velocity measures (RV) from six sources, including new, high-quality Hobby-Eberly Telescope spectra. We first derive a 12 Fourier coefficient description of the Cepheid pulsation, solving for velocity offsets required to bring the six RV data sets into coincidence. We next model the RV residuals to that fit with an orbit. The resulting orbit has very high eccentricity. The astrometric residuals show only a very small perturbation, consistent with a prediction from the spectroscopic orbit. We finally include that orbit in a combined astrometry and radial velocity model. This modeling, similar to that presented in Benedict and Harrison (2017, AJ, 153, 258) yields a parallax, allowing inclusion of η Aquila in a PLR. It also establishes a Cepheid/companion mass ratio for the early-type star companion identified in IUE spectra (Evans 1991, ApJ, 372, 597).

  5. A data-driven multiplicative fault diagnosis approach for automation processes.

    Science.gov (United States)

    Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo

    2014-09-01

    This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.

  6. Pediatric Epidemic of Salmonella enterica Serovar Typhimurium in the Area of L’Aquila, Italy, Four Years after a Catastrophic Earthquake

    Directory of Open Access Journals (Sweden)

    Giovanni Nigro

    2016-05-01

    Background: A Salmonella enterica epidemic occurred in children of the area of L’Aquila (Central Italy, Abruzzo region) between June 2013 and October 2014, four years after the catastrophic earthquake of 6 April 2009. Methods: Clinical and laboratory data were collected from hospitalized and ambulatory children. Routine investigations for Salmonella infection were carried out on numerous alimentary matrices of animal origin and sampling sources for drinking water of the L’Aquila district, including pickup points of the two main aqueducts. Results: Salmonella infection occurred in 155 children (83 females: 53%), aged 1 to 15 years (mean 2.10). Of these, 44 children (28.4%) were hospitalized because of severe dehydration, electrolyte abnormalities, and fever resistant to oral antipyretic and antibiotic drugs. Three children (1.9%) were reinfected within four months after primary infection by the same Salmonella strain. Four children (2.6%), aged one to two years, were coinfected by rotavirus. A seven-year old child had a concomitant right hip joint arthritis. The isolated strains, as confirmed in about the half of cases or probable/possible in the remaining ones, were identified as S. enterica serovar Typhimurium [4,5:i:-], monophasic variant. Aterno river, bordering the L’Aquila district, was recognized as the main responsible source for the contamination of local crops and vegetables derived from polluted crops. Conclusions: The high rate of hospitalized children underlines the emergence of a highly pathogenic S. enterica strain probably subsequent to the contamination of the spring water sources after geological changes occurred during the catastrophic earthquake.

  7. Pediatric Epidemic of Salmonella enterica Serovar Typhimurium in the Area of L'Aquila, Italy, Four Years after a Catastrophic Earthquake.

    Science.gov (United States)

    Nigro, Giovanni; Bottone, Gabriella; Maiorani, Daniela; Trombatore, Fabiana; Falasca, Silvana; Bruno, Gianfranco

    2016-05-06

    A Salmonella enterica epidemic occurred in children of the area of L'Aquila (Central Italy, Abruzzo region) between June 2013 and October 2014, four years after the catastrophic earthquake of 6 April 2009. Clinical and laboratory data were collected from hospitalized and ambulatory children. Routine investigations for Salmonella infection were carried out on numerous alimentary matrices of animal origin and sampling sources for drinking water of the L'Aquila district, including pickup points of the two main aqueducts. Salmonella infection occurred in 155 children (83 females: 53%), aged 1 to 15 years (mean 2.10). Of these, 44 children (28.4%) were hospitalized because of severe dehydration, electrolyte abnormalities, and fever resistant to oral antipyretic and antibiotic drugs. Three children (1.9%) were reinfected within four months after primary infection by the same Salmonella strain. Four children (2.6%), aged one to two years, were coinfected by rotavirus. A seven-year old child had a concomitant right hip joint arthritis. The isolated strains, as confirmed in about the half of cases or probable/possible in the remaining ones, were identified as S. enterica serovar Typhimurium [4,5:i:-], monophasic variant. Aterno river, bordering the L'Aquila district, was recognized as the main responsible source for the contamination of local crops and vegetables derived from polluted crops. The high rate of hospitalized children underlines the emergence of a highly pathogenic S. enterica strain probably subsequent to the contamination of the spring water sources after geological changes occurred during the catastrophic earthquake.

  8. Data-driven fault mechanics: Inferring fault hydro-mechanical properties from in situ observations of injection-induced aseismic slip

    Science.gov (United States)

    Bhattacharya, P.; Viesca, R. C.

    2017-12-01

    In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore-pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) pore-pressure increase activates slip on a pre-stressed planar fault due to reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore-pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent

  9. The relation of catastrophic flooding of Mangala Valles, Mars, to faulting of Memnonia Fossae and Tharsis volcanism

    International Nuclear Information System (INIS)

    Tanaka, K.L.; Chapman, M.G.

    1990-01-01

    Detailed stratigraphic relations indicate two coeval periods of catastrophic flooding and Tharsis-centered faulting (producing Memnonia Fossae) in the Mangala Valles region of Mars. Major sequences of lava flows of the Tharsis Montes Formation and local, lobate plains flows were erupted during and between these channeling and faulting episodes. First, Late Hesperian channel development overlapped in time the Tharsis-centered faulting that trends N 75°-90° E. Next, Late Hesperian/Early Amazonian flooding was coeval with faulting that trends N 55°-70° E. In some reaches, resistant lava flows filled the early channels, resulting in inverted channel topography after the later flooding swept through. Both floods likely originated from the same graben, which probably was activated during each episode of faulting. Faulting broke through groundwater barriers and tapped confined aquifers in higher regions west and east of the point of discharge. The minimum volume of water required to erode Mangala Valles (about 5 × 10¹² m³) may have been released through two floods that drained a few percent pore volume from a relatively permeable aquifer. The peak discharges of the floods may have lasted from days to weeks. The perched water discharged from the aquifer may have been produced by hydrothermal groundwater circulation induced by Tharsis magmatism, tectonic uplift centered at Tharsis Montes, and compacting of saturated crater ejecta due to loading by lava flows.

  10. Spectroscopic follow-up of the Hercules-Aquila Cloud

    Science.gov (United States)

    Simion, Iulia T.; Belokurov, Vasily; Koposov, Sergey E.; Sheffield, Allyson; Johnston, Kathryn V.

    2018-05-01

    We designed a follow-up program to find the spectroscopic properties of the Hercules-Aquila Cloud (HAC) and test scenarios for its formation. We measured the radial velocities (RVs) of 45 RR Lyrae in the southern portion of the HAC using the facilities at the MDM observatory, producing the first large sample of velocities in the HAC. We found a double-peaked distribution in RVs, skewed slightly to negative velocities. We compared both the morphology of HAC projected on to the plane of the sky and the distribution of velocities in this structure outlined by RR Lyrae and other tracer populations at different distances to N-body simulations. We found that the behaviour is characteristic of an old, well-mixed accretion event with small apo-galactic radius. We cannot yet rule out other formation mechanisms for the HAC. However, if our interpretation is correct, HAC represents just a small portion of a much larger debris structure spread throughout the inner Galaxy whose distinct kinematic structure should be apparent in RV studies along many lines of sight.

  11. Permeability - Fluid Pressure - Stress Relationship in Fault Zones in Shales

    Science.gov (United States)

    Henry, P.; Guglielmi, Y.; Morereau, A.; Seguy, S.; Castilla, R.; Nussbaum, C.; Dick, P.; Durand, J.; Jaeggi, D.; Donze, F. V.; Tsopela, A.

    2016-12-01

    Fault permeability is known to depend strongly on stress and fluid pressures. Exponential relationships between permeability and effective pressure have been proposed to approximate fault response to fluid pressure variations. However, the applicability of these largely empirical laws remains questionable, as they do not take into account shear stress and shear strain. A series of experiments using mHPP probes have been performed within fault zones in very low permeability (less than 10⁻¹⁹ m²) Lower Jurassic shale formations at Tournemire (France) and Mont Terri (Switzerland) underground laboratories. These probes allow 3D displacement between two points anchored to the borehole walls to be monitored at the same time as fluid pressure and flow rate. In addition, in the Mont-Terri experiment, passive pressure sensors were installed in observation boreholes. Fracture transmissivity was estimated from single borehole pulse tests, constant pressure injection tests, and cross-hole tests. It is found that the transmissivity-pressure dependency can be approximated with an exponential law, but only above a pressure threshold that we call the Fracture Opening Threshold (F.O.P). The displacement data show a change of the mechanical response across the F.O.P. The displacement below the F.O.P. is dominated by borehole response, which is mostly elastic. Above the F.O.P., the poro-elasto-plastic response of the fractures dominates. Stress determinations based on previous work and on the analysis of slip data from the mHPP probe indicate that the F.O.P. is lower than the least principal stress. Below the F.O.P., uncemented fractures retain some permeability, as pulse tests performed at low pressures yield diffusivities in the range 10⁻² to 10⁻⁵ m²/s. Overall, this dual behavior appears consistent with the results of CORK experiments performed in accretionary wedge decollements. Results suggest (1) that fault zones become highly permeable when approaching the critical Coulomb threshold (2

  12. Smart City L’Aquila : An Application of the “Infostructure” Approach to Public Urban Mobility in a Post-Disaster Context

    NARCIS (Netherlands)

    Falco, E.; Malavolta, Ivano; Radzimski, Adam; Ruberto, Stefano; Iovino, Ludovico; Gallo, Francesco

    2017-01-01

    Ever since the earthquake of April 6, 2009 hit the city of L’Aquila, Italy, the city has been facing major challenges in terms of social, physical, and economic reconstruction. The system of public urban mobility, the bus network, is no exception with its old bus fleet, non-user-friendly

  13. Smart City L’Aquila : An Application of the “Infostructure” Approach to Public Urban Mobility in a Post-Disaster Context

    NARCIS (Netherlands)

    Falco, Enzo; Malavolta, Ivano; Radzimski, Adam; Ruberto, Stefano; Iovino, Ludovico; Gallo, Francesco

    2018-01-01

    Ever since the earthquake of April 6, 2009 hit the city of L’Aquila, Italy, the city has been facing major challenges in terms of social, physical, and economic reconstruction. The system of public urban mobility, the bus network, is no exception with its old bus fleet, non-user-friendly

  14. Aquila Remotely Piloted Vehicle System Technology Demonstration (RPV-STD) Program. Volume 3. Field Test Program

    Science.gov (United States)

    1979-04-01

    FLIGHT TESTS. This section summarizes each of the Crows Landing Flight Tests, 1 to 11 December 1975. 2.4.1 Flight 1. Aquila RPV 001 took off at 09:42...RC pilot in the stabilized RC mode. To facilitate these attempts, an automobile, with its headlights on high beam, was positioned on each side of the...the vans. At approximately 2 to 3 km, the actual automobile headlights would become visible. Then, the operator would attempt to reposition the RPV

  15. Analytical propagation of uncertainties through fault trees

    International Nuclear Information System (INIS)

    Hauptmanns, Ulrich

    2002-01-01

    A method is presented which enables one to propagate uncertainties described by uniform probability density functions through fault trees. The approach is analytical. It is based on calculating the expected value and the variance of the top event probability. These two parameters are then equated with the corresponding ones of a beta-distribution. An example calculation comparing the analytically calculated beta-pdf (probability density function) with the top event pdf obtained using the Monte-Carlo method shows excellent agreement at a much lower expense of computing time
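
    A numerical sketch of the moment-matching idea (a hypothetical two-event AND gate with uniform basic-event pdfs; the ranges are arbitrary and this is not the author's code): the exact mean and variance of the top-event probability are matched to a beta distribution and compared against Monte Carlo percentiles.

```python
import numpy as np
from scipy import stats

# Two independent basic events with uniform pdfs on [lo, hi] (example ranges);
# the top event is A AND B, so its probability is the product pA * pB.
loA, hiA = 1e-3, 5e-3
loB, hiB = 2e-3, 8e-3

def uniform_moments(lo, hi):
    m1 = 0.5 * (lo + hi)                        # E[p]
    m2 = (hi**3 - lo**3) / (3.0 * (hi - lo))    # E[p^2]
    return m1, m2

mA1, mA2 = uniform_moments(loA, hiA)
mB1, mB2 = uniform_moments(loB, hiB)

# Exact first two moments of the top-event probability (independent events)
mean = mA1 * mB1
var = mA2 * mB2 - mean**2

# Match a beta distribution to (mean, variance) by the method of moments
k = mean * (1.0 - mean) / var - 1.0
alpha, beta = mean * k, (1.0 - mean) * k

# Monte Carlo reference
rng = np.random.default_rng(3)
pT = rng.uniform(loA, hiA, 1_000_000) * rng.uniform(loB, hiB, 1_000_000)

for q in (0.05, 0.5, 0.95):
    print(f"q={q:.2f}  beta: {stats.beta.ppf(q, alpha, beta):.3e}"
          f"  MC: {np.quantile(pT, q):.3e}")
```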

  16. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
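
    As a hedged illustration of the elastic-rebound (renewal) conditional probability discussed above, the sketch below uses a lognormal recurrence model parameterised by an along-fault-averaged mean interval and an aperiodicity; all numbers are invented and the distribution choice is a generic stand-in, not the paper's model.

```python
import numpy as np
from scipy import stats

# P(event in [t_open, t_open + dt] | no event since the last rupture),
# for a lognormal renewal model; all parameter values are invented.
mean_ri = 350.0        # along-fault averaged mean recurrence interval (yr)
aperiodicity = 0.5     # coefficient of variation
t_open = 180.0         # years elapsed since the last rupture
dt = 30.0              # forecast window (yr)

# Lognormal parameterised by its mean and coefficient of variation
sigma = np.sqrt(np.log(1.0 + aperiodicity**2))
mu = np.log(mean_ri) - 0.5 * sigma**2
ri = stats.lognorm(s=sigma, scale=np.exp(mu))

cond_prob = (ri.cdf(t_open + dt) - ri.cdf(t_open)) / ri.sf(t_open)
print(f"renewal conditional probability in the next {dt:.0f} yr: {cond_prob:.3f}")

# Time-independent (Poisson) comparison for the same window
print(f"Poisson probability: {1.0 - np.exp(-dt / mean_ri):.3f}")
```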

  17. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed

  18. Comparison of methods for uncertainty analysis of nuclear-power-plant safety-system fault-tree models

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.; Campbell, K.; Whiteman, D.E.; Booker, J.M.

    1983-04-01

    A comparative evaluation is made of several methods for propagating uncertainties in actual coupled nuclear power plant safety system fault tree models. The methods considered are Monte Carlo simulation, the method of moments, a discrete distribution method, and a bootstrap method. The Monte Carlo method is found to be superior. The sensitivity of the system unavailability distribution to the choice of basic event unavailability distribution is also investigated. The system distribution is especially sensitive to the choice of symmetric versus asymmetric basic event distributions. A quick-and-dirty method for estimating percentiles of the system unavailability distribution is developed. The method identifies the appropriate basic event distribution percentiles that should be used in evaluating the Boolean system equivalent expression for a given fault tree model to arrive directly at the 5th, 10th, 50th, 90th, and 95th percentiles of the system unavailability distribution.

  19. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it was recently criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, the DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFTs has attracted rising attention, because it can model the authentic behaviors of systems and avoid the limitations of analytical methods. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rule for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov Chain Monte Carlo (MCMC) technique, the SS method is able to explore the failure region more efficiently. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
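
    A self-contained sketch of subset simulation on a toy limit state (failure when the sum of d standard normal variables exceeds a high threshold, so the exact answer is known and the rare-event estimate can be checked); this is a generic illustration of the SS/MCMC idea, not the paper's DFT implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy limit state: "failure" when the sum of d standard normals exceeds a
# high threshold; the exact probability is sf(threshold / sqrt(d)).
d, threshold = 10, 12.0
g = lambda x: x.sum(axis=-1)
exact = stats.norm.sf(threshold / np.sqrt(d))

def modified_metropolis(seed, level, n_steps):
    """Markov chain whose samples stay in the conditional set {g(x) >= level}."""
    x, chain = seed.copy(), []
    for _ in range(n_steps):
        cand = x + rng.normal(0.0, 1.0, d)
        # component-wise accept/reject against the standard normal density
        keep = rng.uniform(size=d) < stats.norm.pdf(cand) / stats.norm.pdf(x)
        prop = np.where(keep, cand, x)
        if g(prop) >= level:                     # reject moves leaving the set
            x = prop
        chain.append(x.copy())
    return np.array(chain)

def subset_simulation(n=2000, p0=0.1, max_levels=20):
    samples = rng.normal(size=(n, d))
    prob = 1.0
    for _ in range(max_levels):
        gv = g(samples)
        n_seed = int(p0 * n)
        order = np.argsort(gv)[::-1]             # largest g first
        level = gv[order[n_seed - 1]]            # intermediate threshold
        if level >= threshold:                   # final level reached
            return prob * np.mean(gv >= threshold)
        prob *= p0
        seeds = samples[order[:n_seed]]
        samples = np.vstack([modified_metropolis(s, level, n // n_seed)
                             for s in seeds])
    return prob

print(f"subset simulation estimate: {subset_simulation():.2e}")
print(f"exact probability         : {exact:.2e}")
```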

  20. [Psychological distress and post-traumatic stress disorder (PTSD) in young survivors of L'Aquila earthquake].

    Science.gov (United States)

    Pollice, Rocco; Bianchini, Valeria; Roncone, Rita; Casacchia, Massimo

    2012-01-01

    The aim of the study is to evaluate the presence of PTSD diagnosis, psychological distress and post-traumatic symptoms in a population of young earthquake survivors after the L'Aquila earthquake. Between April 2009 and January 2010, 187 young people seeking help consecutively at the Service for Monitoring and early Intervention against psychoLogical and mEntal suffering in young people (SMILE) of L'Aquila University Psychiatric Department underwent a clinical interview with the Semi-Structured Clinical Interview for DSM-IV Axis I and II (SCID-I and SCID-II) and psychometric evaluation with the Impact of Event Scale-Revised (IES-R) and the General Health Questionnaire-12 items (GHQ-12). Of the sample, 44.2% and 37.4% showed high and moderate levels of psychological distress, respectively. 66.7% reported significant post-traumatic symptoms (Post-traumatic Syndrome) with an IES-R>28, while a diagnosis of PTSD was made in 13.8% of the sample. The obsessive-compulsive trait, female sex and a high level of distress (GHQ ≥20) appear to be the main risk factors for the development of PTSD, whereas among those with a post-traumatic syndrome, displacement and social disruption appear to be more associated with post-traumatic aftermaths. Our findings, in line with recent literature, confirm that a natural disaster produces high psychological distress with long-term aftermaths. Early intervention for survivors of collective or individual trauma, regardless of the presence of a PTSD diagnosis, should be a primary goal in a program of Public Health.

  1. Application of seismic isolation for seismic strengthening of buildings damaged by the earthquake of L’Aquila

    International Nuclear Information System (INIS)

    Corsetti, Daniele

    2015-01-01

    The earthquake of 6 April 2009 destroyed the social and economic fabric of the town of L'Aquila. Since then, many buildings have been restored and some designers have taken the opportunity to rebuild the town by applying innovative technologies. In this context, despite the inevitable bureaucratic hurdles and economic constraints, compounded by the death of Mr. Mancinelli (GLIS member) in 2012, several projects were carried out on existing buildings with the idea of applying base seismic isolation. A decade after the first application of this solution on an existing building in Fabriano by Mr. Mancinelli, the experience has proved to be a success, both in terms of achieved results and ease of management. For the L’Aquila earthquake the idea was to replicate the positive experience of the “Marche earthquake”, though the problems and obstacles to be faced were often substantially different. The experience outlined below is a summary of the issues faced and resolved in two projects, taking into account that any solution can be further improved and refined depending on the ability and sensitivity of the designer. We have come to the conclusion that base seismic isolation projects for existing buildings are 'tailor-made' projects, and that the solutions have to be analysed case by case, even if the main concepts are simple and applicable to a wide range of buildings

  2. Near-infrared and optical studies of the highly obscured nova V1831 Aquilae (Nova Aquilae 2015)

    Science.gov (United States)

    Banerjee, D. P. K.; Srivastava, Mudit K.; Ashok, N. M.; Munari, U.; Hambsch, F.-J.; Righetti, G. L.; Maitan, A.

    2018-01-01

    Near-infrared (NIR) and optical photometry and spectroscopy are presented for the nova V1831 Aquilae, covering the early decline and dust-forming phases during the first ∼90 d after its discovery. The nova is highly reddened due to interstellar extinction. Based solely on the nature of the NIR spectrum, we are able to classify the nova to be of the Fe II class. The distance and extinction to the nova are estimated to be 6.1 ± 0.5 kpc and Av ∼ 9.02, respectively. Lower limits of the electron density, emission measure and ionized ejecta mass are made from a Case B analysis of the NIR Brackett lines, while the neutral gas mass is estimated from the optical [O I] lines. We discuss the cause of the rapid strengthening of the He I 1.0830-μm line during the early stages. V1831 Aql formed a modest amount of dust fairly early (∼19.2 d after discovery); the dust shell is not seen to be optically thick. Estimates of the dust temperature, dust mass and grain size are made. Dust formation commences around day 19.2 at a condensation temperature of 1461 ± 15 K, suggestive of a carbon composition, following which the temperature is seen to decrease gradually to 950 K. The dust mass shows a rapid initial increase, which we interpret as being due to an increase in the number of grains, followed by a period of constancy, suggesting the absence of grain destruction processes during this latter time. A discussion of the evolution of these parameters is made, including certain peculiarities seen in the grain radius evolution.

  3. Why local people did not present a problem in the 2016 Kumamoto earthquake, Japan though people accused in the 2009 L'Aquila earthquake?

    Science.gov (United States)

    Sugimoto, M.

    2016-12-01

    Risk communication has been a big issue among seismologists all over the world since the 2009 L'Aquila earthquake. Many people remember the seven researchers, the "L'Aquila 7", who were accused in Italy. Seismologists have stated that it is impossible to predict an earthquake with today's science and technology, and have joined more outreach activities. "In a subsequent inquiry of the handling of the disaster, seven members of the Italian National Commission for the Forecast and Prevention of Major Risks were accused of giving "inexact, incomplete and contradictory" information about the danger of the tremors prior to the main quake. On 22 October 2012, six scientists and one ex-government official were convicted of multiple manslaughter for downplaying the likelihood of a major earthquake six days before it took place. They were each sentenced to six years' imprisonment (Wikipedia)". In the end, six of the scientists were found not guilty. The 2016 Kumamoto earthquake hit Kyushu, Japan in April. The seismological situations of the 2016 Kumamoto earthquake and the 2009 L'Aquila earthquake are very similar. The foreshock was Mj 6.5 and Mw 6.2 on 14 April 2016. The main shock was Mj 7.3 and Mw 7.0. The Japan Meteorological Agency (JMA) misidentified the foreshock as the main shock before the main shock occurred. 41 people died in the main shock in Japan. However, local people did not accuse scientists in Japan. There had been few large earthquakes in Kumamoto for around 100 years. People in Kyushu, Japan, were not accustomed to handling earthquake information. What are the differences between Japan and Italy? We can learn about outreach activities for scientists from this case.

  4. Thermal studies of a superconducting current limiter using Monte-Carlo method

    International Nuclear Information System (INIS)

    Leveque, J.; Rezzoug, A.

    1999-01-01

    Considering the increase of fault current levels in electrical networks, current limiters are becoming very attractive. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without any fault detection or external command, they reduce the stresses on electrical installations upstream of the fault. To avoid the destruction of the superconducting coil, the temperature must not exceed a certain value. Therefore the design of a superconducting coil needs the simultaneous resolution of an electrical equation and a thermal one. This paper deals with the resolution of this coupled problem by the Monte Carlo method. This method allows us to calculate the evolution of the resistance of the coil as well as the limiting current. Experimental results are compared with theoretical ones. (orig.)

  5. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on detailed topographic maps. The results of this paper are summarized as follows: 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  6. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR......) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault...... diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...

  7. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
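
    A hedged sketch of the exhaustive-search idea described above: choose the pair of optional sensors that minimises the summed (trace) estimation error of a linear Gaussian health-parameter estimator. The sensitivity matrix, noise levels and sensor labels below are invented for illustration, not engine data or the paper's metric.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Invented sensitivity matrix H (sensor response to health-parameter shifts)
# and sensor noise variances; the sensor labels are hypothetical.
health_params = 4
sensors = ["T25", "T3", "P3", "T45", "N2", "Wf"]
H = rng.normal(size=(len(sensors), health_params))
R = np.diag(rng.uniform(0.5, 2.0, len(sensors)) ** 2)   # measurement noise cov

def error_metric(idx):
    """Sum of squared estimation errors = trace of the posterior covariance of
    a linear Gaussian estimator with an identity prior covariance."""
    idx = list(idx)
    Hs, Rs = H[idx], R[np.ix_(idx, idx)]
    post_cov = np.linalg.inv(np.eye(health_params) + Hs.T @ np.linalg.inv(Rs) @ Hs)
    return np.trace(post_cov)

baseline = (0, 1)                     # sensors assumed always present
optional = [i for i in range(len(sensors)) if i not in baseline]

# Exhaustive search: which two optional sensors best complement the baseline?
best = min(combinations(optional, 2), key=lambda extra: error_metric(baseline + extra))
print("selected suite:", [sensors[i] for i in baseline + best],
      "| metric:", round(error_metric(baseline + best), 3))
```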

  8. S.S. Annunziata Church (L'Aquila, Italy) unveiled by non- and micro-destructive testing techniques

    Science.gov (United States)

    Sfarra, Stefano; Cheilakou, Eleni; Theodorakeas, Panagiotis; Paoletti, Domenica; Koui, Maria

    2017-03-01

    The present research work explores the potential of an integrated inspection methodology, combining Non-destructive testing and micro-destructive analytical techniques, for both the structural assessment of the S.S. Annunziata Church located in Roio Colle (L'Aquila, Italy) and the characterization of its wall paintings' pigments. The study started by applying passive thermal imaging for the structural monitoring of the church before and after the application of a consolidation treatment, while active thermal imaging was further used for assessing this consolidation procedure. After the earthquake of 2009, which seriously damaged the city of L'Aquila and its surroundings, part of the internal plaster fell off revealing the presence of an ancient mural painting that was subsequently investigated by means of a combined analytical approach involving portable VIS-NIR fiber optics diffuse reflectance spectroscopy (FORS) and laboratory methods, such as environmental scanning electron microscopy (ESEM) coupled with energy dispersive X-ray analysis (EDX), and attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR). The results obtained from the thermographic analysis provided information concerning the two different construction phases of the Church, enabled the assessment of the consolidation treatment, and contributed to the detection of localized problems mainly related to the rising damp phenomenon and to biological attack. In addition, the results obtained from the combined analytical approach allowed the identification of the wall painting pigments (red and yellow ochre, green earth, and smalt) and provided information on the binding media and the painting technique possibly applied by the artist. From the results of the present study, it is possible to conclude that the joint use of the above stated methods into an integrated methodology can produce the complete set of useful information required for the planning of the Church's restoration

  9. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  10. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  11. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential for the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of ''not capable'' reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coast Fault Province and the related association of the ''not capable'' conclusion

  12. Field characterization of elastic properties across a fault zone reactivated by fluid injection

    Science.gov (United States)

    Jeanne, Pierre; Guglielmi, Yves; Rutqvist, Jonny; Nussbaum, Christophe; Birkholzer, Jens

    2017-08-01

    We studied the elastic properties of a fault zone intersecting the Opalinus Clay formation at 300 m depth in the Mont Terri Underground Research Laboratory (Switzerland). Four controlled water injection experiments were performed in borehole straddle intervals set at successive locations across the fault zone. A three-component displacement sensor, which allowed capturing the borehole wall movements during injection, was used to estimate the elastic properties of representative locations across the fault zone, from the host rock to the damage zone to the fault core. Young's moduli were estimated by both an analytical approach and numerical finite difference modeling. Results show a decrease in Young's modulus from the host rock to the damage zone by a factor of 5 and from the damage zone to the fault core by a factor of 2. In the host rock, our results are in reasonable agreement with laboratory data showing a strong elastic anisotropy characterized by the direction of the plane of isotropy parallel to the laminar structure of the shale formation. In the fault zone, strong rotations of the direction of anisotropy can be observed. The plane of isotropy can be oriented either parallel to bedding (when few discontinuities are present), parallel to the direction of the main fracture family intersecting the zone, and possibly oriented parallel or perpendicular to the fractures critically oriented for shear reactivation (when repeated past rupture along this plane has created a zone).

  13. Mont Terri project, cyclic deformations in the Opalinus Clay

    International Nuclear Information System (INIS)

    Moeri, A.; Bossart, P.; Matray, J.M.; Mueller, H.; Frank, E.

    2010-01-01

    Document available in extended abstract form only. Shrinkage structures in the Opalinus Clay, related to seasonal changes in temperature and humidity, are observed on the tunnel walls of the Mont Terri Rock Laboratory. The structures open in winter, when relative humidity in the tunnel decreases to 65%. In summer the cracks close again because of the increase in the clay volume when higher humidity causes rock swelling. Shrinkage structures are monitored in the Mont Terri Rock Laboratory at two different sites within the undisturbed rock matrix and a major fault zone. The relative movements of the rock on both sides of the cracks are monitored in three directions and compared to the fluctuations in ambient relative humidity and temperature. The cyclic deformations (CD) experiment aims to quantify the variations in crack opening in relation to the evolution of climatic conditions and to identify the processes underlying these swell and shrinkage cycles. It consists of the following tasks: - Measuring and quantifying the long-term (now up to three yearly cycles) opening and closing and, if present, the associated shear displacements of selected shrinkage cracks along an undisturbed bedding plane as well as within a major fault zone ('Main Fault'). The measurements are accompanied by temperature and humidity records as well as by a long-term monitoring of tunnel convergence. - Analysing at the micro-scale the surfaces of the crack planes to identify potential relative movements, changes in the rock fabric on the crack surfaces and the formation of fault gouge material as observed in closed cracks. - Processing and analysing measured fluctuations of crack apertures and rock deformation in the time series as well as in the hydro-meteorological variables, in particular relative humidity Hr(t) and air temperature. - Studying and reconstructing the opening cycles on a drill-core sample under well-known laboratory conditions and observing potential propagation of

  14. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use, because there must be some guarantee of dependability. Some fault classifications exist for classical systems and are used to define faults. When dependability is at stake, such fault classifications may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  15. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
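The abstract frames Fault Graph evaluation in terms of path sets, reachability and cut-sets. A brute-force sketch of that idea is shown below for a toy success-path graph, using the networkx library; the component layout (two redundant pumps feeding a single valve) is invented for illustration and is not from the report.

```python
# Minimal sketch: finding minimal cut-sets of a small success-path graph
# by reachability checks, in the spirit of the graph-theoretic evaluation
# described above. The toy system layout is invented for illustration.
from itertools import combinations
import networkx as nx

G = nx.DiGraph()
# source -> pump A or pump B -> valve -> sink (a toy redundant train)
G.add_edges_from([("source", "pumpA"), ("source", "pumpB"),
                  ("pumpA", "valve"), ("pumpB", "valve"),
                  ("valve", "sink")])
components = ["pumpA", "pumpB", "valve"]

def system_fails(failed):
    """System fails if no path remains from source to sink."""
    H = G.copy()
    H.remove_nodes_from(failed)
    return not nx.has_path(H, "source", "sink")

cut_sets = []
for r in range(1, len(components) + 1):
    for combo in combinations(components, r):
        failed = set(combo)
        if system_fails(failed) and not any(cs <= failed for cs in cut_sets):
            cut_sets.append(failed)     # keep only minimal sets

print("minimal cut-sets:", cut_sets)    # {'valve'} and {'pumpA', 'pumpB'}
```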

  16. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; it is therefore critical to understand the origin of clays in fault rocks, and their distribution is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of the authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  17. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.
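As a concrete illustration of the contrast the book draws between Monte Carlo and quasi-Monte Carlo sampling, the sketch below estimates a simple two-dimensional integral with pseudo-random points and with a scrambled Sobol sequence. It assumes SciPy's stats.qmc module is available; the integrand is chosen only because its exact value is known.

```python
# Minimal sketch: plain Monte Carlo vs. quasi-Monte Carlo (scrambled Sobol)
# estimation of  I = integral over [0,1]^2 of exp(x + y) dx dy = (e - 1)^2.
import numpy as np
from scipy.stats import qmc

def f(u):
    return np.exp(u[:, 0] + u[:, 1])

exact = (np.e - 1.0) ** 2
n = 2 ** 12                                  # 4096 points
rng = np.random.default_rng(42)

mc_points = rng.random((n, 2))               # pseudo-random sampling
qmc_points = qmc.Sobol(d=2, scramble=True, seed=42).random_base2(m=12)

for name, pts in [("MC ", mc_points), ("QMC", qmc_points)]:
    est = f(pts).mean()
    print(f"{name} estimate = {est:.5f}, abs. error = {abs(est - exact):.2e}")
```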

  18. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
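The gate calculations themselves are not reproduced in the abstract. The sketch below illustrates the general idea with a standard Monte Carlo simulation of two repairable components under an AND-gate, from which failure probability (unavailability), failure frequency and mean downtime can be read off; the rates and mission time are invented and the code is not the authors' implementation.

```python
# Minimal sketch: standard Monte Carlo estimation of AND-gate unavailability,
# failure frequency and mean downtime for two repairable components.
# Failure/repair rates and mission time are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
T = 8760.0                          # mission time [h]
lam = np.array([1e-3, 5e-4])        # failure rates [1/h]
mu = np.array([0.1, 0.05])          # repair rates [1/h]
n_runs = 2000

def component_events(lam_i, mu_i):
    """Event list (time, new_up_state) of one up/down renewal process."""
    t, up, events = 0.0, True, []
    while t < T:
        t += rng.exponential(1.0 / (lam_i if up else mu_i))
        up = not up
        events.append((t, up))
    return events

downtime, failures = 0.0, 0
for _ in range(n_runs):
    # merge both components' events; the AND-gate output (system down)
    # is true only when both components are down simultaneously
    events = sorted([(e, i) for i in range(2)
                     for e in component_events(lam[i], mu[i])],
                    key=lambda x: x[0][0])
    state, prev_t, sys_down = [True, True], 0.0, False
    for (t, new_state), i in events:
        if t > T:
            t = T
        if sys_down:
            downtime += t - prev_t
        prev_t = t
        state[i] = new_state
        now_down = not (state[0] or state[1])
        if now_down and not sys_down:
            failures += 1
        sys_down = now_down
        if prev_t >= T:
            break

print(f"mean unavailability  ~ {downtime / (n_runs * T):.2e}")
print(f"failure frequency    ~ {failures / (n_runs * T):.2e} per hour")
print(f"mean downtime/event  ~ {downtime / max(failures, 1):.1f} h")
```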

  19. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation.

  20. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault or cause of beam shutdown while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required

  1. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  2. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault tolerant controller, is explored. Based on the analysis of the design of normal and passive fault tolerant controllers, by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation (the time delay due to fault diagnosis) and reduce the adverse effect on system performance. In addition, the FTC scheme has the advantages of a passive fault-tolerant control scheme as well as the traditional active fault-tolerant control scheme's properties. Furthermore, the fault-tolerant control scheme requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.

  3. MCNP load balancing and fault tolerance with PVM

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1995-01-01

    Version 4A of the Monte Carlo neutron, photon, and electron transport code MCNP, developed by LANL (Los Alamos National Laboratory), supports distributed-memory multiprocessing through the software package PVM (Parallel Virtual Machine, version 3.1.4). Using PVM for interprocessor communication, MCNP can simultaneously execute a single problem on a cluster of UNIX-based workstations. This capability provided system efficiencies that exceeded 80% on dedicated workstation clusters; however, on heterogeneous or multiuser systems, the performance was limited by the slowest processor (i.e., equal work was assigned to each processor). The next public release of MCNP will provide multiprocessing enhancements that include load balancing and fault tolerance, which are shown to dramatically increase multiuser system efficiency and reliability.

  4. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
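As a small worked illustration of the quantitative evaluation techniques the handbook covers, the sketch below computes a top-event probability from minimal cut sets, comparing the rare-event approximation with the exact inclusion-exclusion result for a toy tree. The tree and the basic-event probabilities are invented, not taken from the handbook.

```python
# Minimal sketch: quantitative fault tree evaluation from minimal cut sets.
# Toy tree: TOP = (A AND B) OR C, with invented basic-event probabilities.
from itertools import combinations
from math import prod

p = {"A": 0.01, "B": 0.02, "C": 0.001}
cut_sets = [{"A", "B"}, {"C"}]          # minimal cut sets of the toy tree

def top_event_probability(cut_sets, p):
    """Exact top-event probability by inclusion-exclusion over cut sets,
    assuming independent basic events."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            events = set().union(*combo)
            total += (-1) ** (r + 1) * prod(p[e] for e in events)
    return total

rare_event = sum(prod(p[e] for e in cs) for cs in cut_sets)
print(f"rare-event approximation: {rare_event:.6e}")
print(f"inclusion-exclusion:      {top_event_probability(cut_sets, p):.6e}")
```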

  5. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  6. Limits on the potential accuracy of earthquake risk evaluations using the L’Aquila (Italy) earthquake as an example

    Directory of Open Access Journals (Sweden)

    John Douglas

    2015-06-01

    Full Text Available This article is concerned with attempting to ‘predict’ (hindcast) the damage caused by the L’Aquila 2009 earthquake (Mw 6.3) and, more generally, with the question of how close predicted damage can ever be to observations. Damage is hindcast using a well-established empirical-based approach based on vulnerability indices and macroseismic intensities, adjusted for local site effects. Using information that was available before the earthquake and assuming the same event characteristics as the L’Aquila mainshock, the overall damage is reasonably well predicted, but there are considerable differences in the damage pattern. To understand the reasons for these differences, information that was only available after the event was included in the calculation. Despite some improvement in the predicted damage, in particular through modification of the vulnerability indices and of the parameter controlling the width of the damage distribution, these hindcasts do not match all the details of the observations. This is because of local effects: both in terms of the ground shaking, which is only detectable by the installation of a much denser strong-motion network and a detailed microzonation, and in terms of the building vulnerability, which cannot be modeled using a statistical approach but would require detailed analytical modeling for which calibration data are likely to be lacking. Future studies should concentrate on adjusting the generic components of the approach to make them more applicable to their location of interest. To increase the number of observations available to make these adjustments, we encourage the collection of damage states (and not just habitability classes) following earthquakes and also the installation of dense strong-motion networks in built-up areas.
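The hindcast relies on vulnerability indices and macroseismic intensities, but the abstract does not restate the damage function itself. One widely used empirical relation of this family (a Giovinazzi-Lagomarsino-type macroseismic model) links the mean damage grade to intensity and vulnerability index as sketched below; whether the study used exactly this form is not stated, and the vulnerability values are invented placeholders.

```python
# Minimal sketch of an empirical macroseismic damage model of the kind used
# for such hindcasts: mean damage grade (0-5, EMS-98 scale) as a function of
# macroseismic intensity I and vulnerability index V. The formula is one
# widely used Giovinazzi-Lagomarsino-type relation; the vulnerability values
# are invented placeholders, not those of the study.
import numpy as np

def mean_damage_grade(intensity, vulnerability):
    return 2.5 * (1.0 + np.tanh((intensity + 6.25 * vulnerability - 13.1) / 2.3))

for label, v in [("low-vulnerability RC building", 0.40),
                 ("average masonry building", 0.74),
                 ("very vulnerable masonry building", 0.90)]:
    mu_d = mean_damage_grade(intensity=8.5, vulnerability=v)
    print(f"{label:33s} V={v:.2f} -> mean damage grade {mu_d:.1f}")
```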

  7. Predators as prey at a Golden Eagle Aquila chrysaetos eyrie in Mongolia

    Science.gov (United States)

    Ellis, D.H.; Tsengeg, Pu; Whitlock, P.; Ellis, Merlin H.

    2000-01-01

    Although golden eagles (Aquila chrysaetos) have for decades been known to occasionally take large or dangerous quarry, the capturing of such was generally believed to be rare and/or the act of starved birds. This report provides details of an exceptional diet at a golden eagle eyrie in eastern Mongolia with unquantified notes on the occurrence of foxes at other eyries in Mongolia. Most of the prey we recorded were unusual, including 1 raven (Corvus corax), 3 demoiselle cranes (Anthropoides virgo), 1 upland buzzard (Buteo hemilasius), 3 owls, 27 foxes, and 11 Mongolian gazelles. Some numerical comparisons are of interest. Our value for gazelle calves (10 minimum count, 1997) represents 13% of 78 prey items and at least one adult was also present. Our total of only 15 hares (Lepus tolai) and 4 marmots (Marmota sibirica) compared to 27 foxes suggests not so much a preference for foxes, but rather that populations of more normal prey were probably depressed at this site. Unusual prey represented 65% of the diet at this eyrie.

  8. Fault diagnosis and fault-tolerant control based on adaptive control approach

    CERN Document Server

    Shen, Qikun; Shi, Peng

    2017-01-01

    This book provides recent theoretical developments in and practical applications of fault diagnosis and fault tolerant control for complex dynamical systems, including uncertain systems, linear and nonlinear systems. Combining adaptive control technique with other control methodologies, it investigates the problems of fault diagnosis and fault tolerant control for uncertain dynamic systems with or without time delay. As such, the book provides readers a solid understanding of fault diagnosis and fault tolerant control based on adaptive control technology. Given its depth and breadth, it is well suited for undergraduate and graduate courses on linear system theory, nonlinear system theory, fault diagnosis and fault tolerant control techniques. Further, it can be used as a reference source for academic research on fault diagnosis and fault tolerant control, and for postgraduates in the field of control theory and engineering. .

  9. A summary of the active fault investigation in the extension sea area of the Kikugawa fault and the Nishiyama fault, N-S direction faults in southwest Japan

    Science.gov (United States)

    Abe, S.

    2010-12-01

    In this study, we carried out two sets of active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the sea areas off the extensions of the Kikugawa fault and the Nishiyama fault. Based on those results, we aim to clarify the following matters about both active faults: (1) fault continuity between land and sea; (2) the length of the active fault; (3) the division into segments; (4) activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to resolve the detailed structure of the shallow strata. Furthermore, vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of strike-slip faulting, such as flower structures and the dispersion of fault strands, were recognized. In addition, age analysis of the strata showed that Holocene sediments are extremely thin on the continental shelf in this sea area. This investigation confirmed that the Kikugawa fault extends farther offshore than previously reported. In addition, the fault zone appears to widen and disperse offshore. At present, we think the Kikugawa fault can be divided into several segments based on the distribution of these strands. For the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographical research on Ooshima, it is thought that the Nishiyama fault and the Ooshima offshore active fault form a continuous structure. As for the Ooshima offshore active fault, the uplifted side and the strike both change along its length. Therefore, we

  10. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
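The abstract does not spell out the distance calculation. A common double-ended scheme consistent with the setup described (clock-synchronized master and remote units) locates the fault from the difference in arrival times of the fault-generated transient at the two line ends, as sketched below with invented numbers.

```python
# Minimal sketch: double-ended fault location from synchronized arrival
# times at the master and remote units. With line length L, propagation
# speed v and arrival times t_m, t_r, the distance from the master unit is
#   d_m = (L + v * (t_m - t_r)) / 2.
# All numbers are invented for illustration.

def fault_distance_from_master(line_km: float, v_km_per_ms: float,
                               t_master_ms: float, t_remote_ms: float) -> float:
    return 0.5 * (line_km + v_km_per_ms * (t_master_ms - t_remote_ms))

if __name__ == "__main__":
    L = 120.0                  # line length [km]
    v = 294.0                  # surge propagation speed [km/ms], roughly 0.98 c
    t_m, t_r = 0.102, 0.306    # synchronized arrival times [ms]
    d = fault_distance_from_master(L, v, t_m, t_r)
    print(f"fault located ~{d:.1f} km from the master unit")
```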

  11. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  12. New Insights on the Uncertainties in Finite-Fault Earthquake Source Inversion

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-01

    Earthquake source inversion is a non-linear problem that leads to non-unique solutions. The aim of this dissertation is to understand the uncertainty and reliability in earthquake source inversion, as well as to quantify variability in earthquake rupture models. The source inversion is performed using Bayesian inference. This technique augments optimization approaches through its ability to image the entire solution space that is consistent with the data and prior information. In this study, the uncertainty related to the choice of source-time function and crustal structure is investigated. Three predefined analytical source-time functions are analyzed: an isosceles triangle, and Yoffe functions with acceleration times of 0.1 and 0.3 s. The use of the isosceles triangle as source-time function is found to bias the finite-fault source inversion results: it causes the rupture to propagate faster than with the Yoffe functions. Moreover, it generates an artificial linear correlation between parameters that does not exist for the Yoffe source-time functions. The effect of inadequate knowledge of Earth’s crustal structure on earthquake rupture models is subsequently investigated. The results show that one-dimensional structure variability leads to changes in parameter resolution, with a broadening of the posterior PDFs and shifts in the peak location. These changes in the PDFs of kinematic parameters are associated with the blurring effect of using incorrect Earth structure. As an application to a real earthquake, finite-fault source models for the 2009 L’Aquila earthquake are examined using one- and three-dimensional crustal structures. One-dimensional structure is found to degrade the data fitting. However, there is no significant effect on the rupture parameters aside from differences in the spatial slip extension. Stable features are maintained for both

  13. HAVmS: Highly Available Virtual Machine Computer System Fault Tolerant with Automatic Failback and Close to Zero Downtime

    Directory of Open Access Journals (Sweden)

    Memmo Federici

    2014-12-01

    Full Text Available In scientific computing, systems often manage computations that require continuous acquisition of satellite data and the management of large databases, as well as the execution of analysis software and simulation models (e.g., Monte Carlo or molecular dynamics cell simulations) which may require several weeks of continuous run. These systems, consequently, should ensure continuity of operation even in case of serious faults. HAVmS (High Availability Virtual machine System) is a highly available, "fault tolerant" system with zero downtime in case of fault. It is based on the use of Virtual Machines and is implemented by two servers with similar characteristics. HAVmS, thanks to the developed software solutions, is unique in its kind since it fails back automatically once faults have been fixed. The system has been designed to be used both with professional and inexpensive hardware and supports the simultaneous execution of multiple services such as: web, mail, computing and administrative services, uninterrupted computing, and database management. Finally, the system is cost-effective, adopting exclusively open source solutions, easily manageable and suitable for general use.

  14. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  15. Fault-weighted quantification method of fault detection coverage through fault mode and effect analysis in digital I&C systems

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jaehyun; Lee, Seung Jun, E-mail: sjlee420@unist.ac.kr; Jung, Wondea

    2017-05-15

    Highlights: • We developed a fault-weighted quantification method of fault detection coverage. • The method has been applied to a specific digital reactor protection system. • The unavailability of the module differed by a factor of about 20 from the traditional estimate. • Several experimental tests can be effectively prioritized using this method. - Abstract: One of the most outstanding features of a digital I&C system is the use of fault-tolerant techniques. With an awareness of the importance of quantifying the fault detection coverage of fault-tolerant techniques, several studies related to the fault injection method were developed and employed to quantify fault detection coverage. In the fault injection method, each injected fault has a different importance because the frequency of realization of every injected fault is different. However, there have been no previous studies addressing the importance and weighting factor of each injected fault. In this work, a new method for allocating a weighting to each injected fault using failure mode and effect analysis data is proposed. For application, the fault-weighted quantification method has been applied to a specific digital reactor protection system to quantify the fault detection coverage. One of the major findings was that the unavailability of a specific module in digital I&C systems may be estimated to be about 20 times smaller than the real value when a traditional method is used. The other finding was that the importance of the experimental cases can also be classified. Therefore, this method is expected not only to provide an accurate quantification procedure for fault-detection coverage by weighting the injected faults, but also to contribute to effective fault injection experiments by sorting the importance of the failure categories.
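The weighting idea can be summarized in a few lines: each injected fault is weighted by the relative frequency of its failure mode taken from the FMEA, so coverage becomes a weighted rather than a simple fraction of detected faults. The sketch below uses invented failure modes, rates and detection outcomes, not data from the study.

```python
# Minimal sketch: fault detection coverage weighted by FMEA-derived
# failure-mode frequencies, contrasted with the unweighted (traditional)
# estimate. Failure modes, rates and detection outcomes are invented.
injected_faults = [
    # (failure mode, FMEA failure rate [1/h], detected by fault-tolerant technique?)
    ("stuck-at output",    5e-6, True),
    ("memory bit flip",    2e-5, True),
    ("watchdog timeout",   1e-6, True),
    ("comparator drift",   8e-6, False),
    ("bus CRC corruption", 3e-7, False),
]

total_rate = sum(rate for _, rate, _ in injected_faults)
weighted_coverage = sum(rate for _, rate, det in injected_faults if det) / total_rate
unweighted_coverage = sum(det for _, _, det in injected_faults) / len(injected_faults)

print(f"unweighted coverage:     {unweighted_coverage:.2%}")
print(f"fault-weighted coverage: {weighted_coverage:.2%}")
# The residual unavailability scales with (1 - coverage), so the two
# estimates of module unavailability can differ substantially.
```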

  16. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  17. COMUNALIDAD Y BUEN VIVIR COMO ESTRATEGIAS INDÍGENAS FRENTE A LA VIOLENCIA EN MICHOACÁN: LOS CASOS DE CHERÁN Y SAN MIGUEL DE AQUILA

    Directory of Open Access Journals (Sweden)

    Josefina María Cendejas

    2015-06-01

    Full Text Available The escalation of violence experienced since 2006 in the western Mexican state of Michoacán has hit the regions of Tierra Caliente, Sierra Costa and the Meseta Purépecha with particular virulence. This article addresses two cases of indigenous communities, the Purépecha community of Cherán and the Nahua community of San Miguel de Aquila. Their collective responses to the attacks of organized crime are described and compared, in search of the elements that explain the dramatically different outcomes of their respective initiatives of collective response to violence. An approach from political ecology makes it possible to analyse the problems of both cases as a result of the «global assault on the commons», while the notions of comunalidad and buen vivir ('good living') are pertinent for identifying the strengths, weaknesses and possible future consequences of these social movements. (English title: COMMUNALITY AND BUEN VIVIR AS INDIGENOUS STRATEGIES TO FACE VIOLENCE IN MICHOACAN: THE CASES OF CHERÁN AND SAN MIGUEL DE AQUILA.)

  18. Remote Sensing of Urban Microclimate Change in L’Aquila City (Italy) after Post-Earthquake Depopulation in an Open Source GIS Environment

    Directory of Open Access Journals (Sweden)

    Valerio Baiocchi

    2017-02-01

    Full Text Available This work reports a first attempt to use Landsat satellite imagery to identify possible urban microclimate changes in a city center after the seismic event that affected L’Aquila City (Abruzzo Region, Italy) on 6 April 2009. After the main seismic event, the collapse of part of the buildings and the damaging of most of them, with the consequence of an almost total depopulation of the historic city center, may have altered the microclimate. This work develops an inexpensive workflow, using Landsat Enhanced Thematic Mapper Plus (ETM+) scenes, to reconstruct the evolution of urban land use after the catastrophic main seismic event that hit L’Aquila. We hypothesized that, before the event, the temperature was higher in the city center due to the presence of inhabitants (and thus home heating), while the opposite occurred in the surrounding areas, where new settlements of inhabitants grew over a period of a few months. We decided not to use independent meteorological data in order to avoid bias in our investigation; thus, only a minimal dataset of Landsat ETM+ scenes was considered as input data to describe the thermal evolution of the land surface after the earthquake. We were able to use the Landsat archive images to provide indications of thermal change, useful for understanding the urban changes induced by catastrophic events, setting up an easy-to-implement, robust, reproducible, and fast procedure.
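The thermal part of such a workflow typically converts the ETM+ band 6 digital numbers to at-sensor radiance and then to brightness temperature. A minimal sketch of that standard conversion is given below; the K1/K2 values are the commonly cited Landsat 7 ETM+ band 6 constants (verify against the sensor handbook), and the gain/bias numbers are placeholders that should be read from the scene metadata.

```python
# Minimal sketch: Landsat 7 ETM+ band 6 digital numbers -> at-sensor radiance
# -> brightness temperature (K), the usual first step of a land-surface
# thermal analysis. Gain/bias should come from the scene metadata; the values
# below are placeholders. K1, K2 are commonly cited ETM+ band 6 constants.
import numpy as np

K1 = 666.09      # W m-2 sr-1 um-1
K2 = 1282.71     # K
GAIN, BIAS = 0.037205, 3.16   # placeholder radiance rescaling coefficients

def brightness_temperature(dn):
    radiance = GAIN * np.asarray(dn, dtype=float) + BIAS
    return K2 / np.log(K1 / radiance + 1.0)

dn_band6 = np.array([[120, 128], [135, 142]])    # toy 2x2 chip of DNs
print(np.round(brightness_temperature(dn_band6), 1))
```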

  19. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow; understanding it requires the combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  1. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan; Hanafy, Sherif; Guo, Bowen; Kosmicki, Maximillian Sunflower

    2017-01-01

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  2. Architecture of thrust faults with alongstrike variations in fault-plane dip: anatomy of the Lusatian Fault, Bohemian Massif

    Czech Academy of Sciences Publication Activity Database

    Coubal, Miroslav; Adamovič, Jiří; Málek, Jiří; Prouza, V.

    2014-01-01

    Vol. 59, No. 3 (2014), pp. 183-208. ISSN 1802-6222. Institutional support: RVO:67985831; RVO:67985891. Keywords: fault architecture; fault plane geometry; drag structures; thrust fault; sandstone; Lusatian Fault. Subject RIV: DB - Geology; Mineralogy. Impact factor: 1.405, year: 2014

  3. Geomechanical behaviour of Opalinus Clay at multiple scales: results from Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Amann, F.; Wild, K.M.; Loew, S. [Institute of Geology, Engineering Geology, Swiss Federal Institute of Technology, Zurich (Switzerland); Yong, S. [Knight Piesold Ltd, Vancouver (Canada); Thoeny, R. [Grundwasserschutz und Entsorgung, AF-Consult Switzerland AG, Baden (Switzerland); Frank, E. [Sektion Geologie (GEOL), Eidgenössisches Nuklear-Sicherheitsinspektorat (ENSI), Brugg (Switzerland)

    2017-04-15

    The paper presents a summary of our research projects conducted between 2003 and 2015 related to the mechanical behaviour of Opalinus Clay at Mont Terri. The research summarized covers a series of laboratory and field tests that address the brittle failure behaviour of Opalinus Clay, its undrained and effective strength, the dependency of petro-physical and mechanical properties on total suction, hydro-mechanically coupled phenomena and the development of a damage zone around excavations. On the laboratory scale, even simple laboratory tests are difficult to interpret and uncertainties remain regarding the representativeness of the results. We show that suction may develop rapidly after core extraction and substantially modifies the strength, stiffness, and petro-physical properties of Opalinus Clay. Consolidated undrained tests performed on fully saturated specimens revealed a relatively small true cohesion and confirmed the strong hydro-mechanically coupled behaviour of this material. Strong hydro-mechanically coupled processes may explain the stability of cores and tunnel excavations in the short term. Pore-pressure effects may cause effective stress states that favour stability in the short term but may cause longer-term deformations and damage as the pore pressure dissipates. In-situ observations show that macroscopic fracturing is strongly influenced by bedding planes and fault planes. In tunnel sections where opening or shearing along bedding planes or fault planes is kinematically free, the induced fracture type is strongly dependent on the fault plane frequency and orientation. A transition from extensional macroscopic failure to shearing can be observed with increasing fault plane frequency. In zones around the excavation where bedding plane shearing/shearing along tectonic fault planes is kinematically restrained, primary extensional-type fractures develop. In addition, heterogeneities such as single tectonic fault planes or fault zones

  4. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    Science.gov (United States)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators are proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme in each vehicle reduces the communication and computation load by using only the vehicle's own information. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
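A residual generator of the kind described, built for one follower from its own measurements, can be sketched as a standard observer-based filter: the residual is the gap between the measured and reconstructed outputs and is compared against a threshold. The system matrices, observer gain, noise levels, fault size and threshold below are invented for illustration and are not the paper's design.

```python
# Minimal sketch: decentralised observer-based residual generator for sensor
# fault detection in one follower vehicle. Matrices, noise levels, fault and
# threshold are invented for illustration.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discrete-time double integrator
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.8]])               # observer gain (chosen stabilizing)

rng = np.random.default_rng(0)
x = np.zeros((2, 1))
x_hat = np.zeros((2, 1))
threshold = 0.15

for k in range(100):
    u = np.array([[0.1]])
    x = A @ x + B @ u
    y = C @ x + rng.normal(0, 0.01)
    if k >= 60:                             # additive sensor fault appears
        y = y + 0.5
    r = y - C @ x_hat                       # residual
    if abs(r[0, 0]) > threshold:
        print(f"k={k}: sensor fault flagged, |r|={abs(r[0, 0]):.2f}")
        break
    x_hat = A @ x_hat + B @ u + L @ r       # observer update
```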

  5. The environmental project of the enhancement of the fluvial area: L’Aquila and the Aterno River

    Directory of Open Access Journals (Sweden)

    Luciana Mastrolonardo

    2016-06-01

    Full Text Available This contribution is part of an interdisciplinary programme aimed at the enhancement and protection of the river, through the control of flows and the activation of symbioses. Starting from an environmental design approach, urban, landscape, technological and ecological issues are integrated in order to orient the development of the territory towards the protection and enhancement of its resources, and to recover the discontinuity that the river represents today in some urban contexts, giving it greater recognisability and potential. Specifically, the relationship between L’Aquila and the Aterno River is investigated in order to identify, at the local level, feasible strategies for recovering the connections between the fluvial and urban environments, for enhancing the territory and for restoring the functionality of water in its life cycle.

  6. Robust MPC for Actuator-Fault Tolerance Using Set-Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    Full Text Available In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, which is built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). In the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties with relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive, using invariant sets, while fault isolation (FI) is active, by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of inputs.
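The passive detection test amounts to a set-membership check: under healthy operation the residual must stay inside a set computed from the worst case of the bounded uncertainties, and a fault is declared the moment it leaves that set. A minimal interval-set sketch of this test follows; the bounds and signals are invented, and a real set-based FD scheme would use invariant sets (e.g. zonotopes) derived from the system's bounded disturbances rather than fixed intervals.

```python
# Minimal sketch: passive, set-based fault detection as an interval
# membership test on the output residual. Bounds and signals are invented.
import numpy as np

residual_healthy_bounds = np.array([0.12, 0.08])     # per-output interval radii

def inside_healthy_set(residual):
    return bool(np.all(np.abs(residual) <= residual_healthy_bounds))

rng = np.random.default_rng(3)
for k in range(40):
    r = rng.uniform(-0.05, 0.05, size=2)              # healthy: stays in the set
    if k >= 25:
        r = r + np.array([0.0, 0.3])                  # actuator fault shows up in r
    if not inside_healthy_set(r):
        print(f"k={k}: residual left the healthy set -> fault detected")
        break
```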

  7. Determining on-fault magnitude distributions for a connected, multi-fault system

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2017-12-01

    A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip-rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0-7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0-6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults
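To make the formulation concrete, the toy sketch below assigns a handful of synthetic earthquakes to two faults so that each fault's moment-derived slip rate stays within its bounds while minimizing the misfit to the target rates. A brute-force search over the binary assignments stands in for the BIP solver used in the study, and every number (moments, areas, targets) is invented.

```python
# Minimal toy version of the assignment problem: distribute synthetic
# earthquakes between two faults so that the moment-derived slip rate of each
# fault stays within its bounds and is as close as possible to its target.
# Brute force over the binary assignments stands in for the BIP solver used
# in the study; all numbers are invented.
from itertools import product

MU = 3.0e10                      # shear modulus [Pa]
T_YEARS = 10_000                 # synthetic catalog duration [yr]

moments = [3.5e18, 7.1e18, 1.1e19, 2.2e19, 2.8e19]   # seismic moments [N m]
faults = {
    "A": {"area_m2": 2.4e8, "target": 0.5, "min": 0.3, "max": 0.7},   # mm/yr
    "B": {"area_m2": 1.2e8, "target": 1.0, "min": 0.7, "max": 1.3},
}

def slip_rate_mm_yr(total_moment, area_m2):
    return total_moment / (MU * area_m2) / T_YEARS * 1000.0

best, best_misfit = None, float("inf")
for assignment in product(faults, repeat=len(moments)):   # event i -> fault assignment[i]
    rates = {f: slip_rate_mm_yr(sum(m for m, a in zip(moments, assignment) if a == f),
                                faults[f]["area_m2"]) for f in faults}
    if any(not (faults[f]["min"] <= rates[f] <= faults[f]["max"]) for f in faults):
        continue                                           # slip-rate bounds violated
    misfit = sum(abs(rates[f] - faults[f]["target"]) for f in faults)
    if misfit < best_misfit:
        best, best_misfit = assignment, misfit

print("best assignment:", best)
print("misfit [mm/yr]:", round(best_misfit, 3))
```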

  8. [The hazards of reconstruction: anthropology of dwelling and social health risk in the L'Aquila (Central Italy) post-earthquake].

    Science.gov (United States)

    Ciccozzi, Antonello

    2016-01-01

    Even when the purpose is to repair the damage caused by a natural disaster, post-earthquake reconstruction carries the risk of triggering a set of social disasters that may affect the public health sphere. In the case of the L'Aquila earthquake, this risk seems to emerge within urban planning on two levels of dwelling: at the landscape level, where there has been a change in the shape of the city towards a sprawling-sprinkling process; and at the architectural level, in the problematic relationship between the politics and the poetics of cultural heritage protection and the goal of restoration works capable of ensuring citizens' seismic safety.

  9. Fault-Tolerant Approach for Modular Multilevel Converters under Submodule Faults

    DEFF Research Database (Denmark)

    Deng, Fujin; Tian, Yanjun; Zhu, Rongwu

    2016-01-01

    The modular multilevel converter (MMC) is attractive for medium- or high-power applications because of the advantages of its high modularity, availability, and high power quality. Fault-tolerant operation is one of the important issues for the MMC. This paper proposes a fault-tolerant approach for the MMC under submodule (SM) faults. The characteristic of the MMC with arms containing different numbers of healthy SMs under faults is analyzed. Based on this characteristic, the proposed approach can effectively keep the MMC operating normally under SM faults. It can effectively improve the MMC

  10. Damage and recovery of historic buildings: The experience of L’Aquila

    International Nuclear Information System (INIS)

    Modena, Claudio; Valluzzi, Maria Rosa; Da Porto, Franca; Munari, Marco

    2015-01-01

    Problems range from the very definition and choice of the “conventional” safety level, to the methodologies that can be used to perform reliable structural analyses and safety verifications (as modern ones are frequently not suitable for the construction under consideration), and to the selection, design and execution of appropriate materials and intervention techniques aimed at repairing and strengthening the built heritage while preserving its cultural, historic and artistic values. The earthquake that struck the Abruzzo region on 6 April 2009 at 3:32 a.m. had its epicentre in the capital of the region, L’Aquila, and seriously affected a wide area around the city, where many historic towns and villages are found. Lessons learned from this event contributed significantly to the development of specific tools, available to practising engineers and architects, for appropriately tackling the above-mentioned problems: methodologies for intervening on complex and interconnected buildings in historic centres, definition of adequate materials and techniques for intervening on damaged buildings, and codes and codes of practice specific to historic constructions. A short review of all the mentioned aspects is presented in the paper, with specific reference to research activities, practical applications and the recent evolution of codes and guidelines.

  11. Rectifier Fault Diagnosis and Fault Tolerance of a Doubly Fed Brushless Starter Generator

    Directory of Open Access Journals (Sweden)

    Liwei Shi

    2015-01-01

    Full Text Available This paper presents a rectifier fault diagnosis method based on wavelet packet analysis to improve the reliability of the fault-tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and fault-tolerant principle of the highly reliable DFBLSG are given, and the common faults of the rectifier are analyzed. The wavelet packet transform fault detection/identification algorithm is introduced in detail. Fault-tolerant performance and output-voltage experiments were carried out to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, an energy-eigenvalue tolerance was introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault-tolerant four-phase DFBLSG can detect the usual open-circuit faults and operate in fault-tolerant mode if a fault is present. The results indicate that the fault analysis techniques in this paper are accurate and effective.
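The energy-feature extraction step described above can be reproduced in a few lines with a wavelet packet transform. The sketch below uses the PyWavelets package on a synthetic rectifier-output voltage; the sampling rate, wavelet choice and injected ripple are invented stand-ins, not the paper's signals.

```python
# Minimal sketch: 5-level wavelet packet decomposition of a rectifier output
# voltage and the per-band energy eigenvalues used as fault features.
# Signal, sampling rate and wavelet choice are invented for illustration.
import numpy as np
import pywt

fs = 10_000                                   # sampling rate [Hz]
t = np.arange(0, 0.2, 1.0 / fs)
healthy = 28.0 + 0.3 * np.sin(2 * np.pi * 400 * t)
faulty = healthy + 2.0 * np.sin(2 * np.pi * 100 * t)   # open-circuit ripple

def band_energies(signal, level=5, wavelet="db4"):
    signal = np.asarray(signal, dtype=float) - np.mean(signal)   # remove DC
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")            # low to high frequency
    energy = np.array([np.sum(node.data ** 2) for node in nodes])
    return energy / energy.sum()                          # normalized energy eigenvalues

for name, sig in [("healthy", healthy), ("faulty ", faulty)]:
    e = band_energies(sig)
    print(f"{name}: dominant band {np.argmax(e)}, "
          f"top-3 energies {np.sort(e)[-3:][::-1].round(3)}")
```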

  12. Fault displacement along the Naruto-South fault, the Median Tectonic Line active fault system in the eastern part of Shikoku, southwestern Japan

    OpenAIRE

    高田, 圭太; 中田, 高; 後藤, 秀昭; 岡田, 篤正; 原口, 強; 松木, 宏彰

    1998-01-01

    The Naruto-South fault is situated about 1000 m south of the Naruto fault, in the Median Tectonic Line active fault system in the eastern part of Shikoku. We investigated the fault topography and subsurface geology of this fault through interpretation of large-scale aerial photographs, collection of borehole data and a Geo-Slicer survey. The results obtained are as follows: 1) The Naruto-South fault runs across the Yoshino River deltaic plain for at least 2.5 km with a fault scarplet. The Naruto-South fault is o...

  13. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

    Full Text Available The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. Considering the fact that incipient faults lie in the low-frequency domain, fault estimation of such faults is proposed for discrete-time multiagent systems based on a finite-frequency technique. Moreover, using a decomposition design, an equivalent conclusion is given. Simulation results for a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  14. Stafford fault system: 120 million year fault movement history of northern Virginia

    Science.gov (United States)

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  15. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  16. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    Science.gov (United States)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  17. Real-time fault diagnosis and fault-tolerant control

    OpenAIRE

    Gao, Zhiwei; Ding, Steven X.; Cecati, Carlo

    2015-01-01

    This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the mo...

  18. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    Science.gov (United States)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  19. Ileo-ceco-rectal Intussusception Requiring Intestinal Resection and Anastomosis in a Tawny Eagle (Aquila rapax).

    Science.gov (United States)

    Sabater, Mikel; Huynh, Minh; Forbes, Neil

    2015-03-01

    A 23-year-old male tawny eagle (Aquila rapax) was examined because of sudden onset of lethargy, regurgitation, and hematochezia. An intestinal obstruction was suspected based on radiographic findings, and an ileo-ceco-rectal intussusception was confirmed by coelioscopy. A 14.3-cm section of intestine was resected before an intestinal anastomosis was done. Coelomic endoscopic examination confirmed a postsurgical complication of adhesions between the intestinal anastomosis and the dorsal coelomic wall, resulting in a partial luminal stricture and requiring surgical removal of the adhesions. Rectoscopy was useful in diagnosing a mild luminal stricture related to the second surgery. Complete recovery was observed 2 months after surgery. Lack of further complications in the 2 years after surgery demonstrates good tolerance of intestinal resection and anastomosis of a large segment of bowel in an eagle. This report is the first reported case of intussusception in an eagle and emphasizes the potential use of endoscopic examination in the diagnosis as well as in the management of complications.

  20. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, which is an essential part of safety design, operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulation quantitative data into qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design as well as to support recovery and shutdown operation and disaster management.

  1. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
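
    To make the probabilistic-summation idea concrete, the short sketch below combines per-failure-mode detection and response probabilities into coverage and overall effectiveness figures; the failure modes and all numbers are hypothetical, not taken from the paper.

        # Illustrative sketch (not from the paper): combining per-failure-mode fault
        # management metrics into an overall effectiveness estimate. All numbers and
        # field names are hypothetical.
        failure_modes = [
            # probability of the failure occurring, probability it is detected,
            # probability the chosen response preserves the protected system goal
            {"name": "valve_stuck",  "p_fail": 0.010, "p_detect": 0.95, "p_response": 0.90},
            {"name": "sensor_drift", "p_fail": 0.030, "p_detect": 0.80, "p_response": 0.85},
            {"name": "pump_outage",  "p_fail": 0.005, "p_detect": 0.99, "p_response": 0.95},
        ]

        total_risk = sum(m["p_fail"] for m in failure_modes)

        # Detection coverage: probability-weighted fraction of failures that are detected.
        coverage = sum(m["p_fail"] * m["p_detect"] for m in failure_modes) / total_risk

        # Overall fault-management effectiveness: a failure is mitigated only if it is
        # both detected and successfully responded to.
        effectiveness = sum(
            m["p_fail"] * m["p_detect"] * m["p_response"] for m in failure_modes
        ) / total_risk

        print(f"detection coverage ~ {coverage:.3f}")
        print(f"overall FM effectiveness ~ {effectiveness:.3f}")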

  2. Monte Carlo simulations and benchmark studies at CERN's accelerator chain

    CERN Document Server

    AUTHOR|(CDS)2083190; Brugger, Markus

    2016-01-01

    Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures on electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and radiation exposure to personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along the CERN’s accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are hereby presented. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.

  3. Study on seismic hazard assessment of large active fault systems. Evolution of fault systems and associated geomorphic structures: fault model test and field survey

    International Nuclear Information System (INIS)

    Ueta, Keichi; Inoue, Daiei; Miyakoshi, Katsuyoshi; Miyagawa, Kimio; Miura, Daisuke

    2003-01-01

    Sandbox experiments and field surveys were performed to investigate fault system evolution and fault-related deformation of the ground surface, the Quaternary deposits and rocks. The summary of the results is shown below. 1) In the case of strike-slip faulting, the basic fault sequence runs from early en echelon faults and pressure ridges to a linear trough. The fault systems associated with the 2000 western Tottori earthquake show an en echelon pattern that characterizes the early stage of wrench tectonics; therefore, no thoroughgoing surface faulting was found above the rupture as defined by the main shock and aftershocks. 2) Both low-angle and high-angle reverse faults commonly migrate basinward with time. With increasing normal fault displacement in bedrock, a normal fault develops within the range after a reverse fault has formed along the range front. 3) The horizontal distance of the surface rupture from the bedrock fault, normalized by the thickness of the Quaternary deposits, agrees well with that of the model tests. 4) An upward-widening damage zone, where secondary fractures develop, forms on the hanging-wall side of a high-angle reverse fault at the Kamioka mine. (author)

  4. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
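
    A minimal numerical sketch of the eigenvector idea is given below, assuming the 2-D profile form of the gravity gradient tensor (a symmetric 2 x 2 matrix per station); the tensor values are invented for illustration.

        # Minimal sketch of the eigenvector technique (assumed 2-D profile form of
        # the tensor; the numbers below are made up for illustration).
        import numpy as np

        # Symmetric gravity gradient tensor at one station on a profile, in Eotvos:
        # [[G_xx, G_xz],
        #  [G_xz, G_zz]]  with x horizontal along the profile and z positive down.
        T = np.array([[ 30.0, -12.0],
                      [-12.0, -30.0]])

        eigvals, eigvecs = np.linalg.eigh(T)      # eigenvalues in ascending order
        v_max = eigvecs[:, np.argmax(eigvals)]    # points toward the high-density body
        v_min = eigvecs[:, np.argmin(eigvals)]    # points toward the low-density body

        def dip_deg(v):
            # Dip measured from the horizontal (x) axis.
            return np.degrees(np.arctan2(abs(v[1]), abs(v[0])))

        print(f"max-eigenvector dip ~ {dip_deg(v_max):.1f} deg  (normal-fault estimate)")
        print(f"min-eigenvector dip ~ {dip_deg(v_min):.1f} deg  (reverse-fault estimate)")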

  5. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may prove not to be completely appropriate to explain shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to have a statistically valid dataset we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.), in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether and at what stage the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  6. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is also analyzed. Based on this, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful to meet the requirements of practical short-circuit calculations and the construction of a relaying protection system for the power grid with penetration of DFIGs.
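
    The sketch below illustrates the notch-filtering step the abstract refers to: removing the double-grid-frequency ripple from a dq-frame current so that only the positive-sequence (DC) component remains. The signal, sampling rate and filter settings are assumptions, not values from the paper.

        # Sketch only: suppressing the 2*f_grid (100 Hz for a 50 Hz grid) oscillation
        # in a dq-frame rotor-current signal with a notch filter. Signal and
        # parameters are invented for illustration.
        import numpy as np
        from scipy.signal import iirnotch, lfilter

        fs = 10_000.0                 # sampling rate, Hz (assumed)
        f_grid = 50.0                 # grid frequency, Hz
        t = np.arange(0.0, 0.2, 1.0 / fs)

        # dq-frame current during an asymmetrical fault: a DC (positive-sequence)
        # term plus a 2*f_grid oscillation caused by the negative-sequence component.
        i_dq = 1.0 + 0.4 * np.cos(2 * np.pi * 2 * f_grid * t)

        b, a = iirnotch(w0=2 * f_grid, Q=5.0, fs=fs)   # notch centred at 100 Hz
        i_pos = lfilter(b, a, i_dq)                    # approximate positive-sequence part

        print(f"residual 100 Hz ripple ~ {np.ptp(i_pos[len(i_pos)//2:]):.3f} (vs 0.8 before filtering)")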

  7. On the morphology of outbursts of accreting millisecond X-ray pulsar Aquila X-1

    Science.gov (United States)

    Güngör, C.; Ekşi, K. Y.; Göğüş, E.

    2017-10-01

    We present the X-ray light curves of the last two outbursts - 2014 & 2016 - of the well-known accreting millisecond X-ray pulsar (AMXP) Aquila X-1 using Monitor of All-sky X-ray Image (MAXI) observations in the 2-20 keV band. After calibrating the MAXI count rates to the all-sky monitor (ASM) level, we report that the 2016 outburst is the most energetic event of Aql X-1 ever observed from this source. We show that the 2016 outburst is a member of the long-high class according to the classification presented by Güngör et al., with a maximum flux of ~68 cnt/s and a duration of ~60 days, while the previous outburst, in 2014, belongs to the short-low class, with a maximum flux of ~25 cnt/s and a duration of ~30 days. In order to understand the differences between outbursts, we investigate the possible dependence of the peak intensity on the duration of the quiescent episode leading to the outburst and find that outbursts following longer quiescent episodes tend to reach higher peak intensities.

  8. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    Science.gov (United States)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long, trench-parallel, right-lateral strike-slip fault accommodating the lateral slip component of the Philippine Sea plate's oblique subduction beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ, and its geomorphic and geological characteristics change remarkably along strike. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ, and a pop-up structure develops in the eastern part: a "scissoring" of fault properties. We can point out two main factors that form the scissoring fault properties along the MTLFZ. One is the regional stress condition, and the other is a preexisting fault. The direction of σ1 rotates anticlockwise from N170°E [Famin et al., 2014] in the eastern Shikoku to Kinki areas, to N100°E [Research Group for Crustal Stress in Western Japan, 1980] in central Shikoku, and to N85°E [Onishi et al., 2016] in western Shikoku. According to this rotation of principal stress directions, the western and eastern parts of the MTLFZ are in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved through a long active history. The fault style has changed repeatedly, from left-lateral to thrust, normal and right-lateral. Where a preexisting fault is present, the rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45°-dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle dipping fault (thrust fault), although strike-slip faults generally have near-vertical planes [Avouac et al., 2014]. In this presentation, we, firstly, show deep subsurface

  9. An Active Fault-Tolerant Control Method Ofunmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and computation of the weighting matrix. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.
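
    The following sketch shows one common form of weighted pseudo-inverse control allocation of the kind described above; the thruster configuration matrix, the demanded forces and the fault-identification weights are all hypothetical.

        # Hedged sketch of weighted pseudo-inverse control allocation for thruster
        # fault accommodation. The thruster configuration and weights are invented.
        import numpy as np

        # B maps 4 thruster forces to (surge force, sway force, yaw moment).
        B = np.array([[1.0,  1.0, 0.0,  0.0],
                      [0.0,  0.0, 1.0,  1.0],
                      [0.4, -0.4, 0.3, -0.3]])

        tau = np.array([50.0, 10.0, 5.0])        # demanded generalized forces/moment

        # Fault identification output: effectiveness of each thruster (1 = healthy).
        eff = np.array([1.0, 0.3, 1.0, 1.0])     # thruster 2 has lost 70% effectiveness
        W = np.diag(1.0 / np.maximum(eff, 1e-3)) # penalize use of the degraded thruster

        # Weighted pseudo-inverse: u = W^-1 B^T (B W^-1 B^T)^-1 tau
        W_inv = np.linalg.inv(W)
        u = W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, tau)

        print("thruster commands:", np.round(u, 2))
        print("achieved forces  :", np.round(B @ u, 2))   # should reproduce tau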

  10. Failure mode analysis using state variables derived from fault trees with application

    International Nuclear Information System (INIS)

    Bartholomew, R.J.

    1982-01-01

    Fault Tree Analysis (FTA) is used extensively to assess both the qualitative and quantitative reliability of engineered nuclear power systems employing many subsystems and components. FTA is very useful, but the method is limited by its inability to account for failure mode rate-of-change interdependencies (coupling) of statistically independent failure modes. The state variable approach (using FTA-derived failure modes as states) overcomes these difficulties and is applied to the determination of the lifetime distribution function for a heat pipe-thermoelectric nuclear power subsystem. Analyses are made using both Monte Carlo and deterministic methods and compared with a Markov model of the same subsystem
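
    As a toy illustration of the Monte Carlo side of such an analysis (deliberately ignoring the rate-of-change coupling the paper is actually concerned with), the sketch below samples independent component lifetimes derived from a two-mode fault tree and estimates the subsystem survival probability; all rates are invented.

        # Minimal Monte Carlo sketch (not the paper's model): estimating a subsystem
        # lifetime distribution when failure of either of two fault-tree-derived
        # modes ends the subsystem's life. Rates are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Exponential lifetimes for two statistically independent failure modes,
        # e.g. heat-pipe failure and thermoelectric-converter failure.
        t_heat_pipe = rng.exponential(scale=20.0, size=n)   # mean time to failure: 20 years
        t_converter = rng.exponential(scale=35.0, size=n)   # mean time to failure: 35 years

        # OR gate at the top of the fault tree: the first failure mode to occur ends
        # the subsystem's life (series reliability logic, no coupling between modes).
        t_system = np.minimum(t_heat_pipe, t_converter)

        for mission in (5.0, 10.0, 20.0):
            print(f"P(survive {mission:>4.0f} y) ~ {np.mean(t_system > mission):.3f}")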

  11. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These ...

  12. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

  13. Stress and Burnout in Health-Care Workers after the 2009 L’Aquila Earthquake: A Cross-Sectional Observational Study

    Science.gov (United States)

    Mattei, Antonella; Fiasca, Fabiana; Mazzei, Mariachiara; Necozione, Stefano; Bianchini, Valeria

    2017-01-01

    Burnout is a work-related mental health impairment, which is now recognized as a real problem in the context of the helping professions due to its adverse health outcomes on efficiency. To our knowledge, the literature on the postdisaster scenario in Italy is limited by a focus on mental health professionals rather than other health-care workers. Our cross-sectional study aims to evaluate the prevalence of burnout and psychopathological distress in different categories of health-care workers, i.e., physicians, nurses, and health-care assistants, working in different departments of L’Aquila St. Salvatore General Hospital 6 years after the 2009 earthquake in order to prevent and reduce work-related burnout. With a two-stage cluster sampling, a total of 8 departments out of a total of 28 departments were selected and the total sample included 300 health-care workers. All the participants completed the following self-reporting questionnaires: a sociodemographic data form, a Maslach Burnout Inventory and a General Health Questionnaire 12 Items (GHQ-12). Statistically significant differences emerged between the total scores of the GHQ-12: post hoc analysis showed that the total average scores of the GHQ-12 were significantly higher in doctors than in health-care assistants. A high prevalence of burnout among doctors (25.97%) emerged. Using multivariate analysis, we identified a hostile relationship with colleagues, direct exposure to the L’Aquila earthquake and moderate to high levels of distress as being burnout predictors. Investigating the prevalence of burnout and distress in health-care staff in a postdisaster setting and identifying predictors of burnout development such as stress levels, time-management skills and work-life balance will contribute to the development of preventative strategies and better organization at work with a view to improving public health efficacy and reducing public health costs, given that these workers live in the disaster

  14. Stress and Burnout in Health-Care Workers after the 2009 L'Aquila Earthquake: A Cross-Sectional Observational Study.

    Science.gov (United States)

    Mattei, Antonella; Fiasca, Fabiana; Mazzei, Mariachiara; Necozione, Stefano; Bianchini, Valeria

    2017-01-01

    Burnout is a work-related mental health impairment, which is now recognized as a real problem in the context of the helping professions due to its adverse health outcomes on efficiency. To our knowledge, the literature on the postdisaster scenario in Italy is limited by a focus on mental health professionals rather than other health-care workers. Our cross-sectional study aims to evaluate the prevalence of burnout and psychopathological distress in different categories of health-care workers, i.e., physicians, nurses, and health-care assistants, working in different departments of L'Aquila St. Salvatore General Hospital 6 years after the 2009 earthquake in order to prevent and reduce work-related burnout. With a two-stage cluster sampling, a total of 8 departments out of a total of 28 departments were selected and the total sample included 300 health-care workers. All the participants completed the following self-reporting questionnaires: a sociodemographic data form, a Maslach Burnout Inventory and a General Health Questionnaire 12 Items (GHQ-12). Statistically significant differences emerged between the total scores of the GHQ-12: post hoc analysis showed that the total average scores of the GHQ-12 were significantly higher in doctors than in health-care assistants. A high prevalence of burnout among doctors (25.97%) emerged. Using multivariate analysis, we identified a hostile relationship with colleagues, direct exposure to the L'Aquila earthquake and moderate to high levels of distress as being burnout predictors. Investigating the prevalence of burnout and distress in health-care staff in a postdisaster setting and identifying predictors of burnout development such as stress levels, time-management skills and work-life balance will contribute to the development of preventative strategies and better organization at work with a view to improving public health efficacy and reducing public health costs, given that these workers live in the disaster

  15. Stress and Burnout in Health-Care Workers after the 2009 L’Aquila Earthquake: A Cross-Sectional Observational Study

    Directory of Open Access Journals (Sweden)

    Antonella Mattei

    2017-06-01

    Full Text Available Burnout is a work-related mental health impairment, which is now recognized as a real problem in the context of the helping professions due to its adverse health outcomes on efficiency. To our knowledge, the literature on the postdisaster scenario in Italy is limited by a focus on mental health professionals rather than other health-care workers. Our cross-sectional study aims to evaluate the prevalence of burnout and psychopathological distress in different categories of health-care workers, i.e., physicians, nurses, and health-care assistants, working in different departments of L’Aquila St. Salvatore General Hospital 6 years after the 2009 earthquake in order to prevent and reduce work-related burnout. With a two-stage cluster sampling, a total of 8 departments out of a total of 28 departments were selected and the total sample included 300 health-care workers. All the participants completed the following self-reporting questionnaires: a sociodemographic data form, a Maslach Burnout Inventory and a General Health Questionnaire 12 Items (GHQ-12. Statistically significant differences emerged between the total scores of the GHQ-12: post hoc analysis showed that the total average scores of the GHQ-12 were significantly higher in doctors than in health-care assistants. A high prevalence of burnout among doctors (25.97% emerged. Using multivariate analysis, we identified a hostile relationship with colleagues, direct exposure to the L’Aquila earthquake and moderate to high levels of distress as being burnout predictors. Investigating the prevalence of burnout and distress in health-care staff in a postdisaster setting and identifying predictors of burnout development such as stress levels, time-management skills and work-life balance will contribute to the development of preventative strategies and better organization at work with a view to improving public health efficacy and reducing public health costs, given that these workers live in the

  16. Data-driven design of fault diagnosis and fault-tolerant control systems

    CERN Document Server

    Ding, Steven X

    2014-01-01

    Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

  17. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for a multiple-robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.

  18. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moments, and the result of the fault identification is used to find the solution of the control allocation problem. The approach addresses continuous fault identification for the underwater vehicle (UV). Results from experiments are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  19. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2009-12-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moments, and the result of the fault identification is used to find the solution of the control allocation problem. The approach addresses continuous fault identification for the underwater vehicle (UV). Results from experiments are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  20. Golden eagle (Aquila chrysaetos) habitat selection as a function of land use and terrain, San Diego County, California

    Science.gov (United States)

    Tracey, Jeff A.; Madden, Melanie C.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N.

    2018-04-16

    Beginning in 2014, the U.S. Geological Survey, in collaboration with Bloom Biological, Inc., began telemetry research on golden eagles (Aquila chrysaetos) captured in the San Diego, Orange, and western Riverside Counties of southern California. This work was supported by the San Diego Association of Governments, California Department of Fish and Wildlife, the U.S. Fish and Wildlife Service, the Bureau of Land Management, and the U.S. Geological Survey. Since 2014, we have tracked more than 40 eagles, although this report focuses only on San Diego County eagles. An important objective of this research is to develop habitat selection models for golden eagles. Here we provide predictions of population-level habitat selection for golden eagles in San Diego County based on environmental covariates related to land use and terrain.

  1. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  2. A Design Method for Fault Reconfiguration and Fault-Tolerant Control of a Servo Motor

    Directory of Open Access Journals (Sweden)

    Jing He

    2013-01-01

    Full Text Available A design scheme that integrates fault reconfiguration and fault-tolerant position control is proposed for a nonlinear servo system with friction. Analysis of the non-linear friction torque and fault in the system is used to guide design of a sliding mode position controller. A sliding mode observer is designed to achieve fault reconfiguration based on the equivalence principle. Thus, active fault-tolerant position control of the system can be realized. A real-time simulation experiment is performed on a hardware-in-loop simulation platform. The results show that the system reconfigures well for both incipient and abrupt faults. Under the fault-tolerant control mechanism, the output signal for the system position can rapidly track given values without being influenced by faults.

  3. Active Fault Isolation in MIMO Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation. Here, a group of faults is isolated from other possible faults in the system. The group-wise fault isolation is based directly on the input/output signals applied for the fault detection. It is guaranteed that the fault group includes the fault that had occurred in the system. The second step is individual fault isolation in the fault group. Both types of isolation are obtained by applying dedicated...

  4. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang L Y; Wang J G; HU, J

    2017-01-01

    For a fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter, namely the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy is proposed, based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine, to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes by variational mode decomposition. Then, the fault features of each mode are formed into a high-dimensional feature vector set based on permutation entropy. Second, the ELM output function is expressed through the inner product of a Gaussian kernel function to adaptively determine the number of hidden-layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
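
    A small sketch of the permutation-entropy feature mentioned above is given below, applied to one synthetic vibration mode; the embedding dimension, delay and test signals are assumptions for illustration only.

        # Sketch of the permutation-entropy feature, computed for one decomposed mode.
        # Embedding dimension m, delay tau and the signals are hypothetical choices.
        import math
        from itertools import permutations
        import numpy as np

        def permutation_entropy(x, m=3, tau=1):
            # Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random).
            counts = dict.fromkeys(permutations(range(m)), 0)
            for i in range(len(x) - (m - 1) * tau):
                window = x[i:i + m * tau:tau]
                pattern = tuple(int(r) for r in np.argsort(window))  # ordinal pattern
                counts[pattern] += 1
            p = np.array([c for c in counts.values() if c > 0], dtype=float)
            p /= p.sum()
            return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 2000)
        healthy = np.sin(2 * np.pi * 50 * t)                      # regular vibration mode
        faulty = healthy + 0.8 * rng.standard_normal(t.size)      # noisier, impulsive mode

        print(f"permutation entropy, healthy mode ~ {permutation_entropy(healthy):.2f}")
        print(f"permutation entropy, faulty mode  ~ {permutation_entropy(faulty):.2f}")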

  5. Guaranteed Cost Fault-Tolerant Control for Networked Control Systems with Sensor Faults

    Directory of Open Access Journals (Sweden)

    Qixin Zhu

    2015-01-01

    Full Text Available Owing to the large scale and complicated structure of networked control systems, time-varying sensor faults can inevitably occur when the system works in a poor environment. A guaranteed cost fault-tolerant controller for networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control system with sensor faults is modeled as a discrete-time system with uncertain parameters, and the model is related to the bounds of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is verified to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.

  6. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    Science.gov (United States)

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  7. Vipava fault (Slovenia

    Directory of Open Access Journals (Sweden)

    Ladislav Placer

    2008-06-01

    Full Text Available During mapping of the already completed Razdrto – Senožeče section of the motorway and geologic surveying of construction operations on the trunk road between Razdrto and Vipava in the northwestern part of the External Dinarides, on the southwestern slope of Mt. Nanos, called Rebrnice, a steep NW-SE striking fault was recognized, situated between the Predjama and the Raša faults. The fault was named the Vipava fault after the town of Vipava. An analysis of subrecent gravitational slips at Rebrnice indicates that they were probably associated with the activity of this fault. Unpublished results of a repeated levelling line along the regional road passing across the Vipava fault zone suggest its possible present activity. It would be meaningful to verify this by appropriate geodetic measurements, and to study the actual gravitational slips at Rebrnice. The association between tectonics and gravitational slips in this and in similar extreme cases in the areas of the Alps and Dinarides points to the need for comprehensive study of geologic processes.

  8. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun

    1994-02-01

    In this work, the Fuzzy Signed Digraph (FSD) method, which has been researched for the fault diagnosis of industrial process plant systems, is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious fault elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose multiple faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independence from fault patterns, direct use of sensor values, and transparency of the fault propagation to the operators

  9. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1994-01-01

    In this work, the Fuzzy Signed Digraph (FSD) method, which has been researched for the fault diagnosis of industrial process plant systems, is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious fault elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose multiple faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independence from fault patterns, direct use of sensor values, and transparency of the fault propagation to the operators. (Author)

  10. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models of discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  11. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
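
    As a concrete example of the MCMC family reviewed here, the sketch below runs a random-walk Metropolis sampler on a toy unnormalized posterior; the target, proposal scale and chain length are chosen purely for illustration.

        # Minimal random-walk Metropolis sketch: sampling a posterior known only up
        # to a normalizing constant. The target is a standard normal for illustration.
        import numpy as np

        rng = np.random.default_rng(42)

        def log_target(x):
            # Unnormalized log-density of the toy "posterior".
            return -0.5 * x * x

        x, samples = 0.0, []
        for _ in range(50_000):
            proposal = x + rng.normal(scale=1.0)           # symmetric random-walk proposal
            if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
                x = proposal                               # accept the move
            samples.append(x)                              # otherwise keep the current state

        samples = np.array(samples[5_000:])                # discard burn-in
        print(f"estimated posterior mean ~ {samples.mean():.3f}, std ~ {samples.std():.3f}")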

  12. Optimal design of superconducting fault detector for superconductor triggered fault current limiters

    International Nuclear Information System (INIS)

    Yim, S.-W.; Kim, H.-R.; Hyun, O.-B.; Sim, J.; Park, K.B.; Lee, B.W.

    2008-01-01

    We have designed and tested a superconducting fault detector (SFD) for a 22.9 kV superconductor triggered fault current limiter (STFCL) using Au/YBCO thin films. The SFD is to detect a fault and commutate the current from the primary path to the secondary path of the STFCL. First, quench characteristics of the Au/YBCO thin films were investigated for various faults with different durations. The rated voltage of the Au/YBCO thin films was determined from the results, considering the stability of the Au/YBCO elements. Second, the recovery time to superconductivity after quench was measured in each fault case. In addition, the dependence of the recovery characteristics on the number and dimensions of the Au/YBCO elements was investigated. Based on the results, an SFD was designed, fabricated and tested. The SFD successfully detected a fault current and carried out the line commutation. Its recovery time was confirmed to be less than 0.5 s, satisfying the reclosing scheme of the Korea Electric Power Corporation (KEPCO) power grid

  13. Off-fault tip splay networks: a genetic and generic property of faults indicative of their long-term propagation, and a major component of off-fault damage

    Science.gov (United States)

    Perrin, C.; Manighetti, I.; Gaudemer, Y.

    2015-12-01

    Faults grow over the long term by accumulating displacement and lengthening, i.e., propagating laterally. We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimeters to thousands of kilometers and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of parent fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (~30 and ~10% of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). Tip splays more commonly develop on one side only of the parent fault. We infer that tip splay networks are a genetic and generic property of faults indicative of their long-term propagation. We suggest that they represent the most recent damage off the parent fault, formed during the most recent phase of fault lengthening. The scaling relation between parent fault length and width of the tip splay network implies that damage zones enlarge as parent fault length increases. Elastic properties of host rocks might thus be modified at large distances away from a fault, up to 10% of its length. During an earthquake, a significant fraction of coseismic slip and stress is dissipated into the permanent damage zone that surrounds the causative fault. We infer that coseismic dissipation might occur away from a rupture zone as far as a distance of 10% of the length of its causative fault. Coseismic deformations and stress transfers might thus be significant in broad regions about principal rupture traces. This work has been published in Comptes Rendus Geoscience under doi:10.1016/j.crte.2015.05.002 (http://www.sciencedirect.com/science/article/pii/S1631071315000528).

  14. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    Science.gov (United States)

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    Full Text Available Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones which were collected by long-term detailed geodetic surveys using field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. 5 to 7·10^-5 per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account in defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. Instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones show that deformations 'paradoxically' deviate from the inherited movements of past geological periods. In terms of recent geodynamics, the 'paradoxes' of high and low strain velocities are related to a reliable empirical fact: extremely high local velocities of deformation occur in fault zones (about 10^-5 per year and above) against a background of slow regional deformations whose velocities are lower by 2 to 3 orders of magnitude. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the earth tidal deformations per year. A 'fault

  16. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference included reliability, availability, safety and security issues in software and hardware systems as well. The following sessions were organized for the conference, which was completed by an industrial presentation: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  17. Self-sealing Faults in the Opalinus Clay - Evidence from Field Observations, Hydraulic Testing and Pore water Chemistry

    International Nuclear Information System (INIS)

    Gautschi, Andreas

    2001-01-01

    As part of the Swiss programme for high-level radioactive waste, the National Cooperative for the Disposal of Radioactive Waste (Nagra) is currently investigating the Jurassic (Aalenian) Opalinus Clay as a potential host formation (Nagra 1988, 1994). The Opalinus Clay consists of indurated dark grey micaceous claystones (shales) that are subdivided into several litho-stratigraphic units. Some of them contain thin sandy lenses, limestone concretions or siderite nodules. The clay mineral content ranges from 40-80 weight per cent (9-29% illite, 3-10% chlorite, 6-20% kaolinite and 4-22% illite/smectite mixed layers in the ratio 70/30). Other minerals are quartz (15-30%), calcite (6-40%), siderite (2-3%), ankerite (0-3%), feldspars (1-7%), pyrite (1-3%) and organic carbon (<1%). The total water content ranges from 4-19% (Mazurek 1999, Nagra 2001). Faults are mainly represented by fault gouge and fault breccias, partly associated with minor veins of calcite. A key question in safety assessment is whether these faults may represent preferential pathways for radionuclide transport. An extensive hydrogeological data base - part of which derives from strongly tectonized geological environments - suggests that advective transport through faults in the Opalinus Clay at depths > 200 m is insignificant. This conclusion is also supported by independent evidence from clay pore water hydrochemical and isotopic data. The lack of hydrochemical anomalies and the lack of extensive mineral veining suggest that there was also no significant paleo-flow through such faults. These observations can only be reconciled with a strong self-sealing capacity of the faults. It is therefore concluded that reactivated existing faults or newly induced fractures will not act as pathways for significant fluid flow at any time, due to self-healing processes. These conclusions are supported by results from laboratory hydro-frac and flow-through tests, and from field tests in the Mont Terri underground

  18. Active Fault-Tolerant Control for Wind Turbine with Simultaneous Actuator and Sensor Faults

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    Full Text Available The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Within the framework of fault-tolerant control, developing an FTC design method for wind turbines that can tolerate simultaneous pitch actuator and pitch sensor faults with bounded first time derivatives is a challenge. The paper's key contribution is a descriptor sliding mode method: an auxiliary descriptor state vector composed of the system state vector, the actuator fault vector, and the sensor fault vector is introduced to establish a novel augmented descriptor system, from which the system state can be estimated and the faults reconstructed by designing a descriptor sliding mode observer. Stability conditions for the estimation error dynamics are established via LMI optimization to determine the design parameters. With this estimation, a fault-tolerant controller is designed so that the stability of the system can be maintained. The effectiveness of the design strategy is verified by implementing the controller on the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

  19. A new iterative approach for multi-objective fault detection observer design and its application to a hypersonic vehicle

    Science.gov (United States)

    Huang, Di; Duan, Zhisheng

    2018-03-01

This paper addresses multi-objective fault detection observer design problems for a hypersonic vehicle. Because parameter variations, modelling errors and disturbances are inevitable in practical situations, system uncertainty is considered in this study. By fully utilising the orthogonal space information of the output matrix, some new understandings are proposed for the construction of the Lyapunov matrix. Sufficient conditions for the existence of observers that guarantee fault sensitivity and disturbance robustness in the infinite frequency domain are presented. In order to further relax the conservativeness, slack matrices are introduced to fully decouple the observer gain from the Lyapunov matrices in the finite frequency range. Iterative linear matrix inequality algorithms are proposed to obtain the solutions. The simulation examples, which include a Monte Carlo campaign, illustrate that the new methods can effectively reduce the design conservativeness compared with existing methods.

  20. The San Andreas Fault and a Strike-slip Fault on Europa

    Science.gov (United States)

    1998-01-01

The mosaic on the right of the south polar region of Jupiter's moon Europa shows the northern 290 kilometers (180 miles) of a strike-slip fault named Astypalaea Linea. The entire fault is about 810 kilometers (500 miles) long, the size of the California portion of the San Andreas fault on Earth, which runs from the California-Mexico border north to the San Francisco Bay. The left mosaic shows the portion of the San Andreas fault near California's San Francisco Bay that has been scaled to the same size and resolution as the Europa image. Each covers an area approximately 170 by 193 kilometers (105 by 120 miles). The red line marks the once active central crack of the Europan fault (right) and the line of the San Andreas fault (left). A strike-slip fault is one in which two crustal blocks move horizontally past one another, similar to two opposing lanes of traffic. The overall motion along the Europan fault seems to have followed a continuous narrow crack along the entire length of the feature, with a path resembling steps on a staircase crossing zones which have been pulled apart. The images show that about 50 kilometers (30 miles) of displacement have taken place along the fault. Opposite sides of the fault can be reconstructed like a puzzle, matching the shape of the sides as well as older individual cracks and ridges that had been broken by its movements. Bends in the Europan fault have allowed the surface to be pulled apart. This pulling-apart along the fault's bends created openings through which warmer, softer ice from below Europa's brittle ice shell surface, or frozen water from a possible subsurface ocean, could reach the surface. This upwelling of material formed large areas of new ice within the boundaries of the original fault. A similar pulling-apart phenomenon can be observed in the geological trough surrounding California's Salton Sea, and in Death Valley and the Dead Sea. In those cases, the pulled-apart regions can include upwelled materials, but may

  1. ESR dating of the fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2005-01-01

We carried out ESR dating of fault rocks collected near the nuclear reactor. The Upcheon fault zone is exposed close to the Ulzin nuclear reactor. The space-time pattern of fault activity on the Upcheon fault deduced from ESR dating of fault gouge can be summarised as follows: this fault zone was reactivated between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 2 Ma, 1.5 Ma and 1 Ma ago. After those movements, the Upcheon fault was reactivated between Cretaceous sandstone and the fault breccia zone about 800 ka ago. This fault zone was reactivated again between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 650 ka and after 125 ka ago. These data suggest that the long-term (200-500 k.y.) cyclic fault activity of the Upcheon fault zone continued into the Pleistocene. In the Ulzin area, ESR dates from the NW and EW trend faults range from 800 ka to 600 ka; NE and EW trend faults were reactivated between about 200 ka and 300 ka ago. On the other hand, ESR dates of the NS trend fault are about 400 ka and 50 ka. Results of this research suggest that fault activity near the Ulzin nuclear reactor continued into the Pleistocene. One ESR date near the Youngkwang nuclear reactor is 200 ka

  2. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

Faults in the Earth's crust occur over a large range of scales, from the microscale through the mesoscopic to large basin-scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone but is combined with continuous near-field deformation in the wall rock, a phenomenon that is generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as for prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. On an outcrop-scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that for this normal fault a listric shape can be excluded, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large-scale normal fault

  3. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  4. Multi-Directional Seismic Assessment of Historical Masonry Buildings by Means of Macro-Element Modelling: Application to a Building Damaged during the L’Aquila Earthquake (Italy)

    Directory of Open Access Journals (Sweden)

    Francesco Cannizzaro

    2017-11-01

Full Text Available The experience of the recent earthquakes in Italy has had a shocking impact in terms of loss of human life and damage to buildings. In particular, when it comes to ancient constructions, their cultural and historical value overlaps with the economic and social one. Among historical structures, churches have been the object of several studies which identified the main characteristics of the seismic response and the most probable collapse mechanisms. More rarely, academic studies have been devoted to ancient palaces, since they often exhibit an irregular and complicated arrangement of the resisting elements, which makes their response very difficult to predict. In this paper, a palace located in L’Aquila, severely damaged by the seismic event of 2009, is the object of an accurate study. A historical reconstruction of the past strengthening interventions as well as a detailed geometric survey is performed to implement detailed numerical models of the structure. Both global and local models are considered, and static nonlinear analyses are performed considering the influence of the input direction on the seismic vulnerability of the building. The damage pattern predicted by the numerical models is compared with that observed after the earthquake. The seismic vulnerability assessments are performed in terms of ultimate peak ground acceleration (PGA) using capacity curves and the Italian code spectrum. The results are compared in terms of ultimate ductility demand evaluated by performing nonlinear dynamic analyses considering the actual recorded seismic input of the L’Aquila earthquake.

  5. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    Science.gov (United States)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

Power transformers are important equipment in power plants and substations and constitute a key hub linking power distribution and transmission in the power system. Their performance directly affects the reliability and stability of the power system. This paper first classifies power transformer faults into five categories according to fault type and, along the time dimension, into three stages. Routine dissolved gas analysis (DGA) and infrared diagnostic criteria are used to establish the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a dendritic fault tree for the power transformer is constructed by stepwise refinement from the general to the specific
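To make the stepwise-refinement idea concrete, the sketch below evaluates a tiny two-level fault tree for a transformer top event; the gate structure and basic-event names are illustrative assumptions, not the tree constructed in the paper.

```python
# Minimal fault-tree evaluation sketch (illustrative only; gates and basic
# events are assumptions, not taken from the paper).

def or_gate(*events):
    """OR gate: the output event occurs if any input event is present."""
    return any(events)

def and_gate(*events):
    """AND gate: the output event occurs only if all input events are present."""
    return all(events)

# Hypothetical basic events, e.g. flags derived from DGA ratios or infrared readings.
basic_events = {
    "winding_overheating": True,
    "partial_discharge": False,
    "oil_insulation_degraded": True,
    "cooling_fan_failure": False,
}

# Two-level dendritic structure: a thermal fault needs both overheating and
# degraded oil; the top event is any intermediate fault or a fan failure.
thermal_fault = and_gate(basic_events["winding_overheating"],
                         basic_events["oil_insulation_degraded"])
discharge_fault = or_gate(basic_events["partial_discharge"])
transformer_fault = or_gate(thermal_fault, discharge_fault,
                            basic_events["cooling_fan_failure"])

print("Top event (transformer fault):", transformer_fault)
```

In practice each basic event would itself be refined further down the tree, which is what the stepwise refinement from the general to the specific refers to.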

  6. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  7. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    Science.gov (United States)

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

Reaction wheels are among the most critical components of the satellite attitude control system; therefore, correct diagnosis of their faults is essential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this approach fails to work correctly when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations; in this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be effectively used for fault diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centres from the data of known faults; in this way a discrete number of fault classes is determined at the start. Similarity parameters are derived and determined for each fault data point. Thereafter, depending on the similarity threshold, each data point is issued a class label. The high-similarity points fall into one of the 'known-fault' classes while the low-similarity points are labelled as 'unknown faults'. Simulation results show that, compared with a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose the faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
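The class-labelling step described above can be sketched as follows; the kernel choice, threshold value and class centres are assumptions made for illustration, not the paper's tuned parameters.

```python
import numpy as np

# Sketch of the similarity/thresholding step (illustrative only). In practice the
# known-fault class centres would be computed from historical fault data.

def gaussian_kernel(x, c, sigma=1.0):
    """Kernel-induced similarity between a data point and a class centre."""
    return np.exp(-np.linalg.norm(x - c) ** 2 / (2.0 * sigma ** 2))

known_centres = {                      # hypothetical reaction-wheel fault classes
    "bearing_friction": np.array([1.2, 0.3]),
    "current_loss":     np.array([0.1, 1.5]),
}
threshold = 0.6                        # assumed similarity threshold

def classify(sample):
    sims = {name: gaussian_kernel(sample, c) for name, c in known_centres.items()}
    best = max(sims, key=sims.get)
    # High similarity -> known-fault label; low similarity -> unknown fault.
    return best if sims[best] >= threshold else "unknown_fault"

print(classify(np.array([1.1, 0.4])))  # close to a known centre -> known fault
print(classify(np.array([5.0, 5.0])))  # far from all centres -> "unknown_fault"
```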

  8. The 6 April 2009 earthquake at L'Aquila: a preliminary analysis of magnetic field measurements

    Directory of Open Access Journals (Sweden)

    U. Villante

    2010-02-01

Full Text Available Several investigations have reported the possible identification of anomalous geomagnetic field signals prior to earthquake occurrence. In the ULF frequency range, candidates for precursory signatures have been proposed in the increase in the noise background and in the polarization parameter (i.e. the ratio between the amplitude/power of the vertical component and that of the horizontal component), in the changing characteristics of the slope of the power spectrum and fractal dimension, and in the possible occurrence of short-duration pulses. We conducted, with conventional techniques of data processing, a preliminary analysis of the magnetic field observations performed at L'Aquila during the three months preceding the 6 April 2009 earthquake, focusing attention on the possible occurrence of features similar to those identified in previous events. Within the limits of this analysis, we do not find compelling evidence for any of the features which have been proposed as earthquake precursors: indeed, most aspects of our observations (which, in some cases, appear consistent with previous findings) might be interpreted in terms of the general magnetospheric conditions and/or of different sources.

  9. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    Science.gov (United States)

    Fry, C.; Dix, J.

    2017-12-01

Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high-resolution 1 m bathymetric imaging in combination with high-resolution seismic imaging. Through use of these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults established from both bathymetric and seismic imaging range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it lies approximately 1 km below the seafloor. These have typical PFS attributes, with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within the typical PFS structure but are

  10. Influence of fault steps on rupture termination of strike-slip earthquake faults

    Science.gov (United States)

    Li, Zhengfang; Zhou, Bengang

    2018-03-01

A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps with the seismic rupture and a poor correlation between the step number and seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. The dilational fault step is easier to rupture through than the compression fault step. The smallest widths of the fault step for the rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.
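Used as a screening rule, the reported thresholds could be encoded as in the sketch below; the interval values follow the abstract, but the half-open interval boundaries and the function itself are assumptions made only for illustration.

```python
# Illustrative lookup of the smallest fault-step width Lt (km) reported above as
# able to arrest rupture, keyed by surface-wave magnitude Ms (boundaries assumed).

def arresting_step_width(ms):
    """Return the smallest step width (km) expected to stop rupture for a given
    Ms, or None if the magnitude lies outside the studied range."""
    if 6.5 <= ms < 7.0:
        return 3.0
    if 7.0 <= ms < 7.5:
        return 4.0
    if 7.5 <= ms < 8.0:
        return 6.0
    if 8.0 <= ms <= 8.5:
        return 8.0
    return None

print(arresting_step_width(7.2))  # -> 4.0 km
```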

  11. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  12. Study of fault diagnosis software design for complex system based on fault tree

    International Nuclear Information System (INIS)

    Yuan Run; Li Yazhou; Wang Jianye; Hu Liqin; Wang Jiaqun; Wu Yican

    2012-01-01

Complex systems always have high-level reliability and safety requirements, and so does their diagnosis work. As a great number of fault tree models have been acquired during the design and operation phases, a fault diagnosis method which combines fault tree analysis with knowledge-based technology has been proposed. A prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)

  13. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    Science.gov (United States)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

Authigenic illite can form synkinematically during slip events along brittle faults. In addition, it can also crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, is indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest grain-size fractions in the Middle Jurassic. At site one, high-resolution thin-section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro-domains were physically separated by

  14. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    Science.gov (United States)

    Ye, Jiyang; Liu, Mian

    2017-08-01

In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults. The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  15. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2003-02-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene
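The dating relation described above can be written compactly; this is the standard form of the ESR age equation rather than anything specific to this report.

```latex
% ESR age equation (standard form)
t_{\mathrm{ESR}} = \frac{D_E}{\dot{D}}
```

Here $D_E$ is the equivalent dose (Gy) required to regenerate the measured signal intensity and $\dot{D}$ is the dose rate (Gy/ka) delivered by the surrounding rock; grains below the plateau grain size are assumed to have been fully zeroed at the time of faulting.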

  16. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2003-02-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.

  17. Fault-tolerant architecture: Evaluation methodology

    International Nuclear Information System (INIS)

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems
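As a minimal illustration of the triple-modular-redundant idea evaluated in the report, the voter below masks a single faulty channel by 2-out-of-3 voting; it is a generic sketch, not one of the four architectures studied, and real systems vote on typed, time-aligned signals.

```python
from collections import Counter

# Generic 2-out-of-3 majority voter sketch for a TMR controller channel.

def tmr_vote(a, b, c):
    """Return the majority value of three redundant channel outputs,
    or None if all three disagree (an uncovered fault)."""
    counts = Counter([a, b, c])
    value, votes = counts.most_common(1)[0]
    return value if votes >= 2 else None

print(tmr_vote(1, 1, 1))   # healthy: 1
print(tmr_vote(1, 0, 1))   # one faulty channel masked: 1
print(tmr_vote(1, 0, 2))   # all disagree: None -> flag for diagnostics
```

Automatic recovery and self-test diagnostics, discussed in the report, then determine whether the outvoted channel can be restored or must be taken offline.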

  18. HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    Science.gov (United States)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) for the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy; whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation to more orthodox viewpoints like the criticality concept. The discussion is flanked by numerical simulations of a

  19. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  20. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    Science.gov (United States)

    Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  1. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    Science.gov (United States)

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  2. Misbehaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    Science.gov (United States)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly variable - a complicating factor for comparisons

  3. Fault strength in Marmara region inferred from the geometry of the principle stress axes and fault orientations: A case study for the Prince's Islands fault segment

    Science.gov (United States)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

The general consensus based on historical earthquake data is that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment transfers the motion between the two E-W trending branches of the North Anatolian fault zone; one extending from the Gulf of Izmit towards the Çınarcık basin and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter: a maximum compressive stress that is either fault-normal or fault-parallel might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In the framework of this presentation we analyse seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
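The relation invoked above follows from resolving the principal stresses onto the fault plane; these are the standard 2-D expressions for a plane whose normal makes an angle θ with σ₁, not values taken from the Marmara data.

```latex
% Resolved shear and normal stress on a plane whose normal makes angle \theta with \sigma_1
\tau = \tfrac{1}{2}(\sigma_1 - \sigma_3)\sin 2\theta, \qquad
\sigma_n = \tfrac{1}{2}(\sigma_1 + \sigma_3) + \tfrac{1}{2}(\sigma_1 - \sigma_3)\cos 2\theta,
\qquad \mu_{\mathrm{app}} = \frac{\tau}{\sigma_n}
```

When σ₁ is either parallel or perpendicular to the fault (θ → 90° or 0°), sin 2θ → 0 and the resolved shear stress vanishes, so a fault that continues to slip in such a geometry must have a very low apparent friction, consistent with the weak-fault interpretation.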

  4. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

In this paper, a method for fault location in the power distribution network is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the recorded fault signals in the relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly presented so that the neural network does not face any issues for identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters
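A hedged sketch of the feature-extraction stage described above; the particular statistics, window length and synthetic waveforms are assumptions for illustration, not the paper's exact feature set.

```python
import numpy as np

# Illustrative extraction of statistical features from three-phase fault currents
# to build one training vector for a neural network (feature choice is assumed).

def feature_vector(ia, ib, ic):
    """Compute simple per-phase statistics from current waveforms (numpy arrays)."""
    feats = []
    for phase in (ia, ib, ic):
        feats.extend([
            np.mean(np.abs(phase)),      # average magnitude
            np.std(phase),               # spread
            np.max(np.abs(phase)),       # peak
            np.sqrt(np.mean(phase**2)),  # RMS
        ])
    return np.asarray(feats)

# Synthetic waveforms standing in for relay-recorded fault signals.
t = np.linspace(0.0, 0.1, 1000)
ia = 5.0 * np.sin(2 * np.pi * 50 * t)        # faulted phase with elevated current
ib = np.sin(2 * np.pi * 50 * t - 2.1)
ic = np.sin(2 * np.pi * 50 * t + 2.1)
print(feature_vector(ia, ib, ic).shape)       # (12,) -> one training sample
```

Feature vectors of this kind, computed over many simulated fault angles, resistances and locations, would then form the training set for the network.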

  5. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    Science.gov (United States)

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
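The Coulomb stress models mentioned above rest on the standard failure-stress change; the notation below is the usual one and is not reproduced from the paper.

```latex
% Standard Coulomb failure stress change on a receiver fault
\Delta\mathrm{CFS} = \Delta\tau + \mu' \, \Delta\sigma_n
```

Here Δτ is the change in shear stress resolved in the receiver fault's slip direction, Δσ_n the change in normal stress (positive for unclamping), and μ′ the effective friction coefficient incorporating pore-pressure effects; a positive ΔCFS from the combined lake loading and stepover-fault slip moves the southern San Andreas segment closer to failure.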

  6. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

Full Text Available A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is then developed based on it. The proposed method further decomposes both the KPCA principal space and the residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find fault-relevant principal directions and principal components of the systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process. The simulation results show the effectiveness of the proposed method.
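The monitoring idea can be sketched with a generic kernel PCA and a T²-style statistic, as below; this is a plain KPCA baseline under assumed data, and the paper's fault-relevant subspace separation is not reproduced here.

```python
import numpy as np

# Generic KPCA monitoring sketch (illustrative; normal-operation data are simulated).

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))            # assumed normal operating data

K = rbf_kernel(X_train, X_train)
n = K.shape[0]
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one     # centre the kernel matrix

eigval, eigvec = np.linalg.eigh(Kc)
idx = np.argsort(eigval)[::-1][:3]             # retain 3 principal directions
lam, A = eigval[idx], eigvec[:, idx]
A = A / np.sqrt(lam)                           # normalise so scores = Kc_new @ A

def t2_statistic(x_new):
    k = rbf_kernel(x_new[None, :], X_train)
    ones_t = np.ones((1, n)) / n
    kc = k - ones_t @ K - k @ one + ones_t @ K @ one
    t = kc @ A                                  # projection onto the principal space
    return float((t ** 2 / lam).sum())

print(t2_statistic(rng.normal(size=4)))         # in-control sample: small T2
print(t2_statistic(rng.normal(size=4) + 5.0))   # shifted sample: much larger T2
```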

  7. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  8. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

  9. Active faults, paleoseismology, and historical fault rupture in northern Wairarapa, North Island, New Zealand

    International Nuclear Information System (INIS)

    Schermer, E.R.; Van Dissen, R.; Berryman, K.R.; Kelsey, H.M.; Cashman, S.M.

    2004-01-01

Active faulting in the upper plate of the Hikurangi subduction zone, North Island, New Zealand, represents a significant seismic hazard that is not yet well understood. In northern Wairarapa, the geometry and kinematics of active faults, and the Quaternary and historical surface-rupture record, have not previously been studied in detail. We present the results of mapping and paleoseismicity studies on faults in the northern Wairarapa region to document the characteristics of active faults and the timing of earthquakes. We focus on evidence for surface rupture in the 1855 Wairarapa (Mw 8.2) and 1934 Pahiatua (Mw 7.4) earthquakes, two of New Zealand's largest historical earthquakes. The Dreyers Rock, Alfredton, Saunders Road, Waitawhiti, and Waipukaka faults form a northeast-trending, east-stepping array of faults. Detailed mapping of offset geomorphic features shows the rupture lengths vary from c. 7 to 20 km and single-event displacements range from 3 to 7 m, suggesting the faults are capable of generating M > 7 earthquakes. Trenching results show that two earthquakes have occurred on the Alfredton Fault since c. 2900 cal. BP. The most recent event probably occurred during the 1855 Wairarapa earthquake as slip propagated northward from the Wairarapa Fault and across a 6 km wide step. Waipukaka Fault trenches show that at least three surface-rupturing earthquakes have occurred since 8290-7880 cal. BP. Analysis of stratigraphic and historical evidence suggests the most recent rupture occurred during the 1934 Pahiatua earthquake. Estimates of slip rates provided by these data suggest that a larger component of strike slip than previously suspected is occurring within the upper plate and that the faults accommodate a significant proportion of the dextral component of oblique subduction. Assessment of seismic hazard is difficult because the known fault scarp lengths appear too short to have accommodated the estimated single-event displacements. Faults in the region are

  10. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    Science.gov (United States)

    Ding, Bo; Fang, Huajing

    2017-05-01

    This paper is concerned with the fault prediction for the nonlinear stochastic system with incipient faults. Based on the particle filter and the reasonable assumption about the incipient faults, the modified fault estimation algorithm is proposed, and the system state is estimated simultaneously. According to the modified fault estimation, an intuitive fault detection strategy is introduced. Once each of the incipient fault is detected, the parameters of which are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by the simulations of the Three-tank system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
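The prediction step, fitting the estimated history of an incipient fault and extrapolating it forward, can be sketched as follows; the exponential fault profile, noise level and time horizon are assumptions for illustration, not the paper's model of the Three-tank system.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of predicting an incipient fault's future magnitude by nonlinear
# regression on its estimated history (the fault profile form is assumed).

def incipient_fault(t, a, b):
    """Assumed slowly growing fault profile: f(t) = a * (exp(b*t) - 1)."""
    return a * (np.exp(b * t) - 1.0)

t_hist = np.linspace(0.0, 50.0, 100)                  # past time steps
f_hist = incipient_fault(t_hist, 0.02, 0.05)          # stand-in for the filter's fault estimate
f_hist += np.random.default_rng(1).normal(0.0, 0.01, f_hist.size)  # estimation noise

params, _ = curve_fit(incipient_fault, t_hist, f_hist, p0=[0.01, 0.01])

t_future = 80.0
print("predicted fault magnitude at t = 80:", incipient_fault(t_future, *params))
```

In the paper's scheme the fault history itself would come from the particle-filter-based estimator rather than being simulated directly as it is here.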

  11. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    Science.gov (United States)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  12. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  13. Absolute age determination of quaternary faults

    International Nuclear Information System (INIS)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik

    2000-03-01

To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that fault movement in the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault is around 70 Ma in the Yeongdeog area. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results

  14. Absolute age determination of quaternary faults

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik [Korea Basic Science Institute, Seoul (Korea, Republic of)] (and others)

    2000-03-15

To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that fault movement in the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault is around 70 Ma in the Yeongdeog area. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.

  15. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2002-03-01

Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene

  16. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2002-03-15

Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.

  17. Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone

    Science.gov (United States)

    Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.

    2011-01-01

    Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm year^-1 (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm year^-1 along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km from the northern Mono Basin (1.3 +0.6/-0.3 mm year^-1 at the Lundy Canyon site) to the Bridgeport Basin (0.3 ± 0.1 mm year^-1). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection
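
    The slip-rate arithmetic behind figures like those quoted above is simply marker offset divided by marker age. The sketch below uses purely hypothetical offset and 10Be exposure-age values, not the study's data, to show how a mean rate and a crude uncertainty range follow from one displaced geomorphic marker.

      # Minimal sketch of the slip-rate arithmetic: rate = offset / age for a displaced
      # geomorphic marker of known exposure age. Offset and age values are hypothetical.

      def slip_rate_mm_per_yr(offset_m, age_ka):
          return offset_m * 1000.0 / (age_ka * 1000.0)  # metres -> mm, ka -> yr

      offset_m, offset_err_m = 6.0, 1.0   # vertical separation of a moraine crest (m)
      age_ka, age_err_ka = 20.0, 2.0      # 10Be surface exposure age (ka)

      best = slip_rate_mm_per_yr(offset_m, age_ka)
      high = slip_rate_mm_per_yr(offset_m + offset_err_m, age_ka - age_err_ka)
      low = slip_rate_mm_per_yr(offset_m - offset_err_m, age_ka + age_err_ka)

      print(f"slip rate ~ {best:.2f} mm/yr (range {low:.2f}-{high:.2f} mm/yr)")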

  18. Geophysical Imaging of Fault Structures Over the Qadimah Fault, Saudi Arabia

    KAUST Repository

    AlTawash, Feras

    2011-06-01

    The purpose of this study is to use geophysical imaging methods to identify the conjectured location of the ‘Qadimah fault’ near the ‘King Abdullah Economic City’, Saudi Arabia. Towards this goal, 2-D resistivity and seismic surveys were conducted at two different locations, site 1 and site 2, along the proposed trace of the ‘Qadimah fault’. Three processing techniques were used to validate the fault (i) 2-D travel time tomography, (ii) resistivity imaging, and (iii) reflection trim stacking. The refraction traveltime tomograms at site 1 and site 2 both show low-velocity zones (LVZ’s) next to the conjectured fault trace. These LVZ’s are interpreted as colluvial wedges that are often observed on the downthrown side of normal faults. The resistivity tomograms are consistent with this interpretation in that there is a significant change in resistivity values along the conjectured fault trace. Processing the reflection data did not clearly reveal the existence of a fault, and is partly due to the sub-optimal design of the reflection experiment. Overall, the results of this study strongly, but not definitively, suggest the existence of the Qadimah fault in the ‘King Abdullah Economic City’ region of Saudi Arabia.

  19. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    Science.gov (United States)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
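
    As a rough illustration of what "Monte Carlo generation of simulation files" can look like, the sketch below draws device parameters from distributions meant to mimic manufacturing variation and writes one input file per realization. The file format and parameter names are invented for this example and are not the actual TOMAS/AQUINAS interface.

      # Hypothetical sketch of Monte Carlo generation of simulation input files: device
      # parameters are perturbed around nominal values and one JSON case file is written
      # per realization. Names and format are assumptions, not the real tool interface.
      import json
      import random

      random.seed(0)

      NOMINAL = {"dot_spacing_nm": 20.0, "cell_spacing_nm": 60.0, "defect_rate": 0.0}

      def perturbed_case(sigma_nm=0.5, defect_prob=0.01):
          case = dict(NOMINAL)
          case["dot_spacing_nm"] += random.gauss(0.0, sigma_nm)        # fabrication jitter
          case["cell_spacing_nm"] += random.gauss(0.0, 2.0 * sigma_nm)
          case["defect_rate"] = defect_prob                            # missing/stuck-dot rate
          return case

      for i in range(100):  # 100 Monte Carlo realizations
          with open(f"qca_case_{i:03d}.json", "w") as fh:
              json.dump(perturbed_case(), fh, indent=2)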

  20. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures are controlled by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  1. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent with the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which...

  2. Biotelemetry data for golden eagles (Aquila chrysaetos) captured in coastal southern California, February 2016–February 2017

    Science.gov (United States)

    Tracey, Jeff A.; Madden, Melanie C.; Sebes, Jeremy B.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N.

    2017-05-12

    Because of a lack of clarity about the status of golden eagles (Aquila chrysaetos) in coastal southern California, the USGS, in collaboration with local, State, and other Federal agencies, began a multi-year survey and tracking program of golden eagles to address questions regarding habitat use, movement behavior, nest occupancy, genetic population structure, and human impacts on eagles. Golden eagle trapping and tracking efforts began in September 2014. During trapping efforts from September 29, 2014, to February 23, 2016, 27 golden eagles were captured. During trapping efforts from February 24, 2016, to February 23, 2017, an additional 10 golden eagles (7 females and 3 males) were captured in San Diego, Orange, and western Riverside Counties. Biotelemetry data for 26 of the 37 golden eagles that were transmitting data from February 24, 2016, to February 23, 2017 are presented. These eagles ranged as far north as northern Nevada and southern Wyoming, and as far south as La Paz, Baja California, Mexico.

  3. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    Science.gov (United States)

    Nnassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR under a substation fault. Pole EPR under a substation fault is assessed with and without consideration of mutual impedance. Split factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included, along with the introduction of the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is shown.

  4. Stress sensitivity of fault seismicity: A comparison between limited-offset oblique and major strike-slip faults

    Science.gov (United States)

    Parsons, T.; Stein, R.S.; Simpson, R.W.; Reasenberg, P.A.

    1999-01-01

    We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M = 7.1 Loma Prieta earthquake and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (right-lateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (>50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited offset faults, fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones.

  5. Performance based fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2002-01-01

    Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

  6. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  7. Characterization of leaky faults

    International Nuclear Information System (INIS)

    Shan, Chao.

    1990-05-01

    Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs

  8. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

    International Nuclear Information System (INIS)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C.

    2004-01-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis

  9. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
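
    The screening approach described above reduces, in essence, to measuring how far each hypocenter lies from the nearest mapped fault trace and asking where the near-fault excess in earthquake frequency dies out. The sketch below shows that distance-binning step using an entirely synthetic fault trace and synthetic epicenters, not the NUMO data sets described above.

      # Sketch of the distance-binning step: perpendicular distance from each epicenter to
      # the nearest point on a fault trace, then event counts per distance bin.
      import numpy as np

      def point_to_segment_km(p, a, b):
          """Distance from point p to the segment a-b (2-D arrays, km coordinates)."""
          ab, ap = b - a, p - a
          t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
          return np.linalg.norm(p - (a + t * ab))

      trace_a, trace_b = np.array([0.0, 0.0]), np.array([40.0, 10.0])  # one mapped trace
      rng = np.random.default_rng(1)
      epicenters = rng.uniform([-10.0, -20.0], [50.0, 30.0], size=(2000, 2))  # synthetic events

      d = np.array([point_to_segment_km(p, trace_a, trace_b) for p in epicenters])

      # Events per 1-km bin; a process-zone width can be read where the near-fault
      # excess in frequency decays to the regional background level.
      counts, edges = np.histogram(d, bins=np.arange(0.0, 21.0, 1.0))
      for lo, n in zip(edges[:-1], counts):
          print(f"{lo:4.0f}-{lo + 1:.0f} km: {n}")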

  10. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    Science.gov (United States)

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip

  11. The morphology of strike-slip faults - Examples from the San Andreas Fault, California

    Science.gov (United States)

    Bilham, Roger; King, Geoffrey

    1989-01-01

    The dilatational strains associated with vertical faults embedded in a horizontal plate are examined in the framework of fault kinematics and simple displacement boundary conditions. Using boundary element methods, a sequence of examples of dilatational strain fields associated with commonly occurring strike-slip fault zone features (bends, offsets, finite rupture lengths, and nonuniform slip distributions) is derived. The combinations of these strain fields are then used to examine the Parkfield region of the San Andreas fault system in central California.

  12. Iowa Bedrock Faults

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This fault coverage locates and identifies all currently known/interpreted fault zones in Iowa, that demonstrate offset of geologic units in exposure or subsurface...

  13. H infinity Integrated Fault Estimation and Fault Tolerant Control of Discrete-time Piecewise Linear Systems

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Bak, Thomas

    2012-01-01

    In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...

  14. FSN-based fault modelling for fault detection and troubleshooting in CANDU stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, E., E-mail: elnara.nasimi@brucepower.com [Bruce Power LLP., Tiverton, Ontario (Canada)]; Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)]

    2013-07-01

    An accurate fault modeling and troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities of current and future generations of CANDU designs. This paper presents a fault modeling approach using the Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

  15. HBIM for restoration projects: case-study on San Cipriano Church in Castelvecchio Calvisio, Province of L’Aquila, Italy.

    Directory of Open Access Journals (Sweden)

    Romolo Continenza

    2016-06-01

    Although there have been significant developments in research into assigning semantic content to 3D models for the purposes of documentation, conservation and architectural and archaeological heritage management, the application of 3D GIS to individual artifacts has remained rare. Where 3D GIS has been used in this context, it has not been done in a consistent or standardised way. As an alternative to the elaborate construction of 3D GIS, the international academic community has embarked on a process of investigating how HBIM (Historical BIM) might be used in the fields of historical architecture and archaeology. In this paper, we report on experiments carried out at the San Cipriano Church in Castelvecchio Calvisio, Province of L’Aquila, Italy, on the basis of the integrated survey of the church, before turning to a discussion of the planning of restoration work in a BIM environment.

  16. Ulna de Aquila chrysaetos hallada en un entierro ceremonial del periodo Formativo Medio en Mascota, Jalisco, México

    Directory of Open Access Journals (Sweden)

    Fabio Germán Cupul-Magaña

    2017-04-01

    The identification and analysis of bird remains from archaeological sites can provide information on what the birds meant and how they were used. In prehispanic Mexico, birds served as food, as raw material for making tools, and in religious rituals. In this note we report the find of the left ulna of an adult golden eagle, Aquila chrysaetos, at the Los Tanques archaeological site (ca. 800 B.C.) in Mascota, Jalisco, Mexico. The ulna was found inside the tightly wrapped bundle of the burial of a young man between 19 and 25 years of age. Its presence in the burial indicates the high social status of the individual and forms part of a mortuary ritual code.

  17. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
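
    Since the description above names the Buffon's needle problem as one of the book's illustrative examples, a minimal Monte Carlo sketch of that experiment is given here; the throw count and seed are arbitrary choices for the illustration.

      # Minimal Monte Carlo sketch of the Buffon's needle experiment. With needle length
      # l <= line spacing d, the crossing probability is 2*l/(pi*d), so pi can be
      # estimated from the observed hit fraction.
      import math
      import random

      def estimate_pi(n_throws=1_000_000, l=1.0, d=1.0, seed=42):
          random.seed(seed)
          hits = 0
          for _ in range(n_throws):
              x = random.uniform(0.0, d / 2.0)            # centre-to-nearest-line distance
              theta = random.uniform(0.0, math.pi / 2.0)  # acute angle with the lines
              if x <= (l / 2.0) * math.sin(theta):        # needle crosses a line
                  hits += 1
          return 2.0 * l * n_throws / (d * hits)

      print(estimate_pi())  # slowly converges toward 3.14159...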

  18. Fault detection in finite frequency domain for Takagi-Sugeno fuzzy systems with sensor faults.

    Science.gov (United States)

    Li, Xiao-Jian; Yang, Guang-Hong

    2014-08-01

    This paper is concerned with the fault detection (FD) problem in finite frequency domain for continuous-time Takagi-Sugeno fuzzy systems with sensor faults. Some finite-frequency performance indices are initially introduced to measure the fault/reference input sensitivity and disturbance robustness. Based on these performance indices, an effective FD scheme is then presented such that the generated residual is designed to be sensitive to both fault and reference input for faulty cases, while robust against the reference input for fault-free case. As the additional reference input sensitivity for faulty cases is considered, it is shown that the proposed method improves the existing FD techniques and achieves a better FD performance. The theory is supported by simulation results related to the detection of sensor faults in a tunnel-diode circuit.

  19. Fault detection for discrete-time switched systems with sensor stuck faults and servo inputs.

    Science.gov (United States)

    Zhong, Guang-Xin; Yang, Guang-Hong

    2015-09-01

    This paper addresses the fault detection problem of switched systems with servo inputs and sensor stuck faults. The attention is focused on designing a switching law and its associated fault detection filters (FDFs). The proposed switching law uses only the current states of FDFs, which guarantees the residuals are sensitive to the servo inputs with known frequency ranges in faulty cases and robust against them in fault-free case. Thus, the arbitrarily small sensor stuck faults, including outage faults can be detected in finite-frequency domain. The levels of sensitivity and robustness are measured in terms of the finite-frequency H- index and l2-gain. Finally, the switching law and FDFs are obtained by the solution of a convex optimization problem. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Late quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    International Nuclear Information System (INIS)

    Brogan, G.E.; Kellogg, K.S.; Terhune, C.L.; Slemmons, D.B.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest- trending pull-apart basin

  1. Characterization of the San Andreas Fault near Parkfield, California by fault-zone trapped waves

    Science.gov (United States)

    Li, Y.; Vidale, J.; Cochran, E.

    2003-04-01

    In October 2002, coordinated by the Pre-EarthScope/SAFOD project, we conducted an extensive seismic experiment at the San Andreas fault (SAF), Parkfield, to record fault-zone trapped waves generated by explosions and microearthquakes using dense linear seismic arrays of 52 PASSCAL 3-channel REFTEKs deployed across and along the fault zone. We detonated 3 explosions within and out of the fault zone during the experiment, and also recorded 13 other shots of the PASO experiment of UWM/RPI (Thurber and Roecker) detonated around the SAFOD drilling site at the same time. We observed prominent fault-zone trapped waves with large amplitudes and long duration following S waves at stations close to the main fault trace for sources located within and close to the fault zone. Dominant frequencies of trapped waves are 2-3 Hz for near-surface explosions and 4-5 Hz for microearthquakes. Fault-zone trapped waves are relatively weak on the north strand of the SAF for the same sources. In contrast, seismograms registered at stations and for shots far away from the fault zone show a brief S wave and a lack of trapped waves. These observations are consistent with previous findings of fault-zone trapped waves at the SAF [Li et al., 1990; 1997], indicating the existence of a well-developed low-velocity waveguide along the main fault strand (principal slip plane) of the SAF. The data from denser arrays and 3-D finite-difference simulations of fault-zone trapped waves allowed us to delineate the internal structure, segmentation and physical properties of the SAF with higher resolution. The trapped-wave inferred waveguide on the SAF Parkfield segment is ~150 m wide at surface and tapers to ~100 m at seismogenic depth, in which Q is 20-50 and S velocities are reduced by 30-40% from wall-rock velocities, with the greater velocity reduction at the shallow depth and to southeast of the 1966 M6 epicenter. We interpret this low-velocity waveguide on the SAF main strand as being the remnant of damage zone caused

  2. 22 CFR 17.3 - Fault.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the individual...

  3. Fault Diagnosis and Fault-tolerant Control of Modular Multi-level Converter High-voltage DC System

    DEFF Research Database (Denmark)

    Liu, Hui; Ma, Ke; Wang, Chao

    2016-01-01

    of failures and lower the reliability of the MMC-HVDC system. Therefore, research on the fault diagnosis and fault-tolerant control of MMC-HVDC system is of great significance in order to enhance the reliability of the system. This paper provides a comprehensive review of fault diagnosis and fault handling...

  4. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and optimization treatment are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes widely used in practice, require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing wide spread use of this simulation technique in medical physics. (authors)

  5. Network Fault Diagnosis Using DSM

    Institute of Scientific and Technical Information of China (English)

    Jiang Hao; Yan Pu-liu; Chen Xiao; Wu Jing

    2004-01-01

    Difference similitude matrix (DSM) methods are effective in reducing an information system, offering a high reduction rate and high validity. We use the DSM method to analyze the fault data of computer networks and obtain fault diagnosis rules. By discretizing the relative values of the fault data, we obtain the information system of the fault data; the DSM method then reduces this information system and yields the diagnosis rules. Simulation with an actual scenario shows that DSM-based fault diagnosis can obtain a small set of effective rules.

  6. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center]

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
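
    The review's outline of random-number use and sampling techniques can be made concrete with a short sketch. The example below shows inverse-transform sampling of an exponential distribution and rejection sampling of a half-normal target with that exponential as the envelope; the target and proposal are generic textbook choices for illustration, not examples taken from the paper.

      # Generic sketch of two standard Monte Carlo sampling techniques: inverse-transform
      # sampling (exponential) and rejection sampling (half-normal target, Exp(1) envelope).
      import math
      import random

      random.seed(7)

      def sample_exponential(lam):
          """Inverse transform: X = -ln(1 - U) / lambda is Exp(lambda)-distributed."""
          return -math.log(1.0 - random.random()) / lam

      def sample_half_normal():
          """Rejection sampling of the half-normal density using an Exp(1) envelope."""
          while True:
              x = sample_exponential(1.0)                             # proposal draw
              if random.random() < math.exp(-(x - 1.0) ** 2 / 2.0):   # acceptance ratio f/(M*g)
                  return x

      samples = [sample_half_normal() for _ in range(10_000)]
      print(sum(samples) / len(samples))  # should be close to sqrt(2/pi) ~ 0.80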

  7. Fault tolerant control with torque limitation based on fault mode for ten-phase permanent magnet synchronous motor

    Directory of Open Access Journals (Sweden)

    Guo Hong

    2015-10-01

    This paper proposes a novel fault tolerant control with torque limitation based on the fault mode for the ten-phase permanent magnet synchronous motor (PMSM) under various open-circuit and short-circuit fault conditions, which includes optimal torque control and torque limitation control based on the fault mode. The optimal torque control is adopted to guarantee ripple-free electromagnetic torque operation of the ten-phase motor system under post-fault conditions. Furthermore, we systematically analyze the load capacity of the ten-phase motor system under different fault modes. A torque limitation control approach based on the fault mode, not available earlier, is also proposed. This approach ensures safe operation of the faulted motor system over long operating times without overheating. The simulation results confirm that the proposed fault tolerant control for the ten-phase motor system guarantees ripple-free electromagnetic torque and safe long-term operation under both normal and fault conditions.

  8. STATUS BAKU MUTU AIR LAUT PERAIRAN TELUK AMBON LUAR UNTUK WISATA BAHARI KAPAL TENGGELAM SS AQUILA

    Directory of Open Access Journals (Sweden)

    Guntur Adhi Rahmawan

    2017-09-01

    Ambon Bay consists of two parts, the Inner and Outer Ambon Bay, separated by a narrow, shallow sill. The bay serves many functions, including transportation, conservation, and tourism. The wreck of the SS Aquila, which sank on May 27, 1958, has become one of the diving attractions in Ambon Bay. Determining the water pollution index of Ambon Bay is therefore important as supporting material for the development of marine tourism. The pollution index was determined by direct measurement of seawater quality parameters using a Water Quality Checker (DKK TOA WQC Type-24), together with laboratory analysis of chemical parameters of seawater (pH, TSS, salinity, turbidity, oil and grease). The results show that, for the parameters measured, the waters of Outer Ambon Bay still meet the seawater quality standard for marine tourism according to the criteria of Keputusan Menteri Negara Lingkungan Hidup Nomor 51 Tahun 2004 on Guidelines for Determination of Water Quality Status.

  9. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  10. How is tectonic slip partitioned from the Alpine Fault to the Marlborough Fault System? : results from the Hope Fault

    International Nuclear Information System (INIS)

    Langridge, R.M.

    2004-01-01

    This report contains data from research undertaken by the author on the Hope Fault from 2000-2004. This report provides an opportunity to include data that was additional to or newer than work that was published in other places. New results from studies along the Hurunui section of the Hope Fault, additional to that published in Langridge and Berryman (2005) are presented here. This data includes tabulated data of fault location and description measurements, a graphical representation of this data in diagrammatic form along the length of the fault and new radiocarbon dates from the current EQC funded project. The new data show that the Hurunui section of the Hope Fault has the capability to yield further data on fault slip rate, earthquake displacements, and paleoseismicity. New results from studies at the Greenburn Stream paleoseismic site additional to that published in Langridge et al. (2003) are presented here. This includes a new log of the deepened west wall of Trench 2, a log of the west wall of Trench 1, and new radiocarbon dates from the second phase of dating undertaken at the Greenburn Stream site. The new data show that this site has the capability to yield further data on the paleoseismicity of the Conway segment of the Hope Fault. Through a detailed analysis of all three logged walls at the site and the new radiocarbon dates, it may, in combination with data from the nearby Clarence Reserve site of Pope (1994), be possible to develop a good record of the last 5 events on the Conway segment. (author). 12 refs., 12 figs

  11. Diagnosis and Fault-tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Kinnaert, Michel; Lunze, Jan

    the applicability of the presented methods. The theoretical results are illustrated by two running examples which are used throughout the book. The book addresses engineering students, engineers in industry and researchers who wish to get a survey over the variety of approaches to process diagnosis and fault......The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process...

  12. Secondary Fault Activity of the North Anatolian Fault near Avcilar, Southwest of Istanbul: Evidence from SAR Interferometry Observations

    Directory of Open Access Journals (Sweden)

    Faqi Diao

    2016-10-01

    Strike-slip faults may be traced along thousands of kilometers, e.g., the San Andreas Fault (USA) or the North Anatolian Fault (Turkey). A closer look at such continental-scale strike-slip faults reveals localized complexities in fault geometry, associated with fault segmentation, secondary faults and a change of related hazards. The North Anatolian Fault displays such complexities near the megacity Istanbul, where earthquake risks are high but secondary processes are not well understood. In this paper, long-term persistent scatterer interferometry (PSI) analysis of synthetic aperture radar (SAR) data time series was used to precisely identify the surface deformation pattern associated with the faulting complexity at the prominent bend of the North Anatolian Fault near Istanbul city. We elaborate the relevance of local faulting activity and estimate the fault status (slip rate and locking depth) for the first time using satellite SAR interferometry (InSAR) technology. The studied NW-SE-oriented fault on land is subject to strike-slip movement at a mean slip rate of ~5.0 mm/year with a shallow locking depth of <1.0 km, and is thought to be directly interacting with the main fault branch, with important implications for tectonic coupling. Our results provide the first geodetic evidence on the segmentation of a major crustal fault with a structural complexity and associated multi-hazards near the inhabited regions of Istanbul, with similarities also to other major strike-slip faults that display changes in fault traces and mechanisms.
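
    Across-fault velocity profiles of the kind PSI provides are often interpreted with the classic elastic screw-dislocation model, v(x) = (S/pi) * arctan(x/D), where S is the deep slip rate and D the locking depth. The sketch below fits that model to a synthetic profile; it is a generic illustration of the idea, not the inversion actually used in the study, and all velocity values are invented.

      # Generic sketch: fit v(x) = (S/pi)*arctan(x/D) to an across-fault velocity profile
      # to recover a slip rate S (mm/yr) and locking depth D (km). Synthetic data only.
      import numpy as np
      from scipy.optimize import curve_fit

      def screw_dislocation(x_km, slip_mm_yr, locking_depth_km):
          return (slip_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

      x = np.linspace(-20.0, 20.0, 81)  # distance from the fault trace (km)
      rng = np.random.default_rng(3)
      v_obs = screw_dislocation(x, 5.0, 1.0) + rng.normal(0.0, 0.3, x.size)  # synthetic profile

      popt, pcov = curve_fit(screw_dislocation, x, v_obs, p0=[10.0, 5.0])
      print(f"slip rate ~ {popt[0]:.1f} mm/yr, locking depth ~ {popt[1]:.1f} km")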

  13. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  14. 31 CFR 29.522 - Fault.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at fault...

  15. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  16. Heterogeneity in the Fault Damage Zone: a Field Study on the Borrego Fault, B.C., Mexico

    Science.gov (United States)

    Ostermeijer, G.; Mitchell, T. M.; Dorsey, M. T.; Browning, J.; Rockwell, T. K.; Aben, F. M.; Fletcher, J. M.; Brantut, N.

    2017-12-01

    The nature and distribution of damage around faults, and its impacts on fault zone properties has been a hot topic of research over the past decade. Understanding the mechanisms that control the formation of off fault damage can shed light on the processes during the seismic cycle, and the nature of fault zone development. Recent published work has identified three broad zones of damage around most faults based on the type, intensity, and extent of fracturing; Tip, Wall, and Linking damage. Although these zones are able to adequately characterise the general distribution of damage, little has been done to identify the nature of damage heterogeneity within those zones, often simplifying the distribution to fit log-normal linear decay trends. Here, we attempt to characterise the distribution of fractures that make up the wall damage around seismogenic faults. To do so, we investigate an extensive two dimensional fracture network exposed on a river cut platform along the Borrego Fault, BC, Mexico, 5m wide, and extending 20m from the fault core into the damage zone. High resolution fracture mapping of the outcrop, covering scales ranging three orders of magnitude (cm to m), has allowed for detailed observations of the 2D damage distribution within the fault damage zone. Damage profiles were obtained along several 1D transects perpendicular to the fault and micro-damage was examined from thin-sections at various locations around the outcrop for comparison. Analysis of the resulting fracture network indicates heterogeneities in damage intensity at decimetre scales resulting from a patchy distribution of high and low intensity corridors and clusters. Such patchiness may contribute to inconsistencies in damage zone widths defined along 1D transects and the observed variability of fracture densities around decay trends. How this distribution develops with fault maturity and the scaling of heterogeneities above and below the observed range will likely play a key role in

  17. Remote triggering of fault-strength changes on the San Andreas fault at Parkfield.

    Science.gov (United States)

    Taira, Taka'aki; Silver, Paul G; Niu, Fenglin; Nadeau, Robert M

    2009-10-01

    Fault strength is a fundamental property of seismogenic zones, and its temporal changes can increase or decrease the likelihood of failure and the ultimate triggering of seismic events. Although changes in fault strength have been suggested to explain various phenomena, such as the remote triggering of seismicity, there has been no means of actually monitoring this important property in situ. Here we argue that approximately 20 years of observation (1987-2008) of the Parkfield area at the San Andreas fault have revealed a means of monitoring fault strength. We have identified two occasions where long-term changes in fault strength have been most probably induced remotely by large seismic events, namely the 2004 magnitude (M) 9.1 Sumatra-Andaman earthquake and the earlier 1992 M = 7.3 Landers earthquake. In both cases, the change possessed two manifestations: temporal variations in the properties of seismic scatterers-probably reflecting the stress-induced migration of fluids-and systematic temporal variations in the characteristics of repeating-earthquake sequences that are most consistent with changes in fault strength. In the case of the 1992 Landers earthquake, a period of reduced strength probably triggered the 1993 Parkfield aseismic transient as well as the accompanying cluster of four M > 4 earthquakes at Parkfield. The fault-strength changes produced by the distant 2004 Sumatra-Andaman earthquake are especially important, as they suggest that the very largest earthquakes may have a global influence on the strength of the Earth's fault systems. As such a perturbation would bring many fault zones closer to failure, it should lead to temporal clustering of global seismicity. This hypothesis seems to be supported by the unusually high number of M >or= 8 earthquakes occurring in the few years following the 2004 Sumatra-Andaman earthquake.

  18. Solar system fault detection

    Science.gov (United States)

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  19. Development of direct dating methods of fault gouges: Deep drilling into Nojima Fault, Japan

    Science.gov (United States)

    Miyawaki, M.; Uchida, J. I.; Satsukawa, T.

    2017-12-01

    It is crucial to develop a direct dating method for fault gouges to assess recent fault activity in site evaluations for nuclear power plants. Such a method would be useful in regions without overlying Late Pleistocene sediments. In order to estimate the age of the latest fault slip event, it is necessary to use fault gouges which have experienced frictional heating high enough for age resetting. Frictional heating is expected to be greater at depth, because the heat generated by fault movement depends on the shear stress. Therefore, the depth at which ages are reliably reset must be determined, since fault gouges sampled at the ground surface are likely to yield ages older than the latest fault movement because of incomplete resetting. In this project, we target the Nojima fault, which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500 m) by trenching and drilling to investigate age resetting conditions and depths using several methods, including electron spin resonance (ESR) and optically stimulated luminescence (OSL), which are applicable to ages later than the Late Pleistocene. Preliminary ESR results give approx. 1.1 Ma 1) at the ground surface and 0.15-0.28 Ma 2) at 388 m depth. These results indicate that samples from greater depths preserve younger ages. In contrast, the OSL method gave approx. 2,200 yr 1) at the ground surface. Although further consideration is still needed as there is a large margin of error, this result indicates that the age resetting depth for OSL is relatively shallow owing to the high thermosensitivity of OSL compared to ESR. In the future, we plan to carry out further investigations, dating fault gouges from various depths up to approx. 1,500 m to verify the use of these direct dating methods. 1) Kyoto University, 2017. FY27 Commissioned for the disaster presentation on nuclear facilities (Drilling

  20. Lessons from the conviction of the L'Aquila seven: The standard probabilistic earthquake hazard and risk assessment is ineffective

    Science.gov (United States)

    Wyss, Max

    2013-04-01

    An earthquake of M6.3 killed 309 people in L'Aquila, Italy, on 6 April 2009. Subsequently, a judge in L'Aquila convicted seven who had participated in an emergency meeting on March 30 to assess the probability of a major event following the ongoing earthquake swarm. The sentence was six years in prison, a combined fine of 2 million Euros, loss of job, loss of retirement pension, and lawyer's costs. The judge followed the prosecution's accusation that the review by the Commission of Great Risks had conveyed a false sense of security to the population, which consequently did not take their usual precautionary measures before the deadly earthquake. He did not consider the facts that (1) one of the convicted was not a member of the commission and had merely obeyed orders to bring the latest seismological facts to the discussion, (2) another was an engineer who was not required to have any expertise regarding the probability of earthquakes, and (3) two others were seismologists not invited to speak to the public at a TV interview and a press conference. This exaggerated judgment was the consequence of an uproar in the population, who felt misinformed and even misled. Faced with a population worried by an earthquake swarm, the head of the Italian Civil Defense is on record ordering that the population be calmed, and the vice head executed this order in a TV interview one hour before the meeting of the Commission by stating "the scientific community continues to tell me that the situation is favorable and that there is a discharge of energy." The first lesson to be learned is that communications to the public about earthquake hazard and risk must not be left in the hands of someone who has gross misunderstandings about seismology. They must be carefully prepared by experts. The more significant lesson is that the approach to calm the population and the standard probabilistic hazard and risk assessment, as practiced by GSHAP, are misleading. The latter has been criticized as

  1. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis...... problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; FE for systems with parametric faults, and FE for a class of nonlinear systems. Copyright...

  2. Mesoscopic Structural Observations of Cores from the Chelungpu Fault System, Taiwan Chelungpu-Fault Drilling Project Hole-A, Taiwan

    Directory of Open Access Journals (Sweden)

    Hiroki Sone

    2007-01-01

    Structural characteristics of fault rocks distributed within major fault zones provide basic information for understanding the physical aspects of faulting. Mesoscopic structural observations of the drilled cores from Taiwan Chelungpu-fault Drilling Project Hole-A are reported in this article to describe and reveal the distribution of fault rocks within the Chelungpu Fault System.

  3. The genome sequence of a widespread apex predator, the golden eagle (Aquila chrysaetos.

    Directory of Open Access Journals (Sweden)

    Jacqueline M Doyle

    Biologists routinely use molecular markers to identify conservation units, to quantify genetic connectivity, to estimate population sizes, and to identify targets of selection. Many imperiled eagle populations require such efforts and would benefit from enhanced genomic resources. We sequenced, assembled, and annotated the first eagle genome using DNA from a male golden eagle (Aquila chrysaetos) captured in western North America. We constructed genomic libraries that were sequenced using Illumina technology and assembled the high-quality data to a depth of ∼40x coverage. The genome assembly includes 2,552 scaffolds >10 Kb and 415 scaffolds >1.2 Mb. We annotated 16,571 genes that are involved in myriad biological processes, including such disparate traits as beak formation and color vision. We also identified repetitive regions spanning 92 Mb (∼6% of the assembly), including LINEs, SINEs, LTR-RTs and DNA transposons. The mitochondrial genome encompasses 17,332 bp and is ∼91% identical to the Mountain Hawk-Eagle (Nisaetus nipalensis). Finally, the data reveal that several anonymous microsatellites commonly used for population studies are embedded within protein-coding genes and thus may not have evolved in a neutral fashion. Because the genome sequence includes ∼800,000 novel polymorphisms, markers can now be chosen based on their proximity to functional genes involved in migration, carnivory, and other biological processes.

  4. Passive Fault-tolerant Control of Discrete-time Piecewise Affine Systems against Actuator Faults

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Izadi-Zamanabadi, Roozbeh; Bak, Thomas

    2012-01-01

    In this paper, we propose a new method for passive fault-tolerant control of discrete-time piecewise affine systems. Actuator faults are considered. A reliable piecewise linear quadratic regulator (LQR) state feedback is designed such that it can tolerate actuator faults. A sufficient condition for [...] is given, and the approach is illustrated on a numerical example and a two-degree-of-freedom helicopter.

  5. Fault rocks and uranium mineralization

    International Nuclear Information System (INIS)

    Tong Hangshou.

    1991-01-01

    The types of fault rocks, the microstructural characteristics of fault tectonite, and their relationship with uranium mineralization in the uranium-productive granite area are discussed. According to a synthetic analysis of the nature of stress, the extent of cracking and the microstructural characteristics of fault rocks, they can be classified into five groups and sixteen subgroups. The author especially emphasizes the control of the cataclasite group and the fault breccia group over uranium mineralization in the uranium-productive granite area. It is considered that more effective study should be made of the macrostructure and microstructure of fault rocks. This is of important practical significance in uranium exploration

  6. ESR dating of the fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2004-01-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from the surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Ulzin nuclear reactor. ESR signals of quartz grains separated from fault rocks collected from the E-W trend fault are saturated. This indicates that the last movement of these faults occurred before the Quaternary period. ESR dates from the NW trend faults range from 300 ka to 700 ka. On the other hand, the ESR date of the NS trend fault is about 50 ka. Results of this research suggest that long-term cyclic fault activity near the Ulzin nuclear reactor continued into the Pleistocene.
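
    As a rough illustration of the dating relation described above (age = equivalent dose / dose rate, with a plateau over the finest grain fractions marking complete resetting), the following sketch uses invented numbers rather than values from the Ulzin study:

      # ESR age sketch: age [ka] = equivalent dose [Gy] / dose rate [Gy/ka].
      # All numbers below are illustrative, not data from the study.
      def esr_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
          """Return an ESR age in ka from the equivalent dose and the dose rate."""
          return equivalent_dose_gy / dose_rate_gy_per_ka

      # Hypothetical age-vs-grain-size table; a plateau at the smallest grain sizes
      # suggests those grains were fully reset by the last fault movement.
      samples = [(25, 160.0, 3.2), (45, 165.0, 3.2), (75, 230.0, 3.2), (150, 410.0, 3.2)]
      for grain_um, de_gy, rate in samples:
          print(f"{grain_um:>4} um grains: {esr_age_ka(de_gy, rate):6.1f} ka")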

  7. Identification of active faults using second vertical derivative analysis of gravity anomaly data (Case study: Seulimeum fault in the Sumatera fault system)

    Science.gov (United States)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

    Gravity is a non-destructive geophysical technique that has numerous applications in engineering and environmental fields, such as locating fault zones. The purpose of this study is to locate the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using a gravity technique, to correlate the result with the geologic map, and to identify the trend pattern of the fault system. The subsurface geological structure of the Seulimeum fault was estimated using gravity field anomaly data. The gravity anomaly data used in this study are from Topex and are processed up to the free-air correction. The next processing steps apply the Bouguer correction and the terrain correction to obtain the complete Bouguer anomaly, which is topographically dependent. Subsurface modeling is done using the Grav2DC for Windows software. The results show a lower residual gravity value in the northern half compared to the southern part of the study area, indicating the pattern of a fault zone. The gravity residual was successfully correlated with the geologic map, which shows the existence of the Seulimeum fault in the study area. Earthquake records can be used to differentiate active from non-active fault elements; these give an indication that the delineated fault elements are active.
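
    The processing chain described above (free-air, Bouguer slab and terrain corrections leading to the complete Bouguer anomaly) can be sketched with the usual textbook constants; the station values and density below are illustrative assumptions, not data from this study:

      # Schematic complete Bouguer anomaly: observed minus theoretical gravity,
      # plus free-air correction, minus Bouguer slab correction, plus terrain correction.
      def complete_bouguer_anomaly(g_obs_mgal, g_theoretical_mgal, elev_m,
                                   terrain_corr_mgal=0.0, density_g_cm3=2.67):
          free_air_corr = 0.3086 * elev_m                    # mGal, elevation correction
          bouguer_corr  = 0.04193 * density_g_cm3 * elev_m   # mGal, infinite-slab correction
          return (g_obs_mgal - g_theoretical_mgal
                  + free_air_corr - bouguer_corr + terrain_corr_mgal)

      # Example with made-up station values (mGal and metres):
      print(complete_bouguer_anomaly(978030.0, 978049.0, 120.0, terrain_corr_mgal=1.5))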

  8. Integrated system fault diagnostics utilising digraph and fault tree-based approaches

    International Nuclear Information System (INIS)

    Bartlett, L.M.; Hurdle, E.E.; Kelly, E.M.

    2009-01-01

    With the growing intolerance to failures within systems, the issue of fault diagnosis has become ever more prevalent. Information concerning these possible failures can help to minimise the disruption to the functionality of the system by allowing quick rectification. Traditional approaches to fault diagnosis within engineering systems have focused on sequential testing procedures and real-time mechanisms. Both methods have been predominantly limited to single fault causes. More recent approaches also consider multiple faults, reflecting the characteristics of modern systems designed for high reliability. In addition, a diagnostic capability is required in real time and for changeable system functionality. This paper focuses on two approaches which have been developed to cater for the demands of diagnosis within current engineering systems, namely application of the fault tree analysis technique and the method of digraphs. Both use a comparative approach to consider differences between actual system behaviour and that expected. The procedural guidelines are discussed for each method, with an experimental aircraft fuel system used to test and demonstrate the features of the techniques. The effectiveness of the approaches is compared and their future potential highlighted.

  9. Research of fault activity in Japan

    International Nuclear Information System (INIS)

    Nohara, T.; Nakatsuka, N.; Takeda, S.

    2004-01-01

    Six hundred and eighty earthquakes causing significant damage have been recorded since the 7th century in Japan. It is important to recognize faults that are or are expected to be active in future in order to help reduce earthquake damage, to estimate earthquake damage insurance, and for the siting of nuclear facilities. Such faults are called 'active faults' in Japan, the definition of which is a fault that has moved intermittently for at least several hundred thousand years and is expected to continue to do so in future. Scientific research on active faults has been ongoing since the 1930s. Many results indicated that major earthquakes and fault movements in shallow crustal regions in Japan occurred repeatedly at existing active fault zones in the past. After the 1995 Southern Hyogo Prefecture Earthquake, 98 active fault zones were selected for fundamental survey, with the purpose of efficiently conducting an active fault survey under the 'Plans for Fundamental Seismic Survey and Observation' by the Headquarters for Earthquake Research Promotion, which was attached to the Prime Minister's office of Japan. Forty-two administrative divisions for earthquake disaster prevention have investigated the distribution and history of fault activity of 80 active fault zones. Although earthquake prediction is difficult, the behaviour of major active faults in Japan is being recognised. The Japan Nuclear Cycle Development Institute (JNC) submitted a report titled 'H12: Project to Establish the Scientific and Technical Basis for HLW Disposal in Japan' to the Atomic Energy Commission (AEC) of Japan for official review. The Guidelines, which were defined by the AEC, require the H12 Project to confirm the basic technical feasibility of safe HLW disposal in Japan. In this report the important issues relating to fault activity were described, namely to understand the characteristics of current fault movements and the spatial extent and magnitude of the effects caused by these movements, and to

  10. Fault Tolerant Wind Farm Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2013-01-01

    In recent years the wind turbine industry has focused on optimizing the cost of energy. One of the important factors in this is to increase the reliability of the wind turbines. Advanced fault detection, isolation and accommodation are important tools in this process. Clearly most faults are dealt with [...] scenarios. This benchmark model is used in an international competition dealing with wind farm fault detection and isolation and fault-tolerant control.

  11. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang

    2009-01-01

    Understanding the dynamics and kinematics of fault-propagation folding is important for evaluating the associated hydrocarbon play, for accomplishing reliable section balancing (structural reconstruction), and for assessing seismic hazards. Accordingly, the deformation style of fault-propagation [...] a precise indication of when faults develop and hence also the sequential evolution of secondary faults. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset and of variations in Mohr-Coulomb parameters including internal friction. Here we focus on the generation of a fault-propagated fold with a reverse sense of motion at the master fault, varying only the dip of the master fault and the mechanical behaviour of the deformed [...]

  12. Fault Recoverability Analysis via Cross-Gramian

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza

    2016-01-01

    Engineering systems are vulnerable to different kinds of faults. Faults may compromise safety, cause sub-optimal operation and a decline in performance, if not prevent the whole system from functioning. Fault tolerant control (FTC) methods ensure that the system performance is maintained within [...] with feedback control. Fault recoverability provides important and useful information which can be used in analysis and design. However, computing fault recoverability is numerically expensive. In this paper, a new approach for the computation of fault recoverability for bilinear systems is proposed [...] which reduces the computational burden significantly. The proposed results are used for an electro-hydraulic drive to reveal the redundant actuating capabilities in the system.
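
    For context, the cross-Gramian of a stable linear system solves the Sylvester equation A·Wx + Wx·A + B·C = 0, and its trace or eigenvalues are commonly used as a joint controllability-observability measure on which recoverability analysis can build. The sketch below shows only this standard linear computation, not the bilinear extension discussed in the paper, and uses an invented example system:

      import numpy as np
      from scipy.linalg import solve_sylvester

      # Stable example system x' = Ax + Bu, y = Cx (values are arbitrary).
      A = np.array([[-1.0, 0.0], [0.5, -2.0]])
      B = np.array([[1.0], [0.5]])
      C = np.array([[1.0, 1.0]])

      # Solve A @ Wx + Wx @ A = -B @ C for the cross-Gramian Wx.
      Wx = solve_sylvester(A, A, -B @ C)
      print("cross-Gramian:\n", Wx, "\ntrace:", np.trace(Wx))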

  13. An architecture for fault tolerant controllers

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2005-01-01

    A general architecture for fault tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system. The approach suggested can be applied [...] degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modeling point of view. The method is illustrated on a servo example including [...]

  14. What is Fault Tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Frei, C. W.; Kraus, K.

    2000-01-01

    Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to the plant, to personnel or to the environment. Fault-tolerant control is the synonym for a set of recent techniques that were developed to increase plant availability and reduce the risk of safety hazards. Its aim is to prevent simple faults from developing into serious failures. Fault-tolerant control merges several disciplines to achieve this goal, including on-line fault diagnosis, automatic condition assessment and calculation of remedial actions when a fault is detected. The envelope of the possible remedial actions is wide. This paper introduces tools to analyze and explore structure and other fundamental properties of an automated system such that any redundancy in the process can be fully utilized to enhance safety and availability.

  15. Introduction to fault tree analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Lambert, H.E.

    1975-01-01

    An elementary, engineering oriented introduction to fault tree analysis is presented. The basic concepts, techniques and applications of fault tree analysis, FTA, are described. The two major steps of FTA are identified as (1) the construction of the fault tree and (2) its evaluation. The evaluation of the fault tree can be qualitative or quantitative depending upon the scope, extensiveness and use of the analysis. The advantages, limitations and usefulness of FTA are discussed
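
    A minimal example of the quantitative evaluation step mentioned above: once the minimal cut sets of the tree and the basic-event probabilities are known, the top-event probability can be approximated with the rare-event approximation. The event names and probabilities below are invented for illustration:

      # Rare-event approximation: P(top) ~= sum over minimal cut sets of the
      # product of basic-event probabilities.
      basic_event_prob = {"pump_fails": 1e-3, "valve_stuck": 5e-4, "sensor_fails": 2e-3}
      minimal_cut_sets = [{"pump_fails"}, {"valve_stuck", "sensor_fails"}]

      def top_event_probability(cut_sets, probs):
          total = 0.0
          for cs in cut_sets:
              p = 1.0
              for event in cs:
                  p *= probs[event]
              total += p
          return total

      print(top_event_probability(minimal_cut_sets, basic_event_prob))  # ~1.001e-3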

  16. Fault-tolerant control for current sensors of doubly fed induction generators based on an improved fault detection method

    DEFF Research Database (Denmark)

    Li, Hui; Yang, Chao; Hu, Yaogang

    2014-01-01

    Fault-tolerant control of current sensors is studied in this paper to improve the reliability of a doubly fed induction generator (DFIG). A fault-tolerant control system for the current sensors is presented for the DFIG, which consists of a new current observer and an improved current sensor fault detection algorithm [...]; the detection algorithm and the fault-tolerant control system are investigated by simulation. The results indicate that the outputs of the observer and the sensor are highly coherent. The fault detection algorithm can efficiently detect both soft and hard faults in current sensors, and the fault-tolerant control [...]

  17. Fault Isolation for Shipboard Decision Support

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2010-01-01

    Fault detection and fault isolation for in-service decision support systems for marine surface vehicles are presented in this paper. The stochastic wave elevation and the associated ship responses are modeled in the frequency domain. The paper takes as an example fault isolation of a container ship [...] to the quality of decisions given to navigators.

  18. Multi-link faults localization and restoration based on fuzzy fault set for dynamic optical networks.

    Science.gov (United States)

    Zhao, Yongli; Li, Xin; Li, Huadong; Wang, Xinbo; Zhang, Jie; Huang, Shanguo

    2013-01-28

    Based on a distributed method of bit-error-rate (BER) monitoring, a novel multi-link faults restoration algorithm is proposed for dynamic optical networks. The concept of fuzzy fault set (FFS) is first introduced for multi-link faults localization, which includes all possible optical equipment or fiber links with a membership describing the possibility of faults. Such a set is characterized by a membership function which assigns each object a grade of membership ranging from zero to one. OSPF protocol extension is designed for the BER information flooding in the network. The BER information can be correlated to link faults through FFS. Based on the BER information and FFS, multi-link faults localization mechanism and restoration algorithm are implemented and experimentally demonstrated on a GMPLS enabled optical network testbed with 40 wavelengths in each fiber link. Experimental results show that the novel localization mechanism has better performance compared with the extended limited perimeter vector matching (LVM) protocol and the restoration algorithm can improve the restoration success rate under multi-link faults scenario.
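
    One way to picture the fuzzy fault set idea is a membership grade in [0, 1] assigned to each monitored element from its bit-error-rate; the thresholds and the log-scale interpolation in this sketch are assumptions for illustration, not the protocol extension described in the paper:

      import math

      # Piecewise-linear (in log-BER) membership function; thresholds are hypothetical.
      def membership(ber, ber_ok=1e-12, ber_bad=1e-6):
          if ber <= ber_ok:
              return 0.0
          if ber >= ber_bad:
              return 1.0
          return (math.log10(ber) - math.log10(ber_ok)) / (math.log10(ber_bad) - math.log10(ber_ok))

      monitored = {"link_A": 3e-13, "link_B": 4e-9, "link_C": 2e-6}
      fuzzy_fault_set = {link: round(membership(b), 2) for link, b in monitored.items()}
      print(fuzzy_fault_set)   # e.g. {'link_A': 0.0, 'link_B': 0.6, 'link_C': 1.0}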

  19. Active fault diagnosis by temporary destabilization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    An active fault diagnosis method for parametric or multiplicative faults is proposed. The method periodically adds a term to the controller that for a short period of time renders the system unstable if a fault has occurred, which facilitates rapid fault detection. An illustrative example is given....

  20. Fault trees for diagnosis of system fault conditions

    International Nuclear Information System (INIS)

    Lambert, H.E.; Yadigaroglu, G.

    1977-01-01

    Methods for generating repair checklists on the basis of fault tree logic and probabilistic importance are presented. A one-step-ahead optimization procedure, based on the concept of component criticality, minimizing the expected time to diagnose system failure is outlined. Options available to the operator of a nuclear power plant when system fault conditions occur are addressed. A low-pressure emergency core cooling injection system, a standby safeguard system of a pressurized water reactor power plant, is chosen as an example illustrating the methods presented

  1. Frictional and hydraulic behaviour of carbonate fault gouge during fault reactivation - An experimental study

    Science.gov (United States)

    Delle Piane, Claudio; Giwelli, Ausama; Clennell, M. Ben; Esteban, Lionel; Nogueira Kiewiet, Melissa Cristina D.; Kiewiet, Leigh; Kager, Shane; Raimon, John

    2016-10-01

    We present a novel experimental approach devised to test the hydro-mechanical behaviour of different structural elements of carbonate fault rocks during experimental re-activation. Experimentally faulted core plugs were subjected to triaxial tests under water saturated conditions simulating depletion processes in reservoirs. Different fault zone structural elements were created by shearing initially intact travertine blocks (nominal size: 240 × 110 × 150 mm) to a maximum displacement of 20 and 120 mm under different normal stresses. Meso- and microstructural features of these samples, and the thickness-to-displacement ratios of their deformation zones, allowed us to classify them as experimentally created damage zones (displacement of 20 mm) and fault cores (displacement of 120 mm). Following direct shear testing, cylindrical plugs with a diameter of 38 mm were drilled across the slip surface to be re-activated in a conventional triaxial configuration, monitoring the permeability and frictional behaviour of the samples as a function of applied stress. All re-activation experiments on faulted plugs showed a consistent frictional response consisting of an initial fast hardening followed by apparent yield up to a friction coefficient of approximately 0.6 attained at around 2 mm of displacement. Permeability in the re-activation experiments shows exponential decay with increasing mean effective stress. The rate of permeability decline with mean effective stress is higher in the fault core plugs than in the simulated damage zone ones. It can be concluded that the presence of gouge in un-cemented carbonate faults results in their sealing character and that leakage cannot be achieved by renewed movement on the fault plane alone, at least not within the range of slip measurable with our apparatus (i.e. approximately 7 mm of cumulative displacement). Additionally, it is shown that under sub-seismic slip rates re-activated carbonate faults remain strong and no frictional
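
    The exponential permeability decay reported above can be written as k(σ') = k0·exp(−γσ'), with σ' the mean effective stress; the coefficients in this sketch are placeholders, not the fitted laboratory values:

      import math

      # Exponential permeability decay with mean effective stress (illustrative values).
      def permeability(sigma_eff_mpa, k0_m2=4e-17, gamma_per_mpa=0.05):
          return k0_m2 * math.exp(-gamma_per_mpa * sigma_eff_mpa)

      for s in (5, 20, 40):   # mean effective stress in MPa
          print(f"sigma' = {s:>2} MPa -> k = {permeability(s):.2e} m^2")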

  2. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    Science.gov (United States)

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay little attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while the one symptom-to-two faults pattern does not perform as well, but is still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  3. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    Directory of Open Access Journals (Sweden)

    Yingyi Chen

    2017-01-01

    Full Text Available In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay little attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while the one symptom-to-two faults pattern does not perform as well, but is still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  4. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    Science.gov (United States)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  5. Rated-voltage enhancement by fast-breaking of the fault current for a resistive superconducting fault current limiter component

    International Nuclear Information System (INIS)

    Park, C.-R.; Kim, M.-J.; Yu, S.-D.; Yim, S.-W.; Kim, H.-R.; Hyun, O.-B.

    2010-01-01

    Performance of a resistive superconducting fault current limiter (SFCL) component is usually limited by the temperature rise associated with the energy input by the fault current during a fault. Therefore, it is expected that a short application of the fault current may enhance the power rating of the component. This can be accomplished by a combination of an HTS component and a mechanical switch. The fast switch (FS) developed recently enables the fault duration to be as short as 1/2 cycle after a fault. Various second-generation (2G) high temperature superconductors (HTS) and YBCO thin films have been tested. The relation between the rated voltage V and the fault duration time t was found to be V² ∼ t⁻¹. Based upon this relation, we predict that when the FS breaks the fault current within 1/2 cycle after a fault, the amount of HTS components required to build an SFCL can be reduced by as much as about 60% compared with breaking the fault current at three cycles.
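
    A back-of-envelope check of that scaling: if V² ∼ t⁻¹, a component's rated voltage grows as 1/√t, so the amount of HTS needed to hold a fixed system voltage scales as √t. Shortening the fault from three cycles to half a cycle then leaves about √(0.5/3) ≈ 41% of the original amount, i.e. a saving of roughly 60%, consistent with the figure quoted above:

      import math

      # Rated voltage of one component scales as 1/sqrt(t), so the HTS amount
      # needed for a fixed system voltage scales as sqrt(t).
      t_conventional = 3.0   # cycles, breaking at three cycles
      t_fast_switch  = 0.5   # cycles, breaking within half a cycle

      amount_ratio = math.sqrt(t_fast_switch / t_conventional)
      print(f"HTS amount reduced to {amount_ratio:.0%} of the original "
            f"(a saving of about {1 - amount_ratio:.0%})")   # ~41% remaining, ~59% saved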

  6. Architecting Fault-Tolerant Software Systems

    NARCIS (Netherlands)

    Sözer, Hasan

    2009-01-01

    The increasing size and complexity of software systems makes it hard to prevent or remove all possible faults. Faults that remain in the system can eventually lead to a system failure. Fault tolerance techniques are introduced for enabling systems to recover and continue operation when they are

  7. Biotelemetry data for golden eagles (Aquila chrysaetos) captured in coastal southern California, November 2014–February 2016

    Science.gov (United States)

    Tracey, Jeff A.; Madden, Melanie C.; Sebes, Jeremy B.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N.

    2016-04-21

    The status of golden eagles (Aquila chrysaetos) in coastal southern California is unclear. To address this knowledge gap, the U.S. Geological Survey (USGS) in collaboration with local, State, and other Federal agencies began a multi-year survey and tracking program of golden eagles to address questions regarding habitat use, movement behavior, nest occupancy, genetic population structure, and human impacts on eagles. Golden eagle trapping and tracking efforts began in October 2014 and continued until early March 2015. During the first trapping season that focused on San Diego County, we captured 13 golden eagles (8 females and 5 males). During the second trapping season that began in November 2015, we focused on trapping sites in San Diego, Orange, and western Riverside Counties. By February 23, 2016, we captured an additional 14 golden eagles (7 females and 7 males). In this report, biotelemetry data were collected between November 22, 2014, and February 23, 2016. The location data for eagles ranged as far north as San Luis Obispo, California, and as far south as La Paz, Baja California, Mexico.

  8. The relationship of near-surface active faulting to megathrust splay fault geometry in Prince William Sound, Alaska

    Science.gov (United States)

    Finn, S.; Liberty, L. M.; Haeussler, P. J.; Northrup, C.; Pratt, T. L.

    2010-12-01

    We interpret regionally extensive, active faults beneath Prince William Sound (PWS), Alaska, to be structurally linked to deeper megathrust splay faults, such as the one that ruptured in the 1964 M9.2 earthquake. Western PWS in particular is unique; the locations of active faulting offer insights into the transition at the southern terminus of the previously subducted Yakutat slab to Pacific plate subduction. Newly acquired high-resolution, marine seismic data show three seismic facies related to Holocene and older Quaternary to Tertiary strata. These sediments are cut by numerous high-angle normal faults in the hanging wall of the megathrust splay faults. Crustal-scale seismic reflection profiles show splay faults emerging from 20 km depth between the Yakutat block and North American crust and surfacing as the Hanning Bay and Patton Bay faults. A distinct boundary beneath Hinchinbrook Entrance marks a systematic fault trend change from N30E in southwestern PWS to N70E in northeastern PWS. The fault trend change underneath Hinchinbrook Entrance may occur gradually or abruptly, and there is evidence for similar deformation near the Montague Strait Entrance. Landward of surface expressions of the splay fault, we observe subsidence, faulting, and landslides that record deformation associated with the 1964 and older megathrust earthquakes. Surface exposures of Tertiary rocks throughout PWS along with new apatite-helium dates suggest long-term and regional uplift with localized, fault-controlled subsidence.

  9. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Full Text Available Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction. Thus, removal of a fault is performed after the fault is detected. In addition, the detection process and the correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources with respect to the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  10. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip.

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.
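
    The role of the effective stress principle mentioned above can be illustrated with a plain Coulomb check, slip when τ ≥ μ(σn − p): for the same shear and normal stress, the side of the gouge layer carrying the higher pore pressure loses its stability margin first. The numbers below are illustrative only, not values from the micromechanical model:

      # Coulomb stability margin under the effective stress principle.
      # Positive margin -> stable; zero or negative -> at/over failure.
      def coulomb_margin(tau_mpa, sigma_n_mpa, pore_p_mpa, mu=0.6, cohesion_mpa=0.0):
          return cohesion_mpa + mu * (sigma_n_mpa - pore_p_mpa) - tau_mpa

      tau, sigma_n = 14.0, 30.0   # shear and total normal stress on the gouge layer [MPa]
      for side, p in (("low-pressure side", 2.0), ("high-pressure side", 8.0)):
          print(side, "margin =", round(coulomb_margin(tau, sigma_n, p), 2), "MPa")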

  12. A Poisson-Fault Model for Testing Power Transformers in Service

    Directory of Open Access Journals (Sweden)

    Dengfu Zhao

    2014-01-01

    Full Text Available This paper presents a method for assessing the instant failure rate of a power transformer under different working conditions. The method can be applied to a dataset of a power transformer under periodic inspections and maintenance. We use a Poisson-fault model to describe failures of a power transformer. When investigating a Bayes estimate of the instant failure rate under the model, we find that the complexity of a classical method and of a Monte Carlo simulation is unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a quick algorithm for the Bayes estimate of the instant failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators. The simulation results reveal that the suggested algorithm is the quickest among the three candidates.
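
    The paper's filtered Bayes estimator is not reproduced here; as a hedged illustration of the simpler textbook case, a constant Poisson failure rate with a Gamma(α, β) prior has posterior mean (α + n)/(β + T) after n faults observed over an exposure time T:

      # Conjugate Gamma-Poisson posterior mean for a constant failure rate
      # (illustrative only; the paper addresses time-varying working conditions).
      def poisson_rate_posterior_mean(n_total, exposure_years, alpha=0.5, beta=1.0):
          return (alpha + n_total) / (beta + exposure_years)

      print(poisson_rate_posterior_mean(n_total=3, exposure_years=12.0))  # faults per year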

  13. Fault-tolerant reference generation for model predictive control with active diagnosis of elevator jamming faults

    NARCIS (Netherlands)

    Ferranti, L.; Wan, Y.; Keviczky, T.

    2018-01-01

    This paper focuses on the longitudinal control of an Airbus passenger aircraft in the presence of elevator jamming faults. In particular, in this paper, we address permanent and temporary actuator jamming faults using a novel reconfigurable fault-tolerant predictive control design. Due to their

  14. Fault Detection for Diesel Engine Actuator

    DEFF Research Database (Denmark)

    Blanke, M.; Bøgh, S.A.; Jørgensen, R.B.

    1994-01-01

    Feedback control systems are vulnerable to faults in control loop sensors and actuators, because feedback actions may cause abrupt responses and process damage when faults occur.

  15. Fault Severity Evaluation and Improvement Design for Mechanical Systems Using the Fault Injection Technique and Gini Concordance Measure

    Directory of Open Access Journals (Sweden)

    Jianing Wu

    2014-01-01

    Full Text Available A new fault injection and Gini concordance based method has been developed for fault severity analysis of multibody mechanical systems with respect to their dynamic properties. Fault tree analysis (FTA) is employed to roughly identify the faults that need to be considered. According to the constitution of the mechanical system, the dynamic properties can be obtained by solving equations into which many types of faults are injected using the fault injection technique. Then, the Gini concordance is used to measure the correspondence between the performance with faults and the performance under normal operation, thereby providing useful hints for severity ranking of subsystems in reliability design. One numerical example and a series of experiments are provided to illustrate the application of the new method. The results indicate that the proposed method can accurately model the faults and obtain correct information on fault severity. Some strategies are also proposed for reliability improvement of the spacecraft solar array.

  16. Synthesis of Fault-Tolerant Embedded Systems

    DEFF Research Database (Denmark)

    Eles, Petru; Izosimov, Viacheslav; Pop, Paul

    2008-01-01

    This work addresses the issue of design optimization for fault-tolerant hard real-time systems. In particular, our focus is on the handling of transient faults using both checkpointing with rollback recovery and active replication. Fault tolerant schedules are generated based on a conditional process graph representation. The formulated system synthesis approaches decide the assignment of fault-tolerance policies to processes, the optimal placement of checkpoints and the mapping of processes to processors, such that multiple transient faults are tolerated, transparency requirements

  17. Active fault diagnosis by controller modification

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    2010-01-01

    Two active fault diagnosis methods for additive or parametric faults are proposed. Both methods are based on controller reconfiguration rather than on requiring an exogenous excitation signal, as it is otherwise common in active fault diagnosis. For the first method, it is assumed that the system...... considered is controlled by an observer-based controller. The method is then based on a number of alternate observers, each designed to be sensitive to one or more additive faults. Periodically, the observer part of the controller is changed into the sequence of fault sensitive observers. This is done...... in a way that guarantees the continuity of transition and global stability using a recent result on observer parameterization. An illustrative example inspired by a field study of a drag racing vehicle is given. For the second method, an active fault diagnosis method for parametric faults is proposed...

  18. MULTISCALE DOCUMENTATION AND MONITORING OF L’AQUILA HISTORICAL CENTRE USING UAV PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    D. Dominici

    2017-05-01

    Full Text Available Nowadays geomatic techniques can guarantee not only a precise and accurate survey for the documentation of our historical heritage but also a solution to monitor its behaviour over time after, for example, a catastrophic event (earthquakes, landslides, etc.). Europe is trying to move towards harmonized actions to store information on cultural heritage (MIBAC with the ICCS forms, English Heritage with the MIDAS scheme, etc.), but it would be important to provide standardized methods in order to perform measuring operations to collect certified metric data. The final result could be a database to support the entire management of the cultural heritage and also a checklist of “what to do” and “when to do it”. The wide range of geomatic techniques provides many solutions to acquire, to organize and to manage data at a multiscale level: high resolution satellite images can provide information in a short time during the “early emergency” while UAV photogrammetry and laser scanning can provide digital high resolution 3D models of buildings, orthophotos of roofs and facades and so on. This paper presents some multiscale survey case studies using UAV photogrammetry: from a minor historical village (Aielli) to the centre of L’Aquila (Santa Maria di Collemaggio Church), from the post-emergency phase to now. This choice has been taken not only to present how geomatics is an effective science for modelling but also to present a complete and reliable way to perform conservation and/or restoration through precise monitoring techniques, as shown in the third case study.

  19. Multiscale Documentation and Monitoring of L'aquila Historical Centre Using Uav Photogrammetry

    Science.gov (United States)

    Dominici, D.; Alicandro, M.; Rosciano, E.; Massimi, V.

    2017-05-01

    Nowadays geomatic techniques can guarantee not only a precise and accurate survey for the documentation of our historical heritage but also a solution to monitor its behaviour over time after, for example, a catastrophic event (earthquakes, landslides, etc.). Europe is trying to move towards harmonized actions to store information on cultural heritage (MIBAC with the ICCS forms, English Heritage with the MIDAS scheme, etc.) but it would be important to provide standardized methods in order to perform measuring operations to collect certified metric data. The final result could be a database to support the entire management of the cultural heritage and also a checklist of "what to do" and "when to do it". The wide range of geomatic techniques provides many solutions to acquire, to organize and to manage data at a multiscale level: high resolution satellite images can provide information in a short time during the "early emergency" while UAV photogrammetry and laser scanning can provide digital high resolution 3D models of buildings, orthophotos of roofs and facades and so on. This paper presents some multiscale survey case studies using UAV photogrammetry: from a minor historical village (Aielli) to the centre of L'Aquila (Santa Maria di Collemaggio Church), from the post-emergency phase to now. This choice has been taken not only to present how geomatics is an effective science for modelling but also to present a complete and reliable way to perform conservation and/or restoration through precise monitoring techniques, as shown in the third case study.

  20. La sismicità del «campo fagliato» dell'Aterno (The seismicity of the Aterno "faulted field")

    Directory of Open Access Journals (Sweden)

    F. PERONACI

    1964-06-01

    Full Text Available After determining the epicentral coordinates, the origin time, the depth, and the travel-time curves (dromochrones) of the earthquake of June 24, 1958, at L'Aquila, the physical nature of the quake at its hypocentre is examined, thus completing a lifting scheme. The comparison between the seismic, geological, and tectonic nature of the Aterno valley, together with the results of the above-mentioned micro-seismic study, supplies a probable explanation of the faulted field.

  1. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
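
    The dynamically weighted estimator mentioned above combines weighted samples as Ê[f] = Σ wᵢ f(xᵢ) / Σ wᵢ. The sketch below illustrates that form with plain importance-sampling weights rather than SAMC output, purely as a toy example with assumed target and proposal distributions:

      import random, math

      random.seed(0)
      target_logpdf   = lambda x: -abs(x)            # unnormalized Laplace(0, 1) target
      proposal        = lambda: random.gauss(0.0, 3.0)   # N(0, 3^2) proposal
      proposal_logpdf = lambda x: -0.5 * (x / 3.0) ** 2 - math.log(3.0 * math.sqrt(2 * math.pi))

      xs = [proposal() for _ in range(20000)]
      ws = [math.exp(target_logpdf(x) - proposal_logpdf(x)) for x in xs]   # importance weights
      f  = lambda x: x * x

      # Self-normalized weighted estimate of E[f(X)] under the target.
      estimate = sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)
      print(estimate)   # E[X^2] under Laplace(0, 1) is 2, so the estimate should be close to 2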

  2. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Full Text Available Fault diagnosis in rotating machinery is significant to avoid serious accidents; thus, an accurate and timely diagnosis method is necessary. With the breakthroughs in deep learning algorithms, some intelligent methods, such as the deep belief network (DBN) and the deep convolutional neural network (DCNN), have been developed with satisfactory performance for machinery fault diagnosis. However, only a few of these methods consider properly dealing with the noise that exists in practical situations, and existing denoising methods require extensive professional experience. Accordingly, rethinking the fault diagnosis method based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. In this study, an integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE) is applied both to denoise random noise in the raw signals and to represent fault features for fault pattern diagnosis of rolling bearing and gearbox faults, and is trained in a greedy layer-wise fashion. Finally, the experimental validation demonstrates that the proposed method has better diagnosis accuracy than the DBN, particularly in the presence of noise, with an advantage of approximately 7% in fault diagnosis accuracy.

  3. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  4. Fault diagnostics of dynamic system operation using a fault tree based method

    International Nuclear Information System (INIS)

    Hurdle, E.E.; Bartlett, L.M.; Andrews, J.D.

    2009-01-01

    For conventional systems, their availability can be considerably improved by reducing the time taken to restore the system to the working state when faults occur. Fault identification can be a significant proportion of the time taken in the repair process. Having diagnosed the problem, the restoration of the system back to its fully functioning condition can then take place. This paper expands the capability of previous approaches to fault detection and identification using fault trees for application to dynamically changing systems. The technique has two phases. The first phase is modelling and preparation carried out offline. This gathers information on the effects that sub-system failure will have on the system performance. Causes of the sub-system failures are developed in the form of fault trees. The second phase is application. Sensors are installed on the system to provide information about current system performance from which the potential causes can be deduced. A simple system example is used to demonstrate the features of the method. To illustrate the potential for the method to deal with additional system complexity and redundancy, a section from an aircraft fuel system is used. A discussion of the results is provided.

  5. Thermal studies of a superconducting current limiter using Monte-Carlo method

    Science.gov (United States)

    Lévêque, J.; Rezzoug, A.

    1999-07-01

    Considering the increase of fault current levels in electrical networks, current limiters become very interesting. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without fault detection or an external command, they reduce the constraints supported by the electrical installations upstream of the fault. To avoid the destruction of the superconducting coil, the temperature must not exceed a certain value. Therefore the design of a superconducting coil requires the simultaneous resolution of an electrical equation and a thermal one. This paper deals with the resolution of this coupled problem by the Monte Carlo method. This method allows us to calculate the evolution of the resistance of the coil as well as the limitation current. Experimental results are compared with theoretical ones.

  6. Simulation of Co-Seismic Off-Fault Stress Effects: Influence of Fault Roughness and Pore Pressure Coupling

    Science.gov (United States)

    Fälth, B.; Lund, B.; Hökmark, H.

    2017-12-01

    Aiming at improved safety assessment of geological nuclear waste repositories, we use dynamic 3D earthquake simulations to estimate the potential for co-seismic off-fault distributed fracture slip. Our model comprises a 12.5 x 8.5 km strike-slip fault embedded in a full space continuum where we apply a homogeneous initial stress field. In the reference case (Case 1) the fault is planar and oriented optimally for slip, given the assumed stress field. To examine the potential impact of fault roughness, we also study cases where the fault surface has undulations with self-similar fractal properties. In both the planar and the undulated cases the fault has homogeneous frictional properties. In a set of ten rough fault models (Case 2), the fault friction is equal to that of Case 1, meaning that these models generate lower seismic moments than Case 1. In another set of ten rough fault models (Case 3), the fault dynamic friction is adjusted such that seismic moments on par with that of Case 1 are generated. For the propagation of the earthquake rupture we adopt the linear slip-weakening law and obtain Mw 6.4 in Case 1 and Case 3, and Mw 6.3 in Case 2 (35 % lower moment than Case 1). During rupture we monitor the off-fault stress evolution along the fault plane at 250 m distance and calculate the corresponding evolution of the Coulomb Failure Stress (CFS) on optimally oriented hypothetical fracture planes. For the stress-pore pressure coupling, we assume Skempton's coefficient B = 0.5 as a base case value, but also examine the sensitivity to variations of B. We observe the following: (I) The CFS values, and thus the potential for fracture slip, tend to increase with the distance from the hypocenter. This is in accordance with results by other authors. (II) The highest CFS values are generated by quasi-static stress concentrations around fault edges and around large scale fault bends, where we obtain values of the order of 10 MPa. (III) Locally, fault roughness may have a
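
    The Coulomb Failure Stress change monitored in these simulations can be written, under the undrained coupling assumption used here purely as an illustration, as ΔCFS = Δτ + μ(Δσn + ΔP) with ΔP = −B·Δσ_mean (tension-positive convention, so increased compression raises pore pressure). The inputs below are invented, not outputs of the model described above:

      # Coulomb Failure Stress change with Skempton pore-pressure coupling
      # (tension-positive sign convention; all stresses in MPa).
      def delta_cfs(d_tau_mpa, d_sigma_n_mpa, d_sigma_mean_mpa, mu=0.6, skempton_b=0.5):
          d_pore = -skempton_b * d_sigma_mean_mpa   # undrained pore-pressure change
          return d_tau_mpa + mu * (d_sigma_n_mpa + d_pore)

      # Example: 2 MPa more shear stress, 1 MPa extra clamping, 1.5 MPa more compressive mean stress.
      print(delta_cfs(d_tau_mpa=2.0, d_sigma_n_mpa=-1.0, d_sigma_mean_mpa=-1.5))  # 1.85 MPa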

  7. Fuzzy fault diagnosis system of MCFC

    Institute of Scientific and Technical Information of China (English)

    Wang Zhenlei; Qian Feng; Cao Guangyi

    2005-01-01

    A kind of fault diagnosis system for a molten carbonate fuel cell (MCFC) stack is proposed in this paper. It is composed of a fuzzy neural network (FNN) and a fault diagnosis element. The FNN is able to deal with the information of expert knowledge and experimental data efficiently. It also has the ability to approximate any smooth system. The FNN is used to identify the fault diagnosis model of the MCFC stack. The fuzzy fault decision element can diagnose the state of the MCFC generating system, normal or faulty, and can determine the type of fault based on the outputs of the FNN model and the MCFC system. Some simulation experiment results are demonstrated in this paper.

  8. 5 CFR 845.302 - Fault.

    Science.gov (United States)

    2010-01-01

    Title 5, Administrative Personnel; Federal Employees Retirement System, Debt Collection; Standards for Waiver of Overpayments; § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission that...

  9. 20 CFR 410.561b - Fault.

    Science.gov (United States)

    2010-04-01

    Title 20, Employees' Benefits; Social Security Administration; Federal Coal Mine Health and Safety Act of 1969, Title IV, Black Lung Benefits (1969- ); Payment of Benefits; § 410.561b Fault. Fault as used in "without fault" (see § 410...

  10. From tomographic images to fault heterogeneities

    Directory of Open Access Journals (Sweden)

    A. Amato

    1994-06-01

    Full Text Available Local Earthquake Tomography (LET) is a useful tool for imaging lateral heterogeneities in the upper crust. The pattern of P- and S-wave velocity anomalies, in relation to the seismicity distribution along active fault zones, can shed light on the existence of discrete seismogenic patches. Recent tomographic studies in well monitored seismic areas have shown that the regions with large seismic moment release generally correspond to high velocity zones (HVZs). In this paper, we discuss the relationship between the seismogenic behavior of faults and the velocity structure of fault zones as inferred from seismic tomography. First, we review some recent tomographic studies of active strike-slip faults. We show examples from different segments of the San Andreas fault system (Parkfield, Loma Prieta), where detailed studies have been carried out in recent years. We also show two applications of LET to thrust faults (Coalinga, Friuli). Then, we focus on the Irpinia normal fault zone (South-Central Italy), where a Ms = 6.9 earthquake occurred in 1980 and many thousands of aftershock travel time data are available. We find that earthquake hypocenters concentrate in HVZs, whereas low velocity zones (LVZs) appear to be relatively aseismic. The main HVZs along which the mainshock rupture has propagated may correspond to velocity-weakening fault regions, whereas the LVZs are probably related to weak materials undergoing stable slip (velocity strengthening). A correlation exists between this HVZ and the area with larger coseismic slip along the fault, according to both surface evidence (a fault scarp as high as 1 m) and strong ground motion waveform modeling. Smaller wavelength, low-velocity anomalies detected along the fault may be the expression of velocity-strengthening sections, where aseismic slip occurs. According to our results, the rupture at the nucleation depth (~10-12 km) is continuous for the whole fault length (~30 km), whereas at shallow depth

  11. Fault lubrication during earthquakes.

    Science.gov (United States)

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s⁻¹) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s⁻¹. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  12. Laboratory scale micro-seismic monitoring of rock faulting and injection-induced fault reactivation

    Science.gov (United States)

    Sarout, J.; Dautriat, J.; Esteban, L.; Lumley, D. E.; King, A.

    2017-12-01

    The South West Hub CCS project in Western Australia aims to evaluate the feasibility and impact of geosequestration of CO2 in the Lesueur sandstone formation. Part of this evaluation focuses on the feasibility and design of a robust passive seismic monitoring array. Micro-seismicity monitoring can be used to image the injected CO2 plume, or any geomechanical fracture/fault activity; and thus serve as an early warning system by measuring low-level (unfelt) seismicity that may precede potentially larger (felt) earthquakes. This paper describes laboratory deformation experiments replicating typical field scenarios of fluid injection in faulted reservoirs. Two pairs of cylindrical core specimens were recovered from the Harvey-1 well at depths of 1924 m and 2508 m. In each specimen a fault is first generated at the in situ stress, pore pressure and temperature by increasing the vertical stress beyond the peak in a triaxial stress vessel at CSIRO's Geomechanics & Geophysics Lab. The faulted specimen is then stabilized by decreasing the vertical stress. The freshly formed fault is subsequently reactivated by brine injection and increase of the pore pressure until slip occurs again. This second slip event is then controlled in displacement and allowed to develop for a few millimeters. The micro-seismic (MS) response of the rock during the initial fracturing and subsequent reactivation is monitored using an array of 16 ultrasonic sensors attached to the specimen's surface. The recorded MS events are relocated in space and time, and correlate well with the 3D X-ray CT images of the specimen obtained post-mortem. The time evolution of the structural changes induced within the triaxial stress vessel is therefore reliably inferred. The recorded MS activity shows that, as expected, the increase of the vertical stress beyond the peak led to an inclined shear fault. The injection of fluid and the resulting increase in pore pressure led first to a reactivation of the pre

  13. Final Technical Report: PV Fault Detection Tool.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Christian Birk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

  14. Layered Fault Management Architecture

    National Research Council Canada - National Science Library

    Sztipanovits, Janos

    2004-01-01

    ... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

  15. Distributed bearing fault diagnosis based on vibration analysis

    Science.gov (United States)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
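
    A minimal sketch (not the authors' code) of the envelope-spectrum step used to compare fault signatures: the Hilbert transform gives the signal envelope, whose spectrum exposes modulation at the fault frequency. The sampling rate, carrier and modulation frequencies below are invented for illustration.

        # Minimal envelope-spectrum sketch for vibration-based bearing diagnosis.
        import numpy as np
        from scipy.signal import hilbert

        def envelope_spectrum(signal, fs):
            """Return frequencies and amplitude spectrum of the signal envelope."""
            analytic = hilbert(signal)            # analytic signal
            envelope = np.abs(analytic)           # instantaneous amplitude
            envelope -= envelope.mean()           # remove DC component
            spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
            freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
            return freqs, spectrum

        # Synthetic example: 3 kHz carrier amplitude-modulated at a hypothetical
        # 107 Hz "fault" frequency.
        fs = 20_000
        t = np.arange(0, 1.0, 1.0 / fs)
        signal = (1 + 0.5 * np.cos(2 * np.pi * 107 * t)) * np.sin(2 * np.pi * 3000 * t)
        freqs, spec = envelope_spectrum(signal, fs)
        print("dominant envelope frequency: %.1f Hz" % freqs[np.argmax(spec)])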

  16. High stresses stored in fault zones: example of the Nojima fault (Japan)

    Science.gov (United States)

    Boullier, Anne-Marie; Robach, Odile; Ildefonse, Benoît; Barou, Fabrice; Mainprice, David; Ohtani, Tomoyuki; Fujimoto, Koichiro

    2018-04-01

    During the last decade pulverized rocks have been described on outcrops along large active faults and attributed to damage related to a propagating seismic rupture front. Questions remain concerning the maximal lateral distance from the fault plane and maximal depth for dynamic damage to be imprinted in rocks. In order to document these questions, a representative core sample of granodiorite located 51.3 m from the Nojima fault (Japan) that was drilled after the Hyogo-ken Nanbu (Kobe) earthquake is studied by using electron backscattered diffraction (EBSD) and high-resolution X-ray Laue microdiffraction. Although located outside of the Nojima damage fault zone and macroscopically undeformed, the sample shows pervasive microfractures and local fragmentation. These features are attributed to the first stage of seismic activity along the Nojima fault characterized by laumontite as the main sealing mineral. EBSD mapping was used in order to characterize the crystallographic orientation and deformation microstructures in the sample, and X-ray microdiffraction was used to measure elastic strain and residual stresses on each point of the mapped quartz grain. Both methods give consistent results on the crystallographic orientation and show small and short wavelength misorientations associated with laumontite-sealed microfractures and alignments of tiny fluid inclusions. Deformation microstructures in quartz are symptomatic of the semi-brittle faulting regime, in which low-temperature brittle plastic deformation and stress-driven dissolution-deposition processes occur conjointly. This deformation occurred at a 3.7-11.1 km depth interval as indicated by the laumontite stability domain. Residual stresses are calculated from deviatoric elastic strain tensor measured using X-ray Laue microdiffraction using the Hooke's law. The modal value of the von Mises stress distribution is at 100 MPa and the mean at 141 MPa. Such stress values are comparable to the peak strength of a

  17. High stresses stored in fault zones: example of the Nojima fault (Japan

    Directory of Open Access Journals (Sweden)

    A.-M. Boullier

    2018-04-01

    Full Text Available During the last decade pulverized rocks have been described on outcrops along large active faults and attributed to damage related to a propagating seismic rupture front. Questions remain concerning the maximal lateral distance from the fault plane and maximal depth for dynamic damage to be imprinted in rocks. In order to document these questions, a representative core sample of granodiorite located 51.3 m from the Nojima fault (Japan) that was drilled after the Hyogo-ken Nanbu (Kobe) earthquake is studied by using electron backscattered diffraction (EBSD) and high-resolution X-ray Laue microdiffraction. Although located outside of the Nojima damage fault zone and macroscopically undeformed, the sample shows pervasive microfractures and local fragmentation. These features are attributed to the first stage of seismic activity along the Nojima fault characterized by laumontite as the main sealing mineral. EBSD mapping was used in order to characterize the crystallographic orientation and deformation microstructures in the sample, and X-ray microdiffraction was used to measure elastic strain and residual stresses on each point of the mapped quartz grain. Both methods give consistent results on the crystallographic orientation and show small and short wavelength misorientations associated with laumontite-sealed microfractures and alignments of tiny fluid inclusions. Deformation microstructures in quartz are symptomatic of the semi-brittle faulting regime, in which low-temperature brittle plastic deformation and stress-driven dissolution-deposition processes occur conjointly. This deformation occurred at a 3.7–11.1 km depth interval as indicated by the laumontite stability domain. Residual stresses are calculated from the deviatoric elastic strain tensor measured using X-ray Laue microdiffraction and Hooke's law. The modal value of the von Mises stress distribution is at 100 MPa and the mean at 141 MPa. Such stress values are comparable to
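
    In standard notation (not reproduced from the paper), the quoted stress values follow from Hooke's law applied to the measured deviatoric elastic strain, followed by the von Mises equivalent stress:

        \sigma_{ij} = C_{ijkl}\,\varepsilon^{e}_{kl}, \qquad
        s_{ij} = \sigma_{ij} - \tfrac{1}{3}\,\sigma_{kk}\,\delta_{ij}, \qquad
        \sigma_{vM} = \sqrt{\tfrac{3}{2}\,s_{ij}\,s_{ij}}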

  18. Subaru FATS (fault tracking system)

    Science.gov (United States)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  19. Influence of mineralogy and microstructures on strain localization and fault zone architecture of the Alpine Fault, New Zealand

    Science.gov (United States)

    Ichiba, T.; Kaneki, S.; Hirono, T.; Oohashi, K.; Schuck, B.; Janssen, C.; Schleicher, A.; Toy, V.; Dresen, G.

    2017-12-01

    The Alpine Fault on New Zealand's South Island is an oblique, dextral strike-slip fault that accommodated the majority of displacement between the Pacific and the Australian Plates and presents the biggest seismic hazard in the region. Along its central segment, the hanging wall comprises greenschist and amphibolite facies Alpine Schists. Exhumation from 35 km depth, along a SE-dipping detachment, led to mylonitization which was subsequently overprinted by brittle deformation and finally resulted in the fault's 1 km wide damage zone. The geomechanical behavior of a fault is affected by the internal structure of its fault zone. Consequently, studying processes controlling fault zone architecture allows assessing the seismic hazard of a fault. Here we present the results of a combined microstructural (SEM and TEM), mineralogical (XRD) and geochemical (XRF) investigation of outcrop samples originating from several locations along the Alpine Fault, the aim of which is to evaluate the influence of mineralogical composition, alteration and pre-existing fabric on strain localization and to identify the controls on the fault zone architecture, particularly the locus of brittle deformation in P, T and t space. Field observations reveal that the fault's principal slip zone (PSZ) is either a thin (< 1 cm to < 7 cm) layered structure or a relatively thick (10s cm) package lacking a detectable macroscopic fabric. Lithological and related rheological contrasts are widely assumed to govern strain localization. However, our preliminary results suggest that qualitative mineralogical composition has only minor impact on fault zone architecture. Quantities of individual mineral phases differ markedly between fault damage zone and fault core at specific sites, but the quantitative composition of identical structural units such as the fault core is similar in all samples. This indicates that the degree of strain localization at the Alpine Fault might be controlled by small initial

  20. 5 CFR 831.1402 - Fault.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The fact...

  1. Fault tree analysis

    International Nuclear Information System (INIS)

    1981-09-01

    Suggestions are made concerning the method of fault tree analysis and the use of certain symbols in the examination of system failures. The purpose of the fault tree analysis is to find logical connections of component or subsystem failures leading to undesirable occurrences. The results of these examinations are part of the system assessment concerning operation and safety. The objectives of the analysis are: systematic identification of all possible failure combinations (causes) leading to a specific undesirable occurrence, and determination of reliability parameters such as the frequency of failure combinations, the frequency of the undesirable occurrence, or the non-availability of the system when required. The fault tree analysis provides a clear and reconstructable documentation of the examination. (orig./HP) [de
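
    As a toy illustration of the quantitative objective described above (not an example from the report), the probability of the undesirable top event can be computed from basic-event probabilities through AND and OR gates, assuming independent basic events; all component names and probabilities below are invented.

        # Toy fault-tree evaluation with independent basic events.
        import math

        def and_gate(*p):            # all inputs must fail
            return math.prod(p)

        def or_gate(*p):             # at least one input fails
            q = 1.0
            for pi in p:
                q *= (1.0 - pi)
            return 1.0 - q

        pump_a, pump_b = 1e-3, 1e-3          # redundant pumps (AND gate)
        valve, power = 5e-4, 2e-4            # single-point failures (OR gate)

        top_event = or_gate(and_gate(pump_a, pump_b), valve, power)
        print(f"top-event probability ~ {top_event:.2e}")   # ~ 7.0e-04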

  2. Fault tolerant operation of switched reluctance machine

    Science.gov (United States)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronics drives. Industry has witnessed tremendous growth in ASDS applications not only as a driving force but also as an electric auxiliary system for replacing bulky and low efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance especially for aerospace, automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of fault. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, the maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  3. Quaternary faulting in the Tatra Mountains, evidence from cave morphology and fault-slip analysis

    Directory of Open Access Journals (Sweden)

    Szczygieł Jacek

    2015-06-01

    Full Text Available Tectonically deformed cave passages in the Tatra Mts (Central Western Carpathians) indicate some fault activity during the Quaternary. Displacements occur in the youngest passages of the caves indicating (based on previous U-series dating of speleothems) an Eemian or younger age for those faults, and so one tectonic stage. On the basis of stress analysis and geomorphological observations, two different mechanisms are proposed as responsible for the development of these displacements. The first mechanism concerns faults that are located above the valley bottom and at a short distance from the surface, with fault planes oriented sub-parallel to the slopes. The radial, horizontal extension and vertical σ1, which is identical with gravity, indicate that these faults are the result of gravity sliding probably caused by relaxation after incision of valleys, and not directly from tectonic activity. The second mechanism is tilting of the Tatra Mts. The faults operated under WNW-ESE oriented extension with σ1 plunging steeply toward the west. Such a stress field led to normal dip-slip or oblique-slip displacements. The faults are located under the valley bottom and/or opposite or oblique to the slopes. The process involved the pre-existing weakest planes in the rock complex: (i) in massive limestone mostly faults and fractures, (ii) in thin-bedded limestone mostly inter-bedding planes. Thin-bedded limestones dipping steeply to the south are of particular interest. Tilting toward the N caused the hanging walls to move under the massif and not toward the valley, proving that the cause of these movements was tectonic activity and not gravity.

  4. Fault Diagnosis for Actuators in a Class of Nonlinear Systems Based on an Adaptive Fault Detection Observer

    Directory of Open Access Journals (Sweden)

    Runxia Guo

    2016-01-01

    Full Text Available The problem of actuators’ fault diagnosis is pursued for a class of nonlinear control systems that are affected by bounded measurement noise and external disturbances. A novel fault diagnosis algorithm has been proposed by combining the idea of adaptive control theory and the approach of fault detection observer. The asymptotic stability of the fault detection observer is guaranteed by setting the adaptive adjusting law of the unknown fault vector. A theoretically rigorous proof of asymptotic stability has been given. Under the condition that random measurement noise generated by the sensors of control systems and external disturbances exist simultaneously, the designed fault diagnosis algorithm is able to successfully give specific estimated values of state variables and failures rather than just giving a simple fault warning. Moreover, the proposed algorithm is very simple and concise and is easy to apply in practical engineering. Numerical experiments are carried out to evaluate the performance of the fault diagnosis algorithm. Experimental results show that the proposed diagnostic strategy has a satisfactory estimation effect.
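
    A minimal scalar sketch of the general idea (not the paper's algorithm, which treats a class of nonlinear systems with noise and disturbances): an observer tracks the measured output while an adaptive law integrates the output error to estimate a constant additive actuator fault. All parameters below are assumed for illustration.

        # Scalar adaptive fault-estimation observer (illustrative only).
        # Plant:      x' = a*x + b*(u + f),  y = x, with unknown constant fault f.
        # Observer:   xh' = a*xh + b*u + b*fh + l*(y - xh)
        # Adaptation: fh' = gamma * b * (y - xh)
        import numpy as np

        a, b = -1.0, 2.0          # assumed plant parameters
        l, gamma = 5.0, 20.0      # observer gain and adaptation rate (tuning choices)
        f_true = 0.8              # unknown actuator fault to be estimated
        dt, T = 1e-3, 10.0

        x = xh = fh = 0.0
        for k in range(int(T / dt)):
            u = np.sin(0.5 * k * dt)            # known control input
            e = x - xh                          # output estimation error (y = x)
            x += dt * (a * x + b * (u + f_true))
            xh += dt * (a * xh + b * u + b * fh + l * e)
            fh += dt * (gamma * b * e)          # adaptive fault estimate

        print(f"estimated fault {fh:.3f} vs true {f_true}")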

  5. Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)

    International Nuclear Information System (INIS)

    Fonti, Roberta; Barthel, Rainer; Formisano, Antonio; Borri, Antonio; Candela, Michele

    2015-01-01

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different “modes of damage” of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L’Aquila district is discussed

  6. Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)

    Energy Technology Data Exchange (ETDEWEB)

    Fonti, Roberta, E-mail: roberta.fonti@tum.de; Barthel, Rainer, E-mail: r.barthel@lrz.tu-muenchen.de [TUM University, Chair of Structural Design, Arcisstraße 21, 80333 Munich (Germany); Formisano, Antonio, E-mail: antoform@unina.it [University of Naples “Federico II”, DIST Department, P.le V. Tecchio, 80, 80125 Naples (Italy); Borri, Antonio, E-mail: antonio.borri@unipg.it [University of Perugia, Department of Engineering, Via G. Duranti 95, 06125 Perugia (Italy); Candela, Michele, E-mail: ing.mcandela@libero.it [University of Reggio Calabria, PAU Department, Salita Melissari 1, 89124 Reggio Calabria (Italy)

    2015-12-31

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different “modes of damage” of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L’Aquila district is discussed.

  7. Rubble masonry response under cyclic actions: The experience of L'Aquila city (Italy)

    Science.gov (United States)

    Fonti, Roberta; Barthel, Rainer; Formisano, Antonio; Borri, Antonio; Candela, Michele

    2015-12-01

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different "modes of damage" of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L'Aquila district is discussed.

  8. Movements and landscape use of Eastern Imperial Eagles Aquila heliaca in Central Asia

    Science.gov (United States)

    Poessel, Sharon; Bragin, Evgeny A.; Sharpe, Peter B.; Garcelon, David K.; Bartoszuk, Kordian; Katzner, Todd E.

    2018-01-01

    Capsule: We describe ecological factors associated with movements of a globally declining raptor species, the Eastern Imperial Eagle Aquila heliaca. Aims: To describe the movements, habitat associations and resource selection of Eastern Imperial Eagles marked in Central Asia. Methods: We used global positioning system (GPS) data sent via satellite telemetry devices deployed on Eastern Imperial Eagles captured in Kazakhstan to calculate distances travelled and to associate habitat and weather variables with eagle locations collected throughout the annual cycle. We also used resource selection models to evaluate habitat use of tracked birds during autumn migration. Separately, we used wing-tagging recovery data to broaden our understanding of wintering locations of eagles. Results: Eagles tagged in Kazakhstan wintered in most countries on the Arabian Peninsula, as well as Iran and India. The adult eagle we tracked travelled more efficiently than did the four pre-adults. During autumn migration, telemetered eagles used a mixture of vegetation types, but during winter and summer, they primarily used bare and sparsely vegetated areas. Finally, telemetered birds used orographic updrafts to subsidize their autumn migration flight, but they relied on thermal updrafts during spring migration. Conclusion: Our study is the first to use GPS telemetry to describe year-round movements and habitat associations of Eastern Imperial Eagles in Central Asia. Our findings provide insight into the ecology of this vulnerable raptor species that can contribute to conservation efforts on its behalf.

  9. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
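
    A minimal NumPy sketch of the event-based idea (not the code described above): an entire batch of particle histories is advanced at once, with free paths sampled vectorially, survival biasing applied at collisions, and Russian roulette removing low-weight particles. Cross sections and slab thickness are made-up values.

        # Vectorized (batch) Monte Carlo sketch: 1-D slab, survival biasing, roulette.
        import numpy as np

        rng = np.random.default_rng(0)
        sig_t, sig_a, thickness = 1.0, 0.3, 5.0   # total/absorption cross sections, slab width
        n = 100_000

        x = np.zeros(n)                  # positions
        mu = np.ones(n)                  # direction cosines (start moving forward)
        w = np.ones(n)                   # statistical weights
        alive = np.ones(n, dtype=bool)
        leaked = 0.0

        while alive.any():
            idx = np.flatnonzero(alive)
            s = -np.log(rng.random(idx.size)) / sig_t     # free-flight distances, all at once
            x[idx] += mu[idx] * s
            out = (x[idx] > thickness) | (x[idx] < 0.0)   # particles leaving the slab
            leaked += w[idx][out].sum()
            alive[idx[out]] = False
            col = idx[~out]                               # colliding particles
            w[col] *= 1.0 - sig_a / sig_t                 # survival biasing (no analog absorption)
            mu[col] = rng.uniform(-1.0, 1.0, col.size)    # isotropic re-scatter
            rr = col[w[col] < 0.1]                        # Russian roulette on low weights
            survive = rng.random(rr.size) < 0.5
            w[rr[survive]] /= 0.5
            alive[rr[~survive]] = False

        print("leakage probability ~", leaked / n)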

  10. Efficient fault-ride-through control strategy of DFIG-based wind turbines during the grid faults

    International Nuclear Information System (INIS)

    Mohammadi, J.; Afsharnia, S.; Vaez-Zadeh, S.

    2014-01-01

    Highlights: • A comparative review of DFIGs fault-ride-through improvement approaches is presented. • An efficient control strategy is proposed to improve the FRT capability of DFIG. • The rotor overcurrent, DC-link overvoltage and torque oscillations are decreased. • The RSC, DC-link capacitor and mechanical parts are kept safe during the grid faults. • The DFIG remains connected to the grid during the symmetrical and asymmetrical faults. - Abstract: As the penetration of wind power in the electrical power system increases, it is necessary that wind turbines remain connected to the grid and contribute to the system stability during and after the grid faults. This paper proposes an efficient control strategy to improve the fault ride through (FRT) capability of doubly fed induction generator (DFIG) during the symmetrical and asymmetrical grid faults. The proposed scheme consists of active and passive FRT compensators. The active compensation is carried out by determining the rotor current references to reduce the rotor overvoltages. The passive compensator is based on a rotor current limiter (RCL) that considerably reduces the rotor inrush currents at the instants of occurrence and clearing of grid faults with deep sags. By applying the proposed strategy, negative effects of the grid faults in the DFIG system, including the rotor overcurrents, electromagnetic torque oscillations and DC-link overvoltage, are decreased. The system simulation results confirm the effectiveness of the proposed control strategy

  11. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

  12. Structural analysis of cataclastic rock of active fault damage zones: An example from Nojima and Arima-Takatsuki fault zones (SW Japan)

    Science.gov (United States)

    Satsukawa, T.; Lin, A.

    2016-12-01

    Most of the large intraplate earthquakes, which occur as slip on mature active faults, induce serious damage in spite of their relatively small magnitudes compared to subduction-zone earthquakes. After the 1995 Kobe Mw 7.2 earthquake, a number of studies have been done to understand the structure, physical properties and dynamic phenomena of active faults. However, the deformation mechanics and related earthquake generating mechanism in intraplate active fault zones are still poorly understood. The detailed, multi-scalar structural analysis of faults and of fault rocks has to be the starting point for reconstructing the complex framework of brittle deformation. Here, we present two examples of active fault damage zones: the Nojima fault and the Arima-Takatsuki active fault zone in southwest Japan. We performed field investigations, combined with meso- and micro-structural analyses of fault-related rocks, which provide important information for reconstructing the long-term seismic faulting behavior and tectonic environment. Our study shows that at both sites the damage zone is over 10 m wide and is composed of host rocks, foliated and non-foliated cataclasites, fault gouge and fault breccia. The slickenside striations on the Asano fault, a splay fault of the Nojima fault, indicate a dextral movement sense with some normal component, whereas those of the Arima-Takatsuki active fault indicate dextral strike-slip with a minor vertical component. Fault gouges consist of a brown-gray matrix of fine grains and are composed of several layers from a few millimeters to a few decimeters thick. This implies that slip is repeated over millions of years, as the high concentration and physical interconnectivity of fine-grained minerals in brittle fault rocks produce the fault's intrinsic weakness in the crust. Therefore, faults rarely express only a single, discrete deformation episode, but are the cumulative result of several superimposed slip events.

  13. 20 CFR 255.11 - Fault.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than the...

  14. The timing of fault motion in Death Valley from Illite Age Analysis of fault gouge

    Science.gov (United States)

    Lynch, E. A.; Haines, S. H.; Van der Pluijm, B.

    2014-12-01

    We constrained the timing of fluid circulation and associated fault motion in the Death Valley region of the US Basin and Range Province from Illite Age Analysis (IAA) of fault gouge at seven Low-Angle Normal Fault (LANF) exposures in the Black Mountains and Panamint Mountains, and in two nearby areas. 40Ar/39Ar ages of neoformed, illitic clay minerals in these fault zones range from 2.8 Ma to 18.6 Ma, preserving asynchronous fault motion across the region that corresponds to an evolving history of crustal block movements during Neogene extensional deformation. From north to south, along the western side of the Panamint Range, the Mosaic Canyon fault yields an authigenic illite age of 16.9±2.9 Ma, the Emigrant fault has ages of less than 10-12 Ma at Tucki Mountain and Wildrose Canyon, and an age of 3.6±0.17 Ma was obtained for the Panamint Front Range LANF at South Park Canyon. Across Death Valley, along the western side of the Black Mountains, Ar ages of clay minerals are 3.2±3.9 Ma, 12.2±0.13 Ma and 2.8±0.45 Ma for the Amargosa Detachment, the Gregory Peak Fault and the Mormon Point Turtleback detachment, respectively. Complementary analysis of the δH composition of neoformed clays shows a primarily meteoric source for the mineralizing fluids in these LANF zones. The ages fall into two geologic timespans, reflecting activity pulses in the Middle Miocene and in the Upper Pliocene. Activity on both of the range front LANFs does not appear to be localized on any single portion of these fault systems. Middle Miocene fault rock ages of neoformed clays were also obtained in the Ruby Mountains (10.5±1.2 Ma) to the north of the Death Valley region and to the south in the Whipple Mountains (14.3±0.19 Ma). The presence of similar, bracketed times of activity indicate that LANFs in the Death Valley region were tectonically linked, while isotopic signatures indicate that faulting pulses involved surface fluid penetration.

  15. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  16. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems include a wider variety of fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. So, in the reliability evaluation of digital systems, the fault tolerant techniques (FTTs) and their fault coverage must be considered. To consider the effects of FTTs in a digital system, there have been several studies on reliability modeling of digital systems. Therefore, this research, based on a literature survey, attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and process characteristics and human errors. Sensitivity analysis is performed on the proposed model to identify the important variables affecting fault management coverage and unavailability.
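
    As a toy numerical illustration of why fault coverage matters (not the model developed in the paper), assume faults detected by the fault tolerant techniques (a fraction c) are repaired within MTTR hours, while undetected faults persist on average for half a periodic test interval; all numbers below are assumptions.

        # Toy mean-unavailability estimate as a function of fault coverage c.
        def unavailability(lam, c, mttr, test_interval):
            detected = c * lam * mttr                        # found quickly by FTTs
            undetected = (1 - c) * lam * test_interval / 2   # found only at the next test
            return detected + undetected

        lam = 1e-5                      # assumed failure rate per hour
        for c in (0.0, 0.9, 0.99):
            print(c, unavailability(lam, c, mttr=8.0, test_interval=4380.0))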

  17. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: 1) regeneration of stress irregularities in an earthquake; 2) the concentration of stress drop in an earthquake into asperities; 3) starting and stopping of earthquake slip at fault junctions; and 4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
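
    The quoted ratio follows from lithostatic pressure and a typical crustal shear modulus; with rough values (ours, not the author's),

        P \approx \rho g h \approx 2700\ \mathrm{kg\,m^{-3}} \times 9.8\ \mathrm{m\,s^{-2}} \times 10^{4}\ \mathrm{m} \approx 2.6\times10^{8}\ \mathrm{Pa},
        \qquad \frac{P}{\mu} \approx \frac{2.6\times10^{8}}{3\times10^{10}} \approx 0.01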

  18. Stress near geometrically complex strike-slip faults - Application to the San Andreas fault at Cajon Pass, southern California

    Science.gov (United States)

    Saucier, Francois; Humphreys, Eugene; Weldon, Ray, II

    1992-01-01

    A model is presented to rationalize the state of stress near a geometrically complex major strike-slip fault. Slip on such a fault creates residual stresses that, with the occurrence of several slip events, can dominate the stress field near the fault. The model is applied to the San Andreas fault near Cajon Pass. The results are consistent with the geological features, seismicity, the existence of left-lateral stress on the Cleghorn fault, and the in situ stress orientation in the scientific well, found to be sinistral when resolved on a plane parallel to the San Andreas fault. It is suggested that the creation of residual stresses caused by slip on a wiggly San Andreas fault is the dominant process there.

  19. Fault zone architecture of a major oblique-slip fault in the Rawil depression, Western Helvetic nappes, Switzerland

    Science.gov (United States)

    Gasser, D.; Mancktelow, N. S.

    2009-04-01

    The Helvetic nappes in the Swiss Alps form a classic fold-and-thrust belt related to overall NNW-directed transport. In western Switzerland, the plunge of nappe fold axes and the regional distribution of units define a broad depression, the Rawil depression, between the culminations of Aiguilles Rouge massif to the SW and Aar massif to the NE. A compilation of data from the literature establishes that, in addition to thrusts related to nappe stacking, the Rawil depression is cross-cut by four sets of brittle faults: (1) SW-NE striking normal faults that strike parallel to the regional fold axis trend, (2) NW-SE striking normal faults and joints that strike perpendicular to the regional fold axis trend, and (3) WNW-ESE striking normal plus dextral oblique-slip faults as well as (4) WSW-ENE striking normal plus dextral oblique-slip faults that both strike oblique to the regional fold axis trend. We studied in detail a beautifully exposed fault from set 3, the Rezli fault zone (RFZ) in the central Wildhorn nappe. The RFZ is a shallow to moderately-dipping (ca. 30-60˚) fault zone with an oblique-slip displacement vector, combining both dextral and normal components. It must have formed in approximately this orientation, because the local orientation of fold axes corresponds to the regional one, as does the generally vertical orientation of extensional joints and veins associated with the regional fault set 2. The fault zone crosscuts four different lithologies: limestone, intercalated marl and limestone, marl and sandstone, and it has a maximum horizontal dextral offset component of ~300 m and a maximum vertical normal offset component of ~200 m. Its internal architecture strongly depends on the lithology in which it developed. In the limestone, it consists of veins, stylolites, cataclasites and cemented gouge, in the intercalated marls and limestones of anastomosing shear zones, brittle fractures, veins and folds, in the marls of anastomosing shear zones, pressure

  20. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
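
    A minimal example of one of the sampling techniques listed (inversion): for an exponential distribution the cumulative distribution function inverts in closed form, so uniform random numbers map directly to samples. The rate parameter below is arbitrary.

        # Inversion sampling sketch: F(x) = 1 - exp(-lam*x)  =>  x = -ln(1-u)/lam.
        import numpy as np

        rng = np.random.default_rng(42)
        lam = 2.0
        u = rng.random(1_000_000)
        x = -np.log1p(-u) / lam          # inverse CDF applied to uniform variates

        print("sample mean %.4f vs theoretical %.4f" % (x.mean(), 1.0 / lam))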

  1. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  2. Methods for recognition and segmentation of active fault

    International Nuclear Information System (INIS)

    Hyun, Chang Hun; Noh, Myung Hyun; Lee, Kieh Hwa; Chang, Tae Woo; Kyung, Jai Bok; Kim, Ki Young

    2000-03-01

    In order to identify and segment active faults, the literature of structural geology, paleoseismology, and geophysical exploration was investigated. The existing structural geological criteria for segmenting active faults were examined. These are mostly based on normal fault systems; thus, additional criteria are required for application to other types of fault systems. The definition of the seismogenic fault, the characteristics of fault activity, the criteria and results of fault segmentation studies, the relationship between segmented fault length and maximum displacement, and the estimation of seismic risk of segmented faults were examined in paleoseismic studies. The earthquake history, such as the dynamic pattern of faults, the return period, and the magnitude of the maximum earthquake generated by fault activity, can be revealed by such studies. It is confirmed through various case studies that numerous geophysical exploration methods including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys have been efficiently applied to the recognition and segmentation of active faults

  3. A new Wolf-Rayet star and its circumstellar nebula in Aquila

    Science.gov (United States)

    Gvaramadze, V. V.; Kniazev, A. Y.; Hamann, W.-R.; Berdnikov, L. N.; Fabrika, S.; Valeev, A. F.

    2010-04-01

    We report the discovery of a new Wolf-Rayet star in Aquila via detection of its circumstellar nebula (reminiscent of ring nebulae associated with late WN stars) using the Spitzer Space Telescope archival data. Our spectroscopic follow-up of the central point source associated with the nebula showed that it is a WN7h star (we named it WR121b). We analysed the spectrum of WR121b by using the Potsdam Wolf-Rayet model atmospheres, obtaining a stellar temperature of ~50 kK. The stellar wind composition is dominated by helium with ~20 per cent of hydrogen. The stellar spectrum is highly reddened [E(B - V) = 2.85 mag]. Adopting an absolute magnitude of Mv = -5.7, the star has a luminosity of log L/Lsolar = 5.75 and a mass-loss rate of 10^-4.7 Msolar yr^-1, and resides at a distance of 6.3 kpc. We searched for a possible parent cluster of WR121b and found that this star is located at ~1° from the young star cluster embedded in the giant HII region W43 (containing a WN7+a/OB? star - WR121a). We also discovered a bow shock around the O9.5III star ALS9956, located at from the cluster. We discuss the possibility that WR121b and ALS9956 are runaway stars ejected from the cluster in W43. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).

  4. Distribution and nature of fault architecture in a layered sandstone and shale sequence: An example from the Moab fault, Utah

    Science.gov (United States)

    Davatzes, N.C.; Aydin, A.

    2005-01-01

    We examined the distribution of fault rock and damage zone structures in sandstone and shale along the Moab fault, a basin-scale normal fault with nearly 1 km (0.62 mi) of throw, in southeast Utah. We find that fault rock and damage zone structures vary along strike and dip. Variations are related to changes in fault geometry, fault slip, lithology, and the mechanism of faulting. In sandstone, we differentiated two structural assemblages: (1) deformation bands, zones of deformation bands, and polished slip surfaces and (2) joints, sheared joints, and breccia. These structural assemblages result from the deformation band-based mechanism and the joint-based mechanism, respectively. Along the Moab fault, where both types of structures are present, joint-based deformation is always younger. Where shale is juxtaposed against the fault, a third faulting mechanism, smearing of shale by ductile deformation and associated shale fault rocks, occurs. Based on the knowledge of these three mechanisms, we projected the distribution of their structural products in three dimensions along idealized fault surfaces and evaluated the potential effect on fluid and hydrocarbon flow. We contend that these mechanisms could be used to facilitate predictions of fault and damage zone structures and their permeability from limited data sets. Copyright © 2005 by The American Association of Petroleum Geologists.

  5. Advanced cloud fault tolerance system

    Science.gov (United States)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  6. Frequency of fault occurrence at shallow depths during Plio-Pleistocene and estimation of the incident of new faults

    International Nuclear Information System (INIS)

    Shiratsuchi, H.; Yoshida, S.

    2009-01-01

    It is required that buried high-level radioactive waste not be directly disrupted by faulting in the future. Although a disposal site will be selected in an area where no active faults are present, the possibility of new fault occurrence in the site has to be evaluated. The probability of new fault occurrence is estimated from the frequency of faults which exist in Pliocene and Pleistocene strata distributed beneath three large plains in Japan, where a large number of seismic profiles and borehole data are available. We estimated the frequency of faults that have occurred and/or reached shallow depths during Plio-Pleistocene time. The frequency of fault occurrence was estimated by counting the number of faults that exist in Plio-Pleistocene strata that are widely distributed in large plains in Japan. Three plains, the Kanto, Nobi and Osaka Plains, were selected for this purpose because highly precise geological profiles, prepared from numerous geological drillings and geophysical investigations, are available for them. (authors)

  7. Fault slip and earthquake recurrence along strike-slip faults — Contributions of high-resolution geomorphic data

    KAUST Repository

    Zielke, Olaf

    2015-01-01

    Understanding earthquake (EQ) recurrence relies on information about the timing and size of past EQ ruptures along a given fault. Knowledge of a fault's rupture history provides valuable information on its potential future behavior, enabling seismic hazard estimates and loss mitigation. Stratigraphic and geomorphic evidence of faulting is used to constrain the recurrence of surface rupturing EQs. Analysis of the latter data sets culminated during the mid-1980s in the formulation of now classical EQ recurrence models, now routinely used to assess seismic hazard. Within the last decade, Light Detection and Ranging (lidar) surveying technology and other high-resolution data sets became increasingly available to tectono-geomorphic studies, promising to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline requirements to successfully reconstruct a fault's offset accumulation pattern from geomorphic evidence. We address sources of uncertainty affecting offset measurement and advocate approaches to minimize them. A number of recent studies focus on single-EQ slip distributions and along-fault slip accumulation patterns. We put them in context with paleoseismic studies along the respective faults by comparing coefficients of variation CV for EQ inter-event time and slip-per-event and find that a) single-event offsets vary over a wide range of length-scales and the sources for offset variability differ with length-scale, b) at fault-segment length-scales, single-event offsets are essentially constant, c) along-fault offset accumulation as resolved in the geomorphic record is dominated by essentially same-size, large offset increments, and d) there is generally no one-to-one correlation between the offset accumulation pattern constrained in the geomorphic record and EQ occurrence as identified in the stratigraphic record, revealing the higher resolution and preservation potential of

  8. Subsidence and Fault Displacement Along the Long Point Fault Derived from Continuous GPS Observations (2012-2017)

    Science.gov (United States)

    Tsibanos, V.; Wang, G.

    2017-12-01

    The Long Point Fault, located in Houston, Texas, is a complex system of normal faults which causes significant damage to urban infrastructure on both private and public property. This case study focuses on the 20-km-long fault, using high-accuracy continuously operating global positioning system (GPS) stations to delineate fault movement over five years (2012-2017). The Long Point Fault is the longest active fault in the greater Houston area; it damages roads, buried pipes, concrete structures and buildings and creates a financial burden for the city of Houston and the residents who live in close vicinity to the fault trace. In order to monitor fault displacement along the surface, 11 permanent and continuously operating GPS stations were installed: 6 on the hanging wall and 5 on the footwall. This study is an overview of the GPS observations from 2013 to 2017. GPS positions were processed with both relative (double differencing) and absolute Precise Point Positioning (PPP) techniques. The PPP solutions, which are referred to the IGS08 reference frame, were transformed to the Stable Houston Reference Frame (SHRF16). Our results show no considerable horizontal displacements across the fault, but do show uneven vertical displacement attributed to regional subsidence in the range of 5-10 mm/yr. This subsidence can be associated with compaction of silty clays in the Chicot and Evangeline aquifers, whose water depths are approximately 50 m and 80 m below the land surface (bls). These levels are below the regional pre-consolidation head, which is about 30 to 40 m bls. Recent research indicates subsidence will continue to occur until the aquifer levels reach the pre-consolidation head. With further GPS observations, both the Long Point Fault and regional land subsidence can be monitored, providing important geological data to the Houston community.

  9. Two Trees: Migrating Fault Trees to Decision Trees for Real Time Fault Detection on International Space Station

    Science.gov (United States)

    Lee, Charles; Alena, Richard L.; Robinson, Peter

    2004-01-01

    We started from an ISS fault tree example and migrated it to decision trees, presenting a method to convert fault trees to decision trees. The method shows that visualization of the root cause of a fault becomes easier and that tree manipulation becomes more programmatic via available decision tree programs. The visualization of decision trees for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnostics, the status of the systems can be shown by running the signals through the trees and seeing where they stop. The other advantage of using decision trees is that the trees can learn fault patterns and predict future faults from historic data. The learning is not only on static data sets but can also be online: by accumulating real-time data sets, the decision trees can gain and store fault patterns and recognize them when they occur.
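
    A minimal sketch of the tail end of such a pipeline (not the ISS tool itself): once fault signatures are tabulated as labelled records, an off-the-shelf decision-tree learner can both classify incoming telemetry and be printed for inspection. Feature names, values and labels below are invented.

        # Learn and inspect a small diagnostic decision tree from labelled records.
        from sklearn.tree import DecisionTreeClassifier, export_text

        # columns: [bus_voltage_ok, pump_current_ok, temp_ok]  (1 = nominal, 0 = off-nominal)
        X = [[1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0], [0, 0, 1]]
        y = ["nominal", "power_fault", "pump_fault", "thermal_fault", "power_fault"]

        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(export_text(tree, feature_names=["bus_voltage_ok", "pump_current_ok", "temp_ok"]))
        print(tree.predict([[0, 1, 1]]))   # -> ['power_fault']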

  10. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  11. Fault Frictional Stability in a Nuclear Waste Repository

    Science.gov (United States)

    Orellana, Felipe; Violay, Marie; Scuderi, Marco; Collettini, Cristiano

    2016-04-01

    Exploitation of underground resources induces hydro-mechanical and chemical perturbations in the rock mass. In response to such disturbances, seismic events might occur, affecting the safety of the whole engineering system. The Mont Terri Rock Laboratory is an underground infrastructure in Switzerland devoted to the study of geological disposal of nuclear waste. The site is intersected by large fault zones about 0.8-3 m thick, and the host rock formation is a shale named Opalinus Clay (OPA). The mineralogy of OPA includes a high content of phyllosilicates (50%), quartz (25%), calcite (15%), and smaller proportions of siderite and pyrite. OPA is a stiff, low-permeability rock (2×10-18 m2), and its mechanical behaviour is strongly affected by the anisotropy induced by bedding planes. Evaluating fault stability and the associated fault slip behaviour (i.e. seismic vs. aseismic) is a major issue for ensuring the long-term safety and operation of the repository. Consequently, experiments devoted to understanding the frictional behaviour of OPA have been performed in the biaxial apparatus "BRAVA", recently developed at INGV. Simulated fault gouge obtained from intact OPA samples was deformed at different normal stresses (from 4 to 30 MPa), under dry and fluid-saturated conditions. To estimate frictional stability, the velocity dependence of friction was evaluated during velocity-step tests (1-300 μm/s). Slide-hold-slide tests (1-3000 s) were performed to measure the amount of frictional healing. The collected data were subsequently modelled with the Ruina slip-dependent formulation of the rate- and state-friction constitutive equations. To understand the deformation mechanism, the microstructures of the sheared gouge were analysed. At 7 MPa normal stress and under dry conditions, the friction coefficient decreased from a peak value of μpeak,dry = 0.57 to μss,dry = 0.50. Under fluid-saturated conditions and the same normal stress, the
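
    The modelling step mentioned above, fitting velocity steps with rate-and-state friction and the Ruina (slip) state evolution law, can be illustrated with a simple spring-slider forward model. The sketch below simulates a velocity step; all parameter values (a, b, Dc, stiffness, velocities) are placeholders, not the measured OPA gouge properties.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate-and-state parameters (not the experimental OPA values).
a, b, Dc = 0.010, 0.006, 10e-6      # direct effect, evolution effect, critical slip distance (m)
mu0, V0 = 0.55, 1e-6                # reference friction and reference velocity (m/s)
k = 1e3                             # spring stiffness normalised by normal stress (1/m), assumed

def rhs(t, y, v_lp):
    mu, theta = y
    # slip velocity from the rate-and-state constitutive law
    v = V0 * np.exp((mu - mu0 - b * np.log(V0 * theta / Dc)) / a)
    dmu = k * (v_lp - v)                                  # elastic loading by the testing machine
    dtheta = -(v * theta / Dc) * np.log(v * theta / Dc)   # Ruina slip law
    return [dmu, dtheta]

# Steady sliding at 1 um/s, then a step to 10 um/s (in the spirit of 1-300 um/s velocity steps).
y0 = [mu0, Dc / 1e-6]
seg1 = solve_ivp(rhs, (0, 50), y0, args=(1e-6,), method="LSODA", max_step=0.1)
seg2 = solve_ivp(rhs, (0, 20), seg1.y[:, -1], args=(10e-6,), method="LSODA", max_step=0.01)

print("steady-state friction after the step:", seg2.y[0, -1])
print("predicted change (a-b)*ln(V2/V1):    ", (a - b) * np.log(10.0))
```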

  12. Crustal Density Variation Along the San Andreas Fault Controls Its Secondary Faults Distribution and Dip Direction

    Science.gov (United States)

    Yang, H.; Moresi, L. N.

    2017-12-01

    The San Andreas fault forms a dominant component of the transform boundary between the Pacific and the North American plates. The density and strength of the complex accretionary margin are very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we use a 3D finite-element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model requires an extremely weak fault (low friction coefficient) and a Great Valley block whose lower crust is denser (by roughly 200 kg/m3) than that of the surrounding blocks. The Mojave block, in contrast, is inferred from other geophysical surveys to have lost its mafic lower crust. Our model indicates strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower-crustal material of the Great Valley tends to under-thrust beneath the Transverse Range near the big bend. This motion is likely to rotate the fault plane from its initial vertical orientation to a southwest dip. For the straight section north of the big bend, the fault remains nearly vertical. The geometry of the fault plane is consistent with field observations.

  13. Re-evaluating fault zone evolution, geometry, and slip rate along the restraining bend of the southern San Andreas Fault Zone

    Science.gov (United States)

    Blisniuk, K.; Fosdick, J. C.; Balco, G.; Stone, J. O.

    2017-12-01

    This study presents new multi-proxy data to provide an alternative interpretation of the late-to-mid Quaternary evolution, geometry, and slip rate of the southern San Andreas fault zone, comprising the Garnet Hill, Banning, and Mission Creek fault strands, along its restraining bend near the San Bernardino Mountains and San Gorgonio Pass. Present geologic and geomorphic studies in the region indicate that, as the Mission Creek and Banning faults diverge from one another in the southern Indio Hills, the Banning Fault Strand accommodates the majority of lateral displacement across the San Andreas Fault Zone. In this currently favored kinematic model of the southern San Andreas Fault Zone, slip along the Mission Creek Fault Strand decreases significantly northwestward toward the San Gorgonio Pass. Along this restraining bend, the Mission Creek Fault Strand is considered to have been inactive since the late-to-mid Quaternary (500-150 kya) due to the transfer of plate-boundary strain westward to the Banning and Garnet Hill Fault Strands and the San Jacinto Fault Zone, and northeastward to the Eastern California Shear Zone. Here, we present a revised geomorphic interpretation of fault displacement, initial 36Cl/10Be burial ages, sediment provenance data, and detrital geochronology from modern catchments and displaced Quaternary deposits that improve across-fault correlations. We hypothesize that continuous large-scale translation of this structure has occurred throughout its history into the present. Accordingly, the Mission Creek Fault Strand is active and likely a primary plate-boundary fault at this latitude.

  14. Young Researcher Meeting, L'Aquila 2015

    International Nuclear Information System (INIS)

    Agostini, F; Antolini, C; Bossa, M; Cattani, G; Dell'Oro, S; D'Angelo, M; Di Stefano, M; Fragione, G; Migliaccio, M; Pagnanini, L; Pietrobon, D; Pusceddu, E; Serra, M; Stellato, F

    2016-01-01

    The Young Researcher Meeting (www.yrmr.it) has been established as a forum for students, postdoctoral fellows and young researchers determined to play a proactive role in scientific progress. Since 2009, we have run itinerant, yearly meetings to discuss the most recent developments and achievements in physics, as we are firmly convinced that sharing expertise and experience is the foundation of research activity. One of the main purposes of the conference is to create an international network of young researchers, both experimentalists and theorists, and fruitful collaborations across the different branches of physics. The format we chose is an informal meeting primarily aimed at students and postdoctoral researchers at the beginning of their scientific careers, who are encouraged to present their work in brief presentations designed to engage the audience and foster cross-pollination of ideas. The sixth edition of the Young Researcher Meeting was held at the Gran Sasso Science Institute (GSSI), L'Aquila. The high number of valuable contributions gave rise to a busy program for a two-day conference on October 12th-13th. The event gathered 70 participants from institutes all around the world. The plenary talk sessions covered several areas of pure and applied physics, and they were complemented by an extremely rich and interactive poster session. This year's edition of the meeting also featured a lectio magistralis by Prof. E. Coccia, director of the GSSI, who discussed the frontiers of gravitational-wave physics, commemorating the International Year of Light on the centenary of Einstein's theory of general relativity. On October 14th, the participants in the conference took part in a guided tour of the Gran Sasso National Laboratories (LNGS), one of the major particle physics laboratories in the world. In this volume, we collect part of the contributions that were presented at the conference as either talks or

  15. Development of methods for evaluating active faults

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The report for long-term evaluation of active faults was published by the Headquarters for Earthquake Research Promotion in November 2010. After the occurrence of the 2011 Tohoku-oki earthquake, the safety review guide regarding the geology and ground of sites was revised by the Nuclear Safety Commission in March 2012 to incorporate scientific knowledge gained from the earthquake. The Nuclear Regulation Authority, established in September 2012, is newly planning the New Safety Design Standard related to Earthquakes and Tsunamis of Light Water Nuclear Power Reactor Facilities. With respect to those guides and standards, our investigations for developing methods of evaluating active faults are as follows: (1) For better evaluation of offshore fault activity, we proposed a workflow to date marine terraces (indicators of offshore fault activity) over the last 400,000 years. We also developed an analysis of fault-related folding for evaluating blind faults. (2) To clarify the activity of active faults lacking overlying strata, we carried out colour analysis of fault gouge and distinguished activity on time scales of thousands of years from tens of thousands of years. (3) To reduce uncertainties in fault activity and earthquake frequency, we compiled the survey data and their possible errors. (4) To improve seismic hazard analysis, we compiled the activity of the Yunotake and Itozawa faults, induced by the 2011 Tohoku-oki earthquake. (author)

  16. Methodology for Designing Fault-Protection Software

    Science.gov (United States)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented on the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of Symptoms and their mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis and a "bottom-up" functional failure-modes-and-effects analysis. Via this process, the mitigation and recovery strategies per Fault Containment Region scope (in width versus depth) the FP architecture.

  17. Fault detection and fault tolerant control of a smart base isolation system with magneto-rheological damper

    International Nuclear Information System (INIS)

    Wang, Han; Song, Gangbing

    2011-01-01

    Fault detection and isolation (FDI) in real-time systems can provide early warnings for faulty sensor and actuator signals to prevent events that lead to catastrophic failures. The main objective of this paper is to develop FDI and fault-tolerant control techniques for base isolation systems with magneto-rheological (MR) dampers. Thus, this paper presents a fixed-order FDI filter design procedure based on linear matrix inequalities (LMI). The necessary and sufficient conditions for the existence of a solution for detecting and isolating faults using the H∞ formulation are provided in the proposed filter design. Furthermore, an FDI-filter-based fuzzy fault tolerant controller (FFTC) for a base isolation structure model was designed to preserve the pre-specified performance of the system in the presence of various unknown faults. Simulation and experimental results demonstrated that the designed filter can successfully detect and isolate faults from displacement sensors and accelerometers while maintaining excellent performance of the base isolation technology under faulty conditions

  18. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
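
    The "direct simulation" idea referred to above can be illustrated with a toy neutron-transmission problem: sample exponential path lengths and collision outcomes until each history is absorbed, leaks, or escapes. The cross sections and slab thickness below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy one-speed slab problem with isotropic scattering (arbitrary parameters).
sigma_t, sigma_s = 1.0, 0.3      # total and scattering macroscopic cross sections (1/cm)
thickness = 5.0                  # slab thickness (cm)
n_histories = 100_000

transmitted = 0
for _ in range(n_histories):
    x, mu = 0.0, 1.0             # start at the left face, moving right
    while True:
        x += mu * rng.exponential(1.0 / sigma_t)   # sample distance to next collision
        if x < 0.0:              # leaked back out of the left face
            break
        if x > thickness:        # escaped through the right face
            transmitted += 1
            break
        if rng.random() < sigma_s / sigma_t:       # scattered: new isotropic direction
            mu = rng.uniform(-1.0, 1.0)
        else:                    # absorbed
            break

p = transmitted / n_histories
err = np.sqrt(p * (1 - p) / n_histories)
print(f"transmission probability: {p:.4f} +/- {err:.4f}")
```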

  19. Case-Based Fault Diagnostic System

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2014-01-01

    Nowadays, case-based fault diagnostic (CBFD) systems have become important and widely applied problem-solving technologies. They are based on the assumption that "similar faults have similar diagnoses". On the other hand, CBFD systems still suffer from some limitations. Common ones are: (1) failure of CBFD to produce a diagnosis for new faults that have no similar cases in the case library; (2) limited memory as the number of stored cases in the library increases. The proposed research incorporates a neural network into the case-based system to enable the system to diagnose all the faults. Neural networks have proved their success in classification and diagnosis problems. The suggested system uses the neural network to diagnose the new faults (cases) that cannot be diagnosed by the traditional CBR diagnostic system. In addition, the proposed system can use another neural network to control the addition and deletion of cases in the library, managing the number of cases stored. The suggested system improved the performance of the case-based fault diagnostic system when applied to a motor rolling bearing used as a case study.
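
    A minimal sketch of the hybrid idea follows: retrieve the most similar stored case, and fall back to a trained classifier when no stored case is similar enough. The features, labels, similarity threshold and the choice of an MLP are illustrative assumptions, not the system described in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical case library: vibration features (e.g. RMS, kurtosis) -> diagnosis label.
case_features = np.array([[0.5, 3.0], [0.6, 3.2], [2.0, 8.0], [2.2, 7.5]])
case_labels = ["healthy", "healthy", "outer-race defect", "outer-race defect"]

# Neural network trained on the same library, used when case retrieval fails.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(case_features, case_labels)

def diagnose(x, similarity_threshold=1.0):
    """Return (diagnosis, source): nearest stored case if close enough, else the network."""
    d = np.linalg.norm(case_features - np.asarray(x), axis=1)
    if d.min() <= similarity_threshold:
        return case_labels[int(d.argmin())], "case library"
    return net.predict([x])[0], "neural network"

print(diagnose([0.55, 3.1]))   # close to a stored case -> retrieved from the library
print(diagnose([5.0, 15.0]))   # no similar case -> handled by the neural network
```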

  20. Passive fault current limiting device

    Science.gov (United States)

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux density produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

  1. Probability intervals for the top event unavailability of fault trees

    International Nuclear Information System (INIS)

    Lee, Y.T.; Apostolakis, G.E.

    1976-06-01

    The evaluation of probabilities of rare events is of major importance in the quantitative assessment of the risk from large technological systems. In particular, for nuclear power plants the complexity of the systems, their high reliability and the lack of significant statistical records have led to the extensive use of logic diagrams in the estimation of low probabilities. The estimation of probability intervals for the probability of existence of the top event of a fault tree is examined. Given the uncertainties of the primary input data, a method is described for the evaluation of the first four moments of the top event occurrence probability. These moments are then used to estimate confidence bounds by several approaches which are based on standard inequalities (e.g., Tchebycheff, Cantelli, etc.) or on empirical distributions (the Johnson family). Several examples indicate that the Johnson family of distributions yields results which are in good agreement with those produced by Monte Carlo simulation
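
    The approach above can be illustrated by propagating uncertain basic-event probabilities through a small fault tree, computing the first two moments of the top-event probability, and comparing a one-sided Tchebycheff (Cantelli) bound with Monte Carlo percentiles. The tree structure and lognormal parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical 3-component tree: TOP = A OR (B AND C); basic-event
# unavailabilities are lognormally distributed (parameters are illustrative only).
qa = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
qb = rng.lognormal(mean=np.log(5e-3), sigma=0.7, size=n)
qc = rng.lognormal(mean=np.log(2e-3), sigma=0.7, size=n)
q_top = 1.0 - (1.0 - qa) * (1.0 - qb * qc)   # OR of A with (B AND C)

mean, std = q_top.mean(), q_top.std(ddof=1)

# One-sided Chebyshev (Cantelli) bound: P(Q > mean + k*std) <= 1 / (1 + k^2).
k = 3.0
print(f"mean = {mean:.3e}, std = {std:.3e}")
print(f"Cantelli bound at mean + {k}*std = {mean + k*std:.3e} (confidence >= {1 - 1/(1+k**2):.0%})")
print(f"Monte Carlo 95th percentile     = {np.quantile(q_top, 0.95):.3e}")
```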

  2. Neuroadaptive Fault-Tolerant Control of Nonlinear Systems Under Output Constraints and Actuation Faults.

    Science.gov (United States)

    Zhao, Kai; Song, Yongduan; Shen, Zhixi

    2018-02-01

    In this paper, a neuroadaptive fault-tolerant tracking control method is proposed for a class of time-delay pure-feedback systems in the presence of external disturbances and actuation faults. The proposed controller can achieve prescribed transient and steady-state performance, despite uncertain time delays and output constraints as well as actuation faults. By combining a tangent barrier Lyapunov-Krasovskii function with the dynamic surface control technique, the neural network unit in the developed control scheme is able to take its action from the very beginning and play its learning/approximating role safely during the entire system operational envelope, leading to enhanced control performance without the danger of violating compact set precondition. Furthermore, prescribed transient performance and output constraints are strictly ensured in the presence of nonaffine uncertainties, external disturbances, and undetectable actuation faults. The control strategy is also validated by numerical simulation.

  3. A low-angle detachment fault revealed: Three-dimensional images of the S-reflector fault zone along the Galicia passive margin

    Science.gov (United States)

    Schuba, C. Nur; Gray, Gary G.; Morgan, Julia K.; Sawyer, Dale S.; Shillington, Donna J.; Reston, Tim J.; Bull, Jonathan M.; Jordan, Brian E.

    2018-06-01

    A new 3-D seismic reflection volume over the Galicia margin continent-ocean transition zone provides an unprecedented view of the prominent S-reflector detachment fault that underlies the outer part of the margin. This volume images the fault's structure from breakaway to termination. The filtered time-structure map of the S-reflector shows coherent corrugations parallel to the expected paleo-extension directions with an average azimuth of 107°. These corrugations maintain their orientations, wavelengths and amplitudes where overlying faults sole into the S-reflector, suggesting that the parts of the detachment fault containing multiple crustal blocks may have slipped as discrete units during its late stages. Another interface above the S-reflector, here named S′, is identified and interpreted as the upper boundary of the fault zone associated with the detachment fault. This layer, named the S-interval, thickens by tens of meters from SE to NW in the direction of transport. Localized thick accumulations also occur near overlying fault intersections, suggesting either non-uniform fault rock production, or redistribution of fault rock during slip. These observations have important implications for understanding how detachment faults form and evolve over time. 3-D seismic reflection imaging has enabled unique insights into fault slip history, fault rock production and redistribution.

  4. Active fault detection in MIMO systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    The focus in this paper is on active fault detection (AFD) for MIMO systems with parametric faults. The problem of design of auxiliary inputs with respect to detection of parametric faults is investigated. An analysis of the design of auxiliary inputs is given based on analytic transfer functions from auxiliary input to residual outputs. The analysis is based on a singular value decomposition of these transfer functions. Based on this analysis, it is possible to design the auxiliary input as well as the associated residual vector with respect to every single parametric fault in the system such that it is possible to detect these faults.

  5. Computer aided construction of fault tree

    International Nuclear Information System (INIS)

    Kovacs, Z.

    1982-01-01

    Computer code CAT for the automatic construction of fault trees is briefly described. Code CAT makes simple modelling of components possible using decision tables; it accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with the programs PREP and KITT 1,2 for fault tree analysis. The efficiency of program CAT, and thus the accuracy and completeness of the fault trees constructed, depends significantly on the compilation and sophistication of the decision tables. Currently, program CAT is used in co-operation with the programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)

  6. Comparison of γ-ray intensity distribution around Hira fault with spatial pattern of major and/or sub fault system

    International Nuclear Information System (INIS)

    Nakanishi, Tatsuya; Mino, Kazuo; Ogasawara, Hiroshi; Katsura, Ikuo

    1999-01-01

    Major active faults generally consist of systems of fractures with various dimensions and contain a lot of ground water. Rn gas, moving with underground water, tends to accumulate along faults and emits γ-rays as it decays to Pb through Bi. Therefore, a number of works have shown that γ-ray intensity is generally high near the core of a major active fault, and the γ-ray survey is one of the effective methods for locating the core of a major active fault. However, near the tips of faults, a number of complicated sub-fault systems and corresponding complicated geological structures are often seen, and the relationship between the γ-ray intensity distribution and the fault systems there has not been well investigated. In order to investigate this relationship in an area near the tips of major faults, we carried out a γ-ray survey at about 1,100 sites in an area of about 2 km x 2 km containing the tips of two major right-lateral faults with significant thrust components. We also investigated the lineaments using the topographic map published in 1895, when artificial construction was seldom seen in the area and the natural topography can easily be recognized. In addition, we carried out a γ-ray survey in an area far from the fault tips for comparison with the results from the area containing the fault tips. Then: (1) we reconfirmed that in the middle of a major active fault, γ-ray intensity is high only in the limited area just adjacent to the core of the fault. (2) However, we found that near the tip of a major active fault, high γ-ray intensity occurs over a much wider area, with clear lineaments inferred to have developed in association with the movement of the major faults. (author)

  7. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    Science.gov (United States)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until they fail by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  8. Fault tectonics and earthquake hazards in parts of southern California. [penninsular ranges, Garlock fault, Salton Trough area, and western Mojave Desert

    Science.gov (United States)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.; Gazley, C., Jr.; Lamar, J. V.; Stratton, R. H.

    1976-01-01

    The author has identified the following significant results. Four previously unknown faults were discovered in basement terrane of the Peninsular Ranges. These have been named the San Ysidro Creek fault, Thing Valley fault, Canyon City fault, and Warren Canyon fault. In addition, fault gouge and breccia were recognized along the San Diego River fault. Study of features on Skylab imagery and review of geologic and seismic data suggest that the risk of a damaging earthquake is greater along the northwestern portion of the Elsinore fault than along the southeastern portion. Physiographic indicators of active faulting along the Garlock fault identifiable in Skylab imagery include scarps, linear ridges, shutter ridges, faceted ridges, linear valleys, undrained depressions and offset drainage. The following previously unrecognized fault segments are postulated for the Salton Trough area: (1) An extension of a previously known fault in the San Andreas fault set located southeast of the Salton Sea; (2) An extension of the active San Jacinto fault zone along a tonal change in cultivated fields across Mexicali Valley (the tonal change may represent different soil conditions along opposite sides of a fault). For the Skylab and LANDSAT images studied, pseudocolor transformations offer no advantages over the original images for the recognition of faults. Alluvial deposits of different ages, a marble unit and iron oxide gossans of the Mojave Mining District are more readily differentiated on images prepared from ratios of individual bands of the S-192 multispectral scanner data. The San Andreas fault was also made more distinct in the 8/2 and 9/2 band ratios by enhancement of vegetation differences on opposite sides of the fault. Preliminary analysis indicates a significant earth resources potential for the discrimination of soil and rock types, including mineral alteration zones. This application should be actively pursued.

  9. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  10. Structural setting and kinematics of Nubian fault system, SE Western Desert, Egypt: An example of multi-reactivated intraplate strike-slip faults

    Science.gov (United States)

    Sakran, Shawky; Said, Said Mohamed

    2018-02-01

    Detailed surface geological mapping and subsurface seismic interpretation have been integrated to unravel the structural style and kinematic history of the Nubian Fault System (NFS). The NFS consists of several E-W Principal Deformation Zones (PDZs) (e.g. Kalabsha fault). Each PDZ is defined by spectacular E-W, WNW and ENE dextral strike-slip faults, NNE sinistral strike-slip faults, NE to ENE folds, and NNW normal faults. Each fault zone has typical self-similar strike-slip architecture comprising multi-scale fault segments. Several multi-scale uplifts and basins were developed at the step-over zones between parallel strike-slip fault segments as a result of local extension or contraction. The NNE faults consist of right-stepping sinistral strike-slip fault segments (e.g. Sin El Kiddab fault). The NNE sinistral faults extend for long distances ranging from 30 to 100 kms and cut one or two E-W PDZs. Two nearly perpendicular strike-slip tectonic regimes are recognized in the NFS; an inactive E-W Late Cretaceous - Early Cenozoic dextral transpression and an active NNE sinistral shear.

  11. Common faults in turbines and applying neural networks in order to fault diagnostic by vibration analysis

    International Nuclear Information System (INIS)

    Masoudifar, M.; AghaAmini, M.

    2001-01-01

    Today, fault diagnosis of rotating machinery based on vibration analysis is an effective method for designing predictive maintenance programs. In this method, the vibration level of the turbines is monitored and, if it is higher than the allowable limit, the vibration data are analyzed and growing faults are detected. However, because of the high complexity of the monitored system, interpretation of the measured data is difficult. Therefore, designing fault diagnostic expert systems that capture the experts' technical experience and knowledge seems to be the best solution. In this paper, several common faults in turbines are first studied, and then it is explained how neural networks can be applied to interpret the vibration data for fault diagnosis.

  12. Three-Dimensional Growth of Flexural Slip Fault-Bend and Fault-Propagation Folds and Their Geomorphic Expression

    Directory of Open Access Journals (Sweden)

    Asdrúbal Bernal

    2018-03-01

    The three-dimensional growth of fault-related folds is known to be an important process during the development of compressive mountain belts. However, comparatively little is known concerning the manner in which fold growth is expressed in topographic relief and local drainage networks. Here we report results from a coupled kinematic and surface process model of fault-related folding. We consider flexural slip fault-bend and fault-propagation folds that grow in both the transport and strike directions, linked to a surface process model that includes bedrock channel development and hillslope diffusion. We investigate various modes of fold growth under identical surface process conditions and critically analyse their geomorphic expression. Fold growth results in the development of steep forelimbs and gentler, wider backlimbs resulting in asymmetric drainage basin development (smaller basins on forelimbs, larger basins on backlimbs). However, topographies developed above fault-propagation folds are more symmetric than those developed above fault-bend folds as a result of their different forelimb kinematics. In addition, the surface expression of fault-bend and fault-propagation folds depends both on the slip distribution along the fault and on the style of fold growth. When along-strike plunge is a result of slip events with gently decreasing slip towards the fault tips (with or without lateral propagation), large plunge-panel drainage networks are developed at the expense of backpanel (transport-opposing) and forepanel (transport-facing) drainage basins. In contrast, if the fold grows as a result of slip events with similar displacements along strike, plunge-panel drainage networks are poorly developed (or are transient features of early fold growth and restricted to lateral fold terminations), particularly when the number of propagation events is small. The absence of large-scale plunge-panel drainage networks in natural examples suggests that the

  13. Analysis of the fault geometry of a Cenozoic salt-related fault close to the D-1 well, Danish North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Roenoe Clausen, O.; Petersen, K.; Korstgaard, A.

    1995-12-31

    A normal detaching fault in the Norwegian-Danish Basin around the D-1 well (the D-1 fault) has been mapped using seismic sections. The fault has been analysed in detail by constructing backstripped, decompacted sections across the fault, contoured displacement diagrams along the fault, and vertical displacement maps. The results show that the listric D-1 fault follows the displacement patterns of blind normal faults. Deviations from the ideal displacement pattern are suggested to be caused by salt movements, which are the main driving mechanism for the faulting. Zechstein salt moves primarily from the hanging wall to the footwall and is superposed by later minor lateral flow beneath the footwall. Back-stripping of depth-converted and decompacted sections yields an estimate of the salt surface and the shape of the fault through time. This procedure then enables simple modelling of the hanging-wall deformation using a Chevron model with hanging-wall collapse along dipping surfaces. The modelling indicates that the fault followed the salt surface until the Middle Miocene, after which the offset on the fault may also have been accommodated along the Top Chalk surface. (au) 16 refs.

  14. Quantitative evaluation of fault coverage for digitalized systems in NPPs using simulated fault injection method

    International Nuclear Information System (INIS)

    Kim, Suk Joon

    2004-02-01

    Even though digital systems have numerous advantages, such as precise processing of data and enhanced calculation capability over conventional analog systems, there is a strong restriction on the application of digital systems to the safety systems in nuclear power plants (NPPs). This is because we do not fully understand the reliability of digital systems, and therefore we cannot guarantee their safety. But, as the need to introduce digital systems into safety systems in NPPs increases, the need for quantitative analysis of the safety of digital systems also increases. NPPs, which are quite conservative in terms of safety, require the reliability of digital systems to be proven before the systems are applied. Moreover, digital systems applied to NPPs are required to increase the overall safety of the NPPs. However, it is very difficult to evaluate the reliability of digital systems because they include complex fault-processing mechanisms at various levels of the system. Software is another obstacle in the reliability assessment of systems that require ultra-high reliability. In this work, the fault detection coverage of a digital system is evaluated using a simulated fault injection method. The target system is the Local Coincidence Logic (LCL) processor in the Digital Plant Protection System (DPPS). However, as it is difficult to model the LCL processor exactly as designed for evaluating the fault detection coverage, the LCL system has to be simplified. The simulations for evaluating the fault detection coverage of components are performed for two cases, and the failure rates of components are evaluated using MIL-HDBK-217F. Using these results, the fault detection coverage of the simplified LCL system is evaluated. In the experiments, heartbeat signals were emitted at regular intervals after executing the logic without a self-checking algorithm. When faults are injected into the simplified system, fault occurrence can be detected by
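
    The core of simulated fault injection is to inject random faults into a model of the logic and count how many are caught by the detection mechanism, giving a coverage estimate. The sketch below illustrates the idea on a toy 2-out-of-4 coincidence logic with a stuck-at fault model; it is not the LCL processor design or the detection mechanisms described above.

```python
import numpy as np

rng = np.random.default_rng(7)

def coincidence_logic(inputs, stuck_bit=None):
    """Toy 2-out-of-4 coincidence vote; optionally one input is stuck at 0 (the injected fault)."""
    x = list(inputs)
    if stuck_bit is not None:
        x[stuck_bit] = 0
    return sum(x) >= 2

n_injections, detected = 10_000, 0
for _ in range(n_injections):
    stuck = rng.integers(0, 4)                 # randomly chosen fault location
    # Compare faulty and fault-free outputs over a set of random test patterns;
    # a mismatch on any pattern counts as a detection (a crude stand-in for self-checking).
    patterns = rng.integers(0, 2, size=(20, 4))
    if any(coincidence_logic(p) != coincidence_logic(p, stuck) for p in patterns):
        detected += 1

coverage = detected / n_injections
print(f"estimated fault detection coverage: {coverage:.3f}")
```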

  15. Dynamic modeling of gearbox faults: A review

    Science.gov (United States)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearbox is widely used in industrial and military applications. Due to high service load, harsh operating conditions or inevitable fatigue, faults may develop in gears. If the gear faults cannot be detected early, the health will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure and consequently result in a safer operation and higher cost reduction. Recently, many studies have been done to develop gearbox dynamic models with faults aiming to understand gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics based gearbox fault modeling, detection and diagnosis. State-of-art and challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.

  16. Stability of fault submitted to fluid injections

    Science.gov (United States)

    Brantut, N.; Passelegue, F. X.; Mitchell, T. M.

    2017-12-01

    Elevated pore pressure can lead to slip reactivation on pre-existing fractures and faults when the Coulomb failure point is reached. From a static point of view, the reactivation of a fault subjected to a background stress (τ0) is a function of the peak strength of the fault, i.e. the quasi-static effective friction coefficient (µeff). However, this theory is valid only when the entire fault is affected by fluid pressure, which is not the case in nature or during human-induced seismicity. In this study, we present new results on the influence of the injection rate on the stability of faults. Experiments were conducted on a saw-cut sample of Westerly granite. The experimental fault was 8 cm long. Injections were conducted through a 2 mm diameter hole reaching the fault surface. Experiments were conducted at fluid-pressure injection rates spanning four orders of magnitude (from 1 MPa/minute to 1 GPa/minute), on a fault system subjected to 50 and 100 MPa confining pressure. Our results show that the peak fluid pressure leading to slip depends on the injection rate: the faster the injection rate, the larger the peak fluid pressure leading to instability. Wave-velocity surveys across the fault highlight that decreasing the injection rate increases the size of the fluid-pressure perturbation. Our results demonstrate that the stability of the fault is not only a function of the fluid pressure required to reach the failure criterion, but is mainly a function of the ratio between the length of the fault affected by fluid pressure and the total fault length. In addition, we show that the slip rate increases with the background effective stress and with the intensity of the fluid-pressure perturbation, i.e. with the excess shear stress acting on the part of the fault perturbed by fluid injection. Our results suggest that crustal faults can be reactivated by local high fluid overpressures. These results could explain the "large" magnitude human-induced earthquakes
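
    The static criterion referenced above gives a simple critical fluid pressure for reactivation. A minimal sketch follows; the stress and friction values are illustrative, not the experimental conditions of the study.

```python
# Static Coulomb reactivation: slip when tau0 >= mu_eff * (sigma_n - P),
# so the critical fluid pressure is P_crit = sigma_n - tau0 / mu_eff.
# Values below are illustrative placeholders.
sigma_n = 50.0   # normal stress on the fault (MPa)
tau0 = 20.0      # background shear stress (MPa)
mu_eff = 0.6     # quasi-static effective friction coefficient

p_crit = sigma_n - tau0 / mu_eff
print(f"critical fluid pressure for reactivation: {p_crit:.1f} MPa")
# The experiments above show that at fast injection rates the peak pressure at slip exceeds
# this static estimate, because only part of the fault is pressurised when instability starts.
```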

  17. Interseismic Strain Accumulation of the Gazikoy-Saros segment (Ganos fault) of the North Anatolian Fault Zone

    Science.gov (United States)

    Havazli, E.; Wdowinski, S.; Amelung, F.

    2017-12-01

    The North Anatolian Fault Zone (NAFZ) is one of the most active continental transform faults in the world. A westward-migrating earthquake sequence started in 1939 in Erzincan, and the last two events of this sequence occurred in 1999 in Izmit and Duzce, underscoring the importance of the NAFZ for the seismic hazard of the region. The NAFZ is a right-lateral strike-slip fault zone that exhibits slip rates of 14-30 mm/yr along its 1500 km length. East of the Marmara Sea, the NAFZ splits into two branches. The Gazikoy-Saros segment (Ganos Fault) is the westernmost onshore segment of the northern branch. The ENE-WSW-oriented Ganos Fault is seismically active. It produced a Ms 7.2 earthquake in 1912, which was followed by several large aftershocks, including Ms 6.3 and Ms 6.9 events. Since 1912, the Ganos Fault has not produced any significant earthquakes (M > 5), in contrast to its adjacent segments, which have produced 20 M > 5 earthquakes, including an M 6.7 event offshore in the Gulf of Saros. Interseismic strain accumulation along the Ganos Fault was previously assessed from sparse GPS measurements along a single transect perpendicular to the fault zone, suggesting a strain accumulation rate of 20-25 mm/yr. So far, InSAR studies based on C-band data have not produced conclusive results due to low coherence over the fault zone, which is highly vegetated. In this study, we present a detailed interseismic velocity map of the Ganos Fault zone derived from L-band InSAR observations. We use 21 ALOS PALSAR scenes acquired over a 5-year period, from 2007 to 2011. We processed the ALOS data using the PySAR software, which is the University of Miami version of the Small Baseline (SB) method. The L-band observations enabled us to overcome the coherence issue in the study area. Our initial results indicate a maximum velocity of 15 mm/yr across the fault zone. The high spatial resolution of the InSAR-based interseismic velocity map will enable us to better

  18. Fault diagnosis based on controller modification

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2015-01-01

    Detection and isolation of parametric faults in closed-loop systems is considered in this paper. A major problem is that a feedback controller will in general reduce the effect of variations in the system, including parametric faults, on the controlled output. Parametric faults can be detected and isolated using active methods, where an auxiliary input is applied. Using active methods for the diagnosis of parametric faults in closed-loop systems, the amplitude of the applied auxiliary input needs to be increased to be able to detect and isolate the faults in a reasonable ... Using the YJBK-parameterization (after Youla, Jabr, Bongiorno and Kucera) for the controller, it is possible to modify the feedback controller with a minor effect on the closed-loop performance in the fault-free case and at the same time optimize the detection and isolation in a faulty case. Controller modification in connection...

  19. A setup for active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2006-01-01

    A setup for active fault diagnosis (AFD) of parametric faults in dynamic systems is formulated in this paper. It is shown that it is possible to use the same setup for open loop systems, closed loop systems based on a nominal feedback controller, as well as for closed loop systems based on a reconfigured feedback controller. This will make the proposed AFD approach very useful in connection with fault tolerant control (FTC). The setup will make it possible to let the fault diagnosis part of the fault tolerant controller remain unchanged after a change in the feedback controller. The setup for AFD is based on the YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all stabilizing feedback controllers and the dual YJBK parameterization. It is shown that the AFD is based directly on the dual YJBK transfer function matrix. This matrix will be named the fault signature matrix when...

  20. Active fault diagnosis in closed-loop uncertain systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2006-01-01

    Fault diagnosis of parametric faults in closed-loop uncertain systems by using an auxiliary input vector is considered in this paper, i.e. active fault diagnosis (AFD). The active fault diagnosis is based directly on the so-called fault signature matrix, related to the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization. Conditions are given for exact detection and isolation of parametric faults in closed-loop uncertain systems.

  1. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the behaviour of the randomness of the various generation methods. To account for the weight function involved in Monte Carlo sampling, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical distribution law. Furthermore, some applications of the Monte Carlo method in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement has been obtained.
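
    The Metropolis step mentioned above is a short loop in practice. The sketch below samples an arbitrary one-dimensional Boltzmann-like weight with a symmetric random-walk proposal; the target function, step size and observable are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def weight(x):
    """Unnormalised target weight, here a double-well Boltzmann factor exp(-E(x))."""
    return np.exp(-(x**2 - 1.0)**2)

x, step, n = 0.0, 0.5, 100_000
samples, accepted = np.empty(n), 0
for i in range(n):
    trial = x + rng.uniform(-step, step)
    # Metropolis acceptance for a symmetric proposal:
    # always accept moves to higher weight, accept others with probability w(trial)/w(x).
    if rng.random() < weight(trial) / weight(x):
        x, accepted = trial, accepted + 1
    samples[i] = x

print(f"acceptance rate: {accepted / n:.2f}")
print(f"<x^2> estimated by Metropolis sampling: {np.mean(samples**2):.3f}")
```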

  2. Active fault diagnosis in closed-loop systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2005-01-01

    Active fault diagnosis (AFD) of parametric faults is considered in connection with closed loop feedback systems. AFD involves auxiliary signals applied on the closed loop system. A fault signature matrix is introduced in connection with AFD and it is shown that if a limited number of faults can...

  3. Automatic fault tracing of active faults in the Sutlej valley (NW-Himalayas, India)

    Science.gov (United States)

    Janda, C.; Faber, R.; Hager, C.; Grasemann, B.

    2003-04-01

    In the Sutlej Valley, the Lesser Himalayan Crystalline Sequence (LHCS) is actively extruding between the Munsiari Thrust (MT) at the base and the Karcham Normal Fault (KNF) at the top. Clear evidence for ongoing deformation includes brittle faults in Holocene lake deposits, hot spring activity near the faults and dramatically younger cooling ages within the LHCS (Vannay and Grasemann, 2001). Because these brittle fault zones obviously influence the morphology in the field, we developed a new method for automatically tracing the intersections of planar fault geometries with digital elevation models (Faber, 2002). Traditional mapping techniques use structure contours (i.e. lines or curves connecting points of equal elevation on a geological structure) in order to construct intersections of geological structures with topographic maps. However, even if the geological structure is approximated by a plane and the structure contours are therefore equally spaced lines, this technique is rather time consuming and inaccurate, because errors are cumulative. Drawing structure contours by hand also makes it impossible to slightly change the azimuth and dip direction of the favoured plane without redrawing everything from the beginning. However, small variations in the fault position, easily introduced by inaccuracies of measurement in the field or by small local variations in the trend and/or dip of the fault planes, can have large effects on the intersection with topography. The developed method allows intersections to be viewed interactively in 2D and 3D. An unlimited number of planes can be moved separately in three dimensions (translation and rotation), and intersections with the topography that probably follow morphological features can be mapped. Besides the increase in efficiency, this method underlines the shortcoming of classical lineament extraction, which ignores the dip of planar structures. Using this method, areas of active faulting influencing the morphology can be
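
    The core computation described above, intersecting a dipping plane with a digital elevation model, amounts to evaluating the plane's elevation over the grid and extracting the zero contour of the difference with the ground surface. A minimal sketch follows; the DEM is synthetic and the plane parameters are arbitrary, so this is only an illustration of the geometric idea, not the published tool.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic DEM (elevations in m) on a 1 km x 1 km grid; a real study would load a DEM raster.
x = np.linspace(0, 1000, 201)
y = np.linspace(0, 1000, 201)
X, Y = np.meshgrid(x, y)
dem = 1500 + 300 * np.exp(-((X - 500)**2 + (Y - 400)**2) / 2e5)   # a single hill

# Fault plane through (x0, y0, z0) with an assumed dip direction (azimuth) and dip angle.
x0, y0, z0 = 500.0, 500.0, 1600.0
dip_dir = np.radians(110.0)
grad = np.tan(np.radians(40.0))
# Elevation of the plane at each grid node: it loses height down the dip direction.
plane_z = z0 - grad * ((X - x0) * np.sin(dip_dir) + (Y - y0) * np.cos(dip_dir))

# The predicted fault trace is the zero contour of (plane elevation - ground elevation).
plt.contour(X, Y, plane_z - dem, levels=[0.0], colors="red")
plt.contour(X, Y, dem, levels=15, colors="grey", linewidths=0.5)
plt.gca().set_aspect("equal")
plt.title("Predicted fault trace (red) on synthetic topography")
plt.savefig("fault_trace.png", dpi=150)
```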

  4. Meteoric water in normal fault systems: Oxygen and hydrogen isotopic measurements on authigenic phases in brittle fault rocks

    Science.gov (United States)

    Haines, S. H.; Anderson, R.; Mulch, A.; Solum, J. G.; Valley, J. W.; van der Pluijm, B. A.

    2009-12-01

    The nature of fluid circulation systems in normal fault systems is fundamental to understanding fluid movement within the upper crust, and has important implications for the ongoing controversy about the strength of faults. Authigenic phases in clay gouges and fault breccias record the isotopic signature of the fluids they formed in equilibrium with, and can be used to understand the 'plumbing system' of brittle fault environments. We obtained paired oxygen and hydrogen isotopic measurements on authigenic illite and/or smectite in clay gouge from normal faults in two geologic environments: 1.) low-angle normal faults (Ruby Mountains detachment, NV; Badwater Turtleback, CA; Panamint range-front detachment, CA; Amargosa detachment, CA; Waterman Hills detachment, CA), and 2.) an intracratonic high-angle normal fault (Moab Fault, UT). All authigenic phases in these clay gouges are moderately light isotopically with respect to oxygen (illite δ18O = -2.0 to +11.5‰ SMOW; smectite δ18O = +3.6 and +17.9‰) and very light isotopically with respect to hydrogen (illite δD = -148 to -98‰ SMOW; smectite δD = -147 to -92‰). Fluid compositions calculated from the authigenic clays at temperatures of 50-130 °C (as indicated by clay mineralogy) indicate that both illite and smectite in normal fault clay gouge formed in the presence of near-pristine to moderately evolved meteoric fluids and that igneous or metamorphic fluids are not involved in clay gouge formation in these normal fault settings. We also obtained paired oxygen and hydrogen isotopic measurements on chlorites derived from footwall chlorite breccias in 4 low-angle normal fault detachment systems (Badwater and Mormon Point Turtlebacks, CA; the Chemehuevi detachment, CA; and the Buckskin-Rawhide detachment, AZ). All chlorites are isotopically light to moderately light with respect to oxygen (δ18O = +0.29 to +8.1‰ SMOW) and very light with respect to hydrogen (δD = -97 to -113‰) and indicate

  5. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  6. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances of computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is towards the understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
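
    The batch depletion idea described above can be illustrated with a toy calculation: run several independent stochastic "depletion" replicas that differ only in random seed, then estimate the mean and the overall statistical uncertainty of the final number density from the spread between replicas. The one-nuclide chain and noise model below are stand-ins for a real Monte Carlo/burnup coupling.

```python
import numpy as np

def toy_depletion(seed, steps=20, dt=1.0):
    """One 'replica': deplete a single nuclide with a reaction rate that carries
    Monte Carlo statistical noise (the noise model is purely illustrative)."""
    rng = np.random.default_rng(seed)
    n = 1.0                      # initial relative number density
    for _ in range(steps):
        rate = 0.05 * (1.0 + rng.normal(0.0, 0.02))   # noisy one-group reaction rate
        n *= np.exp(-rate * dt)                       # deterministic burnup step
    return n

# Batch depletion: independent replicas give both the estimate and its statistical error,
# capturing local noise and its propagation through the burnup steps.
replicas = np.array([toy_depletion(seed) for seed in range(20)])
mean = replicas.mean()
stderr = replicas.std(ddof=1) / np.sqrt(replicas.size)
print(f"final number density: {mean:.5f} +/- {stderr:.5f} (1-sigma, from {replicas.size} batches)")
```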

  7. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  8. Fault Locating, Prediction and Protection (FLPPS)

    Energy Technology Data Exchange (ETDEWEB)

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

    One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently with deregulation and competition that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned with three Phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

  9. Active fault tolerance control of a wind turbine system using an unknown input observer with an actuator fault

    Directory of Open Access Journals (Sweden)

    Li Shanzhi

    2018-03-01

    This paper proposes a fault tolerant control scheme based on an unknown input observer for a wind turbine system subject to an actuator fault and disturbance. Firstly, an unknown input observer for state estimation and fault detection is developed using a linear parameter varying model. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the gains of the unknown input observer are obtained. The convergence of the unknown input observer is also analysed with Lyapunov theory. Secondly, using the fault estimate, an active fault tolerant controller is applied to the wind turbine system. Finally, the proposed method is tested in simulation on a wind turbine benchmark with an actuator fault. The simulation results indicate that the proposed FTC scheme is effective.
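    A minimal sketch of the LMI step: a generic Luenberger-observer gain computed with CVXPY, not the paper's LPV unknown-input-observer design; the two-state system matrices are placeholders.

      import cvxpy as cp
      import numpy as np

      # Hypothetical 2-state plant (placeholder, not the wind-turbine model).
      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      C = np.array([[1.0, 0.0]])
      n, p = A.shape[0], C.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      Y = cp.Variable((n, p))
      eps = 1e-3
      # Lyapunov LMI for the observer error e' = (A - L C) e, with Y = P L.
      lmi = A.T @ P + P @ A - C.T @ Y.T - Y @ C
      prob = cp.Problem(cp.Minimize(0),
                        [P >> eps * np.eye(n), lmi << -eps * np.eye(n)])
      prob.solve(solver=cp.SCS)

      L = np.linalg.solve(P.value, Y.value)   # observer gain L = P^{-1} Y
      print("observer gain L:", L.ravel())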

  10. Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System

    Science.gov (United States)

    Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.

    2010-01-01

    The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.

  11. Aeromagnetic anomalies over faulted strata

    Science.gov (United States)

    Grauch, V.J.S.; Hudson, Mark R.

    2011-01-01

    High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

  12. Improved DFIG Capability during Asymmetrical Grid Faults

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede

    2015-01-01

    In the wind power application, different asymmetrical types of the grid fault can be categorized after the Y/d transformer, and the positive and negative components of a single-phase fault, phase-to-phase fault, and two-phase fault can be summarized. Due to the newly introduced negative and even...... the natural component of the Doubly-Fed Induction Generator (DFIG) stator flux during the fault period, their effects on the rotor voltage can be investigated. It is concluded that the phase-to-phase fault has the worst scenario due to its highest introduction of the negative stator flux. Afterwards......, the capability of a 2 MW DFIG to ride through asymmetrical grid faults can be estimated at the existing design of the power electronics converter. Finally, a control scheme aimed to improve the DFIG capability is proposed and the simulation results validate its feasibility....

  13. Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.

    Science.gov (United States)

    Wang, Tao; Xie, Wenfang; Zhang, Youmin

    2012-05-01

    In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under an assumption that the uncertainty bounds are known. Different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first one, which is developed to accommodate uncertainties and faults without utilizing exact bounds information. The stability of the overall control systems is proved by using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
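    A toy illustration of the adaptive idea (not the paper's Boeing 747 design): a scalar sliding-mode regulator whose switching gain adapts online, so no disturbance bound needs to be known in advance.

      import numpy as np

      # Scalar uncertain plant x_dot = u + d(t); the bound on d is NOT known,
      # so the switching gain k_hat is adapted online from |s|.
      dt, T = 1e-3, 5.0
      gamma, eta = 2.0, 0.1        # adaptation rate, extra robustness margin
      x, k_hat = 1.0, 0.0

      for k in range(int(T / dt)):
          d = 0.8 * np.sin(3.0 * k * dt)      # unknown disturbance / fault effect
          s = x                               # sliding variable (drive x to 0)
          u = -(k_hat + eta) * np.sign(s)     # switching control
          k_hat += gamma * abs(s) * dt        # adaptation law: k_hat_dot = gamma*|s|
          x += (u + d) * dt                   # Euler step of the plant

      print(f"final |x| = {abs(x):.3f}, adapted gain k_hat = {k_hat:.2f}")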

  14. From Colfiorito to L'Aquila Earthquake: learning from the past to communicating the risk of the present

    Science.gov (United States)

    Lanza, T.; Crescimbene, M.; La Longa, F.

    2012-04-01

    Italy is a country at risk of an impending earthquake in the near future. Very probably, as already happened in the 13 years between the last two important seismic events (Colfiorito 1997 - L'Aquila 2009), there will not be enough time to solve all the problems connected to seismic risk: first of all the corruption related to building policies; the lack of the money necessary to strengthen existing buildings, historical centres, monuments and masterpieces of art; the difficult relations of the institutions with the traditional media (newspapers, radio and TV) and, at the same time, the new media (web); and the difficulties for scientists to reach important results in the immediate future due to the lack of funding and, last but not least, to the conflicting relationships inside the scientific community itself. In this scenario, communication and education play a crucial role in minimizing the risk to the population. In the present work we reconsider the past with the intent of starting to trace a path for a future strategy of risk communication in which everybody involved, including the population, does their best to face the next emergency.

  15. The Evergreen basin and the role of the Silver Creek fault in the San Andreas fault system, San Francisco Bay region, California

    Science.gov (United States)

    Jachens, Robert C.; Wentworth, Carl M.; Graymer, Russell W.; Williams, Robert; Ponce, David A.; Mankinen, Edward A.; Stephenson, William J.; Langenheim, Victoria

    2017-01-01

    The Evergreen basin is a 40-km-long, 8-km-wide Cenozoic sedimentary basin that lies mostly concealed beneath the northeastern margin of the Santa Clara Valley near the south end of San Francisco Bay (California, USA). The basin is bounded on the northeast by the strike-slip Hayward fault and an approximately parallel subsurface fault that is structurally overlain by a set of west-verging reverse-oblique faults which form the present-day southeastward extension of the Hayward fault. It is bounded on the southwest by the Silver Creek fault, a largely dormant or abandoned fault that splays from the active southern Calaveras fault. We propose that the Evergreen basin formed as a strike-slip pull-apart basin in the right step from the Silver Creek fault to the Hayward fault during a time when the Silver Creek fault served as a segment of the main route by which slip was transferred from the central California San Andreas fault to the Hayward and other East Bay faults. The dimensions and shape of the Evergreen basin, together with palinspastic reconstructions of geologic and geophysical features surrounding it, suggest that during its lifetime, the Silver Creek fault transferred a significant portion of the ∼100 km of total offset accommodated by the Hayward fault, and of the 175 km of total San Andreas system offset thought to have been accommodated by the entire East Bay fault system. As shown previously, at ca. 1.5–2.5 Ma the Hayward-Calaveras connection changed from a right-step, releasing regime to a left-step, restraining regime, with the consequent effective abandonment of the Silver Creek fault. This reorganization was, perhaps, preceded by development of the previously proposed basin-bisecting Mount Misery fault, a fault that directly linked the southern end of the Hayward fault with the southern Calaveras fault during extinction of pull-apart activity. Historic seismicity indicates that slip below a depth of 5 km is mostly transferred from the Calaveras

  16. Paleoseismology of Sinistral-Slip Fault System, Focusing on the Mae Chan Fault, on the Shan Plateau, SE Asia.

    Science.gov (United States)

    Curtiss, E. R.; Weldon, R. J.; Wiwegwin, W.; Weldon, E. M.

    2017-12-01

    The Shan Plateau, which includes portions of Myanmar, China, Thailand, Laos, and Vietnam, lies between the dextral NS-trending Sagaing and SE-trending Red River faults and contains 14 active E-W sinistral-slip faults, including the Mae Chan Fault (MCF) in northern Thailand. The last ground-rupturing earthquake to occur on the broader sinistral fault system was the M6.8 Tarlay earthquake in Myanmar in March 2011 on the Nam Ma fault immediately north of the MCF; the last earthquake to occur on the MCF was a M4.0 in the 5th century that destroyed the entire city of Wiang Yonok (Morley et al., 2011). We report on a trenching study of the MCF, which is part of a broader study to create a regional seismic hazard map of the entire Shan Plateau. By studying the MCF, which appears to be representative of the sinistral faults and easy to work on, we hope to characterize both it and the other unstudied faults in the system. As part of a paleoseismology training course we dug two trenches at the Pa Tueng site on the MCF, within an offset river channel; the trenches exposed young sediment with abundant charcoal (dating in progress), cultural artifacts, and evidence for the last two (or three) ground-rupturing earthquakes on the fault. We hope to use the data from this site to narrow the recurrence interval, currently estimated to be 2,000-4,000 years, and the slip rate of 1-2 mm/year being developed at other sites on the fault. By extrapolating the data from the MCF to the other faults we will have a better understanding of the whole fault system. Once we have characterized the MCF, we plan to use geomorphic offsets and strain rates from regional GPS to estimate the relative activity of the other faults in this sinistral system.

  17. Elemental Geochemistry of Samples From Fault Segments of the San Andreas Fault Observatory at Depth (SAFOD) Drill Hole

    Science.gov (United States)

    Tourscher, S. N.; Schleicher, A. M.; van der Pluijm, B. A.; Warr, L. N.

    2006-12-01

    Elemental geochemistry of mudrock samples from phase 2 drilling of the San Andreas Fault Observatory at Depth (SAFOD) is presented from bore hole depths of 3066 m to 3169 m and from 3292 m to 3368 m, which contain a creeping section and main trace of the fault, respectively. In addition to preparation and analysis of whole rock sample, fault grains with neomineralized, polished surfaces were hand picked from well-washed whole rock samples, minimizing the potential contamination from drilling mud and steel shavings. The separated fractions were washed in deionized water, powdered using a mortar and pestle, and analyzed using an Inductively Coupled Plasma- Optical Emission Spectrometer for major and minor elements. Based on oxide data results, systematic differences in element concentrations are observed between the whole rock and fault rock. Two groupings of data points are distinguishable in the regions containing the main trace of the fault, a shallow part (3292- 3316 m) and a deeper section (3320-3368 m). Applying the isocon method, assuming Zr and Ti to be immobile elements in these samples, indicates a volume loss of more than 30 percent in the shallow part and about 23 percent in the deep part of the main trace. These changes are minimum estimates of fault-related volume loss, because the whole rock from drilling samples contains variable amount of fault rock as well. Minimum estimates for volume loss in the creeping section of the fault are more than 50 percent when using the isocon method, comparing whole rock to plucked fault rock. The majority of the volume loss in the fault rocks is due to the dissolution and loss of silica, potassium, aluminum, sodium and calcium, whereas (based on oxide data) the mineralized surfaces of fractures appear to be enriched in Fe and Mg. The large amount of element mobility within these fault traces suggests extensive circulation of hydrous fluids along fractures that was responsible for progressive dissolution and leaching
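    A back-of-the-envelope version of the isocon-style mass balance used above (after Grant, 1986), assuming Zr and Ti are immobile; the concentrations and densities below are hypothetical illustration values, not the SAFOD measurements:

      # volume factor = (C_immobile_wholerock / C_immobile_faultrock) * (rho_whole / rho_fault)
      whole_rock = {"Zr": 160.0, "TiO2": 0.60}   # hypothetical whole-rock values
      fault_rock = {"Zr": 230.0, "TiO2": 0.85}   # residually enriched if mass is lost
      rho_whole, rho_fault = 2.65, 2.60          # g/cm3 (assumed)

      for element in ("Zr", "TiO2"):
          mass_ratio = whole_rock[element] / fault_rock[element]   # M_fault / M_whole
          volume_ratio = mass_ratio * rho_whole / rho_fault        # V_fault / V_whole
          print(f"{element}: implied volume loss = {(1.0 - volume_ratio) * 100:.1f}%")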

  18. Fault Detection for Automotive Shock Absorber

    Science.gov (United States)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be of a multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using an experimentally validated commercial vehicle model. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
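    A sketch of the parameter-identification route under a deliberately simplified linear damper model F = c*v (the real semi-active damper is nonlinear); a multiplicative fault then shows up as a drift in the recursively estimated coefficient. All numbers are hypothetical, not the validated vehicle model of the study:

      import numpy as np

      # Recursive least squares tracking a damper coefficient c in F = c * v.
      rng = np.random.default_rng(1)
      c_true, fault_at, n = 1500.0, 500, 1000      # N.s/m, fault onset sample
      c_hat, P, lam = 0.0, 1e6, 0.995              # estimate, covariance, forgetting

      for k in range(n):
          v = rng.normal(0.0, 0.3)                          # measured velocity
          c_k = c_true * (0.6 if k >= fault_at else 1.0)    # 40% damping-loss fault
          F = c_k * v + rng.normal(0.0, 5.0)                # measured force
          # scalar RLS update with forgetting factor lam
          K = P * v / (lam + v * P * v)
          c_hat += K * (F - c_hat * v)
          P = (P - K * v * P) / lam
          if k in (fault_at - 1, n - 1):
              print(f"sample {k}: c_hat = {c_hat:.0f} N.s/m")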

  19. Review of fault diagnosis and fault-tolerant control for modular multilevel converter of HVDC

    DEFF Research Database (Denmark)

    Liu, Hui; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    This review focuses on faults in Modular Multilevel Converter (MMC) for use in high voltage direct current (HVDC) systems by analyzing the vulnerable spots and failure mechanism from device to system and illustrating the control & protection methods under failure condition. At the beginning......, several typical topologies of MMC-HVDC systems are presented. Then fault types such as capacitor voltage unbalance, unbalance between upper and lower arm voltage are analyzed and the corresponding fault detection and diagnosis approaches are explained. In addition, more attention is dedicated to control...

  20. Fault reactivation by fluid injection considering permeability evolution in fault-bordering damage zones

    Science.gov (United States)

    Yang, Z.; Yehya, A.; Rice, J. R.; Yin, J.

    2017-12-01

    Earthquakes can be induced by human activity involving fluid injection, e.g., as wastewater disposal from hydrocarbon production. The occurrence of such events is thought to be, mainly, due to the increase in pore pressure, which reduces the effective normal stress and hence the strength of a nearby fault. Change in subsurface stress around suitably oriented faults at near-critical stress states may also contribute. We focus on improving the modeling and prediction of the hydro-mechanical response due to fluid injection, considering the full poroelastic effects and not solely changes in pore pressure in a rigid host. Thus we address the changes in porosity and permeability of the medium due to the changes in the local volumetric strains. Our results also focus on including effects of the fault architecture (low permeability fault core and higher permeability bordering damage zones) on the pressure diffusion and the fault poroelastic response. Field studies of faults have provided a generally common description for the size of their bordering damage zones and how they evolve along their direction of propagation. Empirical laws, from a large number of such observations, describe their fracture density, width, permeability, etc. We use those laws and related data to construct our study cases. We show that the existence of high permeability damage zones facilitates pore-pressure diffusion and, in some cases, results in a sharp increase in pore-pressure at levels much deeper than the injection wells, because these regions act as conduits for fluid pressure changes. This eventually results in higher seismicity rates. By better understanding the mechanisms of nucleation of injection-induced seismicity, and better predicting the hydro-mechanical response of faults, we can assess methodologies and injection strategies to avoid risks of high magnitude seismic events. Microseismic events occurring after the start of injection are very important indications of when injection
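    One common closure (not necessarily the one adopted by the authors) for coupling volumetric strain to the transport properties of the damage zone is to update porosity from strain and then apply a Kozeny-Carman-type permeability law; a sketch with illustrative numbers:

      def porosity_from_strain(phi0, eps_v):
          # incompressible grains: phi = (phi0 + eps_v) / (1 + eps_v), dilation positive
          return (phi0 + eps_v) / (1.0 + eps_v)

      def kozeny_carman(k0, phi0, phi):
          # k = k0 * (phi/phi0)^3 * ((1 - phi0)/(1 - phi))^2
          return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

      k0, phi0 = 1e-16, 0.10       # assumed damage-zone reference permeability (m^2), porosity
      for eps_v in (-2e-3, 0.0, 2e-3, 5e-3):
          phi = porosity_from_strain(phi0, eps_v)
          print(f"eps_v = {eps_v:+.0e}: phi = {phi:.4f}, k = {kozeny_carman(k0, phi0, phi):.2e} m^2")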

  1. Geology of Maxwell Montes, Venus

    Science.gov (United States)

    Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.

    1984-01-01

    Maxwell Montes represent the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20 deg West for approximately 1000 km, and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.

  2. On the "stacking fault" in copper

    NARCIS (Netherlands)

    Fransens, J.R.; Pleiter, F

    2003-01-01

    The results of a perturbed gamma-gamma angular correlations experiment on In-111 implanted into a properly cut single crystal of copper show that the defect known in the literature as "stacking fault" is not a planar faulted loop but a stacking fault tetrahedron with a size of 10-50 Angstrom.

  3. A Game Theoretic Fault Detection Filter

    Science.gov (United States)

    Chung, Walter H.; Speyer, Jason L.

    1995-01-01

    The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H-infinity filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.

  4. Effects of Fault Displacement on Emplacement Drifts

    International Nuclear Information System (INIS)

    Duan, F.

    2000-01-01

    The purpose of this analysis is to evaluate potential effects of fault displacement on emplacement drifts, including drip shields and waste packages emplaced in emplacement drifts. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed in this analysis bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10⁻⁵ adopted for the preclosure period of the repository and also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing effects of fault displacement on emplacement drifts (CRWMS M and O 2000). The analysis will begin with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements encountered in underground structures such as buried oil and gas pipelines will be conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the analysis results, conclusions are made regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis will discuss loads which can be induced by fault displacement on emplacement drifts, drip shield and/or waste packages during the time period of postclosure

  5. Fault zone structure and kinematics from lidar, radar, and imagery: revealing new details along the creeping San Andreas Fault

    Science.gov (United States)

    DeLong, S.; Donnellan, A.; Pickering, A.

    2017-12-01

    Aseismic fault creep, coseismic fault displacement, distributed deformation, and the relative contribution of each have important bearing on infrastructure resilience, risk reduction, and the study of earthquake physics. Furthermore, the impact of interseismic fault creep in rupture propagation scenarios, and its impact and consequently on fault segmentation and maximum earthquake magnitudes, is poorly resolved in current rupture forecast models. The creeping section of the San Andreas Fault (SAF) in Central California is an outstanding area for establishing methodology for future scientific response to damaging earthquakes and for characterizing the fine details of crustal deformation. Here, we describe how data from airborne and terrestrial laser scanning, airborne interferometric radar (UAVSAR), and optical data from satellites and UAVs can be used to characterize rates and map patterns of deformation within fault zones of varying complexity and geomorphic expression. We are evaluating laser point cloud processing, photogrammetric structure from motion, radar interferometry, sub-pixel correlation, and other techniques to characterize the relative ability of each to measure crustal deformation in two and three dimensions through time. We are collecting new and synthesizing existing data from the zone of highest interseismic creep rates along the SAF where a transition from a single main fault trace to a 1-km wide extensional stepover occurs. In the stepover region, creep measurements from alignment arrays 100 meters long across the main fault trace reveal lower rates than those in adjacent, geomorphically simpler parts of the fault. This indicates that deformation is distributed across the en echelon subsidiary faults, by creep and/or stick-slip behavior. Our objectives are to better understand how deformation is partitioned across a fault damage zone, how it is accommodated in the shallow subsurface, and to better characterize the relative amounts of fault creep

  6. Coordinated Fault Tolerance for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.

  7. A Wideband Magnetoresistive Sensor for Monitoring Dynamic Fault Slip in Laboratory Fault Friction Experiments.

    Science.gov (United States)

    Kilgore, Brian D

    2017-12-02

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare earth magnet, and user-built application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of resonance signals commonly encountered with cantilever-type position sensor mounting, the wideband response (DC to ≈ 100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample make them attractive options for capturing high-speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  8. Strong paleoearthquakes along the Talas-Fergana Fault, Kyrgyzstan

    Directory of Open Access Journals (Sweden)

    A.M. Korzhenkov

    2014-02-01

    The Talas-Fergana Fault, the largest strike-slip structure in Central Asia, forms an obliquely oriented boundary between the northeastern and southwestern parts of the Tianshan mountain belt. The fault underwent active right-lateral strike-slip during the Paleozoic, with right-lateral movements being rejuvenated in the Late Cenozoic. Tectonic movements along the intracontinental strike-slip faults help absorb part of the regional crustal shortening linked to the India-Eurasia collision; knowledge of strike-slip motions along the Talas-Fergana Fault is necessary for a complete assessment of the active deformation of the Tianshan orogen. To improve our understanding of the intracontinental deformation of the Tianshan mountain belt and the occurrence of strong earthquakes along the whole length of the Talas-Fergana Fault, we identify features of relief arising during strong paleoearthquakes along the Talas-Fergana Fault, fault segmentation, the length of seismogenic ruptures, and the energy and age of ancient catastrophes. We show that during neotectonic time the fault developed as a dextral strike-slip fault, with possible dextral displacements spreading to secondary fault planes north of the main fault trace. We determine rates of Holocene and Late Pleistocene dextral movements, and our radiocarbon dating indicates tens of strong earthquakes occurring along the fault zone during an interval of 15800 years. The recurrence of strong earthquakes along the Talas-Fergana Fault zone during the second half of the Holocene is about 300 years. The next strong earthquake along the fault will most probably occur along its southeastern chain during the next several decades. Seismotectonic deformation parameters indicate that M > 7 earthquakes with oscillation intensity I > IX have occurred.

  9. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    Science.gov (United States)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (about 1 Hz and below). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation by parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, potentially important for seismic hazard estimation.
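    A sketch of one ingredient named above, band-limited self-similar (fractal) fault roughness generated by spectral synthesis; the Hurst exponent, amplitude scaling and grid are illustrative choices, not the study's parameters:

      import numpy as np

      def rough_profile(n=2048, dx=20.0, hurst=0.8, min_wavelength=100.0,
                        amp_to_length=1e-3, seed=0):
          # 1-D self-similar roughness: power ~ k^-(1+2H), band-limited to
          # wavelengths >= min_wavelength, scaled so the RMS height is
          # amp_to_length times the profile length.
          rng = np.random.default_rng(seed)
          k = np.fft.rfftfreq(n, d=dx)
          amp = np.zeros_like(k)
          band = (k > 0) & (k <= 1.0 / min_wavelength)
          amp[band] = k[band] ** (-(1.0 + 2.0 * hurst) / 2.0)
          phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, k.size))
          z = np.fft.irfft(amp * phases, n=n)
          return z * amp_to_length * (n * dx) / np.std(z)

      z = rough_profile()
      print(f"RMS roughness {np.std(z):.1f} m over a {2048 * 20.0 / 1e3:.1f} km profile")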

  10. Quaternary fault in Hwalseong-ri, Oedong-up, Gyeongju, Korea.

    Energy Technology Data Exchange (ETDEWEB)

    Ryoo, Chung-Ryul; Chwae, Uee-Chan; Choi, Sung-Ja [Korea Institute of Geoscience and Mineral Resources, Taejeon(Korea); Son, Moon [Pusan National University, Pusan(Korea)

    2001-09-01

    We describe a Quaternary fault occurring in Hwalseong-ri, Oedong-up, Gyeongju, in the eastern part of the Ulsan Fault Zone, Korea. This fault (Hwalseongri Fault) is developed around the contact between the early Tertiary granite and the Quaternary gravel layer. Four different faults are distinguished from west to east: (1) a fault within the Quaternary gravel layer, (2) a fault between the Quaternary gravel layer and granite, (3) a fault between the Quaternary gravel layer overlying granite and the granite, (4) a fault between granite and the Quaternary layer. The general strike of the fault zone varies from NNW to NE, dipping east. Two striations, E-W and N-S, are developed; the former is related mainly to reverse faulting, and the latter to sinistral shearing. This fault zone was reactivated and is interpreted as a positive flower structure, resulting mainly from E-W compression in the southeastern part of the Korean Peninsula during the Quaternary. (author). 45 refs., 6 figs.

  11. Application of fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, A.

    2007-11-30

    This report presents the results of a study commissioned by the Department for Business, Enterprise and Industry (BERR; formerly the Department of Trade and Industry) into the application of fault current limiters in the UK. The study reviewed the current state of fault current limiter (FCL) technology and regulatory position in relation to all types of current limiters. It identified significant research and development work with respect to medium voltage FCLs and a move to high voltage. Appropriate FCL technologies being developed include: solid state breakers; superconducting FCLs (including superconducting transformers); magnetic FCLs; and active network controllers. Commercialisation of these products depends on successful field tests and experience, plus material development in the case of high temperature superconducting FCL technologies. The report describes FCL techniques, the current state of FCL technologies, practical applications and future outlook for FCL technologies, distribution fault level analysis and an outline methodology for assessing the materiality of the fault level problem. A roadmap is presented that provides an 'action agenda' to advance the fault level issues associated with low carbon networks.

  12. Using marine magnetic survey data to identify a gold ore-controlling fault: a case study in Sanshandao fault, eastern China

    Science.gov (United States)

    Yan, Jiayong; Wang, Zhihui; Wang, Jinhui; Song, Jianhua

    2018-06-01

    The Jiaodong Peninsula has the greatest concentration of gold ore in China and is characterized by altered tectonite-type gold ore deposits. This type of gold deposit is mainly formed in fracture zones and is strictly controlled by faults. Three major ore-controlling faults occur in the Jiaodong Peninsula—the Jiaojia, Zhaoping and Sanshandao faults; the former two are located on land and the latter is located near Sanshandao and its adjacent offshore area. The discovery of the world’s largest marine gold deposit in northeastern Sanshandao indicates that the shallow offshore area has great potential for gold prospecting. However, as two ends of the Sanshandao fault extend to the Bohai Sea, conventional geological survey methods cannot determine the distribution of the fault and this is constraining the discovery of new gold deposits. To explore the southwestward extension of the Sanshandao fault, we performed a 1:25 000 scale marine magnetic survey in this region and obtained high-quality magnetic survey data covering 170 km2. Multi-scale edge detection and three-dimensional inversion of magnetic anomalies identify the characteristics of the southwestward extension of the Sanshandao fault and the three-dimensional distribution of the main lithologies, providing significant evidence for the deployment of marine gold deposit prospecting in the southern segment of the Sanshandao fault. Moreover, three other faults were identified in the study area and faults F2 and F4 are inferred as ore-controlling faults: there may exist other altered tectonite-type gold ore deposits along these two faults.

  13. FUZZY FAULT DETECTION FOR PERMANENT MAGNET SYNCHRONOUS GENERATOR

    Directory of Open Access Journals (Sweden)

    N. Selvaganesan

    2011-07-01

    Faults in engineering systems are difficult to avoid and may result in serious consequences. Effective fault detection and diagnosis can improve system reliability and avoid expensive maintenance. In this paper a fuzzy-system-based fault detection scheme for a permanent magnet synchronous generator is proposed. The sequence current components, namely the positive and negative sequence currents, are used as fault indicators and given as inputs to the fuzzy fault detector. A fuzzy inference system is created and its rule base is evaluated, relating the sequence current components to the type of fault. These rules are fired for specific changes in the sequence current components and the faults are detected. The feasibility of the proposed scheme for a permanent magnet synchronous generator is demonstrated for different types of fault under various operating conditions using MATLAB/Simulink.
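    The fault indicators above are the symmetrical (sequence) components; a minimal sketch of the Fortescue transform that extracts them from three phase-current phasors (the unbalanced phasors below are made up for illustration):

      import numpy as np

      a = np.exp(2j * np.pi / 3.0)          # 120-degree rotation operator

      def sequence_components(ia, ib, ic):
          i1 = (ia + a * ib + a ** 2 * ic) / 3.0    # positive sequence
          i2 = (ia + a ** 2 * ib + a * ic) / 3.0    # negative sequence
          i0 = (ia + ib + ic) / 3.0                 # zero sequence
          return i1, i2, i0

      # hypothetical unbalanced phasors (per unit): phase b sags to 60 %
      ia = 1.00 * np.exp(1j * np.deg2rad(0.0))
      ib = 0.60 * np.exp(1j * np.deg2rad(-120.0))
      ic = 1.00 * np.exp(1j * np.deg2rad(120.0))

      i1, i2, i0 = sequence_components(ia, ib, ic)
      print(f"|I1| = {abs(i1):.3f} pu, |I2| = {abs(i2):.3f} pu, |I0| = {abs(i0):.3f} pu")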

  14. A Novel Fault Identification Using WAMS/PMU

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2012-05-01

    An important premise of novel adaptive backup protection based on wide-area information is the ability to identify faults on-line and in real time. In this paper, principal components analysis is introduced into the field of fault detection to locate the fault precisely by means of the voltage and current phasor data from the PMUs. Extensive simulation experiments show that fault identification can be performed successfully by principal component analysis and calculation. Our research indicates that the variable with the largest coefficient in the leading principal component usually corresponds to the fault. Under the influence of noise, the results remain accurate and reliable, so principal-component fault identification has strong anti-interference ability and high redundancy.
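    A toy version of that idea, assuming a window of PMU voltage magnitudes arranged as samples × buses; the faulted variable is flagged as the one with the largest loading in the leading principal component (data are synthetic):

      import numpy as np

      rng = np.random.default_rng(0)
      n_samples, n_buses = 200, 6
      X = rng.normal(0.0, 0.01, size=(n_samples, n_buses)) + 1.0   # ~1 pu voltages
      X[120:, 3] -= 0.25 * rng.random(80)                          # fault depresses bus 3

      Xc = X - X.mean(axis=0)                     # mean-centre each variable
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      pc1 = Vt[0]                                 # loadings of the leading principal component
      print("suspected faulted bus:", int(np.argmax(np.abs(pc1))))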

  15. Fault Length Vs Fault Displacement Evaluation In The Case Of Cerro Prieto Pull-Apart Basin (Baja California, Mexico) Subsidence

    Science.gov (United States)

    Glowacka, E.; Sarychikhina, O.; Nava Pichardo, F. A.; Farfan, F.; Garcia Arthur, M. A.; Orozco, L.; Brassea, J.

    2013-05-01

    The Cerro Prieto pull-apart basin is located in the southern part of the San Andreas Fault system and is characterized by high seismicity, recent volcanism, tectonic deformation and hydrothermal activity (Lomnitz et al., 1970; Elders et al., 1984; Suárez-Vidal et al., 2008). Since production at the Cerro Prieto geothermal field started in 1973, a significant increase in subsidence has been observed (Glowacka and Nava, 1996; Glowacka et al., 1999), and a relation between fluid extraction rate and subsidence rate has been suggested (op. cit.). Analysis of existing deformation data (Glowacka et al., 1999, 2005; Sarychikhina 2011) points to the fact that, although changes in extraction influence the subsidence rate, the tectonic faults control the spatial extent of the observed subsidence. Tectonic faults act as water barriers in the direction perpendicular to the fault and/or separate regions with different compaction; as a result, a significant part of the subsidence is released as vertical displacement on the ground surface along fault ruptures. These fault ruptures cause damage to roads and irrigation canals and water leakage. Since 1996, a network of geotechnical instruments has operated in the Mexicali Valley for continuous recording of deformation phenomena. To date, the network (REDECVAM: Mexicali Valley Crustal Strain Measurement Array) includes two crackmeters and eight tiltmeters installed on, or very close to, the main faults; all instruments have sampling intervals in the 1 to 20 minute range. Additionally, there are benchmarks for measuring vertical fault displacements, for which readings are recorded every 3 months. Since the crackmeter measures vertical displacement on the fault at one place only, the question arises: can crackmeter data be used to evaluate the length of the ruptured fault and how quickly it grows, so that we know where to expect fractures in canals or roads? We used the Wells and Coppersmith (1994) relations between

  16. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
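    In that cook-book spirit, a minimal Monte Carlo transport estimate: the fraction of particles transmitted through a purely absorbing slab, obtained by sampling exponential free paths (cross-section and thickness are arbitrary illustration values):

      import math
      import random

      def transmission(sigma_t=1.0, thickness=3.0, n=100_000, seed=42):
          # sample free paths from p(s) = sigma_t * exp(-sigma_t * s) and score
          # the particles whose first collision lies beyond the slab
          rng = random.Random(seed)
          hits = 0
          for _ in range(n):
              s = -math.log(1.0 - rng.random()) / sigma_t
              if s > thickness:
                  hits += 1
          return hits / n

      print(f"MC estimate {transmission():.4f} vs analytic {math.exp(-3.0):.4f}")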

  17. Fault healing and earthquake spectra from stick slip sequences in the laboratory and on active faults

    Science.gov (United States)

    McLaskey, G. C.; Glaser, S. D.; Thomas, A.; Burgmann, R.

    2011-12-01

    Repeating earthquake sequences (RES) are thought to occur on isolated patches of a fault that fail in repeated stick-slip fashion. RES enable researchers to study the effect of variations in earthquake recurrence time and the relationship between fault healing and earthquake generation. Fault healing is thought to be the physical process responsible for the 'state' variable in widely used rate- and state-dependent friction equations. We analyze RES created in laboratory stick slip experiments on a direct shear apparatus instrumented with an array of very high frequency (1KHz - 1MHz) displacement sensors. Tests are conducted on the model material polymethylmethacrylate (PMMA). While frictional properties of this glassy polymer can be characterized with the rate- and state- dependent friction laws, the rate of healing in PMMA is higher than room temperature rock. Our experiments show that in addition to a modest increase in fault strength and stress drop with increasing healing time, there are distinct spectral changes in the recorded laboratory earthquakes. Using the impact of a tiny sphere on the surface of the test specimen as a known source calibration function, we are able to remove the instrument and apparatus response from recorded signals so that the source spectrum of the laboratory earthquakes can be accurately estimated. The rupture of a fault that was allowed to heal produces a laboratory earthquake with increased high frequency content compared to one produced by a fault which has had less time to heal. These laboratory results are supported by observations of RES on the Calaveras and San Andreas faults, which show similar spectral changes when recurrence time is perturbed by a nearby large earthquake. Healing is typically attributed to a creep-like relaxation of the material which causes the true area of contact of interacting asperity populations to increase with time in a quasi-logarithmic way. The increase in high frequency seismicity shown here
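    A sketch of the calibration step mentioned above, removing a shared instrument/apparatus response by spectral division with a water-level floor; the waveforms below are synthetic placeholders rather than ball-impact or stick-slip records:

      import numpy as np

      def relative_source_spectrum(event, calibration, dt, water_level=1e-2):
          n = max(len(event), len(calibration))
          E, C = np.fft.rfft(event, n), np.fft.rfft(calibration, n)
          # water-level regularisation of the denominator before division
          floor = water_level * np.abs(C).max()
          C_reg = np.where(np.abs(C) < floor, floor * np.exp(1j * np.angle(C)), C)
          return np.fft.rfftfreq(n, dt), np.abs(E / C_reg)

      dt = 1e-6                                  # 1 MHz sampling (illustrative)
      t = np.arange(0.0, 2e-3, dt)
      calib = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 5e4 * t)   # calibration record
      event = np.exp(-t / 1e-4) * np.sin(2 * np.pi * 8e4 * t)   # laboratory event
      f, spec = relative_source_spectrum(event, calib, dt)
      print(f"spectral ratio peaks near {f[np.argmax(spec)] / 1e3:.0f} kHz")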

  18. 40 CFR 258.13 - Fault areas.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment, Solid Waste Landfills, Location Restrictions, § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in Holocene...

  19. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights: •Discuss significance of capturing operational faults in existing buildings. •Develop a novel feature in EnergyPlus to model operational faults of HVAC systems. •Compare three approaches to faults modeling using EnergyPlus. •A case study demonstrates the use of the fault-modeling feature. •Future developments of new faults are discussed. -- Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills in the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant

  20. Noise Configuration and fault zone anisotropy investigation from Taiwan Chelungpu-fault Deep Borehole Array

    Science.gov (United States)

    Hung, R. J.; Ma, K. F.; Song, T. R. A.; Nishida, K.; Lin, Y. Y.

    2016-12-01

    The Taiwan Chelungpu-fault Drilling Project was carried out to understand the fault zone characteristics associated with the 1999 Chichi earthquake. Seven borehole seismometers (TCDPBHS) were installed through the identified fault zone to monitor micro-seismic activity as well as the fault-zone seismic structure properties. To understand the fault zone anisotropy and its possible temporal variations after the Chichi earthquake, we calculated cross-correlations of the noise at different stations to obtain cross correlation functions (CCFs) of the ambient noise field between every pair of stations. The results show that the TCDP well site records a complex wavefield, and phase traveltimes from the CCFs alone cannot determine the dominant wavefield. We first analyze the power density spectra and probability density functions of this array. We observe that the spectra show diurnal variation in the frequency band 1-25 Hz, suggesting that human-generated sources dominate in this frequency band. Then, we focus on particle motion analysis of each CCF. We assume one component at a station acts as a virtual source and compute the CCF tensor on the other station components. The particle motion traces show high linearity, which indicates that the dominant wavefield in our study area consists of body wave signals with an azimuth of approximately 60° from north. We also analyze the Fourier spectral amplitudes by rotating every 5 degrees in the time domain to search for the maximum background energy distribution. The results show that the spectral amplitudes are stronger in the NE-SW direction, with shallow incident angles, which is consistent with the CCF particle motion measurements. In order to obtain higher resolution on the dominant wavefield in our study area, we also used beamforming from a surface station array to validate our results from the CCF analysis. In addition, the CCF analysis provides the noise configuration at the TCDPBHS site for further analysis on
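    The basic CCF ingredient, sketched for two noise records with simple spectral whitening; the synthetic test only checks that a known delay shows up as the lag of the correlation peak (sampling rate and parameters are illustrative):

      import numpy as np

      def noise_ccf(u, v, dt, max_lag=2.0, eps=1e-10):
          # frequency-domain cross-correlation with simple spectral whitening
          n = len(u)
          U, V = np.fft.rfft(u), np.fft.rfft(v)
          U, V = U / (np.abs(U) + eps), V / (np.abs(V) + eps)
          ccf = np.fft.fftshift(np.fft.irfft(U * np.conj(V), n))
          lags = (np.arange(n) - n // 2) * dt
          keep = np.abs(lags) <= max_lag
          return lags[keep], ccf[keep]

      # synthetic check: a 10-sample delay should appear as the CCF peak lag
      rng = np.random.default_rng(3)
      dt, shift = 0.005, 10
      noise = rng.normal(size=20000)
      lags, ccf = noise_ccf(np.roll(noise, shift), noise, dt)
      print(f"CCF peak at {lags[np.argmax(ccf)]:.3f} s (expected {shift * dt:.3f} s)")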

  1. Exact, almost and delayed fault detection

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Saberi, Ali; Stoorvogel, Anton A.

    1999-01-01

    Considers the problem of fault detection and isolation while using zero or almost zero threshold. A number of different fault detection and isolation problems using exact or almost exact disturbance decoupling are formulated. Solvability conditions are given for the formulated design problems....... The l-step delayed fault detection problem is also considered for discrete-time systems....

  2. Modeling Fluid Flow in Faulted Basins

    Directory of Open Access Journals (Sweden)

    Faille I.

    2014-07-01

    This paper presents a basin simulator designed to better take faults into account, either as conduits or as barriers to fluid flow. It computes hydrocarbon generation, fluid flow and heat transfer on the 4D (space and time) geometry obtained by 3D volume restoration. Contrary to classical basin simulators, this calculator requires neither a structured mesh based on vertical pillars nor a multi-block structure associated with the fault network. The mesh follows the sediments during the evolution of the basin. It deforms continuously with respect to time to account for sedimentation, erosion, compaction and kinematic displacements. The simulation domain is structured in layers, in order to properly handle the corresponding heterogeneities and to follow the sedimentation processes (thickening of the layers). In each layer, the mesh is unstructured: it may include several types of cells such as tetrahedra, hexahedra, pyramids, prisms, etc. However, a mesh composed mainly of hexahedra is preferred as they are well suited to the layered structure of the basin. Faults are handled as internal boundaries across which the mesh is non-matching. Different models are proposed for fault behavior, such as impervious fault, flow across fault or conductive fault. The calculator is based on a cell-centred Finite Volume discretisation, which ensures conservation of physical quantities (mass of fluid, heat) at a discrete level and which accounts properly for heterogeneities. The numerical scheme handles the non-matching meshes and guarantees appropriate connection of cells across faults. Results on a synthetic basin demonstrate the capabilities of this new simulator.

  3. Pore-water evolution and solute-transport mechanisms in Opalinus Clay at Mont Terri and Mont Russelin (Canton Jura, Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Mazurek, M. [Institute of Geological Sciences, University of Berne, Berne (Switzerland); Haller de, A. [Earth and Environmental Sciences, University of Geneva, Geneva (Switzerland)

    2017-04-15

    Data pertinent to pore-water composition in Opalinus Clay in the Mont Terri and Mont Russelin anticlines have been collected over the last 20 years from long-term in situ pore-water sampling in dedicated boreholes, from laboratory analyses on drill cores and from the geochemical characteristics of vein infills. Together with independent knowledge on regional geology, an attempt is made here to constrain the geochemical evolution of the pore-waters. Following basin inversion and the establishment of continental conditions in the late Cretaceous, the Malm limestones acted as a fresh-water upper boundary leading to progressive out-diffusion of salinity from the originally marine pore-waters of the Jurassic low-permeability sequence. Model calculations suggest that at the end of the Palaeogene, pore-water salinity in Opalinus Clay was about half the original value. In the Chattian/Aquitanian, partial evaporation of sea-water occurred. It is postulated that brines diffused into the underlying sequence over a period of several Myr, resulting in an increase of salinity in Opalinus Clay to levels observed today. This hypothesis is further supported by the isotopic signatures of SO₄²⁻ and ⁸⁷Sr/⁸⁶Sr in current pore-waters. These are not simple binary mixtures of sea and meteoric water, but their Cl⁻ and stable water-isotope signatures can be potentially explained by a component of partially evaporated sea-water. After the re-establishment of fresh-water conditions on the surface and the formation of the Jura Fold and Thrust Belt, erosion caused the activation of aquifers embedding the low-permeability sequence, leading to the curved profiles of various pore-water tracers that are observed today. Fluid flow triggered by deformation events during thrusting and folding of the anticlines occurred and is documented by infrequent vein infills in major fault structures. However, this flow was spatially focussed and of limited duration and so did not

  4. Managing Space System Faults: Coalescing NASA's Views

    Science.gov (United States)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts to simply name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

  5. Identifiability of Additive Actuator and Sensor Faults by State Augmentation

    Science.gov (United States)

    Joshi, Suresh; Gonzalez, Oscar R.; Upchurch, Jason M.

    2014-01-01

    A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.
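    A discrete-time sketch of the augmentation idea: an unknown constant actuator bias is appended to the state and estimated by an ordinary Kalman filter (the scalar plant and noise levels are made up; the paper's treatment is continuous-time with banks of Kalman-Bucy filters):

      import numpy as np

      # Plant: x[k+1] = a*x[k] + (u[k] + b) + w,  y[k] = x[k] + v, with unknown bias b.
      rng = np.random.default_rng(7)
      a, b_true, q, r, n = 0.95, 0.4, 1e-4, 1e-2, 400

      A = np.array([[a, 1.0], [0.0, 1.0]])     # augmented state z = [x, b]
      B = np.array([[1.0], [0.0]])
      H = np.array([[1.0, 0.0]])
      Q = np.diag([q, 1e-6])                   # allow only tiny drift on the bias
      R = np.array([[r]])

      z_hat, P, x = np.zeros((2, 1)), np.eye(2), 0.0
      for k in range(n):
          u = np.sin(0.05 * k)                                  # known input
          x = a * x + u + b_true + rng.normal(0, np.sqrt(q))    # true plant with bias fault
          y = x + rng.normal(0, np.sqrt(r))
          z_hat = A @ z_hat + B * u                             # predict
          P = A @ P @ A.T + Q
          S = H @ P @ H.T + R                                   # update
          K = P @ H.T / S
          z_hat = z_hat + K * (y - H @ z_hat)
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated bias fault = {z_hat[1, 0]:.3f} (true {b_true})")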

  6. Fossil landscapes and youthful seismogenic sources in the central Apennines: excerpts from the 24 August 2016, Amatrice earthquake and seismic hazard implications

    Directory of Open Access Journals (Sweden)

    Gianluca Valensise

    2016-11-01

    Full Text Available We show and discuss the similarities among the 2016 Amatrice (Mw 6.0), 1997 Colfiorito-Sellano (Mw 6.0-5.6) and 2009 L’Aquila (Mw 6.3) earthquakes. They all occurred along the crest of the central Apennines and were caused by shallow-dipping faults between 3 and 10 km depth, as shown by their characteristic InSAR signature. We contend that these earthquakes delineate a seismogenic style that is characteristic of this portion of the central Apennines, where the upward propagation of seismogenic faults is hindered by the presence of pre-existing regional thrusts. This leads to an effective decoupling between the deeper seismogenic portion of the upper crust and its uppermost 3 km. The decoupling implies that active faults mapped at the surface do not connect with the seismogenic sources, and that their evolution may be controlled by passive readjustments to coseismic strains or even by purely gravitational motions. Seismic hazard analyses and estimates based on such faults should hence be considered with great caution, as they may not be representative of the true seismogenic potential.

  7. Uplift rates of marine terraces as a constraint on fault-propagation fold kinematics: Examples from the Hawkswood and Kate anticlines, North Canterbury, New Zealand

    Science.gov (United States)

    Oakley, David O. S.; Fisher, Donald M.; Gardner, Thomas W.; Stewart, Mary Kate

    2018-01-01

    Marine terraces on growing fault-propagation folds provide valuable insight into the relationship between fold kinematics and uplift rates, and a means to distinguish among otherwise non-unique kinematic model solutions. Here, we investigate this relationship at two locations in North Canterbury, New Zealand: the Kate anticline and Haumuri Bluff, at the northern end of the Hawkswood anticline. At both locations, we calculate uplift rates of previously dated marine terraces, using DGPS surveys to estimate terrace inner edge elevations. We then use Markov chain Monte Carlo methods to fit fault-propagation fold kinematic models to structural geologic data, and we incorporate marine terrace uplift into the models as an additional constraint. At Haumuri Bluff, we find that marine terraces, when restored to originally horizontal surfaces, can help to eliminate certain trishear models that would fit the geologic data alone. At Kate anticline, we compare uplift rates at different structural positions and find that the spatial pattern of uplift rates is more consistent with trishear than with a parallel fault-propagation fold kink-band model. Finally, we use our model results to compute new estimates for fault slip rates (~1-2 m/ka at Kate anticline and ~1-4 m/ka at Haumuri Bluff) and ages of the folds (~1 Ma), which are consistent with previous estimates for the onset of folding in this region. These results provide revised estimates of the fault slip rates needed to understand the seismic hazard posed by these faults, and demonstrate the value of incorporating marine terraces in inverse fold kinematic models as a means to distinguish among non-unique solutions.
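
    To give a flavour of how terrace-uplift data enter a Bayesian inversion of this kind, the sketch below runs a bare-bones Metropolis sampler over a deliberately simplified forward model (uplift = fault slip x sin(dip)). The terrace ages, uplift values, uncertainties and parameter bounds are placeholders, and the toy model ignores the trishear kinematics of the actual study; it only illustrates the sampling machinery.

    import numpy as np

    # Toy Metropolis sampler fitting a simplified uplift model to terrace data.
    rng = np.random.default_rng(1)
    terrace_age = np.array([70.0, 100.0, 125.0])    # ka (placeholder values)
    obs_uplift = np.array([0.09, 0.13, 0.17])       # km (placeholder values)
    sigma_u = 0.02                                  # km, assumed uplift uncertainty

    def predicted_uplift(slip_rate, dip_deg, age_ka):
        # uplift = slip * sin(dip); a gross simplification of fold kinematics
        return slip_rate * age_ka * np.sin(np.radians(dip_deg)) / 1000.0   # km

    def log_likelihood(theta):
        slip_rate, dip = theta
        if not (0.0 < slip_rate < 10.0 and 5.0 < dip < 85.0):   # flat prior bounds
            return -np.inf
        resid = obs_uplift - predicted_uplift(slip_rate, dip, terrace_age)
        return -0.5 * np.sum((resid / sigma_u) ** 2)

    theta = np.array([2.0, 40.0])                   # starting model [m/ka, degrees]
    logp, samples = log_likelihood(theta), []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.1, 1.0])    # random-walk proposal
        logp_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        samples.append(theta.copy())
    samples = np.array(samples[5000:])              # drop burn-in
    print("slip rate (m/ka): %.2f +/- %.2f" % (samples[:, 0].mean(), samples[:, 0].std()))
    print("fault dip (deg) : %.1f +/- %.1f" % (samples[:, 1].mean(), samples[:, 1].std()))

    In this toy model the two parameters are constrained only through their product, so the posterior is strongly correlated; it is precisely this kind of non-uniqueness that independent constraints such as terrace uplift help to break in the real trishear inversions.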

  8. A note on the effect of fault gouge thickness on fault stability

    Science.gov (United States)

    Byerlee, J.; Summers, R.

    1976-01-01

    At low confining pressure, sliding on saw cuts in granite is stable but at high pressure it is unstable. The pressure at which the transition takes place increases if the thickness of the crushed material between the sliding surfaces is increased. This experimental result suggests that on natural faults the stability of sliding may be affected by the width of the fault zone. © 1976.

  9. Fault Reconnaissance Agent for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Elhadi M. Shakshuki

    2010-01-01

    Full Text Available One of the key prerequisites for a scalable, effective and efficient sensor network is the utilization of low-cost, low-overhead and highly resilient fault-inference techniques. To this end, we propose an intelligent agent system with a problem-solving capability to address the issue of fault inference in sensor network environments. The intelligent agent system is designed and implemented at the base-station side. The core of the agent system – the problem solver – implements a fault-detection inference engine which harnesses the Expectation Maximization (EM) algorithm to estimate fault probabilities of sensor nodes. To validate the correctness and effectiveness of the intelligent agent system, a set of experiments in a wireless sensor testbed is conducted. The experimental results show that our intelligent agent system is able to precisely estimate the fault probability of sensor nodes.
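
    The role of EM in such an inference engine can be sketched generically: treat each node's fault state as a hidden variable, observe how often its readings disagree with its neighbours' consensus, and alternate expectation and maximization steps over a two-component binomial mixture. The data, disagreement model and parameter values below are invented for illustration and are not the paper's formulation.

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(2)
    T, n_nodes = 50, 200                         # observation rounds, nodes (assumed)
    true_rate, p_f, p_h = 0.15, 0.6, 0.05        # true fault fraction, disagreement rates
    faulty = rng.random(n_nodes) < true_rate
    disagree = rng.binomial(T, np.where(faulty, p_f, p_h))   # observed disagreement counts

    theta, pf_hat, ph_hat = 0.5, 0.5, 0.1        # EM initialisation
    for _ in range(100):
        # E-step: posterior probability that each node is faulty
        lf = theta * binom.pmf(disagree, T, pf_hat)
        lh = (1 - theta) * binom.pmf(disagree, T, ph_hat)
        post = lf / (lf + lh)
        # M-step: update mixture weight and per-class disagreement rates
        theta = post.mean()
        pf_hat = np.sum(post * disagree) / (np.sum(post) * T)
        ph_hat = np.sum((1 - post) * disagree) / (np.sum(1 - post) * T)

    print("estimated fraction of faulty nodes: %.3f (true %.2f)" % (theta, true_rate))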

  10. The Fault Detection, Localization, and Tolerant Operation of Modular Multilevel Converters with an Insulated Gate Bipolar Transistor (IGBT Open Circuit Fault

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-04-01

    Full Text Available Reliability is one of the critical issues for a modular multilevel converter (MMC), since it consists of a large number of series-connected power electronics submodules (SMs). In this paper, a complete control strategy including fault detection, localization, and tolerant operation is proposed for the MMC under an insulated gate bipolar transistor (IGBT) open-circuit fault. According to the output characteristics of an SM with an IGBT open-circuit fault, a fault detection method based on observation of the circulating current and output current is used. In order to precisely locate the faulty SM, a fault localization method based on observation of the SM capacitor voltages is developed. After the faulty SM is isolated, continuous operation of the converter is ensured by adopting a fault-tolerant strategy based on the use of redundant modules. To verify the proposed fault detection, fault localization, and fault-tolerant operation strategies, a 900 kVA MMC system under IGBT open-circuit conditions is developed in the Matlab/Simulink platform. The capabilities of rapid detection, precise positioning, and fault-tolerant operation of the investigated detection and control algorithms are also demonstrated.

  11. A fault detection and diagnosis in a PWR steam generator

    International Nuclear Information System (INIS)

    Park, Seung Yub

    1991-01-01

    The purpose of this study is to develop a fault detection and diagnosis scheme that can monitor process faults and instrument faults of a steam generator. The suggested scheme consists of a Kalman filter and two bias estimators. The method detects process and instrument faults by applying a mean test to the residual sequence of a Kalman filter designed for the unfailed system, in order to make a fault decision. Once a fault is detected, two bias estimators are driven to estimate the fault and to discriminate between process and instrument faults. In the case of a process fault, the fault diagnosis of the outlet temperature, feed-water heater and main steam control valve is considered. In the case of an instrument fault, the fault diagnosis of the steam generator's three instruments is considered. Computer simulation tests show that on-line prompt fault detection and diagnosis can be performed very successfully. (Author)
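
    The detection stage, a mean test on the Kalman filter residual sequence, can be sketched as follows: under no fault the residuals are approximately zero-mean white noise, so a windowed z-test on their mean flags a fault. The residuals here are simulated rather than produced by a steam generator model, and the window length and threshold are illustrative only; after detection, the two bias estimators would be driven to attribute the fault to a process or an instrument.

    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 1.0
    residuals = rng.normal(0.0, sigma, 500)       # fault-free Kalman residuals (simulated)
    residuals[300:] += 0.8                        # fault injected at sample 300

    window, threshold = 40, 3.0                   # window length and ~3-sigma test level
    for k in range(window, len(residuals)):
        m = residuals[k - window:k].mean()
        z = m / (sigma / np.sqrt(window))         # z-statistic of the windowed mean
        if abs(z) > threshold:
            print("fault declared at sample %d (windowed-mean z = %.1f)" % (k, z))
            break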

  12. Influence of fault asymmetric dislocation on the gravity changes

    Directory of Open Access Journals (Sweden)

    Duan Hurong

    2014-08-01

    Full Text Available A fault is a planar fracture or discontinuity in a volume of rock across which there has been significant displacement as a result of earth movement. Large faults within the Earth’s crust result from the action of plate tectonic forces, with the largest forming the boundaries between the plates; the energy released by rapid movement on active faults is the cause of most earthquakes. The relationship between non-uniform dislocation and gravity changes was studied using the theoretical framework of a differential fault. Simulated observation values were used to derive the gravity changes with the asymmetric-fault model and the Okada model, respectively. In the non-uniform model the slip distribution increases continuously from zero at the two end points of the fault towards its middle, whereas in the Okada model the slip distribution is constant over the fault length. Numerical simulation experiments for strike-slip, dip-slip and extension faulting show that the gravity contours and the gravity variation values are broadly consistent for the two models. The apparent difference lies at the fault end points, where the values differ by 17.97% for the strike-slip fault, 25.58% for the dip-slip fault, and 24.73% for the extension fault.

  13. Radial basis function neural network in fault detection of automotive ...

    African Journals Online (AJOL)

    Radial basis function neural network in fault detection of automotive engines. ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults ... Keywords: Automotive engine, independent RBFNN model, RBF neural network, fault detection

  14. Architecture of buried reverse fault zone in the sedimentary basin: A case study from the Hong-Che Fault Zone of the Junggar Basin

    Science.gov (United States)

    Liu, Yin; Wu, Kongyou; Wang, Xi; Liu, Bo; Guo, Jianxun; Du, Yannan

    2017-12-01

    It is widely accepted that faults can act as conduits or barriers for oil and gas migration. Years of study suggest that the internal architecture of a fault zone is complicated and composed of distinct components with different physical features, which can strongly influence the migration of oil and gas along the fault. Field observation is the most useful method for characterizing fault zone architecture; in petroleum exploration, however, the main concern is the buried faults in the sedimentary basin. Meanwhile, most studies have focused on strike-slip or normal faults, whereas the architecture of reverse faults has attracted less attention. To address these questions, the Hong-Che Fault Zone on the northwest margin of the Junggar Basin, Xinjiang Province, is chosen as an example. Combining seismic data, well logs and drill-core data, we put forward a comprehensive method to recognize the internal architecture of buried faults. High-precision seismic data show that the fault zone appears as a disturbed seismic reflection belt. Four types of well logs that are sensitive to fractures, together with a comprehensive discriminant parameter named the fault zone index, are used to identify the fault zone architecture. Drill core provides a direct way to identify the different components of the fault zone: the fault core is composed of breccia, gouge, and serpentinized or foliated fault rocks, while the damage zone develops multiple phases of fractures, which are usually cemented. Based on the recognition results, we find an obvious positive relationship between the width of the fault zone and the displacement, and a power-law relationship between the widths of the fault core and the damage zone. In the reverse fault, the width of the damage zone in the hanging wall is not appreciably larger than that in the footwall, in contrast to the behaviour of normal faults. This study provides a

  15. Faults in Linux

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

    In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been...

  16. The pulsed migration of hydrocarbons across inactive faults

    Directory of Open Access Journals (Sweden)

    S. D. Harris

    1999-01-01

    Full Text Available Geological fault zones are usually assumed to influence hydrocarbon migration either as high permeability zones which allow enhanced along- or across-fault flow or as barriers to the flow. An additional important migration process inducing along- or across-fault migration can be associated with dynamic pressure gradients. Such pressure gradients can be created by earthquake activity and are suggested here to allow migration along or across inactive faults which 'feel' the quake-related pressure changes; i.e. the migration barriers can be removed on inactive faults when activity takes place on an adjacent fault. In other words, a seal is viewed as a temporary retardation barrier which leaks when a fault related fluid pressure event enhances the buoyancy force and allows the entry pressure to be exceeded. This is in contrast to the usual model where a seal leaks because an increase in hydrocarbon column height raises the buoyancy force above the entry pressure of the fault rock. Under the new model hydrocarbons may migrate across the inactive fault zone for some time period during the earthquake cycle. Numerical models of this process are presented to demonstrate the impact of this mechanism and its role in filling traps bounded by sealed faults.

  17. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    Science.gov (United States)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.

  18. Differential Fault Analysis on CLEFIA

    Science.gov (United States)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

    CLEFIA is a new 128-bit block cipher recently proposed by the SONY Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against differential fault attacks is explored. Our attack adopts the byte-oriented model of random faults. By randomly inducing a one-byte fault in one round, four byte faults are simultaneously obtained in the next round, which efficiently reduces the total number of fault injections required for the attack. After attacking the last several rounds' encryptions, the original secret key can be recovered based on some analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.

  19. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
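
    As a concrete taste of the approach, the canonical first example is the Monte Carlo estimate of pi: sample points uniformly in the unit square and count the fraction that falls inside the quarter circle. The statistical error shrinks like 1/sqrt(N) regardless of dimension, which is the property behind the remark about the curse of dimensionality.

    import random

    def estimate_pi(n_samples: int, seed: int = 0) -> float:
        # Fraction of random points in the unit square that land inside the
        # quarter circle of radius 1 approximates pi/4.
        rng = random.Random(seed)
        inside = sum(1 for _ in range(n_samples)
                     if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
        return 4.0 * inside / n_samples

    if __name__ == "__main__":
        for n in (1_000, 100_000, 10_000_000):
            print(n, estimate_pi(n))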

  20. Active fault traces along Bhuj Fault and Katrol Hill Fault, and ...

    Indian Academy of Sciences (India)

    face, passing through the alluvial-colluvial fan at location 2. The gentle warping of the surface was completely modified because of intensive cultivation. Therefore, it was difficult to confirm it in the field. To the south ... scarp has been modified by present-day farming. At location 5 near Wandhay village, an active fault trace ...

  1. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/SN method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor SN is well suited for by itself. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  2. Fault diagnosis of induction motors

    CERN Document Server

    Faiz, Jawad; Joksimović, Gojko

    2017-01-01

    This book is a comprehensive, structural approach to fault diagnosis strategy. The different fault types, signal processing techniques, and loss characterisation are addressed in the book. This is essential reading for work with induction motors for transportation and energy.

  3. Incorporating Fault Tolerance Tactics in Software Architecture Patterns

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2008-01-01

    One important way that an architecture impacts fault tolerance is by making it easy or hard to implement measures that improve fault tolerance. Many such measures are described as fault tolerance tactics. We studied how various fault tolerance tactics can be implemented in the best-known

  4. Uncertainties related to the fault tree reliability data

    International Nuclear Information System (INIS)

    Apostol, Minodora; Nitoi, Mirela; Farcasiu, M.

    2003-01-01

    Uncertainty analyses related to fault trees evaluate the system variability that arises from the uncertainties of the basic event probabilities. Given a logical model that describes a system, obtaining outcomes means evaluating it using estimates for each basic event of the model. If the model has basic events that incorporate uncertainties, then the results of the model should incorporate the uncertainties of the events. Estimating the uncertainty of the final fault tree result means first evaluating the uncertainties of the basic event probabilities and then combining them to calculate the top event uncertainty. To calculate the propagated uncertainty, knowledge of the probability density function as well as the range of possible values of the basic event probabilities is required. The following data are defined using suitable probability density functions: the component failure rates, the human error probabilities and the initiating event frequencies. It was assumed that the distribution of possible values of the basic event probabilities is given by the lognormal probability density function. To know the range of possible values of the basic event probabilities, the error factor (uncertainty factor) is required. The aim of this paper is to estimate the error factor for the failure rates and for the human error probabilities from the reliability database used in the Cernavoda Probabilistic Safety Evaluation. The top event chosen as an example is FEED3, from the Pressure and Inventory Control System. The quantitative evaluation of this top event was made by using the EDFT code, developed at the Institute for Nuclear Research Pitesti (INR). It was assumed that the error factors for the component failures are the same as for the failure rates. Uncertainty analysis was performed with the INCERT application, which uses the method of moments and the Monte Carlo method. The reliability database used at INR Pitesti does not contain the error factors (ef
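
    The propagation step itself can be sketched in a few lines: draw each basic-event probability from a lognormal distribution defined by its median and error factor (sigma = ln(EF)/1.645 for a 95th-percentile error factor), push the samples through the gate logic, and read off the top-event statistics. The two-gate tree and the numerical values below are invented for illustration; they are not the FEED3 fault tree or the Cernavoda data.

    import numpy as np

    # Monte Carlo uncertainty propagation through a small, made-up fault tree:
    # TOP = G1 OR G2,  G1 = A AND B,  G2 = C OR D  (all events independent).
    rng = np.random.default_rng(4)
    basic_events = {            # name: (median probability, error factor) -- assumed
        "A": (1e-3, 3.0),
        "B": (5e-3, 3.0),
        "C": (1e-4, 10.0),
        "D": (2e-4, 5.0),
    }

    def sample(median, ef, n):
        sigma = np.log(ef) / 1.645          # EF = 95th percentile / median
        return np.exp(rng.normal(np.log(median), sigma, n))

    n = 100_000
    p = {k: sample(m, ef, n) for k, (m, ef) in basic_events.items()}
    g1 = p["A"] * p["B"]                            # AND gate
    g2 = 1.0 - (1.0 - p["C"]) * (1.0 - p["D"])      # OR gate
    top = 1.0 - (1.0 - g1) * (1.0 - g2)             # TOP = G1 OR G2

    print("TOP mean     : %.3e" % top.mean())
    print("TOP median   : %.3e" % np.median(top))
    print("TOP 5th-95th : %.3e - %.3e" % tuple(np.percentile(top, [5, 95])))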

  5. FAULT-TOLERANT DESIGN FOR ADVANCED DIVERSE PROTECTION SYSTEM

    Directory of Open Access Journals (Sweden)

    YANG GYUN OH

    2013-11-01

    Full Text Available For the improvement of the APR1400 Diverse Protection System (DPS) design, the Advanced DPS (ADPS) has recently been developed to enhance the fault tolerance capability of the system. Major fault masking features of the ADPS compared with the APR1400 DPS are the changes to the channel configuration and reactor trip actuation equipment. To minimize the fault occurrences within the ADPS, and to mitigate the consequences of common-cause failures (CCF) within the safety I&C systems, several fault avoidance design features have been applied in the ADPS. The fault avoidance design features include the changes to the system software classification, communication methods, equipment platform, MMI equipment, etc. In addition, the fault detection, location, containment, and recovery processes have been incorporated in the ADPS design. Therefore, it is expected that the ADPS can provide an enhanced fault tolerance capability against the possible faults within the system and its input/output equipment, and the CCF of safety systems.

  6. Radon anomalies along faults in North of Jordan

    International Nuclear Information System (INIS)

    Al-Tamimi, M.H.; Abumurad, K.M.

    2001-01-01

    Radon emanation was sampled at five locations in a limestone quarry area using SSNTDs CR-39. Radon levels in the soil air at four different well-known traceable fault planes were measured along a traverse line perpendicular to each of these faults. Radon levels at the faults were higher by a factor of 3-10 than away from the faults. However, some sites have broader shoulders than the others. The method was applied along a fifth inferred fault zone. The results show an anomalous radon level at the sampled station near the fault zone, which gave a radon value three times higher than background. This study draws its importance from the fact that in Jordan many cities and villages have been established on intensively faulted land. Also, our study has considerable implications for future radon mapping. Moreover, radon gas proves to be a good tool for fault zone detection

  7. Ontology-Based Method for Fault Diagnosis of Loaders.

    Science.gov (United States)

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) an ontology-based fault diagnosis model is proposed to achieve the integration, sharing and reuse of fault diagnosis knowledge for loaders; (2) combined with the ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case retrieval, case matching and case updating); and (3) to compensate for the shortcomings of the CBR method when relevant cases are lacking, ontology-based RBR (rule-based reasoning) is introduced by building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods and to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated by analyzing a case study.
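
    The case-retrieval and case-matching steps of the CBR component can be sketched as a weighted nearest-neighbour lookup over symptom features, as below. The features, weights, cases and diagnoses are invented for illustration, and the ontology and SWRL rule layer of the paper are not modelled here.

    from dataclasses import dataclass

    @dataclass
    class Case:
        symptoms: dict          # feature name -> 1 (present) / 0 (absent)
        diagnosis: str

    CASES = [
        Case({"low_oil_pressure": 1, "engine_overheat": 1, "black_smoke": 0}, "worn oil pump"),
        Case({"low_oil_pressure": 0, "engine_overheat": 1, "black_smoke": 1}, "clogged air filter"),
        Case({"low_oil_pressure": 1, "engine_overheat": 0, "black_smoke": 0}, "oil leak"),
    ]
    WEIGHTS = {"low_oil_pressure": 0.5, "engine_overheat": 0.3, "black_smoke": 0.2}

    def similarity(query: dict, case: Case) -> float:
        # weighted fraction of symptom features on which query and case agree
        return sum(w for f, w in WEIGHTS.items() if query.get(f) == case.symptoms.get(f))

    def retrieve(query: dict) -> Case:
        return max(CASES, key=lambda c: similarity(query, c))

    best = retrieve({"low_oil_pressure": 1, "engine_overheat": 1, "black_smoke": 0})
    print("most similar case suggests:", best.diagnosis)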

  8. Experiences of pathways, outcomes and choice after severe traumatic brain injury under no-fault versus fault-based motor accident insurance.

    Science.gov (United States)

    Harrington, Rosamund; Foster, Michele; Fleming, Jennifer

    2015-01-01

    To explore experiences of pathways, outcomes and choice after motor vehicle accident (MVA) acquired severe traumatic brain injury (sTBI) under fault-based vs no-fault motor accident insurance (MAI). In-depth qualitative interviews with 10 adults with sTBI and 17 family members examined experiences of pathways, outcomes and choice and how these were shaped by both compensable status and interactions with service providers and service funders under a no-fault and a fault-based MAI scheme. Participants were sampled to provide variation in compensable status, injury severity, time post-injury and metropolitan vs regional residency. Interviews were recorded, transcribed and thematically analysed to identify dominant themes under each scheme. Dominant themes emerging under the no-fault scheme included: (a) rehabilitation-focused pathways; (b) a sense of security; and (c) bounded choices. Dominant themes under the fault-based scheme included: (a) resource-rationed pathways; (b) pressured lives; and (c) unknown choices. Participants under the no-fault scheme experienced superior access to specialist rehabilitation services, greater surety of support and more choice over how rehabilitation and life-time care needs were met. This study provides valuable insights into individual experiences under fault-based vs no-fault MAI. Implications for an injury insurance scheme design to optimize pathways, outcomes and choice after sTBI are discussed.

  9. Fault Diagnosis and Fault-Tolerant Control of Wind Turbines via a Discrete Time Controller with a Disturbance Compensator

    Directory of Open Access Journals (Sweden)

    Yolanda Vidal

    2015-05-01

    Full Text Available This paper develops a fault diagnosis (FD) and fault-tolerant control (FTC) scheme for pitch actuators in wind turbines. This is accomplished by combining a disturbance compensator with a controller, both of which are formulated in the discrete time domain. The disturbance compensator has a dual purpose: to estimate the actuator fault (which is used by the FD algorithm) and to design the discrete time controller to obtain an FTC. That is, the pitch actuator faults are estimated, and then the pitch control laws are appropriately modified to achieve an FTC whose behavior is comparable to the fault-free case. The performance of the FD and FTC schemes is tested in simulations with the aero-elastic code FAST.

  10. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will make it possible to tackle imaging and dosimetry issues simultaneously, and soon case-by-case Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)

  11. FAULT TOLERANCE IN JOB SCHEDULING THROUGH FAULT MANAGEMENT FRAMEWORK USING SOA IN GRID

    Directory of Open Access Journals (Sweden)

    V. Indhumathi

    2017-01-01

    Full Text Available The rapid development in computing resources has enhanced the performance of computers and reduced their costs. This availability of low-cost powerful computers, coupled with the popularity of the Internet and high-speed networks, has led the computing environment to evolve from distributed to grid environments. A grid is a kind of distributed system which supports the sharing and coordinated use of geographically dispersed, multi-owner resources, independently of their physical type and location, in dynamic virtual organizations that share the common goal of solving large-scale applications. Thus any type of failure can happen at any point of time, and a job running in a grid environment might fail. Therefore fault tolerance is an important and challenging issue in grid computing, as the stability of individual grid resources cannot be guaranteed. In order to make computational grids more effective and reliable, a fault-tolerant system is required. In order to meet user expectations in terms of performance and efficiency, the grid system needs an SOA Fault Management Framework for the distribution of tasks with fault tolerance. A Fault Management Framework attempts to improve the response time of users' submitted applications by ensuring maximal utilization of available resources. The main aim is to prevent, if possible, the situation where some processors are overloaded with a set of tasks while others are lightly loaded or even idle.

  12. Integrating cyber attacks within fault trees

    International Nuclear Information System (INIS)

    Nai Fovino, Igor; Masera, Marcelo; De Cian, Alessio

    2009-01-01

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.

  13. Integrating cyber attacks within fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Nai Fovino, Igor [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy)], E-mail: igor.nai@jrc.it; Masera, Marcelo [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy); De Cian, Alessio [Department of Electrical Engineering, University di Genova, Genoa (Italy)

    2009-09-15

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
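
    The combination of the two tree types can be sketched by letting leaves carry either a random-failure probability or an estimated attack-success probability and propagating both through the same AND/OR gate algebra. The tree structure and all numbers below are invented for illustration; assigning probabilities to deliberate attacks is itself a modelling choice, not a claim about the paper's case studies.

    # Top event: loss of a protective action, caused by random failures OR a cyber attack.
    def AND(*p):                       # independent events
        out = 1.0
        for x in p:
            out *= x
        return out

    def OR(*p):
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out

    p_sensor_fails = 1e-3              # random hardware failure per demand (assumed)
    p_plc_crash = 5e-4                 # random software failure (assumed)
    p_net_intrusion = 0.05             # attacker breaches the control network (assumed)
    p_forged_command = 0.40            # attacker forges a command once inside (assumed)

    p_cyber = AND(p_net_intrusion, p_forged_command)        # attack-tree branch
    p_top = OR(p_sensor_fails, p_plc_crash, p_cyber)        # combined fault/attack tree

    print("cyber branch: %.2e   top event: %.2e" % (p_cyber, p_top))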

  14. Fault Management Techniques in Human Spaceflight Operations

    Science.gov (United States)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in current US human spaceflight programs Space Shuttle and International Space Station are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes along with products used to anticipate, mitigate and respond to failures are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  15. Fault management and systems knowledge

    Science.gov (United States)

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  16. Fault Tolerant Position-mooring Control for Offshore Vessels

    DEFF Research Database (Denmark)

    Blanke, Mogens; Nguyen, Trong Dong

    2018-01-01

    Fault-tolerance is crucial to maintain safety in offshore operations. The objective of this paper is to show how systematic analysis and design of fault-tolerance is conducted for a complex automation system, exemplified by thruster-assisted position mooring. Using redundancy as required ... Functional faults that are only detectable are rendered isolable through an active isolation approach. Once functional faults are isolated, they are handled by fault accommodation techniques to meet overall control objectives specified by class requirements. The paper illustrates the generic methodology ... by a system to handle faults in mooring lines, sensors or thrusters. Simulations and model basin experiments are carried out to validate the concept for scenarios with single or multiple faults. The results demonstrate that enhanced availability and safety are obtainable with this design approach. While...

  17. Absolute age determination of quaternary fault and formation

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chang Sik; Lee, Kwang Sik; Choi, Man Sik [Korea Basic Science Institute, Seoul (Korea, Republic of)] (and others)

    2002-04-15

    The annual ('01-'01) objective of this project is to date the fault activity for the presumed quaternary fault zones in the western part of the Ulsan fault system and the southeastern coastal area near the Wolseong Nuclear Power Plant. Rb-Sr, K-Ar, OSL, C-14 and U-series disequilibrium methods were applied to the fault rocks, organic matter and quaternary formations collected from the Pyeonghae, Bogyeongsa, Yugyeri, Byegkye, Gacheon-1 and Joil outcrops of the Yangsan fault system, the Baenaegol outcrop of the Moryang fault system, the Susyongji (Madong-2), Singye, Hwalseongri, Ipsil and Wonwonsa outcrops of the Ulsan fault system and from quaternary marine terraces (Oryoo and Kwangseong sites) in the southeastern coastal area. The experimental procedure of the OSL SAR protocol was reexamined to get more reliable dating results.

  18. Industrial Cost-Benefit Assessment for Fault-tolerant Control Systems

    DEFF Research Database (Denmark)

    Thybo, Claus; Blanke, Mogens

    1998-01-01

    Economic aspects are decisive for industrial acceptance of research concepts including the promising ideas in fault tolerant control. Fault tolerance is the ability of a system to detect, isolate and accommodate a fault, such that simple faults in a sub-system do not develop into failures at a system level. In a design phase for an industrial system, possibilities span from fail-safe design where any single-point failure is accommodated by hardware, over fault-tolerant design where selected faults are handled without extra hardware, to fault-ignorant design where no extra precaution is taken...

  19. Fethiye-Burdur Fault Zone (SW Turkey): a myth?

    Science.gov (United States)

    Kaymakci, Nuretdin; Langereis, Cornelis; Özkaptan, Murat; Özacar, Arda A.; Gülyüz, Erhan; Uzel, Bora; Sözbilir, Hasan

    2017-04-01

    The Fethiye-Burdur Fault Zone (FBFZ) was first proposed by Dumont et al. (1979) as a sinistral strike-slip fault zone forming the NE continuation of the Pliny-Strabo trench into the Anatolian Block. The fault zone is supposed to accommodate at least 100 km of sinistral displacement between the Menderes Massif and the Beydaǧları platform during the exhumation of the Menderes Massif, mainly during the late Miocene. Based on GPS velocities, Barka and Reilinger (1997) proposed that the fault zone is still active and accommodates sinistral displacement. In order to test the presence of the fault zone and to unravel its kinematics, we have conducted a rigorous paleomagnetic study comprising more than 3000 paleomagnetic samples collected from 88 locations and 11700 fault-slip data collected from 198 locations distributed evenly over SW Anatolia, spanning the Middle Miocene to Late Pliocene. The obtained rotation senses and amounts indicate slight (around 20°) counter-clockwise rotations distributed uniformly over almost the whole of SW Anatolia, and there is no change in the rotation senses and amounts on either side of the FBFZ, implying no differential rotation within the zone. Additionally, the slickenside pitches and constructed paleostress configurations, along the so-called FBFZ and also within the 300 km diameter area of the proposed fault zone, indicate that almost all the faults oriented parallel to subparallel to the zone are normal in character. The fault slip measurements are also consistent with earthquake focal mechanisms suggesting active extension in the region. We have not encountered any significant strike-slip motion in the region to support the presence and transcurrent nature of the FBFZ. On the contrary, the region is dominated by extensional deformation, and strike-slip components are observed only on the NW-SE-striking faults, which are transfer faults that accommodate the extension and normal motion. Therefore, we claim that the sinistral Fethiye Burdur Fault (Zone) is a myth and there is no tangible

  20. Observer-Based Fault Estimation and Accomodation for Dynamic Systems

    CERN Document Server

    Zhang, Ke; Shi, Peng

    2013-01-01

    Due to the increasing security and reliability demands of modern industrial process control systems, the study of fault diagnosis and fault-tolerant control of dynamic systems has received considerable attention. Fault accommodation (FA) is one of the effective methods that can be used to enhance system stability and reliability, so it has been widely investigated in depth and has become a hot topic in recent years. Fault detection is used to monitor whether a fault occurs, which is the first step in FA. On the basis of fault detection, fault estimation (FE) is utilized to determine online the magnitude of the fault, which is a very important step because the additional controller is designed using the fault estimate. Compared with fault detection, the design of FE is considerably more difficult, so research on FE and accommodation is very challenging. Although there have been advancements reported on FE and accommodation for dynamic systems, the common methods at the present stage have design difficulties, whi...

  1. Effect of carbonate content on the mechanical behaviour of clay fault-gouges

    Science.gov (United States)

    Bakker, Elisenda; Niemeijer, André; Hangx, Suzanne; Spiers, Chris

    2015-04-01

    Carbon dioxide capture and storage (CCS) in depleted oil and gas reservoirs is considered to be the most promising technology to achieve large-scale reduction in anthropogenic emissions. In order to keep the stored CO2 isolated from the atmosphere for the very long term, i.e. on timescales of the order of 10³-10⁴ years, it is essential to maintain the integrity of the caprock, and more specifically of any faults penetrating the seal. When selecting suitable CO2-storage reservoirs, pre-existing faults within the caprock require close attention, as changes in the stress state resulting from CO2-injection may induce fault slip motion which might cause leakage. Little is known about the effect of fluid-rock interactions on the mineral composition, mechanical properties and the integrity and sealing capacity of the caprock. Previous studies on the effect of mineral composition on the frictional properties of fault gouges have shown that friction is controlled by the dominant phase unless there is a frictionally weak, through-going fabric. However, the effect on stability is less clear. Since long-term CO2-exposure might cause chemical reactions, potentially resulting in the dissolution or precipitation of carbonate minerals, a change in mineralogy could affect the mechanical stability of a caprock significantly. Calcite, for example, is known to be prone to micro-seismicity and shows a transition from velocity-strengthening to velocity-weakening behaviour around 100-150°C. Therefore, we investigated the effect of varying clay:carbonate ratios on fault friction behaviour, fault reactivation potential and slip stability, i.e. seismic vs. aseismic behaviour. Three types of simulated fault gouges were used: i) carbonate-free, natural clay-rich caprock samples, consisting predominantly of phyllosilicates (~80%) and quartz (~20%), ii) pure calcite, and iii) mixtures of carbonate-free clay-rich caprock and pure calcite, with predetermined clay:carbonate ratios. For the natural clay

  2. Discovery of amorphous carbon veins in the 2008 Wenchuan earthquake fault zone: implications for the fault weakening mechanism

    Science.gov (United States)

    Liu, J.; Zhang, J.; Zhang, B.; Li, H.

    2013-12-01

    The 2008 Wenchuan earthquake generated 270- and 80-km-long surface ruptures along the Yingxiu-Beichuan fault and Guanxian-Anxian fault, respectively. At the outcrop near Hongkou village, on the southwest segment of the Yingxiu-Beichuan rupture, networks of black amorphous carbon veins were discovered near fault planes in the 190-m-wide earthquake fault zone. These veins are mainly composed of ultrafine- and fine-grained amorphous carbon, usually narrower than 5 mm, and are injected into faults and cracks over distances of several meters. Flowage structures, such as asymmetrical structures around a few stiff rock fragments, indicate that material flowed when the veins formed. Fluidization of cataclastic amorphous carbon and the powerful driving force in the veins imply that high pore pressure built up during earthquakes. High pore pressure and the graphite reported in the fault gouge (Togo et al., 2011) can lead to very low dynamic friction during the Wenchuan earthquake. This hypothesis is in accordance with the very low thermal anomaly measured on the principal fault zone following the Wenchuan earthquake (Mori et al., 2010). Furthermore, networks of amorphous carbon veins of different generations suggest that a similar weakening mechanism also operated during historical earthquakes in the Longmenshan fault zone. Reference: Brodsky, E. E., Li, H., Mori, J. J., Kano, Y., and Xue, L., 2012, Frictional Stress Measured Through Temperature Profiles in the Wenchuan Scientific Fault Zone Drilling Project. American Geophysical Union, Fall Meeting. San Francisco, T44B-07 Li, H., Xu, Z., Si, J., Pei, J., Song, S., Sun, Z., and Chevalier, M., 2012, Wenchuan Earthquake Fault Scientific Drilling program (WFSD): Overview and Results. American Geophysical Union, Fall Meeting. San Francisco, T44B-01 Mori, J. J., Li, H., Wang, H., Kano, Y., Pei, J., Xu, Z., and Brodsky, E. E., 2010, Temperature measurements in the WFSD-1 borehole following the 2008 Wenchuan earthquake (MW7.9). American Geophysical Union, Fall Meeting. San Francisco, T53E

  3. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic oriented output. 2 - Method of solution: Fault trees are generated by user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers is allowed during pagination

  4. Degree of Fault Tolerance as a Comprehensive Parameter for Reliability Evaluation of Fault Tolerant Electric Traction Drives

    Directory of Open Access Journals (Sweden)

    Igor Bolvashenkov

    2016-09-01

    Full Text Available This paper describes a new approach and methodology for the quantitative assessment of the fault tolerance of an electric power drive consisting of a multi-phase traction electric motor and a multilevel electric inverter. It is suggested that such a traction drive be considered as a system with several degraded states. As a comprehensive parameter for evaluating fault tolerance, it is proposed to use the criterion of the degree of fault tolerance. To validate the proposed method, the authors applied it to evaluate the fault tolerance of the power train of an electric helicopter.

  5. Fault Ride Through Capability Enhancement of a Large-Scale PMSG Wind System with Bridge Type Fault Current Limiters

    Directory of Open Access Journals (Sweden)

    ALAM, M. S.

    2018-02-01

    Full Text Available In this paper, a bridge-type fault current limiter (BFCL) is proposed as a potential solution to the fault problems of a permanent magnet synchronous generator (PMSG)-based large-scale wind energy system. As the PMSG wind system is more vulnerable to disturbances, it is essential to guarantee stability during severe disturbances by enhancing the fault ride-through capability. The BFCL controller has been designed to insert resistance and inductance at the inception of system disturbances in order to limit the fault current. Constant capacitor voltage has been maintained by the grid voltage source converter (GVSC) controller, while current extraction or injection has been achieved by the machine VSC (MVSC) controller. Symmetrical and unsymmetrical faults have been applied in the system to show the effectiveness of the proposed BFCL solution. The PMSG wind system, BFCL and their controllers have been implemented in a real-time hardware-in-the-loop (RTHIL) setup with a real-time digital simulator (RTDS) and dSPACE. Another significant feature of this work is that the performance of the proposed BFCL is compared with that of a series dynamic braking resistor (SDBR). Comparative RTHIL implementation results show that the proposed BFCL is very efficient in improving the system fault ride-through capability by limiting the fault current, and outperforms the SDBR.

  6. Concepts and Methods in Fault-tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Staroswiecly, M.; Wu, N.E.

    2001-01-01

    Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to technical parts of the plant, to personnel or the environment. Fault-tolerant control combines diagnosis with control methods to handle faults...

  7. Automatic Fault Characterization via Abnormality-Enhanced Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    2010-12-20

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
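
    One reading of "abnormality-enhanced classification" is sketched below: raw behavioural metrics are augmented with z-score deviations from a fault-free baseline before a standard classifier is trained to label the fault type. The data are synthetic and the feature construction is only an interpretation of the idea, not the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n, d = 2000, 8
    X = rng.normal(0.0, 1.0, (n, d))              # raw behaviour metrics (synthetic)
    y = rng.integers(0, 3, n)                     # 0 = healthy, 1 and 2 = fault types
    X[y == 1, 0] += 2.0                           # fault type 1 shifts metric 0
    X[y == 2, 3] += 2.5                           # fault type 2 shifts metric 3

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    mu = Xtr[ytr == 0].mean(axis=0)               # fault-free baseline (training data only)
    sd = Xtr[ytr == 0].std(axis=0)

    def augment(m):
        # append per-feature abnormality scores (absolute z-scores vs. the baseline)
        return np.hstack([m, np.abs((m - mu) / sd)])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(augment(Xtr), ytr)
    print("held-out fault-type accuracy: %.2f" % clf.score(augment(Xte), yte))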

  8. Gently dipping normal faults identified with Space Shuttle radar topography data in central Sulawesi, Indonesia, and some implications for fault mechanics

    Science.gov (United States)

    Spencer, J.E.

    2011-01-01

    Space-shuttle radar topography data from central Sulawesi, Indonesia, reveal two corrugated, domal landforms, covering hundreds to thousands of square kilometers, that are bounded to the north by an abrupt transition to typical hilly to mountainous topography. These domal landforms are readily interpreted as metamorphic core complexes, an interpretation consistent with a single previous field study, and the abrupt northward transition in topographic style is interpreted as marking the trace of two extensional detachment faults that are active or were recently active. Fault dip, as determined by the slope of exhumed fault footwalls, ranges from 4° to 18°. Application of critical-taper theory to fault dip and hanging-wall surface slope, and to similar data from several other active or recently active core complexes, suggests a theoretical limit of three degrees for detachment-fault dip. This result appears to conflict with the dearth of seismological evidence for slip on faults dipping less than ~30°. The convex-upward form of the gently dipping fault footwalls, however, allows for greater fault dip at depths of earthquake initiation and dominant energy release. Thus, there may be no conflict between seismological and mapping studies for this class of faults. © 2011 Elsevier B.V.

  9. Self-triggering superconducting fault current limiter

    Science.gov (United States)

    Yuan, Xing [Albany, NY; Tekletsadik, Kasegn [Rexford, NY

    2008-10-21

    A modular and scaleable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state, generate a uniform magnetic field in the trigger coil and simultaneously limit the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors and significantly reduces the burnout risk associated with the non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

  10. Collection and analysis of existing information on applicability of investigation methods for estimation of beginning age of faulting in present faulting pattern

    International Nuclear Information System (INIS)

    Doke, Ryosuke; Yasue, Ken-ichi; Tanikawa, Shin-ichi; Nakayasu, Akio; Niizato, Tadafumi; Tanaka, Takenobu; Aoki, Michinori; Sekiya, Ayako

    2011-12-01

    In the field of R and D programs for the geological disposal of high-level radioactive waste, it is of great importance to develop a set of investigation and analysis techniques for the assessment of long-term geosphere stability over geological time, i.e. for confirming that changes of the geological environment will not significantly impact the long-term safety of a geological disposal system. In the Japanese archipelago, crustal movements are so active that uplift and subsidence have been remarkable over the last several hundred thousand years. Therefore, it is necessary to assess long-term geosphere stability taking into account topographic changes caused by crustal movements. One of the factors in topographic change is the movement of active faults, a geological process that releases strain accumulated by plate motion. The beginning age of faulting in the present faulting pattern suggests the beginning age of neotectonic activity around the active fault, and also provides basic information for identifying the stage of geomorphic development of mountains. Therefore, the age of faulting in the present faulting pattern is important information for estimating future topographic change in the mountain regions of Japan. In this study, existing information related to methods for estimating the beginning age of faulting in the present faulting pattern on active faults was collected and reviewed. The principle of each method, points to note and technical know-how in its application, data uncertainty, and so on were extracted from the existing information. Based on this extracted information, task flows indicating the working process for estimating the beginning age of faulting were illustrated for each method. Additionally, a distribution map of the beginning age of faulting in the present faulting pattern on active faults, with its accuracy, was illustrated. (author)

  11. Data Fault Detection in Medical Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2015-03-01

    Medical body sensors can be implanted in or attached to the human body to monitor the physiological parameters of patients continuously. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnoses, so detecting sensor data faults has been widely researched in recent years. Most typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients do not all change synchronously, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed-data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; and (3) a novel judgment criterion for the fault state based on the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M.
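
    The flavour of steps (1) and (2) can be sketched as below: a local-outlier-factor pass flags anomalous multivariate readings, then a per-channel regression on recent history decides which reading within a flagged vector is suspect. This is not the authors' DFD-M implementation; the synthetic data, window size, and use of scikit-learn's LocalOutlierFactor in place of the paper's dynamic-local variant are assumptions for illustration only.

```python
# Simplified sketch: flag outlying data vectors, then use per-channel
# regression on recent history to point at the suspected faulty reading.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(200)
# Two correlated "physiological" channels (e.g. heart-rate and SpO2 trends).
data = np.column_stack([70 + 5 * np.sin(t / 20) + rng.normal(0, 0.5, t.size),
                        97 + 1 * np.sin(t / 20) + rng.normal(0, 0.2, t.size)])
data[150, 0] = 150.0                     # inject a faulty heart-rate reading

# Step 1: flag outlying sensed-data vectors with a local outlier factor.
is_outlier = LocalOutlierFactor(n_neighbors=20).fit_predict(data) == -1

# Step 2: for each flagged vector, predict each channel from recent history
# and mark the channel with the largest residual as the suspected fault.
window = 10
for idx in np.flatnonzero(is_outlier):
    if idx < window:
        continue
    residuals = []
    for ch in range(data.shape[1]):
        x = np.arange(window).reshape(-1, 1)
        y = data[idx - window:idx, ch]
        pred = LinearRegression().fit(x, y).predict([[window]])[0]
        residuals.append(abs(data[idx, ch] - pred))
    print(f"sample {idx}: suspected faulty channel = {int(np.argmax(residuals))}")
```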

  12. Fault-tolerant rotary actuator

    Science.gov (United States)

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module is provided in a single containment shell, containing two actuator subsystems laid out either asymmetrically or symmetrically. Fault tolerance in the actuators of the present invention is achieved by employing dual sets of equal resources. The dual resources are integrated into single modules, each having the external appearance and functionality of a single set of resources.

  13. Plate rotations, fault slip rates, fault locking, and distributed deformation in northern Central America from 1999-2017 GPS observations

    Science.gov (United States)

    Ellis, A. P.; DeMets, C.; Briole, P.; Cosenza, B.; Flores, O.; Guzman-Speziale, M.; Hernandez, D.; Kostoglodov, V.; La Femina, P. C.; Lord, N. E.; Lasserre, C.; Lyon-Caen, H.; McCaffrey, R.; Molina, E.; Rodriguez, M.; Staller, A.; Rogers, R.

    2017-12-01

    We describe plate rotations, fault slip rates, and fault locking estimated from a new 100-station GPS velocity field at the western end of the Caribbean plate, where the Motagua-Polochic fault zone, Middle America trench, and Central America volcanic arc faults converge. In northern Central America, fifty-one upper-plate earthquakes have caused approximately 40,000 fatalities since 1900. The proximity of main population centers to these destructive earthquakes and the resulting loss of human life provide strong motivation for studying the present-day tectonics of Central America. Plate rotations, fault slip rates, and deformation are quantified via a two-stage inversion of daily GPS position time series using the TDEFNODE modeling software. In the first stage, transient deformation associated with three M>7 earthquakes in 2009 and 2012 is estimated and removed from the GPS position time series. In the second stage, linear velocities determined from the corrected GPS time series are inverted to estimate deformation within the western Caribbean plate, slip rates along the Motagua-Polochic faults and faults in the Central America volcanic arc, and the gradient of extension in the Honduras-Guatemala wedge. Major outcomes of the second inversion include the following: (1) confirmation that slip rates on the Motagua fault decrease from 17-18 mm/yr at its eastern end to 0-5 mm/yr at its western end, in accord with previous results; (2) a transition from moderate subduction zone locking offshore from southern Mexico and parts of southern Guatemala to weak or zero coupling offshore from El Salvador and parts of Nicaragua along the Middle America trench; (3) evidence for significant east-west extension in southern Guatemala between the Motagua fault and the volcanic arc. Our study also shows evidence for creep on the eastern Motagua fault that diminishes westward along the North America-Caribbean plate boundary.
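
    The two-stage logic summarized in this record (remove earthquake-related transients, then fit secular velocities) can be sketched for a single GPS position component. The functional form (coseismic step plus logarithmic postseismic decay) and all numbers below are illustrative assumptions; the actual study inverts the full network with TDEFNODE rather than fitting one station at a time.

```python
# Sketch of a two-stage treatment of one GPS position component:
# stage 1 fits and removes a coseismic step plus logarithmic postseismic
# transient; stage 2 fits a linear (secular) velocity to the residual.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.linspace(1999.0, 2017.0, 6000)          # near-daily positions, in years
t_eq = 2012.7                                  # assumed earthquake time

def transient(t, step, amp, tau):
    """Coseismic offset plus logarithmic postseismic decay after t_eq."""
    dt = np.clip(t - t_eq, 0.0, None)
    return np.where(t >= t_eq, step + amp * np.log1p(dt / tau), 0.0)

truth_velocity = 8.0                           # mm/yr secular motion
obs = (truth_velocity * (t - t[0]) + transient(t, 25.0, 10.0, 0.1)
       + rng.normal(0.0, 1.5, t.size))         # synthetic positions, mm

# Stage 1: estimate the transient (with a nuisance linear term) and remove it.
model = lambda t, v, step, amp, tau: v * (t - t[0]) + transient(t, step, amp, tau)
popt, _ = curve_fit(model, t, obs, p0=[5.0, 10.0, 5.0, 0.5],
                    bounds=([-np.inf, -np.inf, -np.inf, 1e-3], np.inf),
                    maxfev=20000)
corrected = obs - transient(t, *popt[1:])

# Stage 2: fit a linear velocity to the corrected time series.
velocity = np.polyfit(t, corrected, 1)[0]
print(f"recovered secular velocity: {velocity:.1f} mm/yr (true {truth_velocity} mm/yr)")
```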

  14. Investigation of the Meers fault in southwestern Oklahoma

    International Nuclear Information System (INIS)

    Luza, K.V.; Madole, R.F.; Crone, A.J.

    1987-08-01

    The Meers fault is part of a major system of NW-trending faults that form the boundary between the Wichita Mountains and the Anadarko basin in southwestern Oklahoma. A portion of the Meers fault is exposed at the surface in northern Comanche County and strikes approximately N. 60° W., where it offsets Permian conglomerate and shale for at least 26 km. The scarp on the fault is consistently down to the south, with a maximum relief of 5 m near the center of the fault trace. Quaternary stratigraphic relationships and ten ¹⁴C age dates constrain the age of the last movement of the Meers fault. The last movement postdates the Browns Creek Alluvium (late Pleistocene to early Holocene) and predates the East Cache Alluvium (100 to 800 yr B.P.). Fan alluvium, produced by the last fault movement, buried a soil that dates between 1400 and 1100 yr B.P. Two trenches excavated across the scarp near Canyon Creek document the near-surface deformation and provide some general information on recurrence. Trench 1 was excavated in the lower Holocene part of the Browns Creek Alluvium, and trench 2 was excavated in unnamed gravels thought to be upper Pleistocene. Flexing and warping were the dominant modes of deformation that produced the scarp. The stratigraphy in both trenches indicates one surface-faulting event, which implies a lengthy recurrence interval for surface faulting on this part of the fault. Organic-rich material from two samples that postdate the last fault movement yielded ¹⁴C ages between 1600 and 1300 yr B.P. These dates are in excellent agreement with the dates obtained from soils buried by the fault-related fan alluvium.

  15. Research on the Fault Coefficient in Complex Electrical Engineering

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2015-08-01

    Fault detection and isolation in complex systems are research hotspots and frontier problems in the reliability engineering field. Fault identification can be regarded as a procedure of extracting key characteristics from massive failure data, then classifying and identifying fault samples. In this paper, based on the fundamentals of feature extraction for the fault coefficient, we discuss the fault coefficient feature in complex electrical engineering in detail. For general fault types in a complex power system, even in the presence of strong white Gaussian stochastic interference, the fault coefficient feature remains accurate and reliable. The results of a comparative analysis of noise influence also demonstrate the strong anti-interference ability and great redundancy of the fault coefficient feature in complex electrical engineering.
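
    The noise-robustness claim can be illustrated generically: a feature computed by correlating an observed signal against stored fault templates changes little when strong white Gaussian noise is added, so the classification (largest coefficient) is preserved. This stand-in correlation feature, the templates, and the noise level are all assumptions for illustration and are not the paper's definition of the fault coefficient.

```python
# Generic demonstration: a correlation-based fault signature stays stable
# under additive white Gaussian noise (stand-in for the paper's feature).
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
templates = {
    "phase-to-ground": np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t),
    "phase-to-phase":  np.sin(2 * np.pi * 50 * t + np.pi / 3),
}
observed = templates["phase-to-ground"].copy()

def coefficients(signal):
    """Normalised correlation of the signal against each fault template."""
    return {name: float(np.corrcoef(signal, tpl)[0, 1])
            for name, tpl in templates.items()}

clean = coefficients(observed)
noisy = coefficients(observed + rng.normal(0.0, 0.5, observed.size))
for name in templates:
    print(f"{name:16s} clean={clean[name]:+.2f}  noisy={noisy[name]:+.2f}")
```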

  16. Mine-hoist active fault tolerant control system and strategy

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z.; Wang, Y.; Meng, J.; Zhao, P.; Chang, Y. [China University of Mining and Technology, Xuzhou (China)] wzjsdstu@163.com

    2005-06-01

    Based on fault diagnosis and fault-tolerant technologies, the mine-hoist active fault-tolerant control system (MAFCS) is presented with corresponding strategies; it includes a fault diagnosis module (FDM), a dynamic library (DL) and a fault-tolerant control model (FCM). When the FDM detects a fault from a sensor, the FCM reconfigures the state of the MAFCS by calling the parameters from the sub-libraries in the DL, in order to ensure the reliability and safety of the mine hoist. The simulation results show that the MAFCS exhibits a degree of intelligence, adopting the appropriate control strategy for each fault mode even when there is quite a difference between the real data and the prior fault modes. 7 refs., 5 figs., 1 tab.
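
    The reconfiguration flow (FDM diagnoses a fault mode, the FCM fetches a matching parameter set from the library and switches the controller) can be sketched as a small skeleton. All module names, fault modes, thresholds and gains are assumptions and do not reproduce the authors' MAFCS.

```python
# Skeleton of an active fault-tolerant control loop in the spirit of
# FDM -> DL -> FCM. Modes, thresholds and gains are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ControllerParams:
    kp: float
    ki: float
    max_speed: float   # hoist speed limit, m/s

# "Dynamic library" of pre-tuned parameter sets, one per fault mode.
DYNAMIC_LIBRARY = {
    "nominal":         ControllerParams(kp=1.2, ki=0.05, max_speed=10.0),
    "encoder_fault":   ControllerParams(kp=0.8, ki=0.02, max_speed=4.0),
    "brake_temp_high": ControllerParams(kp=1.0, ki=0.05, max_speed=2.0),
}

def diagnose(sensor_readings: dict) -> str:
    """Crude stand-in for the fault diagnosis module (FDM)."""
    if sensor_readings.get("encoder_residual", 0.0) > 0.5:
        return "encoder_fault"
    if sensor_readings.get("brake_temperature", 0.0) > 120.0:
        return "brake_temp_high"
    return "nominal"

def reconfigure(sensor_readings: dict) -> ControllerParams:
    """Stand-in for the fault-tolerant control model (FCM)."""
    mode = diagnose(sensor_readings)
    params = DYNAMIC_LIBRARY[mode]
    print(f"mode={mode}: limiting hoist speed to {params.max_speed} m/s")
    return params

reconfigure({"encoder_residual": 0.7, "brake_temperature": 80.0})
```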

  17. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Science.gov (United States)

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators, and therefore fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research. PMID:24701191
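
    One of the simplest two-end algorithms covered by such surveys equates the fault-point voltage computed from each terminal of a short, lumped-parameter line and solves for the per-unit fault distance. The sketch below implements that identity for synchronized phasors; the line impedance and measurement values are made-up numbers for illustration, not drawn from the paper.

```python
# Minimal two-end, lumped-parameter fault location from synchronized phasors.
# The fault-point voltage computed from each terminal must be equal:
#   V_S - m*Z_L*I_S = V_R - (1 - m)*Z_L*I_R   =>   solve for m (p.u. distance).
import numpy as np

def two_end_fault_location(v_s, i_s, v_r, i_r, z_line):
    """Return per-unit distance to the fault from terminal S.

    v_s, i_s, v_r, i_r : complex voltage/current phasors at the two ends,
                         with currents oriented into the line.
    z_line             : total series impedance of the line (complex, ohm).
    """
    m = (v_s - v_r + z_line * i_r) / (z_line * (i_s + i_r))
    return m.real

# Synthesise a consistent example: a fault 70 km along a 100 km line.
z_line = 100.0 * (0.05 + 0.4j)              # ohm (0.05 + j0.4 ohm/km assumed)
m_true = 0.7
v_fault = 35e3 * np.exp(1j * np.deg2rad(-5))
i_s, i_r = 800.0 + 0j, 500.0 * np.exp(1j * np.deg2rad(-10))
v_s = v_fault + m_true * z_line * i_s
v_r = v_fault + (1 - m_true) * z_line * i_r

m_est = two_end_fault_location(v_s, i_s, v_r, i_r, z_line)
print(f"estimated fault location: {100.0 * m_est:.1f} km from terminal S")
```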

  18. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    A. H. Al-Mohammed

    2014-01-01

    This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators, and therefore fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research.

  19. Multi-Fault Rupture Scenarios in the Brawley Seismic Zone

    Science.gov (United States)

    Kyriakopoulos, C.; Oglesby, D. D.; Rockwell, T. K.; Meltzner, A. J.; Barall, M.

    2017-12-01

    Dynamic rupture complexity is strongly affected by both the geometric configuration of a network of faults and the pre-stress conditions. Of the two, the geometric configuration is more likely to be anticipated prior to an event. An important factor in the unpredictability of the final rupture pattern of a group of faults is the time-dependent interaction between them. Dynamic rupture models provide a means to investigate these otherwise inscrutable processes. The Brawley Seismic Zone in Southern California is an area in which this approach might be important for inferring potential earthquake sizes and rupture patterns. Dynamic modeling can illuminate how the main faults in this area, the Southern San Andreas (SSAF) and Imperial faults, might interact with the intersecting cross faults, and how the cross faults may modulate rupture on the main faults. We perform 3D finite element modeling of potential earthquakes in this zone assuming an extended array of faults (Figure). Our results include a wide range of ruptures and fault behaviors depending on assumptions about nucleation location, geometric setup, pre-stress conditions, and locking depth. For example, in the majority of our models the cross faults do not strongly participate in the rupture process, giving the impression that they are not typically an aid or an obstacle to rupture propagation. However, in some cases, particularly when rupture proceeds slowly on the main faults, the cross faults can indeed participate with significant slip, and can even cause rupture termination on one of the main faults. Furthermore, in a complex network of faults we should not preclude the possibility of a large event nucleating on a smaller fault (e.g. a cross fault) and eventually promoting rupture on the main structure. Recent examples include the 2010 Mw 7.1 Darfield (New Zealand) and Mw 7.2 El Mayor-Cucapah (Mexico) earthquakes, where rupture started on a smaller adjacent segment and later cascaded into a larger

  20. Fault geometry, rupture dynamics and ground motion from potential earthquakes on the North Anatolian Fault under the Sea of Marmara

    KAUST Repository

    Oglesby, David D.

    2012-03-01

    Using the 3-D finite-element method, we develop dynamic spontaneous rupture models of earthquakes on the North Anatolian Fault system in the Sea of Marmara, Turkey, considering the geometrical complexity of the fault system in this region. We find that the earthquake size, rupture propagation pattern and ground motion all strongly depend on the interplay between the initial (static) regional pre-stress field and the dynamic stress field radiated by the propagating rupture. By testing several nucleation locations, we observe that those far from an oblique normal fault stepover segment (near Istanbul) lead to large through-going rupture on the entire fault system, whereas nucleation locations closer to the stepover segment tend to produce ruptures that die out in the stepover. However, this pattern can change drastically with only a 10° rotation of the regional stress field. Our simulations also reveal that while dynamic unclamping near fault bends can produce a new mode of supershear rupture propagation, this unclamping has a much smaller effect on the speed of the peak in slip velocity along the fault. Finally, we find that the complex fault geometry leads to a very complex and asymmetric pattern of near-fault ground motion, including greatly amplified ground motion on the insides of fault bends. The ground-motion pattern can change significantly with different hypocentres, even beyond the typical effects of directivity. The results of this study may have implications for seismic hazard in this region, for the dynamics and ground motion of geometrically complex faults, and for the interpretation of kinematic inverse rupture models.