WorldWideScience

Sample records for monte aquila fault

  1. Aquila

    Science.gov (United States)

    Murdin, P.

    2000-11-01

    (the Eagle; abbrev. Aql, gen. Aquilae; area 652 sq. deg.) An equatorial constellation that lies between Sagitta and Sagittarius, and culminates at midnight in mid-July. Its origin dates back to Babylonian times and it is said to represent the eagle of Zeus in Greek mythology, which carried the thunderbolts that Zeus hurled at his enemies and which snatched up Ganymede to become cup-bearer to the g...

  2. The 2009 MW 6.1 L'Aquila fault system imaged by 64k earthquake locations

    International Nuclear Information System (INIS)

    Valoroso, Luisa

    2016-01-01

    On April 6, 2009, a MW 6.1 normal-faulting earthquake struck the axial area of the Abruzzo region in central Italy. We investigate the complex architecture and mechanics of the activated fault system by using 64k high-resolution foreshock and aftershock locations. The fault system is composed of two major SW-dipping segments forming an en-echelon, NW-trending system about 50 km long: the high-angle L'Aquila fault and the listric Campotosto fault, both located within the first 10 km of depth. From the beginning of 2009, foreshocks activated the deepest portion of the mainshock fault. A week before the MW 6.1 event, the largest (MW 4.0) foreshock triggered seismicity migration along a minor off-fault segment. Seismicity jumped back to the main plane a few hours before the mainshock. The high-precision locations allowed us to peer into the fault zone, revealing complex geological structures from the metre to the kilometre scale, analogous to those observed in field studies and seismic profiles. We were also able to investigate important aspects of earthquake nucleation and propagation through the upper crust in carbonate-bearing rocks, such as the role of fluids in normal-faulting earthquakes, how crustal faults terminate at depth, and the key role of fault zone structure in earthquake rupture evolution.

  3. New paleoseismic data across the Mt. Marine Fault between the 2016 Amatrice and 2009 L'Aquila seismic sequences (central Apennines)

    Directory of Open Access Journals (Sweden)

    Marco Moro

    2016-11-01

    Paleoseismological investigations have been carried out along the Mt. Marine normal fault, a probable source of the February 2, 1703 (Me = 6.7) earthquake. The fault affects the area between the 2016 Amatrice and 2009 L'Aquila seismic sequences. The paleoseismological analysis provides data which corroborate previous studies, highlighting the occurrence of five events of surface faulting after the 6th–5th millennium B.C., the most recent of which is probably the February 2, 1703 earthquake. A minimum displacement per event of about 0.35 m has been measured. The occurrence of a minimum of four faulting events within the last 7,000 years suggests a maximum mean recurrence interval of about 1,700 years (≈ 7,000 yr / 4 events).

  4. Physical and Transport Property Variations Within Carbonate-Bearing Fault Zones: Insights From the Monte Maggio Fault (Central Italy)

    Science.gov (United States)

    Trippetta, F.; Carpenter, B. M.; Mollo, S.; Scuderi, M. M.; Scarlato, P.; Collettini, C.

    2017-11-01

    The physical characterization of carbonate-bearing normal faults is fundamental for resource development and seismic hazard. Here we report laboratory measurements of density, porosity, Vp, Vs, elastic moduli, and permeability for a range of effective confining pressures (0.1-100 MPa), conducted on samples representing different structural domains of a carbonate-bearing fault. We find a reduction in porosity from the fault breccia (11.7% total and 6.2% connected) to the main fault plane (9% total and 3.5% connected), with both domains showing higher porosity compared to the protolith (6.8% total and 1.1% connected). With increasing confining pressure, P wave velocity evolves from 4.5 to 5.9 km/s in the fault breccia, is constant at 5.9 km/s approaching the fault plane, and is low (4.9 km/s) in clay-rich fault domains. We find that while the fault breccia shows pressure-sensitive behavior (a reduction in permeability from 2 × 10⁻¹⁶ to 2 × 10⁻¹⁷ m²), the cemented cataclasite close to the fault plane is characterized by pressure-independent behavior (permeability 4 × 10⁻¹⁷ m²). Our results indicate that the deformation processes occurring within the different fault structural domains influence the physical and transport properties of the fault zone. In situ Vp profiles match the laboratory measurements well, demonstrating that laboratory data are valuable at larger scales. Combining the experimental values of elastic moduli and frictional properties, we find that at shallow crustal levels M ≤ 1 earthquakes are less favored, in agreement with the earthquake-depth distribution during the 2009 L'Aquila seismic sequence, which occurred in carbonates.
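
    For reference, the dynamic elastic moduli reported in studies of this kind follow from the measured velocities and density through the standard isotropic relations (a textbook reminder, not the authors' specific processing):

    \[
    G = \rho V_s^{2}, \qquad K = \rho\left(V_p^{2} - \tfrac{4}{3} V_s^{2}\right), \qquad \nu = \frac{V_p^{2} - 2V_s^{2}}{2\,(V_p^{2} - V_s^{2})}
    \]

    where ρ is the bulk density, G the shear modulus, K the bulk modulus and ν Poisson's ratio.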

  5. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systematic safety problem that covers faults in the steering system itself, the driver's human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to compensate for the lack of historical fault data, and fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was then adopted that integrates the randomness and uncertainty of the human operator, the machine and the environment to obtain a probabilistic risk indicator. To verify the proposed method, a case study of stuck rudder fault (SRF) risk assessment was carried out. The method may provide a novel solution for fault risk assessment of a vehicle or other general HME systems.
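
    The abstract does not give the model itself; the following minimal sketch only illustrates the general shape of such a Monte Carlo HME risk estimate. All probabilities, the environmental model and the recovery logic are hypothetical placeholders, not the paper's validated virtual-prototype model.

    ```python
    import random

    random.seed(42)

    # Minimal sketch of a Monte Carlo human-machine-environment (HME) risk
    # indicator for a stuck rudder fault (SRF). All parameter values below
    # are hypothetical illustrations.

    N = 100_000
    hazard_count = 0
    for _ in range(N):
        if random.random() >= 1e-2:            # per-mission SRF probability (assumed)
            continue
        # Human reliability: chance the operator recovers in time, degraded
        # by a random environmental severity factor in [0, 1].
        env_severity = random.random()
        p_recover = 0.95 * (1.0 - 0.5 * env_severity)
        if random.random() > p_recover:
            hazard_count += 1                  # SRF occurred and was not recovered

    print(f"Estimated P(hazardous SRF outcome) ~= {hazard_count / N:.2e}")
    ```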

  6. Stacking fault growth of FCC crystal: The Monte-Carlo simulation approach

    International Nuclear Information System (INIS)

    Jian Jianmin; Ming Naiben

    1988-03-01

    The Monte Carlo method has been used to simulate the growth of the FCC (111) crystal surface on which the outcrop of a stacking fault is present. Growth rates were compared between the surface containing the stacking fault and the perfect surface, and the successive growth stages were simulated. It is concluded that the outcrop of a stacking fault on the crystal surface can act as a self-perpetuating step-generating source.

  7. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations, because of their rapid alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate the architecture of an un-weathered, perfectly exposed clay fault zone and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections into different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide, bounded by two major fault planes. Between these planes lies an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, known to accommodate highly strained parts, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes, even with the high density of geological data within the relatively small volume of the experiment. This may indicate that poor strain localization occurred during faulting, giving some perspectives about the potential for…

  8. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e., a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that crustal faults have the greatest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation (COV) of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a resulting seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
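
    The following toy propagation illustrates how a Monte Carlo sensitivity study of this kind yields an uncertainty and COV for the hazard estimate; the lognormal slip-rate prior and the power-law slip-to-PGA relation are hypothetical placeholders, not the paper's PSHA logic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy sensitivity study: propagate slip-rate uncertainty into a scalar
    # hazard estimate. All distributions and coefficients are illustrative.

    slip_mm_yr = rng.lognormal(mean=np.log(4.0), sigma=0.3, size=50_000)

    def toy_pga(slip):
        """Assumed monotonic slip-rate -> 500-yr PGA relation (illustrative)."""
        return 0.5 * (slip / 4.0) ** 0.4

    pga = toy_pga(slip_mm_yr)
    mean, std = pga.mean(), pga.std()
    print(f"PGA ~ {mean:.3f} g, uncertainty ~ {std:.3f} g, COV ~ {100 * std / mean:.1f}%")
    ```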

  9. SAFTAC, Monte-Carlo Fault Tree Simulation for System Design Performance and Optimization

    International Nuclear Information System (INIS)

    Crosetti, P.A.; Garcia de Viedma, L.

    1976-01-01

    1 - Description of problem or function: SAFTAC is a Monte Carlo fault tree simulation program that provides a systematic approach for analyzing system design, performing trade-off studies, and optimizing system changes or additions. 2 - Method of solution: SAFTAC assumes an exponential failure distribution for basic input events and a choice of either Gaussian-distributed or constant repair times. The program views the system represented by the fault tree as a statistical assembly of independent basic input events, each characterized by an exponential failure distribution and, if used, a constant or normal repair distribution. 3 - Restrictions on the complexity of the problem: The program is dimensioned to handle 1100 basic input events and 1100 logical gates. It can be re-dimensioned to handle up to 2000 basic input events and 2000 logical gates within the existing core memory.
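
    As a sketch of the approach described above (not SAFTAC's actual implementation), the following simulates alternating exponential up-times and constant repair times for a small hypothetical tree TOP = OR(AND(A, B), C) and estimates average unavailabilities:

    ```python
    import random

    random.seed(7)

    # Monte Carlo fault-tree sketch: exponential failure times, constant
    # repair times. All rates and repair times are illustrative.

    params = {           # component: (failure rate per hour, repair time in hours)
        "A": (1e-3, 10.0),
        "B": (2e-3, 10.0),
        "C": (5e-4, 24.0),
    }
    T = 10_000.0         # mission time per trial (hours)

    def downtime_fraction(lam, repair):
        """One alternating up/down history; returns fraction of T spent down."""
        t, down = 0.0, 0.0
        while t < T:
            t += random.expovariate(lam)     # exponential up-time
            if t >= T:
                break
            dt = min(repair, T - t)          # constant repair time
            down += dt
            t += dt
        return down / T

    TRIALS = 200
    q = {c: sum(downtime_fraction(*p) for _ in range(TRIALS)) / TRIALS
         for c, p in params.items()}
    # OR/AND of (approximately) independent events
    q_top = q["A"] * q["B"] + q["C"] - q["A"] * q["B"] * q["C"]
    print({c: round(v, 5) for c, v in q.items()}, f"TOP ~= {q_top:.5f}")
    ```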

  10. How Might Draining Lake Campotosto Affect Stress and Seismicity on the Monte Gorzano Normal Fault, Central Italy?

    Science.gov (United States)

    Verdecchia, A.; Deng, K.; Harrington, R. M.; Liu, Y.

    2017-12-01

    It is broadly accepted that large variations of water level in reservoirs may affect the stress state on nearby faults. While most studies consider the relationship between lake impoundment and the occurrence of large earthquakes or increases in seismicity rate in the surrounding region, very few examples focus on the effects of lake drainage. The second largest reservoir in Europe, Lake Campotosto, is located on the hanging wall of the Monte Gorzano fault, an active normal fault responsible for at least two M ≥ 6 earthquakes in historical times. The northern part of this fault ruptured during the August 24, 2016, Mw 6.0 Amatrice earthquake, increasing the probability of a future large event on the southern section, where an aftershock sequence is still ongoing. The proximity of the Campotosto reservoir to the active fault has aroused general concern about the stability of the three dams bounding the reservoir, should the southern part of the Monte Gorzano fault produce a moderate earthquake. Local officials have proposed draining the reservoir as a hazard mitigation strategy to avoid possible future catastrophes. In an effort to assess how draining the reservoir might affect earthquake nucleation on the fault, we use a finite-element poroelastic model to calculate the evolution of stress and pore pressure, in terms of the Coulomb stress changes that would be induced on the Monte Gorzano fault by emptying the Lake Campotosto reservoir. Preliminary results show that an instantaneous drainage of the lake would produce positive Coulomb stress changes mostly on the shallower part of the fault (0 to 2 km), while a stress drop of the order of 0.2 bar is expected on the Monte Gorzano fault between 0 and 8 km depth. Earthquake hypocenters on the southern portion of the fault currently nucleate between 5 and 13 km depth, with activity distributed near the reservoir. Upcoming work will model the effects of varying fault geometry and elastic parameters, including geological…
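
    For orientation, Coulomb stress change studies of this kind typically evaluate a quantity of the following form (one common sign convention, with unclamping and pore pressure increase positive; conventions vary between studies):

    \[
    \Delta \mathrm{CFS} = \Delta\tau + \mu\,(\Delta\sigma_{n} + \Delta p)
    \]

    where Δτ is the shear stress change resolved in the slip direction, Δσn the fault-normal stress change, Δp the pore pressure change and μ the friction coefficient; a positive ΔCFS moves the fault toward failure.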

  11. Micro-textures of Deformed Gouges by Friction Experiments of Mont Terri Main Fault, Switzerland

    Science.gov (United States)

    Aoki, K.; Seshimo, K.; Sakai, T.; Komine, Y.; Kametaka, M.; Watanabe, T.; Nussbaum, C.; Guglielmi, Y.

    2017-12-01

    Friction experiments were conducted on samples from the Main Fault of the Mont Terri Rock Laboratory, Switzerland, and the micro-textures of the deformed gouges were then observed using JCM-6000 and JXA-8530F scanning electron microscopes. Samples were taken at depths of 47.2 m and 37.3 m in borehole BFS-1, and at 36.7 m, 37.1 m, 41.4 m and 44.6 m in borehole BFS-2, both drilled from the drift floor at 260 m depth below the surface. Friction experiments were conducted on the above six samples using a rotary-shear low- to high-velocity friction apparatus at the Institute of Geology, China Earthquake Administration, in Beijing, at a normal stress of 3.95 to 4.0 MPa and at slip rates ranging from 0.2 μm/s to 2.1 mm/s. Cylindrical specimens of Ti-Al-V alloy, exhibiting behavior similar to the host rock specimen, were used as the rotary and stationary pistons of 40 mm diameter. A Teflon sleeve was used around the piston to confine the sample during the test. The main results are summarized as follows. 1) Mud rocks in the Mont Terri drill holes (BFS-1, BFS-2) had steady-state or nearly steady-state friction coefficients μss in the range of 0.1-0.3 for wet gouges and 0.5-0.7 for dry gouges. Friction coefficients of dry gouges were approximately twice as large as those of wet gouges. However, the fault rock (37.3 m, BFS-1) with scaly fabric showed no difference between wet and dry conditions: μss (wet) 0.50-0.77, μss (dry) 0.45-0.78. This is probably because the clay content of this rock is lower (~33%) than that of the other rocks (67-73%) (Shimamoto, 2017). 2) The deformed gouges are characterized by well-developed slip zones adjacent to the rotary and stationary pistons, accompanied by slickenside surfaces with clear striations. Such slickenside surfaces are similar to those developed in the drill core samples used in our experiments. 3) Multiple slip zones were observed in the 37.3 m BFS-1 and the 36.7 m BFS-2 samples under dry conditions, suggesting that slip occurred in the interior of the gouge…

  12. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2015-08-01

    The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. Using the generated CT secondary currents, an algorithm for REF protection based on phase comparison in the time domain was tested. On the basis of the obtained results, a method for adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the security of the algorithm, is proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain.
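
    The abstract does not reproduce the algorithm itself; the toy sketch below merely illustrates the generic principle of time-domain phase comparison between two power-frequency currents. The one-cycle DFT phasor extraction is standard signal processing, while the trip rule and the triggering threshold value are hypothetical.

    ```python
    import numpy as np

    # Toy phase comparison between two 50 Hz current signals. The trip rule
    # and the 60-degree threshold are assumptions, not the verified algorithm.

    FS, F0 = 1000.0, 50.0                  # sampling and power frequency (Hz)
    t = np.arange(0, 0.02, 1 / FS)         # one 50 Hz cycle of samples

    i_neutral = 10.0 * np.sin(2 * np.pi * F0 * t + 0.1)    # synthetic currents
    i_residual = 8.0 * np.sin(2 * np.pi * F0 * t + 0.3)

    def phasor(x):
        """Fundamental-frequency phasor from one cycle of samples (DFT bin 1)."""
        n = np.arange(len(x))
        return 2.0 / len(x) * np.sum(x * np.exp(-2j * np.pi * F0 * n / FS))

    angle = np.angle(phasor(i_neutral) / phasor(i_residual), deg=True)
    THRESHOLD_DEG = 60.0                   # adjustable triggering threshold (assumed)
    verdict = "TRIP" if abs(angle) < THRESHOLD_DEG else "RESTRAIN"
    print(f"phase difference = {angle:+.1f} deg -> {verdict}")
    ```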

  13. Geochemical signature of paleofluids in microstructures from Main Fault in the Opalinus Clay of the Mont Terri rock laboratory, Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Clauer, N. [Laboratoire d’Hydrologie et de Géochimie de Strasbourg (CNRS-UdS), Strasbourg (France); Techer, I. [Equipe Associée, Chrome, Université de Nîmes, Nîmes (France); Nussbaum, Ch. [Swiss Geological Survey, Federal Office of Topography Swisstopo, Wabern (Switzerland); Laurich, B. [Structural Geology, Tectonics and Geomechanics, RWTH Aachen University, Aachen (Germany); Laurich, B. [Federal Institute for Geosciences and Natural Resources BGR, Hannover (Germany)

    2017-04-15

    The present study reports on elemental and Sr isotopic analyses of calcite and associated celestite infillings of various microtectonic features, collected mostly in the Main Fault of the Opalinus Clay at the Mont Terri rock laboratory. Based on a detailed microstructural description of veins, slickensides, scaly clay aggregates and gouges, the geochemical signatures of the infillings were compared to those of leachates from undeformed Opalinus Clay, and to the calcite from veins crosscutting the Hauptrogenstein, Passwang and Staffelegg Formations above and below the Opalinus Clay. Vein calcite and celestite from the Main Fault yield identical ⁸⁷Sr/⁸⁶Sr ratios that are also close to those recorded in the Opalinus Clay matrix inside the Main Fault, but different from those of the diffuse Opalinus Clay calcite outside the fault. These varied ⁸⁷Sr/⁸⁶Sr ratios of the diffuse calcite evidence a lack of interaction between the associated connate waters and the flowing fluids, which are characterized by a homogeneous Sr signature. The ⁸⁷Sr/⁸⁶Sr homogeneity at 0.70774 ± 0.00001 (2σ) for the infillings of most microstructures in the Main Fault, as well as of veins from the nearby limestone layer and sediments around the Opalinus Clay, argues for an 'infinite' homogeneous marine supply, whereas the gouge infillings apparently interacted with a chemically more complex fluid. According to the known regional paleogeographic evolution, two seawater supplies were inferred and documented in the Delémont Basin: either during the Priabonian (38-34 Ma ago) from the western Bresse graben, and/or during the Rupelian (34-28 Ma ago) from the northern Rhine Graben. The Rupelian seawater, which yields a mean ⁸⁷Sr/⁸⁶Sr signature significantly higher than those of the microstructural infillings, does not seem to be the appropriate source. Alternatively, the Priabonian seawater yields a mean ⁸⁷Sr/⁸⁶Sr ratio precisely matching that of the leachates from diffuse…

  14. The L'Aquila trial

    Science.gov (United States)

    Amato, Alessandro; Cocco, Massimo; Cultrera, Giovanna; Galadini, Fabrizio; Margheriti, Lucia; Nostro, Concetta; Pantosti, Daniela

    2013-04-01

    The first step of the trial in L'Aquila (Italy) ended with the conviction of a group of seven experts, sentenced to six years in jail and to a refund of several million euros to the families of the people who died during the Mw 6.3 earthquake of April 6, 2009. This verdict has had a tremendous impact on the scientific community, as well as on the way in which scientists deliver their expert opinions to decision makers and society. In this presentation, we describe the role of the scientists in charge of releasing authoritative information concerning earthquakes and seismic hazard, and the conditions that led to the verdict, in order to discuss whether this trial represented a prosecution of science, and whether errors were made in communicating the risk. Documents, articles and comments about the trial are collected on the web site http://processoaquila.wordpress.com/. We first summarize the knowledge about the seismic hazard of the region and the vulnerability of L'Aquila before the meeting of the National Commission for Forecasting and Predicting Great Risks (CGR), held 6 days before the main shock. The basic point of the accusation is that the CGR suggested that no strong earthquake would occur (which of course was never stated by any seismologist participating in the meeting). This message would have convinced the victims to stay at home, instead of moving outside after the M3.9 and M3.5 earthquakes a few hours before the mainshock. We describe how the available scientific information was passed to the national and local authorities, and in general how the Italian scientific institution in charge of seismic monitoring and research (INGV), the Civil Protection Department (DPC) and the CGR should interact according to the law. As far as communication and outreach to the public are concerned, scientific institutions such as INGV have the duty to communicate scientific information. Instead, risk management and the definition of actions for risk reduction are in charge of Civil Protection.

  15. PREP KITT, System Reliability by Fault Tree Analysis. PREP, Min Path Set and Min Cut Set for Fault Tree Analysis, Monte-Carlo Method. KITT, Component and System Reliability Information from Kinetic Fault Tree Theory

    International Nuclear Information System (INIS)

    Vesely, W.E.; Narum, R.E.

    1997-01-01

    1 - Description of problem or function: The PREP/KITT computer program package obtains system reliability information from a system fault tree. The PREP program finds the minimal cut sets and/or the minimal path sets of the system fault tree. (A minimal cut set is a smallest set of components such that if all the components are simultaneously failed the system is failed. A minimal path set is a smallest set of components such that if all of the components are simultaneously functioning the system is functioning.) The KITT programs determine reliability information for the components of each minimal cut or path set, for each minimal cut or path set, and for the system. Exact, time-dependent reliability information is determined for each component and for each minimal cut set or path set. For the system, reliability results are obtained by upper bound approximations or by a bracketing procedure in which various upper and lower bounds may be obtained as close to one another as desired. The KITT programs can handle independent components which are non-repairable or which have a constant repair time. Any assortment of non-repairable components and components having constant repair times can be considered. Any inhibit conditions having constant probabilities of occurrence can be handled. The failure intensity of each component is assumed to be constant with respect to time. The KITT2 program can also handle components which, during different time intervals, called phases, may have different reliability properties. 2 - Method of solution: The PREP program obtains minimal cut sets by either direct deterministic testing or by an efficient Monte Carlo algorithm. The minimal path sets are obtained using the Monte Carlo algorithm. The reliability information is obtained by the KITT programs from numerical solution of the simple integral balance equations of kinetic tree theory. 3 - Restrictions on the complexity of the problem: The PREP program will obtain the minimal cut and…
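
    A minimal sketch of the kinetic-tree bookkeeping described above: time-dependent unreliabilities of non-repairable components with constant failure intensities, combined through minimal cut sets into the usual upper bound on system unreliability. The two cut sets and the failure intensities below are hypothetical.

    ```python
    import math

    # Kinetic-tree-theory style bookkeeping; all numbers are illustrative.

    lam = {"A": 1e-4, "B": 5e-4, "C": 2e-4}   # failure intensities (1/h)
    min_cut_sets = [{"A", "B"}, {"C"}]        # hypothetical minimal cut sets

    def q(comp, t):
        """Unreliability of a non-repairable component at time t."""
        return 1.0 - math.exp(-lam[comp] * t)

    def system_upper_bound(t):
        """Q_sys <= 1 - product over cut sets of (1 - Q_cut)."""
        prob_no_cut = 1.0
        for cut in min_cut_sets:
            q_cut = 1.0
            for comp in cut:
                q_cut *= q(comp, t)
            prob_no_cut *= 1.0 - q_cut
        return 1.0 - prob_no_cut

    for t in (100.0, 1_000.0, 10_000.0):
        print(f"t = {t:8.0f} h   Q_sys <= {system_upper_bound(t):.4e}")
    ```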

  16. The August 24th 2016 Accumoli earthquake: surface faulting and Deep-Seated Gravitational Slope Deformation (DSGSD) in the Monte Vettore area

    Directory of Open Access Journals (Sweden)

    Domenico Aringoli

    2016-11-01

    On August 24th 2016 a Mw = 6.0 earthquake hit central Italy, with the epicenter located at the boundary between the Lazio, Marche, Abruzzi and Umbria regions, near the village of Accumoli (Rieti, Lazio). Immediately after the mainshock, a geological survey was focused on the earthquake environmental effects related to the tectonic reactivation of the previously mapped active fault (i.e., primary effects), as well as on secondary effects mostly related to the seismic shaking (e.g., landslides and fracturing in soil and rock). This paper presents data on the surface effects and some preliminary considerations about the interaction and possible relationship between surface faulting and the occurrence of Deep-Seated Gravitational Slope Deformation (DSGSD) along the southern and western slopes of Monte Vettore.

  17. Latest Pleistocene to Holocene thrust faulting paleoearthquakes at Monte Netto (Brescia, Italy): lessons learned from the Middle Ages seismic events in the Po Plain

    Science.gov (United States)

    Michetti, Alessandro Maria; Berlusconi, Andrea; Livio, Franz; Sileo, Giancanio; Zerboni, Andrea; Serva, Leonello; Vittori, Eutizio; Rodnight, Helena; Spötl, Christoph

    2010-05-01

    The seismicity of the Po Plain in Northern Italy is characterized by two strong Middle Ages earthquakes: the 1117 (I0 X MCS) Verona event and the December 25, 1222 (I0 IX-X MCS) Brescia event. Historical reports from these events describe relevant coseismic environmental effects, such as drainage changes, ground rupture and landslides. Due to the difficult interpretation of intensity data from such old seismic events, considerable uncertainty exists about their source parameters, and therefore about their causative tectonic structures. In a recent review, Stucchi et al. (2008) concluded that 'the historical data do not significantly help to constrain the assessment of the seismogenic potential of the area, which remains one of the most unknown, although potentially dangerous, seismic areas of the Italian region'. This issue therefore needs to be addressed using the archaeological and geological evidence of past earthquakes, that is, archaeoseismology and paleoseismology. Earthquake damage to archaeological sites in the study area has been the subject of several recent papers. Here we focus on new paleoseismological evidence, and in particular on the first observation of Holocene paleoseismic surface faulting in the Po Plain, identified at the Monte Netto site, located ca. 10 km S of Brescia, in the area where the highest damage from the Christmas 1222 earthquake has been recorded. Monte Netto is a small hill, ca. 30 m higher than the surrounding piedmont plain, which represents the top of a growing fault-related fold belonging to the Quaternary frontal sector of the Southern Alps; the causative deep structure is a N-verging back thrust, well imaged in industrial seismic reflection profiles kindly made available by ENI E&P. New trenching investigations were conducted at the Cava Danesi quarry of Monte Netto in October 2009, focused on the 1:10-scale analysis of the upper part of the 7 m high mid-Pleistocene to Holocene stratigraphic section exposed along the quarry…

  18. Geomechanical analysis of excavation-induced rock mass behavior of faulted Opalinus clay at the Mont Terri underground rock laboratory (Switzerland)

    International Nuclear Information System (INIS)

    Thoeny, R.

    2014-01-01

    Clay rock formations are potential host rocks for deep geological disposal of nuclear waste. However, they exhibit relatively low strength and brittle failure behaviour. Construction of underground openings in clay rocks may lead to the formation of an excavation damage zone (EDZ) in the near-field area of the tunnel. This has to be taken into account during risk assessment for waste-disposal facilities. To investigate the geomechanical processes associated with the rock mass response of faulted Opalinus Clay during tunnelling, a full-scale ‘mine-by’ experiment was carried out at the Mont Terri Underground Rock Laboratory (URL) in Switzerland. In the ‘mine-by’ experiment, fracture network characteristics within the experimental section were characterized prior to and after excavation by integrating structural data from geological mapping of the excavation surfaces and from four pre- and post-excavation boreholes. The displacements and deformations in the surrounding rock mass were measured using geotechnical instrumentation, including borehole inclinometers, extensometers and deflectometers, together with high-resolution geodetic displacement measurements and laser scanning measurements on the excavation surfaces. Complementary data were gathered from structural and geophysical characterization of the surrounding rock mass. Geological and geophysical techniques were used to analyse the structural and kinematic relationships between the natural and excavation-induced fracture networks surrounding the ‘mine-by’ experiment. Integrating the results from seismic refraction tomography, borehole logging, and tunnel surface mapping revealed that spatial variations in fault frequency along the tunnel axis alter the rock mass deformability and strength. Failure mechanisms, orientation and frequency of excavation-induced fractures are significantly influenced by tectonic faults. On the side walls, extensional fracturing tangential to the tunnel circumference was the…

  19. Geomechanical analysis of excavation-induced rock mass behavior of faulted Opalinus clay at the Mont Terri underground rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Thoeny, R.

    2014-07-01

    Clay rock formations are potential host rocks for deep geological disposal of nuclear waste. However, they exhibit relatively low strength and brittle failure behaviour. Construction of underground openings in clay rocks may lead to the formation of an excavation damage zone (EDZ) in the near-field area of the tunnel. This has to be taken into account during risk assessment for waste-disposal facilities. To investigate the geomechanical processes associated with the rock mass response of faulted Opalinus Clay during tunnelling, a full-scale ‘mine-by’ experiment was carried out at the Mont Terri Underground Rock Laboratory (URL) in Switzerland. In the ‘mine-by’ experiment, fracture network characteristics within the experimental section were characterized prior to and after excavation by integrating structural data from geological mapping of the excavation surfaces and from four pre- and post-excavation boreholes. The displacements and deformations in the surrounding rock mass were measured using geotechnical instrumentation, including borehole inclinometers, extensometers and deflectometers, together with high-resolution geodetic displacement measurements and laser scanning measurements on the excavation surfaces. Complementary data were gathered from structural and geophysical characterization of the surrounding rock mass. Geological and geophysical techniques were used to analyse the structural and kinematic relationships between the natural and excavation-induced fracture networks surrounding the ‘mine-by’ experiment. Integrating the results from seismic refraction tomography, borehole logging, and tunnel surface mapping revealed that spatial variations in fault frequency along the tunnel axis alter the rock mass deformability and strength. Failure mechanisms, orientation and frequency of excavation-induced fractures are significantly influenced by tectonic faults. On the side walls, extensional fracturing tangential to the tunnel circumference was the…

  20. Deformation mechanisms and evolution of the microstructure of gouge in the Main Fault in Opalinus Clay in the Mont Terri rock laboratory (CH)

    Science.gov (United States)

    Laurich, Ben; Urai, Janos L.; Vollmer, Christian; Nussbaum, Christophe

    2018-01-01

    We studied gouge from an upper-crustal, low-offset reverse fault in slightly overconsolidated claystone in the Mont Terri rock laboratory (Switzerland). The laboratory is designed to evaluate the suitability of the Opalinus Clay formation (OPA) to host a repository for radioactive waste. The gouge occurs in thin bands and lenses in the fault zone; it is darker in color and less fissile than the surrounding rock. It shows a matrix-based, P-foliated microfabric bordered and truncated by micrometer-thin shear zones consisting of aligned clay grains, as shown with broad-ion-beam scanning electron microscopy (BIB-SEM) and optical microscopy. Selected area electron diffraction based on transmission electron microscopy (TEM) shows evidence for randomly oriented nanometer-sized clay particles in the gouge matrix, surrounding larger elongated phyllosilicates with a strict P foliation. For the first time for the OPA, we report the occurrence of amorphous SiO2 grains within the gouge. Gouge has lower SEM-visible porosity and almost no calcite grains compared to the undeformed OPA. We present two hypotheses to explain the origin of gouge in the Main Fault: (i) authigenic generation consisting of fluid-mediated removal of calcite from the deforming OPA during shearing and (ii) clay smear consisting of mechanical smearing of calcite-poor (yet to be identified) source layers into the fault zone. Based on our data we prefer the first or a combination of both, but more work is needed to resolve this. Microstructures indicate a range of deformation mechanisms including solution-precipitation processes and a gouge that is weaker than the OPA because of the lower fraction of hard grains. For gouge, we infer a more rate-dependent frictional rheology than suggested from laboratory experiments on the undeformed OPA.

  1. Non-inductive components of electromagnetic signals associated with L'Aquila earthquake sequences estimated by means of inter-station impulse response functions

    Directory of Open Access Journals (Sweden)

    C. Di Lorenzo

    2011-04-01

    On 6 April 2009 at 01:32:39 UT a strong earthquake occurred west of L'Aquila at the very shallow depth of 9 km. The local magnitude of the main shock was Ml = 5.8 (Mw = 6.3). Several powerful aftershocks occurred in the following days. The epicentre of the main shock was located 6 km away from the Geomagnetic Observatory of L'Aquila, on a fault 15 km long with a NW-SE strike of about 140° and a SW dip of about 42°. For this reason, the L'Aquila seismic events offered very favourable conditions for detecting possible electromagnetic emissions related to the earthquake. The data used in this work come from the permanent geomagnetic observatories of L'Aquila and Duronia. Here we show the results of the analysis of the residual magnetic field, estimated by means of inter-station impulse response functions in the frequency band from 0.3 Hz to 3 Hz.
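
    The inter-station technique itself can be sketched as follows: estimate an impulse response (FIR filter) that predicts the local record from the remote one, and inspect the residual for signals of local, non-inductive origin. Everything below (sampling rate, filter length, synthetic data) is a hypothetical illustration, not the authors' processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Inter-station impulse-response sketch on synthetic data: the remote
    # record, convolved with an estimated filter, predicts the local record;
    # the residual is the candidate local (non-inductive) signal.

    n, L = 4096, 8                              # samples, filter taps (assumed)
    remote = rng.standard_normal(n)             # stand-in for the Duronia record
    true_h = np.array([0.5, 0.3, 0.1])          # hypothetical inductive coupling
    local = np.convolve(remote, true_h)[:n]     # "inductive" part of the local record
    local += 0.05 * rng.standard_normal(n)      # plus a local, uncorrelated part

    # Least-squares estimate of the impulse response from lagged copies
    X = np.column_stack([remote[L - 1 - k : n - k] for k in range(L)])
    y = local[L - 1 :]
    h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

    residual = y - X @ h_est                    # candidate anomalous signal
    print(np.round(h_est[:4], 3), f"residual rms = {residual.std():.3f}")
    ```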

  2. Precursory slow-slip loaded the 2009 L'Aquila earthquake sequence

    Science.gov (United States)

    Borghi, A.; Aoudia, A.; Javed, F.; Barzaghi, R.

    2016-05-01

    Slow-slip events (SSEs) are common at subduction zone faults where large mega-earthquakes occur. We report here that one of the best-recorded moderate-size continental earthquakes, the 2009 April 6 moment magnitude (Mw) 6.3 L'Aquila (Italy) earthquake, was preceded by an Mw 5.9 SSE that originated from the decollement beneath the reactivated normal faulting system. The SSE is identified from a rigorous analysis of continuous GPS stations; it occurred on 12 February and lasted for almost two weeks. It coincided with a burst in foreshock activity, with small repeating earthquakes migrating towards the main-shock hypocentre, as well as with a change in the elastic properties of rocks in the fault region. The SSE caused substantial stress loading at the seismogenic depths where the magnitude 4.0 foreshock and the Mw 6.3 main shock nucleated. This stress loading is also spatially correlated with the lateral extent of the aftershock sequence.

  3. Testimonies to the L'Aquila earthquake (2009) and to the L'Aquila process

    Science.gov (United States)

    Kalenda, Pavel; Nemec, Vaclav

    2014-05-01

    A lot of confusion, misinformation, false solidarity, efforts to misuse geoethics and other unethical activities in favour of the top Italian seismologists responsible for a poor and superficial evaluation of the situation six days prior to the earthquake: that is a general characterization of the whole period of five years separating us from the horrible morning of April 6, 2009 in L'Aquila, with its 309 human victims. The first author of this presentation, a seismologist, had the unusual opportunity to visit the unfortunate city in April 2009. He received "first-hand" information that a real, scientifically based prediction had already existed for some shocks in the area on March 29 and 30, 2009. The author of the prediction, Gianpaolo Giuliani, was obliged to stop diffusing any public information by means of the internet. A new prediction was known to him on March 31, the very day when the "Commission of Great Risks" offered a public assurance that any immediate earthquake could be practically excluded. In reality, the members of the commission completely ignored this prediction, declaring it a false alarm by "somebody" (without even using Giuliani's name). The observations by Giuliani were of high quality from the scientific point of view. G. Giuliani predicted the L'Aquila earthquake in a professional way, for the first time in many years of observations. The anomalies which preceded the L'Aquila earthquake were detected in many places in Europe at the same time. The question is which locality would have been identified as the potential focal area had G. Giuliani known of the other observations in Europe. Deformation (and other) anomalies are observable before almost all global M8 earthquakes. Earthquakes are preceded by deformation and are predictable. The testimony of the second author is based on many unfortunate personal experiences with representatives of the INGV Rome and their supporters from India and even Australia. In July 2010, prosecutor Fabio Picuti charged the Commission…

  4. Measuring a truncated disk in Aquila X-1

    DEFF Research Database (Denmark)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner r...

  5. Mental health in L'Aquila after the earthquake

    Directory of Open Access Journals (Sweden)

    Paolo Stratta

    2012-06-01

    INTRODUCTION: In the present work we describe the mental health condition of the L'Aquila population in the aftermath of the earthquake from structural, process and outcome perspectives. METHOD: A literature review of the published reports on the L'Aquila earthquake was performed. RESULTS: Although significant psychological distress has been reported by the population, a capacity for resilience can be observed. However, while resilience mechanisms intervened in the immediate aftermath of the earthquake, important dangers are conceivable in the current medium- to long-term perspective, due to the long-lasting alterations of day-to-day life and the disruption of social networks, both of which are associated with mental health problems. CONCLUSIONS: In a condition such as an earthquake, the immediate physical, medical, and emergency rescue needs must be addressed first. However, training first responders to identify symptoms of psychological distress would be important for mental health triage in the field.

  6. Nova Aquilae 1982: the nature of its dust

    International Nuclear Information System (INIS)

    Longmore, A.J.; Williams, P.M.

    1984-01-01

    Infrared photometric measurements of Nova Aquilae 1982, covering a period from 37 to 261 days after its discovery, have been obtained. Thermal emission was present even from the first observation. The observations show that the conventional picture of dust forming in the nova ejecta does not apply in this case, and suggest that a re-examination of the infrared modelling of earlier novae would be worthwhile.

  7. Ultraviolet Spectroscopic Study of BY Circini and V 1425 Aquilae ...

    Indian Academy of Sciences (India)

    …different authors (Woudt & Warner 2003; Bateson & McIntosh 1998; Johnson et al. 1997; Greeley et al. 1995; Evans & Yudin 1995; Cooper et al. 1995; Gilmore et al. 1995; Liller et al. 1995). Nova Aquilae 1995 was discovered on 1995 February 7 by Nakano et al. (1995), with an orbital period of 6.14 h (Retter et al. 1998a) at a…

  8. Lessons of L'Aquila for Operational Earthquake Forecasting

    Science.gov (United States)

    Jordan, T. H.

    2012-12-01

    The L'Aquila earthquake of 6 Apr 2009 (magnitude 6.3) killed 309 people and left tens of thousands homeless. The mainshock was preceded by a vigorous seismic sequence that prompted informal earthquake predictions and evacuations. In an attempt to calm the population, the Italian Department of Civil Protection (DPC) convened its Commission on the Forecasting and Prevention of Major Risk (MRC) in L'Aquila on 31 March 2009 and issued statements about the hazard that were widely received as an "anti-alarm"; i.e., a deterministic prediction that there would not be a major earthquake. On October 23, 2012, a court in L'Aquila convicted the vice-director of DPC and six scientists and engineers who attended the MRC meeting on charges of criminal manslaughter, and it sentenced each to six years in prison. A few weeks after the L'Aquila disaster, the Italian government convened an International Commission on Earthquake Forecasting for Civil Protection (ICEF) with the mandate to assess the status of short-term forecasting methods and to recommend how they should be used in civil protection. The ICEF, which I chaired, issued its findings and recommendations on 2 Oct 2009 and published its final report, "Operational Earthquake Forecasting: Status of Knowledge and Guidelines for Implementation," in Aug 2011 (www.annalsofgeophysics.eu/index.php/annals/article/view/5350). As defined by the Commission, operational earthquake forecasting (OEF) involves two key activities: the continual updating of authoritative information about the future occurrence of potentially damaging earthquakes, and the officially sanctioned dissemination of this information to enhance earthquake preparedness in threatened communities. Among the main lessons of L'Aquila is the need to separate the role of science advisors, whose job is to provide objective information about natural hazards, from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms

  9. Assessment of lead exposure in Spanish imperial eagle (Aquila adalberti) from spent ammunition in central Spain

    Science.gov (United States)

    Fernandez, Julia Rodriguez-Ramos; Hofle, Ursula; Mateo, Rafael; de Francisco, Olga Nicolas; Abbott, Rachel; Acevedo, Pelayo; Blanco, Juan-Manuel

    2011-01-01

    The Spanish imperial eagle (Aquila adalberti) is found only in the Iberian Peninsula and is considered one of the most threatened birds of prey in Europe. Here we analyze lead concentrations in bones (n = 84), livers (n = 15), primary feathers (n = 69), secondary feathers (n = 71) and blood feathers (n = 14) of 85 individuals collected between 1997 and 2008 in central Spain. Three birds (3.6%) had bone lead concentrations > 20 μg/g and all livers were within background lead concentrations. Bone lead concentrations increased with the age of the birds and were correlated with lead concentrations in the rachis of secondary feathers. Spatial aggregation of elevated bone lead concentrations was found in some areas of the Montes de Toledo. Lead concentrations in feathers were positively associated with the density of large game animals in the area where the birds were found dead or injured. Discontinuous lead exposure in the eagles was evidenced by differences in lead concentration along longitudinal portions of the rachis of feathers.

  10. Lessons Learned from L'Aquila Trial for Scientists' Communication

    Science.gov (United States)

    Koketsu, K.; Cerase, A.; Amato, A.; Oki, S.

    2017-12-01

    The Appeal and Supreme Courts of Italy concluded that there was no bad communication by the defendants except for the "glass of wine" interview, which was given by a government official before the scientists' meeting. This meeting was held 6 days before the 2009 L'Aquila earthquake to discuss the outlook for seismic activity in the L'Aquila area. However, at least two TV stations and a newspaper reported the content of the "glass of wine" interview the next morning as if it had been announced by the defendant scientists. The reports triggered a domino effect of misinterpretations, which may be well understood in the light of the social amplification of risk framework. These TV stations and the newspaper should therefore also be considered responsible for the bad communication. This point was missing in the sentencing documents of the Appeal and Supreme Courts. Therefore, for scientists, one lesson of communication, especially during a seismic hazard crisis, is that they must carefully craft their messages and the way the messages circulate, both in broadcast and digital media, and follow the reports released by the media on their activities. As another lesson, scientists must be aware that key concepts of safety such as "no danger" and "favorable situation", which were used in the "glass of wine" interview, and the idea of probability, can have different meanings for scientists, the media, and citizens.

  11. Structural damages of the L'Aquila (Italy) earthquake

    Directory of Open Access Journals (Sweden)

    H. Kaplan

    2010-03-01

    On 6 April 2009 an earthquake of magnitude 6.3 occurred in the city of L'Aquila, Italy. In the city center and surrounding villages many masonry and reinforced concrete (RC) buildings were heavily damaged or collapsed. After the earthquake, the inspection carried out in the region provided relevant results concerning the quality of the materials, the methods of construction and the performance of the structures. The region was initially inhabited in the 13th century and has many historic structures. The main structural material is unreinforced masonry (URM) composed of rubble stone, brick, and hollow clay tile, and masonry units suffered the worst damage. Wood flooring systems and corrugated steel roofs are common in URM buildings. Moreover, unconfined gable walls and excessively thick walls not connected to each other are among the most common deficiencies of poorly constructed masonry structures; these walls caused an increase in earthquake loads. The quality of the materials and of the construction was not in accordance with the standards. On the other hand, several modern, non-ductile concrete frame buildings collapsed. Poor concrete quality and poor reinforcement detailing caused damage in reinforced concrete structures, and many structural deficiencies such as non-ductile detailing and strong beam-weak column configurations were commonly observed. In this paper, the reasons why buildings were damaged in the 6 April 2009 L'Aquila, Italy earthquake are given, and some suggestions are made to prevent such disasters in the future.

  12. Calibration and performance testing of the IAEA Aquila Active Well Coincidence Counter (Unit 1)

    International Nuclear Information System (INIS)

    Menlove, H.O.; Siebelist, R.; Wenz, T.R.

    1996-01-01

    An Active Well Coincidence Counter (AWCC) and a portable shift register (PSR-B) produced by Aquila Technologies Group, Inc., have been tested and cross-calibrated with existing AWCCs used by the International Atomic Energy Agency (IAEA). This report summarizes the results of these tests and the cross-calibration of the detector. In addition, updated tables summarizing the cross-calibration of existing AWCCs and AmLi sources are also included. Using the Aquila PSR-B with existing IAEA software requires secondary software also supplied by Aquila to set up the PSR-B with the appropriate measurement parameters

  13. The LVD signals during the early-mid stages of the L'Aquila seismic sequence and the radon signature of some aftershocks of moderate magnitude

    International Nuclear Information System (INIS)

    Cigolini, C.; Laiolo, M.; Coppola, D.

    2015-01-01

    The L'Aquila seismic swarm culminated in the mainshock of April 6, 2009 (ML = 5.9). Here, we report and analyze the Large Volume Detector (LVD, used in neutrino research) low-energy traces (∼0.8 MeV) collected during the early-mid stages of the seismic sequence, together with the data of a radon monitoring experiment. The peaks of the LVD traces do not correlate with the evolution and magnitude of the earthquakes, including major aftershocks. Conversely, our radon measurements, obtained using three automatic stations deployed along the regional NW–SE faulting system, seem to be, in one case, more responsive. In fact, the time series collected on the NW–SE Paganica fracture recorded marked variations and peaks that occurred during and prior to moderate aftershocks (ML > 3). The Paganica monitoring station (PGN) seems to respond better to active seismicity because its radon detector was placed directly within the bedrock of an active fault. It is suggested that future networks for radon monitoring of active seismicity should preferentially adopt this setting. - Highlights: • The April 6, 2009 L'Aquila earthquake (ML 5.9) had a remarkable echo in the media. • We report LVD traces together with the data of a radon monitoring experiment. • Radon emissions were measured by 3 automatic stations along the main NW–SE fault. • The station that responds better to seismicity was placed in the fault's bedrock. • Future networks for earthquake radon monitoring should adopt this setting.

  14. Measuring a Truncated Disk in Aquila X-1

    Science.gov (United States)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix; et al.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 RG. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < 5 ± 2 × 10⁸ G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface, as evidenced by the X-ray burst present in the NuSTAR observation.
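
    For context, the magnetic upper limit in such work comes from a pressure-balance argument that places the truncation radius near the magnetospheric (Alfvén) radius; one standard form (assuming a dipole field, with ξ ≈ 0.5-1) is

    \[
    r_{m} = \xi \left( \frac{\mu^{4}}{2\,G M \dot{M}^{2}} \right)^{1/7}, \qquad \mu = B R^{3},
    \]

    where μ is the magnetic dipole moment, M and R the stellar mass and radius, and Ṁ the accretion rate.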

  15. Imaging the Crust in the Northern Sector of the 2009 L'Aquila Seismic Sequence through Oil Exploration Data Interpretation

    Science.gov (United States)

    Grazia Ciaccio, Maria; Improta, Luigi; Patacca, Etta; Scandone, Paolo; Villani, Fabio

    2010-05-01

    The 2009 L'Aquila seismic sequence activated a complex, about 40 km long, NW-trending and SW-dipping normal fault system consisting of three main faults arranged in a right-lateral en-echelon geometry. While the northern sector of the epicentral area was extensively investigated by oil companies, only a few scattered, poor-quality commercial seismic profiles are available in the central and southern sectors. In this study we interpret subsurface commercial data from the northern sector, the area containing the source of the strong Mw 5.4 aftershock that occurred on 9 April 2009. Our primary goals are: (1) to define a reliable framework of the upper crust structure; (2) to investigate how the intense aftershock activity, the bulk of which is clustered in the 5-10 km depth range, relates to the Quaternary extensional faults present in the area. The investigated area is bounded by the western termination of the W-E trending Gran Sasso thrust system to the south, by the SW-NE trending Mt. Sibillini thrust front (Ancona-Anzio Line Auctt.) to the north and west, and by the NNW-SSE trending, SW-dipping Mt. Gorzano normal fault to the east. In this area only middle-upper Miocene deposits are exposed (the Laga Flysch and the underlying Cerrogna Marl), but commercial wells have revealed the presence of a Triassic-Miocene sedimentary succession identical to the well-known Umbria-Marche stratigraphic sequence. We have analyzed several confidential seismic reflection profiles, mostly provided by the ENI oil company. The seismic lines are tied to two public wells, 5766 m and 2541 m deep. The quality of the reflection imaging is highly variable. A few good-quality stack sections contain interpretable signal down to 4.5-5.5 s TWT, corresponding to depths exceeding 10-12 km and thus allowing crustal imaging at seismogenic depths. Key reflectors for the interpretation correspond to: (1) the top of the Miocene Cerrogna marls; (2) the top of the Upper Albian-Oligocene Scaglia Group; (3) the…
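
    The quoted depth range follows from the usual two-way-time conversion, assuming an average crustal velocity; for example, with an assumed average velocity of about 4.5 km/s:

    \[
    z \approx \frac{\bar{v}\, t_{\mathrm{TWT}}}{2} = \frac{4.5\ \mathrm{km/s} \times 5\ \mathrm{s}}{2} \approx 11\ \mathrm{km}.
    \]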

  16. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    Science.gov (United States)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, similarly to other countries (e.g., the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and of the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime, compressive or extensional, which encompass the Quaternary. As for the operational criteria, a detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations exclusively based on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are insufficient, or even unreliable, for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. The central Apennines are a test area in which active and capable faults can first be mapped based on such a classical but still effective methodological approach. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time…

  17. Spectroscopic follow-up of the Hercules-Aquila Cloud

    Science.gov (United States)

    Simion, Iulia T.; Belokurov, Vasily; Koposov, Sergey E.; Sheffield, Allyson; Johnston, Kathryn V.

    2018-05-01

    We designed a follow-up program to determine the spectroscopic properties of the Hercules-Aquila Cloud (HAC) and test scenarios for its formation. We measured the radial velocities (RVs) of 45 RR Lyrae in the southern portion of the HAC using the facilities at the MDM Observatory, producing the first large sample of velocities in the HAC. We found a double-peaked distribution in RVs, skewed slightly to negative velocities. We compared the morphology of the HAC projected onto the plane of the sky, and the distribution of velocities in the structure outlined by RR Lyrae and other tracer populations at different distances, to N-body simulations. We found that the behaviour is characteristic of an old, well-mixed accretion event with a small apo-galactic radius. We cannot yet rule out other formation mechanisms for the HAC. However, if our interpretation is correct, the HAC represents just a small portion of a much larger debris structure spread throughout the inner Galaxy whose distinct kinematic structure should be apparent in RV studies along many lines of sight.

  18. Strong foreshock signal preceding the L'Aquila (Italy) earthquake (Mw 6.3) of 6 April 2009

    Directory of Open Access Journals (Sweden)

    G. Minadakis

    2010-01-01

    We used the earthquake catalogue of INGV, extending from 1 January 2006 to 30 June 2009, to detect significant changes before and after the 6 April 2009 L'Aquila mainshock (Mw=6.3) in the seismicity rate, r (events/day), and in the b-value. The statistical z-test and Utsu test were applied to identify significant changes. From the beginning of 2006 up to the end of October 2008 the activity was relatively stable and remained in the state of background seismicity (r=1.14, b=1.09). From 28 October 2008 up to 26 March 2009, r increased significantly to 2.52, indicating a weak foreshock sequence; the b-value did not change significantly. The weak foreshock sequence was spatially distributed within the entire seismogenic area. In the last 10 days before the mainshock, a strong foreshock signal became evident in space (dense epicentre concentration in the hanging wall of the Paganica fault), in time (drastic increase of r to 21.70 events/day) and in size (the b-value dropped significantly to 0.68). The significantly high seismicity rate and the low b-value of the entire foreshock sequence differ substantially from the background seismicity. Also, the b-value of the strong foreshock stage (last 10 days before the mainshock) was significantly lower than that of the aftershock sequence. Our results indicate the value of foreshock sequences for the prediction of the mainshock.
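
    The two statistics tracked in this abstract are straightforward to reproduce. The sketch below is a minimal illustration, not the authors' code: it computes the Aki/Utsu maximum-likelihood b-value and a standard two-window z-test for a change in Poisson rate (the Utsu b-value comparison test is omitted). The window lengths, event counts and binning width dm are invented so as to mirror the reported rates.

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above the
    completeness magnitude mc, with magnitude binning width dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def rate_change_z(n1, t1, n2, t2):
    """z statistic for a change in Poisson rate between two windows of t1
    and t2 days containing n1 and n2 events; |z| > 1.96 is significant at
    about the 95% level."""
    r1, r2 = n1 / t1, n2 / t2
    return (r2 - r1) / np.sqrt(r1 / t1 + r2 / t2)

# Example: a hypothetical 1000-day background at 1.14 events/day versus
# 21.70 events/day over the last 10 days before the mainshock.
print(rate_change_z(1140, 1000.0, 217, 10.0))  # z ~ 14, highly significant
```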

  19. Rare event simulation for dynamic fault trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of

  1. Survey for hemoparasites in imperial eagles (Aquila heliaca), steppe eagles (Aquila nipalensis), and white-tailed sea eagles (Haliaeetus albicilla) from Kazakhstan.

    Science.gov (United States)

    Leppert, Lynda L; Layman, Seth; Bragin, Evgeny A; Katzner, Todd

    2004-04-01

    Prevalence of hemoparasites has been investigated in many avian species throughout Europe and North America. Basic hematologic surveys are the first step toward evaluating whether the host-parasite prevalences observed in North America and Europe occur elsewhere in the world. We collected blood smears from 94 nestling imperial eagles (Aquila heliaca), five nestling steppe eagles (Aquila nipalensis), and 14 nestling white-tailed sea eagles (Haliaeetus albicilla) at Naurzum Zapovednik (Naurzum National Nature Reserve) in Kazakhstan during the summers of 1999 and 2000. In 1999, six of 29 imperial eagles were infected with Leucocytozoon toddi. Five of 65 imperial eagles and one of 14 white-tailed sea eagles were infected with L. toddi in 2000. Furthermore, in 2000, one of 65 imperial eagles was infected with Haemoproteus sp. We found no parasites in steppe eagles in either year, and no bird had multiple-species infections. These data are important because few hematologic studies of these eagle species have been conducted.

  2. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  3. Near-infrared and optical studies of the highly obscured nova V1831 Aquilae (Nova Aquilae 2015)

    Science.gov (United States)

    Banerjee, D. P. K.; Srivastava, Mudit K.; Ashok, N. M.; Munari, U.; Hambsch, F.-J.; Righetti, G. L.; Maitan, A.

    2018-01-01

    Near-infrared (NIR) and optical photometry and spectroscopy are presented for the nova V1831 Aquilae, covering the early decline and dust-forming phases during the first ∼90 d after its discovery. The nova is highly reddened due to interstellar extinction. Based solely on the nature of the NIR spectrum, we are able to classify the nova to be of the Fe II class. The distance and extinction to the nova are estimated to be 6.1 ± 0.5 kpc and Av ∼ 9.02, respectively. Lower limits of the electron density, emission measure and ionized ejecta mass are made from a Case B analysis of the NIR Brackett lines, while the neutral gas mass is estimated from the optical [O I] lines. We discuss the cause of the rapid strengthening of the He I 1.0830-μm line during the early stages. V1831 Aql formed a modest amount of dust fairly early (∼19.2 d after discovery); the dust shell is not seen to be optically thick. Estimates of the dust temperature, dust mass and grain size are made. Dust formation commences around day 19.2 at a condensation temperature of 1461 ± 15 K, suggestive of a carbon composition, following which the temperature is seen to decrease gradually to 950 K. The dust mass shows a rapid initial increase, which we interpret as being due to an increase in the number of grains, followed by a period of constancy, suggesting the absence of grain destruction processes during this latter time. A discussion of the evolution of these parameters is made, including certain peculiarities seen in the grain radius evolution.

  4. The Academic Impact of Natural Disasters: Evidence from L'Aquila Earthquake

    Science.gov (United States)

    Di Pietro, Giorgio

    2018-01-01

    This paper uses a standard difference-in-differences approach to examine the effect of the L'Aquila earthquake on the academic performance of the students of the local university. The empirical results indicate that this natural disaster reduced students' probability of graduating on-time and slightly increased students' probability of dropping…

  5. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high-voltage electrical transmission line. Real-time monitoring of background noise and improved filtering of input signals are used to identify the occurrence of a fault. A fault is detected at both a master and a remote unit spaced along the line. A master clock synchronizes the operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for the transmission of clock signals and data. All data are received at the master unit for processing into an accurate fault distance calculation.
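
    The abstract does not spell out the distance calculation; the sketch below illustrates the classic two-ended, synchronized-clock traveling-wave scheme that such a master/remote pair enables. The propagation speed of 0.98c is an assumed value typical of overhead lines, and all names are hypothetical.

```python
C = 299_792_458.0  # speed of light, m/s

def fault_distance_m(line_length_m, t_master_s, t_remote_s, v=0.98 * C):
    """Two-ended traveling-wave estimate: the fault-generated surge reaches
    each end after a delay proportional to its distance, so with the two
    clocks synchronized the distance from the master end is
    d = (L + v * (t_master - t_remote)) / 2."""
    return 0.5 * (line_length_m + v * (t_master_s - t_remote_s))

# A fault 30 km from the master on a 100 km line: the surge arrives at the
# master 40 km / v seconds earlier than at the remote end.
v = 0.98 * C
print(fault_distance_m(100_000.0, 30_000.0 / v, 70_000.0 / v))  # ~30000.0 m
```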

  6. Microstructural investigations on carbonate fault core rocks in active extensional fault zones from the central Apennines (Italy)

    Science.gov (United States)

    Cortinovis, Silvia; Balsamo, Fabrizio; Storti, Fabrizio

    2017-04-01

    The study of the microstructural and petrophysical evolution of cataclasites and gouges has a fundamental impact on both the hydraulic and the frictional properties of fault zones. In recent decades, growing attention has been paid to the characterization of carbonate fault core rocks, owing to the nucleation and propagation of coseismic ruptures in carbonate successions (e.g., the Umbria-Marche 1997, L'Aquila 2009 and Amatrice 2016 earthquakes in the Central Apennines, Italy). Among several physical parameters, grain size and shape in fault core rocks are expected to control the mode of sliding along the slip surfaces in active fault zones, thus influencing the propagation of coseismic ruptures during earthquakes. Nevertheless, the role of the evolution of grain size and shape distributions in controlling the weakening or strengthening behavior of seismogenic fault zones is still not fully understood, partly because a comprehensive database from natural fault cores is still missing. In this contribution, we present a preliminary study of seismogenic extensional fault zones in the Central Apennines, combining detailed field mapping with grain size and microstructural analysis of fault core rocks. Field mapping was aimed at describing the structural architecture of the fault systems and the along-strike variations in fault rock distribution and fracturing. In the laboratory we used a Malvern Mastersizer 3000 granulometer to obtain a precise grain size characterization of loose fault rocks, combined with sieving for the coarser size classes. In addition, we employed image analysis on thin sections to quantify the grain shape and size in cemented fault core rocks. The studied fault zones consist of an up to 5-10 m thick fault core, where most of the slip is accommodated, surrounded by a tens-of-metres wide fractured damage zone. Fault core rocks consist of (1) loose to partially cemented breccias characterized by different grain sizes (from several cm down to mm) and variable grain shapes (from very angular to sub

  7. Young Researcher Meeting, L'Aquila 2015

    International Nuclear Information System (INIS)

    Agostini, F; Antolini, C; Bossa, M; Cattani, G; Dell'Oro, S; D'Angelo, M; Di Stefano, M; Fragione, G; Migliaccio, M; Pagnanini, L; Pietrobon, D; Pusceddu, E; Serra, M; Stellato, F

    2016-01-01

    The Young Researcher Meeting (www.yrmr.it) has been established as a forum for students, postdoctoral fellows and young researchers determined to play a proactive role in scientific progress. Since 2009 we have run itinerant, yearly meetings to discuss the most recent developments and achievements in physics, as we are firmly convinced that sharing expertise and experience is the foundation of research activity. One of the main purposes of the conference is to create an international network of young researchers, both experimentalists and theorists, and fruitful collaborations across the different branches of physics. The format we chose is an informal meeting, primarily aimed at students and postdoctoral researchers at the beginning of their scientific careers, who are encouraged to present their work in brief presentations designed to genuinely engage the audience and foster cross-pollination of ideas. The sixth edition of the Young Researcher Meeting was held at the Gran Sasso Science Institute (GSSI), L'Aquila. The high number of valuable contributions gave rise to a busy program for the two-day conference on October 12th-13th. The event gathered 70 participants from institutes all around the world. The plenary talk sessions covered several areas of pure and applied physics, and they were complemented by an extremely rich and interactive poster session. This year's edition of the meeting also featured a lectio magistralis by Prof. E. Coccia, director of the GSSI, who discussed the frontiers of gravitational wave physics, commemorating the International Year of Light on the centenary of Einstein's theory of general relativity. On October 14th, the participants in the conference took part in a guided tour of the Gran Sasso National Laboratories (LNGS), one of the major particle physics laboratories in the world. In this volume, we collect part of the contributions that were presented at the conference as either talks or

  8. The earthquake lights (EQL) of the 6 April 2009 Aquila earthquake, in Central Italy

    Directory of Open Access Journals (Sweden)

    C. Fidani

    2010-05-01

    A seven-month collection of testimonials about the 6 April 2009 earthquake in Aquila, Abruzzo region, Italy, was compiled into a catalogue of non-seismic phenomena. Luminous phenomena were often reported, starting about nine months before the strong shock and continuing until about five months after it. A summary and list of the characteristics of these sightings was made according to 20th-century classifications, and a comparison was made with Galli's outcomes. The sightings were distributed over a large area around the city of Aquila, with a major extension to the north, up to 50 km. Various earthquake lights were correlated with several landscape characteristics and with the source and dynamics of the earthquake. Some preliminary considerations on the locations of the sightings suggest a correlation between electrical discharges and asperities, while flames were mostly seen along the Aterno Valley.

  9. L'Aquila's reconstruction challenges: has Italy learned from its previous earthquake disasters?

    Science.gov (United States)

    Ozerdem, Alpaslan; Rufini, Gianni

    2013-01-01

    Italy is an earthquake-prone country and its disaster emergency response experiences over the past few decades have varied greatly, with some being much more successful than others. Overall, however, its reconstruction efforts have been criticised for being ad hoc, delayed, ineffective, and untargeted. In addition, while the emergency relief response to the L'Aquila earthquake of 6 April 2009 (the primary case study in this evaluation) seems to have been successful, the reconstruction initiative got off to a very problematic start. To explore the root causes of this phenomenon, the paper argues that, owing to the way in which Italian Prime Minister Silvio Berlusconi politicised the process, the L'Aquila reconstruction endeavour is likely to suffer problems with local ownership, national/regional/municipal coordination, and corruption. It concludes with a set of recommendations aimed at addressing the pitfalls that may confront the L'Aquila reconstruction process over the next few years. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  10. The dynamic behaviour of the mammoth in the Spanish fortress, L’Aquila, Italy

    Directory of Open Access Journals (Sweden)

    Casarin Filippo

    2015-01-01

    The fossil remains of a "Mammuthus meridionalis" were found on 25 March 1954 in a lime quarry close to the city of L'Aquila. The mammoth skeleton was soon "reconstructed" on a forged iron frame and placed in one of the main halls of the Spanish fortress in L'Aquila. A comprehensive restoration was recently completed (2013-2015), which also included a study of the adequacy of the supporting frame, a frame that survived the strong 2009 L'Aquila earthquake. After a laser-scanner survey, which allowed a very detailed Finite Element model to be built, Operational Modal Analysis was employed to obtain the dynamic identification of the structure. The results of the experimental activities explain the capacity of the structure to withstand the 2009 main shock, since its natural frequencies proved to be quite low. The structure acted as a "natural" seismic device, avoiding its Ultimate Limit State at the cost, however, of large displacements. The seismic motion caused several cracks at the edges of the bones, indicating the non-fulfilment of the Damage Limit State of the artistic contents. A proposal for seismic isolation and redesign of the supporting frame was then discussed. The paper illustrates the scientific activities assisting the restoration intervention, entailing a multidisciplinary approach in the fields of restoration, palaeontology and seismic engineering.

  11. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to
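
    The propagation idea described here can be illustrated with a toy model. The sketch below is not the Draphys implementation, only a minimal illustration of reasoning over a fault-propagation graph: the components and sensors are hypothetical, and a candidate source is any component whose downstream effects cover all abnormal readings, ranked most specific first.

```python
from collections import deque

# Hypothetical mini-model (not the Draphys knowledge base): an edge means
# "a fault here propagates to there" as the faulted system keeps operating.
PROPAGATES = {
    "engine": ["oil_pressure_sensor", "pump"],
    "pump": ["hydraulic_line"],
    "hydraulic_line": ["actuator_pressure_sensor", "reservoir_level_sensor"],
}

def downstream(model, start):
    """Everything reachable from `start` along propagation edges."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in model.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def candidate_sources(model, abnormal):
    """Components whose propagation cone covers every abnormal reading,
    most specific (smallest cone) first."""
    hits = [c for c in model if abnormal <= downstream(model, c)]
    return sorted(hits, key=lambda c: len(downstream(model, c)))

print(candidate_sources(PROPAGATES,
                        {"actuator_pressure_sensor", "reservoir_level_sensor"}))
# ['hydraulic_line', 'pump', 'engine']
```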

  12. Fault-Related Sanctuaries

    Science.gov (United States)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult sites and the content of the related myths suggest that specific points along the trace of active faults were noticed in the past and worshipped as special 'sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the 'sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  13. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
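
    The trade-off described here is easy to demonstrate. Below is a minimal sketch, not the paper's method: a hypothetical three-event tree TOP = (A AND B) OR C with lognormally distributed basic-event probabilities, comparing a plain Monte Carlo 95th percentile with the Wilks order-statistic bound (with 59 samples the sample maximum is a one-sided 95/95 upper bound, since 0.95^59 < 0.05). The medians and error factors are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
MEDIANS = np.array([1e-3, 2e-3, 1e-4])   # hypothetical basic events A, B, C
EFS     = np.array([3.0, 3.0, 10.0])     # error factors EF = p95 / median

def top_event(p):
    # Example tree: TOP = (A AND B) OR C
    a, b, c = p
    return 1.0 - (1.0 - a * b) * (1.0 - c)

def sample_top(n):
    """Draw basic-event probabilities from lognormals (sigma = ln EF / 1.645,
    since EF is the ratio of the 95th percentile to the median) and
    propagate them through the tree."""
    sigma = np.log(EFS) / 1.645
    p = rng.lognormal(np.log(MEDIANS)[:, None], sigma[:, None], size=(3, n))
    return top_event(p)

print("MC 95th percentile:", np.quantile(sample_top(10_000), 0.95))
# Wilks: with 59 samples, P(all below the true p95) = 0.95**59 < 0.05, so
# the sample maximum is a one-sided 95/95 upper bound.
print("Wilks 95/95 bound :", sample_top(59).max())
```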

  14. Menopon gallinae lice in the golden eagle (Aquila chrysaetos) and marsh harrier (Circus aeruginosus) in Najaf province, Iraq

    Directory of Open Access Journals (Sweden)

    Al-Fatlawi M. A. A

    2017-07-01

    Our study is the first work on the ectoparasites of the golden eagle (Aquila chrysaetos) and marsh harrier (Circus aeruginosus) in Iraq. Overall, we examined 17 eagles in the period from 1 November 2016 to 25 February 2017, of which 4 were found to be infected (23.5%). All infected birds were female. The eagles were hunted in the Najaf Sea area. The area under the wings and between the feathers was grossly examined for parasites. Lice of the genus Menopon gallinae were isolated from 4 eagles, from the area under the wings. Infected eagles suffered from skin redness. In all, 38 parasites were isolated from the infected eagles, and slides were prepared from these lice for species classification. This study is the first record of the shaft louse (M. gallinae) in the golden eagle and marsh harrier in Iraq.

  15. L'Aquila 1962. "Alternative Attuali" e l'idea di "mostra-saggio"

    Directory of Open Access Journals (Sweden)

    Nicoletti, Luca Pietro

    2015-10-01

    In 1962 Enrico Crispolti inaugurated, at the sixteenth-century fort of L'Aquila, the first edition of the exhibition series "Alternative Attuali", proposing a new model of group exhibition: not a simple survey of different artists, but a proposed dialogue between different positions (the "alternatives") aimed at moving beyond Art Informel. With the model of the "mostra-saggio" (essay-exhibition), enriched by a debate in the catalogue, an exhibition concept was applied for the first time that placed the present situation in critical perspective (and in historical projection).

  16. Aquila Remotely Piloted Vehicle System Technology Demonstration (RPV-STD) Program. Volume 3. Field Test Program

    Science.gov (United States)

    1979-04-01

    FLIGHT TESTS. This section summarizes each of the Crows Landing flight tests, from 1 to 11 December 1975. 2.4.1 Flight 1: Aquila RPV 001 took off at 09:42... RC pilot in the stabilized RC mode. To facilitate these attempts, an automobile, with its headlights on high beam, was positioned on each side of the... the vans. At approximately 2 to 3 km, the actual automobile headlights would become visible. Then, the operator would attempt to reposition the RPV

  17. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process with the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected into a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features and the utilization of the experimental results obtained - both as design feedback and for the evaluation of dependability measures - are used to illustrate the relevance of the method. (author) [fr]

  18. Stress and Strain Rates from Faults Reconstructed by Earthquakes Relocalization

    Science.gov (United States)

    Morra, G.; Chiaraluce, L.; Di Stefano, R.; Michele, M.; Cambiotti, G.; Yuen, D. A.; Brunsvik, B.

    2017-12-01

    Recurrence of main earthquakes on the same fault depends on the kinematic setting, the hosting lithologies and the fault geometry and population. Northern and central Italy transitioned from convergence to post-orogenic extension. This has produced a unique and very complex tectonic setting, characterized by superimposed normal faults crossing different geologic domains, that allows a variety of seismic manifestations to be investigated. In the past twenty years three seismic sequences (1997 Colfiorito, 2009 L'Aquila and 2016-17 Amatrice-Norcia-Visso) activated a 150 km long normal fault system located between the central and northern Apennines, allowing the recording of thousands of seismic events. Both the 1997 and the 2009 main shocks were preceded by a series of small pre-shocks occurring in proximity to the future largest events. It has been proposed and modelled that the seismicity pattern of the two foreshock sequences was caused by an active dilatancy phenomenon, due to fluid flow in the source area. Seismic activity has continued intensively until three events with 6.0

  19. Mortality in the L'Aquila (central Italy) earthquake of 6 April 2009.

    Science.gov (United States)

    Alexander, David; Magni, Michele

    2013-01-07

    This paper presents the results of an analysis of data on mortality in the magnitude 6.3 earthquake that struck the central Italian city and province of L'Aquila during the night of 6 April 2009. The aim is to create a profile of the deaths in terms of age, gender, location, behaviour during the tremors, and other aspects. This could help predict the pattern of casualties and priorities for protection in future earthquakes. To establish a basis for analysis, the literature on seismic mortality is surveyed. The conclusions of previous studies are synthesised regarding patterns of mortality, entrapment, survival times, self-protective behaviour, gender and age. These factors are investigated for the data set covering the 308 fatalities in the L'Aquila earthquake, with help from interview data on behavioural factors obtained from 250 survivors. In this data set, there is a strong bias towards victimisation of young people, the elderly and women. Part of this can be explained by geographical factors regarding building performance: the rest of the explanation refers to the vulnerability of the elderly and the relationship between perception and action among female victims, who tend to be more fatalistic than men and thus did not abandon their homes between a major foreshock and the main shock of the earthquake, three hours later. In terms of casualties, earthquakes commonly discriminate against the elderly and women. Age and gender biases need further investigation and should be taken into account in seismic mitigation initiatives.

  20. L’Aquila Smart Clean Air City: The Italian Pilot Project for Healthy Urban Air

    Directory of Open Access Journals (Sweden)

    Alessandro Avveduto

    2017-11-01

    Exposure to atmospheric pollution is a major concern for urban populations. Currently, no effective strategy has been adopted to tackle the problem. The paper presents the Smart Clean Air City project, a pilot experiment concerning the improvement of urban air quality. Small wet scrubber systems will operate in a network configuration in suitable urban areas of the city of L'Aquila (Italy). The purpose of this work is to describe the project and show the preliminary results obtained in the characterization of two urban sites before the remediation test; the main operating principles of the wet scrubber system will be discussed, as well as the design of the mobile treatment plant for processing the wastewater resulting from scrubber operation. Measurements of particle size distributions in the range of 0.30–25 µm took place at the two sites of interest, an urban background site and a traffic area in the city of L'Aquila. The mean number concentrations detected were 2.4 × 10⁷ and 4.5 × 10⁷ particles/m³, respectively. Finally, theoretical assessments, performed with Computational Fluid Dynamics (CFD) codes, will show the effects of the wet scrubber operation on air pollutants under different environmental conditions and in several urban usage patterns.

  1. Science, Right and Communication of Risk in L'Aquila trial

    Science.gov (United States)

    Altamura, Marco; Miozzo, Davide; Boni, Giorgio; Amato, Davide; Ferraris, Luca; Siccardi, Franco

    2013-04-01

    CIMA Research Foundation has had access to all the information of the criminal trial held in L'Aquila against some of the members of the Commissione Nazionale Grandi Rischi (National Commission for Forecasting and Preventing Major Risks) and some directors of the Italian Civil Protection Department. This information constitutes the basis of a study that has examined: the initiation of investigations by the families of the victims; the public prosecutor's indictment; the testimonies; the liaison between experts in seismology, social science and communication; the statement of the defence; and the first-instance verdict of condemnation. The study reveals the paramount importance of risk communication as an element of prevention. Particular attention is given to the Judicial Authority's ex-post control of the evaluations and decisions of persons acting as decision makers within the Civil Protection system. In the judgment just published by the Court of L'Aquila, the reassuring information given by scientists and Civil Protection operators appears to be regarded as a negative factor.

  2. Burnout among healthcare workers at L'Aquila: its prevalence and associated factors.

    Science.gov (United States)

    Mattei, Antonella; Fiasca, Fabiana; Mazzei, Mariachiara; Abbossida, Vincenzo; Bianchini, Valeria

    2017-12-01

    Burnout, now recognized as a real problem in terms of its negative impact on healthcare efficiency, is a stress condition that can be aggravated by exposure to natural disasters, such as the 2009 L'Aquila earthquake. This study aims to evaluate burnout syndrome, its associated risk factors and stress levels, and the individual coping strategies among healthcare professionals at L'Aquila General Hospital. A cross-sectional study of 190 healthcare workers was conducted. A questionnaire was used for the collection of socio-demographic, occupational and anamnestic data, together with the Maslach Burnout Inventory, the General Health Questionnaire-12 items (GHQ-12) and the Brief COPE. The burnout dimensions showed high scores in Emotional Exhaustion (38.95%), in Depersonalization (23.68%) and in lack of Personal Accomplishment (23.16%), along with the presence of moderate to high levels of distress (54.21%). In addition to factors already known to be associated with burnout (job perception and high levels of distress), exposure to an earthquake emerged as a factor independently associated with the syndrome. Adaptive coping strategies such as religiosity showed a significant negative relationship with burnout. Our research highlights the need for interventions directed at a reduction in workload and work stressors and an improvement in adaptive coping strategies, especially in a post-disaster workplace.

  3. Exploring the η Aquila System: Another Cepheid Parallax and Further Evidence for a Tertiary

    Science.gov (United States)

    Benedict, George Frederick; Barnes, Thomas G.; Evans, Nancy; Cochran, William; McArthur, Barbara E.; Harrison, Thomas E.

    2018-01-01

    We report progress towards a re-analysis of Hubble Space Telescope Fine Guidance Sensor astrometric data, originally acquired to determine a parallax for and absolute magnitudes of the classical Cepheid, η Aquila. This object was not included in past Cepheid Period-Luminosity Relation (PLR) work (Benedict et al. 2007, AJ, 133, 1810), because we had an insufficient number of epochs with which to establish a suspected and complicating companion orbit. Our new investigation is considerably aided by including a significant number of radial velocity measures (RV) from six sources, including new, high-quality Hobby-Eberly Telescope spectra. We first derive a 12 Fourier coefficient description of the Cepheid pulsation, solving for velocity offsets required to bring the six RV data sets into coincidence. We next model the RV residuals to that fit with an orbit. The resulting orbit has very high eccentricity. The astrometric residuals show only a very small perturbation, consistent with a prediction from the spectroscopic orbit. We finally include that orbit in a combined astrometry and radial velocity model. This modeling, similar to that presented in Benedict and Harrison (2017, AJ, 153, 258) yields a parallax, allowing inclusion of η Aquila in a PLR. It also establishes a Cepheid/companion mass ratio for the early-type star companion identified in IUE spectra (Evans 1991, ApJ, 372, 597).
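
    The first modeling step described here, a 12-harmonic Fourier description of the pulsation curve, is a plain linear least-squares problem. The sketch below is an illustrative reconstruction, not the authors' pipeline, and omits the per-dataset velocity offsets (which would enter as extra indicator columns in the design matrix); the synthetic data are invented for the check.

```python
import numpy as np

def fit_fourier_rv(phase, rv, n_harm=12):
    """Least-squares fit of v(phi) = a0 + sum_k [a_k cos(2 pi k phi)
    + b_k sin(2 pi k phi)] to phased radial velocities."""
    cols = [np.ones_like(phase)]
    for k in range(1, n_harm + 1):
        cols.append(np.cos(2.0 * np.pi * k * phase))
        cols.append(np.sin(2.0 * np.pi * k * phase))
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, rv, rcond=None)
    return coef, design @ coef  # coefficients and fitted curve

# Synthetic check: recover a two-harmonic pulsation curve from noisy data.
rng = np.random.default_rng(3)
phase = rng.random(200)
rv = 10 * np.sin(2 * np.pi * phase) + 3 * np.cos(4 * np.pi * phase) \
     + rng.normal(0, 0.5, 200)
coef, fit = fit_fourier_rv(phase, rv, n_harm=2)
print(np.round(coef, 2))  # ~[0, 0, 10, 3, 0]
```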

  4. Blog, social network e strategie narrative di resistenza nel post-terremoto dell’Aquila

    Directory of Open Access Journals (Sweden)

    Massimo Giuliani

    2013-06-01

    An earthquake, when it attacks a territory and its history, strikes the collective memory of a population. The reconstruction that followed the earthquake of L'Aquila contained elements that were equally problematic for the cohesion of the social and relational network, and therefore for the mental health of the stricken citizens and territories. Blogs, Facebook and online communication offered the population (not only the age group that usually makes use of digital technologies) a way to share information, chronicles and narrations about the post-earthquake period and to tell a story different from the sweetened one told by the media. While seeking a way to process loss and their own discomfort, many authors and bloggers became reference points for a community of readers thanks to their stories and chronicles. In addition, for the active citizenry the virtual setting of the Web became a substitute for the physical agora that had been wiped out by the earthquake and by the dispersion of the community all over the national territory. The experience of L'Aquila, also reconstructed here through conversations with bloggers and the observation of Facebook status updates, offers interesting elements for understanding the role of new media in everyday life and in an exceptional context such as collective trauma

  5. Geosphere coupling and hydrothermal anomalies before the 2009 Mw 6.3 L'Aquila earthquake in Italy

    Directory of Open Access Journals (Sweden)

    L. Wu

    2016-08-01

    LCAC mode was proposed to interpret the possible mechanisms of the multiple quasi-synchronous anomalies preceding the L'Aquila earthquake. Results indicate that CO2-rich fluids in deep crust might have played a significant role in the local LCAC process.

  6. Adaptive Response of Children and Adolescents with Autism to the 2009 Earthquake in L'Aquila, Italy

    Science.gov (United States)

    Valenti, Marco; Ciprietti, Tiziana; Di Egidio, Claudia; Gabrielli, Maura; Masedu, Francesco; Tomassini, Anna Rita; Sorge, Germana

    2012-01-01

    The literature offers no descriptions of the adaptive outcomes of people with autism spectrum disorder (ASD) after natural disasters. Aim of this study was to evaluate the adaptive behaviour of participants with ASD followed for 1 year after their exposure to the 2009 earthquake in L'Aquila (Italy) compared with an unexposed peer group with ASD,…

  7. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  8. Un San Sebastiano di Silvestro dell’Aquila e un San Vito di Saturnino Gatti / A St. Sebastian by Silvestro dell’Aquila and a St. Vitus by Saturnino Gatti

    Directory of Open Access Journals (Sweden)

    Lorenzo Principi

    2015-06-01

    The article focuses on the attribution of one previously unpublished statue to Silvestro di Giacomo da Sulmona, better known as Silvestro dell'Aquila (documented from 1471 to 1504), and of another to Saturnino Gatti (c. 1463-1518), both protagonists of Renaissance sculpture in Abruzzo. The first proposal concerns a wooden St. Sebastian, slightly under life-size, preserved in the church of Santa Maria ad Nives at Rocca di Mezzo, the main centre of the Altipiano delle Rocche and the birthplace of the celebrated cardinal Amico Agnifili, a patron of Silvestro di Giacomo. The second acquisition concerns a life-size wooden sculpture of St. Vitus, traced in the church of the same name at Colle San Vito, in the municipality of Tornimparte, a few steps from the frescoes executed by Saturnino Gatti between 1490 and 1494 in San Panfilo at Villagrande. Through an analysis of the different contexts in which the sculptures originated and, above all, through close comparisons with known works in the catalogues of the two artists, the first statue can be referred to the late production of Silvestro dell'Aquila and the second to the mature period of Saturnino Gatti.

  9. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies at the Saclay Nuclear Research Center) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program, PATREC-MC, has been written to solve the problem with the system components given in the fault tree representation. The second program, MONARC 2, has been written to solve the problem of complex system reliability by Monte Carlo simulation; here again the system (a residual heat removal system) is given in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of a Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels
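
    The core of such programs is direct sampling of component lifetimes against a fault tree. The sketch below is a generic illustration of the approach, not PATREC-MC or MONARC: the tree, failure rates and mission time are invented, and the Monte Carlo estimate is checked against the closed-form result.

```python
import numpy as np

rng = np.random.default_rng(1)

def unreliability(t_mission, rates, n=200_000):
    """Monte Carlo estimate of P(system failed by t_mission) for a toy
    fault tree TOP = pump AND (valve_a OR valve_b), with exponentially
    distributed component lifetimes (failure rates in 1/h)."""
    lifetimes = rng.exponential(1.0 / rates[:, None], size=(3, n))
    pump, valve_a, valve_b = lifetimes < t_mission   # True = failed by t
    return (pump & (valve_a | valve_b)).mean()

rates = np.array([1e-4, 5e-4, 5e-4])
est = unreliability(1000.0, rates)
# Closed form for comparison: F_pump * (1 - (1 - F_a)(1 - F_b))
F = 1.0 - np.exp(-rates * 1000.0)
print(est, F[0] * (1.0 - (1.0 - F[1]) * (1.0 - F[2])))
```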

  10. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled in based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and specific complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
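
    The engine behind such a sampler can be illustrated on the unmarked Strauss backbone alone. The sketch below runs a standard birth-death Metropolis-Hastings chain for a Strauss point process on the unit square; marks, level-set geometry and conditioning on observations are all omitted, and the parameter values are invented (gamma < 1 gives the pairwise inhibition that regularizes simulated fault patterns).

```python
import numpy as np

rng = np.random.default_rng(2)

def n_close(pts, p, r):
    """Number of points of `pts` within distance r of point p."""
    if len(pts) == 0:
        return 0
    d = np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1])
    return int((d < r).sum())

def strauss_mcmc(beta=50.0, gamma=0.3, r=0.08, steps=20_000):
    """Birth-death Metropolis-Hastings for a Strauss process on the unit
    square, density proportional to beta**n * gamma**(close pairs)."""
    pts = np.empty((0, 2))
    for _ in range(steps):
        if rng.random() < 0.5 or len(pts) == 0:    # propose a birth
            u = rng.random(2)
            accept = beta * gamma ** n_close(pts, u, r) / (len(pts) + 1)
            if rng.random() < min(1.0, accept):
                pts = np.vstack([pts, u])
        else:                                      # propose a death
            i = rng.integers(len(pts))
            rest = np.delete(pts, i, axis=0)
            accept = len(pts) / (beta * gamma ** n_close(rest, pts[i], r))
            if rng.random() < min(1.0, accept):
                pts = rest
    return pts

print(len(strauss_mcmc()))  # inhibited pattern, fewer points than beta
```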

  11. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These

  12. Geoethical implications in the L'Aquila case: scientific knowledge and communication

    Science.gov (United States)

    Di Capua, Giuseppe

    2013-04-01

    On 22 October 2012, three and a half years after the earthquake that destroyed the city of L'Aquila (central Italy), killing more than 300 people and wounding about 1,500, a landmark judgment for scientific research established the conviction of six members of the Major Risks Committee of the Italian Government and of a researcher of INGV (Istituto Nazionale di Geofisica e Vulcanologia) who had been called upon to provide information about the evolution of the seismic sequence. The judge held that these geoscientists were negligent during the meeting of 31 March 2009, convened to discuss the scientific aspects of the seismic risk of this area, affected by a long seismic sequence, also in the light of repeated warnings about the imminence of a strong earthquake, based on measurements of radon gas by an independent Italian technician and transmitted to the population by the mass media. Without going into the legal aspects of the criminal proceedings, this judgment is striking for the harshness of the sentence imposed on the scientists (six years of imprisonment, perpetual disqualification from public office and legal disqualification during the execution of the penalty, and compensation for the victims of up to several hundred thousand euros). Some of them are scientists known worldwide for their proven skills, professionalism and experience. In conclusion, these scientists were found guilty of having contributed to the death of many people because they had not communicated in an appropriate manner all the available information on the seismic hazard and vulnerability of the area of L'Aquila. This judgment represents a watershed in the way of looking at the social role of geoscientists in the defence against natural hazards and at their responsibility towards the people. But what does this responsibility consist of? It consists of the commitment to conduct updated and reliable scientific research, which provides for a detailed analysis of the epistemic uncertainty for a more

  13. SEISMIC SITE RESPONSE ESTIMATION IN THE NEAR SOURCE REGION OF THE 2009 L’AQUILA, ITALY, EARTHQUAKE

    Science.gov (United States)

    Bertrand, E.; Azzara, R.; Bergamashi, F.; Bordoni, P.; Cara, F.; Cogliano, R.; Cultrera, G.; di Giulio, G.; Duval, A.; Fodarella, A.; Milana, G.; Pucillo, S.; Régnier, J.; Riccio, G.; Salichon, J.

    2009-12-01

    On 6 April 2009, at 3:32 local time, a Mw 6.3 earthquake hit the Abruzzo region (central Italy), causing more than 300 casualties. The epicenter of the earthquake was 95 km NE of Rome and 10 km from the center of the city of L'Aquila, the administrative capital of the Abruzzo region. This city has a population of about 70,000 and was severely damaged by the earthquake, the total cost of the building damage being estimated at around 3 bn €. Historical masonry buildings particularly suffered from the seismic shaking, but some more modern reinforced concrete structures were also heavily damaged. To better estimate the seismic demand on these structures during the earthquake, we deployed temporary arrays in the near-source region. Downtown L'Aquila, as well as a rural quarter composed of ancient dwelling centers located west of L'Aquila (Roio area), were instrumented. The array set up downtown consisted of nearly 25 stations including velocimetric and accelerometric sensors. In the Roio area, 6 stations operated for almost one month. The data have been processed in order to study the spectral ratios of the horizontal components of ground motion at the soil sites and at a reference site, as well as the spectral ratio of the horizontal and the vertical motion at a single recording site. Downtown L'Aquila is set on a Quaternary fluvial terrace (breccias with limestone boulders and clasts in a marly matrix), which forms the left bank of the Aterno River and slopes down in the southwest direction towards the river. The alluvial deposits lie on lacustrine sediments, which reach their maximum thickness (about 250 m) in the center of L'Aquila. According to De Luca et al. (2005), these Quaternary deposits seem to produce an important amplification factor in the low-frequency range (0.5-0.6 Hz). However, the level of amplification varies strongly from one point to another in the center of the city. This new experimentation allows new and more
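
    The second of the two processing steps mentioned, the single-station horizontal-to-vertical (H/V) spectral ratio, is sketched below in minimal form: tapering only, with no smoothing or window averaging, and a synthetic record in place of real data. The 0.55 Hz resonance is an invented stand-in for the 0.5-0.6 Hz amplification reported downtown.

```python
import numpy as np

def hv_spectral_ratio(ns, ew, ud, dt):
    """Single-station H/V: geometric mean of the two horizontal amplitude
    spectra divided by the vertical one (Hann-tapered, unsmoothed)."""
    def amp(x):
        x = np.asarray(x, dtype=float)
        return np.abs(np.fft.rfft(x * np.hanning(x.size)))
    h = np.sqrt(amp(ns) * amp(ew))
    return np.fft.rfftfreq(len(ud), d=dt), h / amp(ud)

# Synthetic check: a 0.55 Hz resonance on the horizontals should produce an
# H/V peak near 0.5-0.6 Hz. The same seed is reused so the only horizontal-
# to-vertical difference is the added resonance.
t = np.arange(0, 120, 0.01)
noise = lambda: np.random.default_rng(4).normal(size=t.size)
ns = noise() + 5 * np.sin(2 * np.pi * 0.55 * t)
freqs, hv = hv_spectral_ratio(ns, ns, noise(), dt=0.01)
print(freqs[np.argmax(hv)])  # ~0.55
```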

  14. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
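
    The graph-theoretic notions named here translate directly into code. The sketch below is a generic illustration rather than any of the cited codes: it enumerates path sets by depth-first search on a small hypothetical supply graph and then finds minimal cut sets by brute force (adequate only for small graphs).

```python
from itertools import combinations

# Hypothetical supply graph traced from a P and ID: src feeds two parallel
# pumps that share one valve before the sink. Cycles are allowed in general.
GRAPH = {"src": ["p1", "p2"], "p1": ["valve"], "p2": ["valve"],
         "valve": ["sink"]}

def path_sets(g, node, goal, seen=()):
    """All simple paths from node to goal, each as a set of visited nodes;
    a node is visited at most once per path, so cycles terminate."""
    if node == goal:
        return [set(seen) | {goal}]
    paths = []
    for nxt in g.get(node, []):
        if nxt not in seen:
            paths += path_sets(g, nxt, goal, seen + (node,))
    return paths

def minimal_cut_sets(g, src, sink):
    """Minimal sets of internal nodes whose failure breaks every path."""
    paths = path_sets(g, src, sink)
    nodes = sorted(set().union(*paths) - {src, sink})
    cuts = []
    for k in range(1, len(nodes) + 1):
        for combo in combinations(nodes, k):
            s = set(combo)
            if all(s & p for p in paths) and not any(c <= s for c in cuts):
                cuts.append(s)
    return cuts

print(minimal_cut_sets(GRAPH, "src", "sink"))  # [{'valve'}, {'p1', 'p2'}]
```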

  15. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation

  16. On the morphology of outbursts of accreting millisecond X-ray pulsar Aquila X-1

    Science.gov (United States)

    Güngör, C.; Ekşi, K. Y.; Göğüş, E.

    2017-10-01

    We present the X-ray light curves of the last two outbursts, in 2014 and 2016, of the well-known accreting millisecond X-ray pulsar (AMXP) Aquila X-1, using Monitor of All-sky X-ray Image (MAXI) observations in the 2-20 keV band. After calibrating the MAXI count rates to the All-Sky Monitor (ASM) level, we report that the 2016 outburst is the most energetic event ever observed from this source. We show that the 2016 outburst is a member of the long-high class according to the classification presented by Güngör et al., with ˜68 cnt/s maximum flux and ˜60 days duration, while the previous outburst, in 2014, belongs to the short-low class, with ˜25 cnt/s maximum flux and ˜30 days duration. In order to understand the differences between outbursts, we investigate the possible dependence of the peak intensity on the duration of the quiescent episode leading up to the outburst, and find that outbursts following longer quiescent episodes tend to reach higher peak intensities.

  17. STATUS BAKU MUTU AIR LAUT PERAIRAN TELUK AMBON LUAR UNTUK WISATA BAHARI KAPAL TENGGELAM SS AQUILA

    Directory of Open Access Journals (Sweden)

    Guntur Adhi Rahmawan

    2017-09-01

    The waters of Ambon Bay consist of two parts, the Inner Ambon Bay and the Outer Ambon Bay, separated by a narrow and shallow gap. Ambon Bay serves many functions, in transportation, conservation, and tourism. The wreck of the SS Aquila, a ship that sank on 27 May 1958, has become one of the diving attractions in Ambon Bay. Determining the water pollution index of Outer Ambon Bay is therefore very important as supporting material for the development of marine tourism. The pollution index was determined by direct measurement of sea water quality parameters using a Water Quality Checker (DKK TOA WQC Type-24), as well as by laboratory analysis to determine the chemical parameters of the seawater (pH, TSS, salinity, turbidity, oil, grease). The results show that, on the basis of several water quality parameters for marine tourism, the waters of Outer Ambon Bay still comply with the standard criteria of Keputusan Menteri Negara Lingkungan Hidup Nomor 51 Tahun 2004 on Guidelines for the Determination of Water Quality Status.

  18. Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)

    International Nuclear Information System (INIS)

    Fonti, Roberta; Barthel, Rainer; Formisano, Antonio; Borri, Antonio; Candela, Michele

    2015-01-01

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different "modes of damage" of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for the out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of the L'Aquila district is discussed.

  20. Multiscale Documentation and Monitoring of L'Aquila Historical Centre Using UAV Photogrammetry

    Science.gov (United States)

    Dominici, D.; Alicandro, M.; Rosciano, E.; Massimi, V.

    2017-05-01

    Nowadays geomatic techniques can guarantee not only a precise and accurate survey for the documentation of our historical heritage but also a solution to monitor its behaviour over time after, for example, a catastrophic event (earthquakes, landslides, etc.). Europe is trying to move towards harmonized actions to store information on cultural heritage (MIBAC with the ICCS forms, English Heritage with the MIDAS scheme, etc.) but it would be important to provide standardized methods in order to perform measuring operations to collect certified metric data. The final result could be a database to support the entire management of the cultural heritage and also a checklist of "what to do" and "when to do it". The wide range of geomatic techniques provides many solutions to acquire, to organize and to manage data at a multiscale level: high resolution satellite images can provide information in a short time during the "early emergency" while UAV photogrammetry and laser scanning can provide digital high resolution 3D models of buildings, orthophotos of roofs and facades and so on. This paper presents some multiscale survey case studies using UAV photogrammetry: from a minor historical village (Aielli) to the centre of L'Aquila (Santa Maria di Collemaggio Church) from the post-emergency to now. This choice has been taken not only to present how geomatics is an effective science for modelling but also to present a complete and reliable way to perform conservation and/or restoration through precise monitoring techniques, as shown in the third case study.

  1. The genome sequence of a widespread apex predator, the golden eagle (Aquila chrysaetos).

    Directory of Open Access Journals (Sweden)

    Jacqueline M Doyle

    Full Text Available Biologists routinely use molecular markers to identify conservation units, to quantify genetic connectivity, to estimate population sizes, and to identify targets of selection. Many imperiled eagle populations require such efforts and would benefit from enhanced genomic resources. We sequenced, assembled, and annotated the first eagle genome using DNA from a male golden eagle (Aquila chrysaetos) captured in western North America. We constructed genomic libraries that were sequenced using Illumina technology and assembled the high-quality data to a depth of ∼40x coverage. The genome assembly includes 2,552 scaffolds >10 Kb and 415 scaffolds >1.2 Mb. We annotated 16,571 genes that are involved in myriad biological processes, including such disparate traits as beak formation and color vision. We also identified repetitive regions spanning 92 Mb (∼6% of the assembly), including LINEs, SINEs, LTR-RTs and DNA transposons. The mitochondrial genome encompasses 17,332 bp and is ∼91% identical to the Mountain Hawk-Eagle (Nisaetus nipalensis). Finally, the data reveal that several anonymous microsatellites commonly used for population studies are embedded within protein-coding genes and thus may not have evolved in a neutral fashion. Because the genome sequence includes ∼800,000 novel polymorphisms, markers can now be chosen based on their proximity to functional genes involved in migration, carnivory, and other biological processes.

  2. Rubble masonry response under cyclic actions: The experience of L'Aquila city (Italy)

    Science.gov (United States)

    Fonti, Roberta; Barthel, Rainer; Formisano, Antonio; Borri, Antonio; Candela, Michele

    2015-12-01

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different "modes of damage" of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L'Aquila district is discussed.

  3. Movements and landscape use of Eastern Imperial Eagles Aquila heliaca in Central Asia

    Science.gov (United States)

    Poessel, Sharon; Bragin, Evgeny A.; Sharpe, Peter B.; Garcelon, David K.; Bartoszuk, Kordian; Katzner, Todd E.

    2018-01-01

    Capsule: We describe ecological factors associated with movements of a globally declining raptor species, the Eastern Imperial Eagle Aquila heliaca. Aims: To describe the movements, habitat associations and resource selection of Eastern Imperial Eagles marked in Central Asia. Methods: We used global positioning system (GPS) data sent via satellite telemetry devices deployed on Eastern Imperial Eagles captured in Kazakhstan to calculate distances travelled and to associate habitat and weather variables with eagle locations collected throughout the annual cycle. We also used resource selection models to evaluate habitat use of tracked birds during autumn migration. Separately, we used wing-tagging recovery data to broaden our understanding of wintering locations of eagles. Results: Eagles tagged in Kazakhstan wintered in most countries on the Arabian Peninsula, as well as Iran and India. The adult eagle we tracked travelled more efficiently than did the four pre-adults. During autumn migration, telemetered eagles used a mixture of vegetation types, but during winter and summer, they primarily used bare and sparsely vegetated areas. Finally, telemetered birds used orographic updrafts to subsidize their autumn migration flight, but they relied on thermal updrafts during spring migration. Conclusion: Our study is the first to use GPS telemetry to describe year-round movements and habitat associations of Eastern Imperial Eagles in Central Asia. Our findings provide insight into the ecology of this vulnerable raptor species that can contribute to conservation efforts on its behalf.

  4. Ileo-ceco-rectal Intussusception Requiring Intestinal Resection and Anastomosis in a Tawny Eagle (Aquila rapax).

    Science.gov (United States)

    Sabater, Mikel; Huynh, Minh; Forbes, Neil

    2015-03-01

    A 23-year-old male tawny eagle (Aquila rapax) was examined because of sudden onset of lethargy, regurgitation, and hematochezia. An intestinal obstruction was suspected based on radiographic findings, and an ileo-ceco-rectal intussusception was confirmed by coelioscopy. A 14.3-cm section of intestine was resected before an intestinal anastomosis was done. Coelomic endoscopic examination confirmed a postsurgical complication of adhesions between the intestinal anastomosis and the dorsal coelomic wall, resulting in a partial luminal stricture and requiring surgical removal of the adhesions. Rectoscopy was useful in diagnosing a mild luminal stricture related to the second surgery. Complete recovery was observed 2 months after surgery. Lack of further complications in the 2 years after surgery demonstrates good tolerance of intestinal resection and anastomosis of a large segment of bowel in an eagle. This report is the first reported case of intussusception in an eagle and emphasizes the potential use of endoscopic examination in the diagnosis as well as in the management of complications.

  5. Predators as prey at a Golden Eagle Aquila chrysaetos eyrie in Mongolia

    Science.gov (United States)

    Ellis, D.H.; Tsengeg, Pu; Whitlock, P.; Ellis, Merlin H.

    2000-01-01

    Although golden eagles (Aquila chrysaetos) have for decades been known to occasionally take large or dangerous quarry, the capturing of such was generally believed to be rare and/or the act of starved birds. This report provides details of an exceptional diet at a golden eagle eyrie in eastern Mongolia with unquantified notes on the occurrence of foxes at other eyries in Mongolia. Most of the prey we recorded were unusual, including 1 raven (Corvus corax), 3 demoiselle cranes (Anthropoides virgo), 1 upland buzzard (Buteo hemilasius), 3 owls, 27 foxes, and 11 Mongolian gazelles. Some numerical comparisons are of interest. Our value for gazelle calves (10 minimum count, 1997) represents 13% of 78 prey items and at least one adult was also present. Our total of only 15 hares (Lepus tolai) and 4 marmots (Marmota sibirica) compared to 27 foxes suggests not so much a preference for foxes, but rather that populations of more normal prey were probably depressed at this site. Unusual prey represented 65% of the diet at this eyrie.

  6. The 6 April 2009 earthquake at L'Aquila: a preliminary analysis of magnetic field measurements

    Directory of Open Access Journals (Sweden)

    U. Villante

    2010-02-01

    Full Text Available Several investigations reported the possible identification of anomalous geomagnetic field signals prior to earthquake occurrence. In the ULF frequency range, candidates for precursory signatures have been proposed in the increase in the noise background and in the polarization parameter (i.e. the ratio between the amplitude/power of the vertical component and that of the horizontal component), in the changing characteristics of the slope of the power spectrum and fractal dimension, and in the possible occurrence of short duration pulses. We conducted, with conventional techniques of data processing, a preliminary analysis of the magnetic field observations performed at L'Aquila during the three months preceding the 6 April 2009 earthquake, focusing attention on the possible occurrence of features similar to those identified in previous events. Within the limits of this analysis, we do not find compelling evidence for any of the features which have been proposed as earthquake precursors: indeed, most aspects of our observations (which, in some cases, appear consistent with previous findings) might be interpreted in terms of the general magnetospheric conditions and/or of different sources.
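
    The polarization parameter described in this abstract is essentially a ratio of spectral power between the vertical and horizontal field components. Below is a minimal sketch of how such a ratio could be computed from two magnetometer time series using Welch power spectral density estimates; the sampling rate, segment length, and frequency band are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.signal import welch

    def polarization_ratio(bz, bh, fs, band=(0.01, 0.1)):
        """Ratio of vertical (bz) to horizontal (bh) power in a frequency band.

        fs   : sampling frequency in Hz
        band : (low, high) band limits in Hz, an illustrative ULF choice
        """
        f, pz = welch(bz, fs=fs, nperseg=1024)
        _, ph = welch(bh, fs=fs, nperseg=1024)
        sel = (f >= band[0]) & (f <= band[1])
        return pz[sel].sum() / ph[sel].sum()

    # Synthetic example: one hour of 1 Hz noise in each component
    rng = np.random.default_rng(0)
    print(polarization_ratio(rng.normal(size=3600), rng.normal(size=3600), fs=1.0))
    ```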

  7. Damage and recovery of historic buildings: The experience of L’Aquila

    International Nuclear Information System (INIS)

    Modena, Claudio; Valluzzi, Maria Rosa; Da Porto, Franca; Munari, Marco

    2015-01-01

    Problems range from the very definition and choice of the “conventional” safety level, to the methodologies that can be used to perform reliable structural analyses and safety verifications (as modern ones are frequently not suitable for the constructions under consideration), and to the selection, design and execution of appropriate materials and intervention techniques aimed to repair and strengthen the built heritage while preserving its cultural, historic and artistic values. The earthquake that struck the Abruzzo region on 6 April 2009 at 3:32 a.m. had its epicentre in the capital of the region, L’Aquila, and seriously affected a wide area around the city, where many historic towns and villages are found. Lessons learned from this event gave relevant contributions to the development of specific tools, now available to practitioner engineers and architects, to appropriately tackle the above-mentioned problems: a methodology to intervene on complex and connected buildings in the historic centres, the definition of adequate materials and techniques to intervene on the damaged buildings, and codes and codes of practice specific to historic constructions. A short review of all the mentioned aspects is presented in the paper, with specific reference to research activities, practical applications and the recent evolution of codes and guidelines.

  8. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

  9. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    Fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, which is an essential part of safety design and operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real time and simulation quantitative data into qualitative fault models for better decision support and tuning of FSN. The design of the proposed fault simulator is described and applied on experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design as well as to support recovery and shutdown operation and disaster management.

  10. Iowa Bedrock Faults

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This fault coverage locates and identifies all currently known/interpreted fault zones in Iowa, that demonstrate offset of geologic units in exposure or subsurface...

  11. Layered Fault Management Architecture

    National Research Council Canada - National Science Library

    Sztipanovits, Janos

    2004-01-01

    ... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

  12. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  13. [Medium- and long-term health effects of the L'Aquila earthquake (Central Italy, 2009) and of other earthquakes in high-income Countries: a systematic review].

    Science.gov (United States)

    Ripoll Gallardo, Alba; Alesina, Marta; Pacelli, Barbara; Serrone, Dario; Iacutone, Giovanni; Faggiano, Fabrizio; Della Corte, Francesco; Allara, Elias

    2016-01-01

    to compare the methodological characteristics of the studies investigating the middle- and long-term health effects of the L'Aquila earthquake with the features of studies conducted after other earthquakes occurred in high-income Countries. a systematic comparison between the studies which evaluated the health effects of the L'Aquila earthquake (Central Italy, 6th April 2009) and those conducted after other earthquakes occurred in comparable settings. Medline, Scopus, and 6 sources of grey literature were systematically searched. Inclusion criteria comprised measurement of health outcomes at least one month after the earthquake, investigation of earthquakes occurred in high-income Countries, and presence of at least one temporal or geographical control group. out of 2,976 titles, 13 studies regarding the L'Aquila earthquake and 51 studies concerning other earthquakes were included. The L'Aquila and the Kobe/Hanshin-Awaji (Japan, 17th January 1995) earthquakes were the most investigated. Studies on the L'Aquila earthquake had a median sample size of 1,240 subjects, a median duration of 24 months, and most frequently used a cross-sectional design (7/13). Studies on other earthquakes had a median sample size of 320 subjects, a median duration of 15 months, and most frequently used a time series design (19/51). the L'Aquila studies often focussed on mental health, while the earthquake effects on mortality, cardiovascular outcomes, and health systems were less frequently evaluated. A more intensive use of routine data could benefit future epidemiological surveillance in the aftermath of earthquakes.

  14. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  15. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (<1 km depth) introduces permeability heterogeneity and anisotropy; evaluating its impact on fluid flow requires a combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  16. Performance based fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2002-01-01

    Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

  17. Modeling Late-Summer Distribution of Golden Eagles (Aquila chrysaetos) in the Western United States.

    Science.gov (United States)

    Nielson, Ryan M; Murphy, Robert K; Millsap, Brian A; Howe, William H; Gardner, Grant

    2016-01-01

    Increasing development across the western United States (USA) elevates concerns about effects on wildlife resources; the golden eagle (Aquila chrysaetos) is of special concern in this regard. Knowledge of golden eagle abundance and distribution across the western USA must be improved to help identify and conserve areas of major importance to the species. We used distance sampling and visual mark-recapture procedures to estimate golden eagle abundance from aerial line-transect surveys conducted across four Bird Conservation Regions in the western USA between 15 August and 15 September in 2006-2010, 2012, and 2013. To assess golden eagle-habitat relationships at this scale, we modeled counts of golden eagles seen during surveys in 2006-2010, adjusted for probability of detection, and used land cover and other environmental factors as predictor variables within 20-km2 sampling units randomly selected from survey transects. We found evidence of positive relationships between intensity of use by golden eagles and elevation, solar radiation, and mean wind speed, and of negative relationships with the proportion of landscape classified as forest or as developed. The model accurately predicted habitat use observed during surveys conducted in 2012 and 2013. We used the model to construct a map predicting intensity of use by golden eagles during late summer across our ~2 million-km2 study area. The map can be used to help prioritize landscapes for conservation efforts, identify areas where mitigation efforts may be most effective, and identify regions for additional research and monitoring. In addition, our map can be used to develop region-specific (e.g., state-level) density estimates based on the latest information on golden eagle abundance from a late-summer survey and aid designation of geographic management units for the species.
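
    The modelling step described above, counts per 20-km2 sampling unit adjusted for detection probability and related to terrain and land-cover covariates, has the general flavour of a Poisson regression with an offset. The sketch below illustrates that generic idea on simulated data; the variable names, the offset formulation, and the data are our assumptions, not the authors' actual model.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    X = np.column_stack([
        rng.normal(size=n),        # elevation (standardised)
        rng.normal(size=n),        # mean wind speed (standardised)
        rng.uniform(0, 1, n),      # proportion of unit classified as forest
    ])
    p_detect = rng.uniform(0.5, 0.9, n)   # detection probability per unit
    lam = np.exp(-2 + 0.4 * X[:, 0] + 0.3 * X[:, 1] - 1.0 * X[:, 2])
    counts = rng.poisson(lam * p_detect)  # observed eagle counts

    # offset log(p) folds the detection adjustment into the Poisson mean
    model = sm.GLM(counts, sm.add_constant(X), family=sm.families.Poisson(),
                   offset=np.log(p_detect))
    print(model.fit().params)  # should roughly recover [-2, 0.4, 0.3, -1.0]
    ```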

  18. Suicidal intention and negative spiritual coping one year after the earthquake of L'Aquila (Italy).

    Science.gov (United States)

    Stratta, Paolo; Capanna, Cristina; Riccardi, Ilaria; Carmassi, Claudia; Piccinni, Armando; Dell'Osso, Liliana; Rossi, Alessandro

    2012-02-01

    This study investigated the rate of suicidal intention and its relationship with the features of religious involvement in a non-clinical sample of the adult population exposed to the L'Aquila earthquake. The study population was composed of 426 people who had experienced the earthquake (188 males and 238 females). For comparison, 522 people were recruited from nearby unaffected areas. The sample was investigated for suicidal intention screening, distinguishing Suicidal Screen-Negative (SSN) subjects from Positive (SSP) subjects. Brief Multidimensional Measure of Religiousness/Spirituality (BMMRS) and Impact of Event Scale (IES) assessments were administered. More SSP subjects were observed in the population exposed to the earthquake (Odds Ratio 3.54). A higher proportion of females showed suicidal ideation. Multivariate analysis showed overall significance for the between-subject factor. Univariate F tests for each BMMRS variable that contributed to significant overall effect showed that negative spiritual coping was significantly different. No differences were observed for IES scores between the two groups, but correlations with negative spiritual coping were found. The samples are relatively small and data are based on self-reports. Negative religious coping such as expression of conflict and doubt regarding matters of faith, as well as a feeling of being punished or abandoned by God, can prevail in response to prolonged stress without relief, as was experienced by the population exposed to the earthquake. These features are more associated with suicide ideation. Degree of religious affiliation and commitment examination by mental health practitioners can be useful when suicidal ideation is investigated. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. MULTISCALE DOCUMENTATION AND MONITORING OF L’AQUILA HISTORICAL CENTRE USING UAV PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    D. Dominici

    2017-05-01

    Full Text Available Nowadays geomatic techniques can guarantee not only a precise and accurate survey for the documentation of our historical heritage but also a solution to monitor its behaviour over time after, for example, a catastrophic event (earthquakes, landslides, etc.). Europe is trying to move towards harmonized actions to store information on cultural heritage (MIBAC with the ICCS forms, English Heritage with the MIDAS scheme, etc.) but it would be important to provide standardized methods in order to perform measuring operations to collect certified metric data. The final result could be a database to support the entire management of the cultural heritage and also a checklist of “what to do” and “when to do it”. The wide range of geomatic techniques provides many solutions to acquire, to organize and to manage data at a multiscale level: high resolution satellite images can provide information in a short time during the “early emergency” while UAV photogrammetry and laser scanning can provide digital high resolution 3D models of buildings, orthophotos of roofs and facades and so on. This paper presents some multiscale survey case studies using UAV photogrammetry: from a minor historical village (Aielli) to the centre of L’Aquila (Santa Maria di Collemaggio Church), from the post-emergency to now. This choice has been taken not only to present how geomatics is an effective science for modelling but also to present a complete and reliable way to perform conservation and/or restoration through precise monitoring techniques, as shown in the third case study.

  20. A new Wolf-Rayet star and its circumstellar nebula in Aquila

    Science.gov (United States)

    Gvaramadze, V. V.; Kniazev, A. Y.; Hamann, W.-R.; Berdnikov, L. N.; Fabrika, S.; Valeev, A. F.

    2010-04-01

    We report the discovery of a new Wolf-Rayet star in Aquila via detection of its circumstellar nebula (reminiscent of ring nebulae associated with late WN stars) using Spitzer Space Telescope archival data. Our spectroscopic follow-up of the central point source associated with the nebula showed that it is a WN7h star (we named it WR121b). We analysed the spectrum of WR121b using the Potsdam Wolf-Rayet model atmospheres, obtaining a stellar temperature of ≈50 kK. The stellar wind composition is dominated by helium with ~20 per cent hydrogen. The stellar spectrum is highly reddened [E(B - V) = 2.85 mag]. Adopting an absolute magnitude of Mv = -5.7, the star has a luminosity of log L/Lsun = 5.75 and a mass-loss rate of 10^-4.7 Msun yr^-1, and resides at a distance of 6.3 kpc. We searched for a possible parent cluster of WR121b and found that this star is located ≈1° from the young star cluster embedded in the giant HII region W43 (containing a WN7+a/OB? star, WR121a). We also discovered a bow shock around the O9.5III star ALS9956, located at some distance from the cluster. We discuss the possibility that WR121b and ALS9956 are runaway stars ejected from the cluster in W43. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).

  1. "3D_Fault_Offsets," a Matlab Code to Automatically Measure Lateral and Vertical Fault Offsets in Topographic Data: Application to San Andreas, Owens Valley, and Hope Faults

    Science.gov (United States)

    Stewart, N.; Gaudemer, Y.; Manighetti, I.; Serreau, L.; Vincendeau, A.; Dominguez, S.; Mattéo, L.; Malavieille, J.

    2018-01-01

    Measuring fault offsets preserved at the ground surface is of primary importance to recover earthquake and long-term slip distributions and understand fault mechanics. The recent explosion of high-resolution topographic data, such as Lidar and photogrammetric digital elevation models, offers an unprecedented opportunity to measure dense collections of fault offsets. We have developed a new Matlab code, 3D_Fault_Offsets, to automate these measurements. In topographic data, 3D_Fault_Offsets mathematically identifies and represents nine of the most prominent geometric characteristics of common sublinear markers along faults (especially strike slip) in 3-D, such as the streambed (minimum elevation), top, free face and base of channel banks or scarps (minimum Laplacian, maximum gradient, and maximum Laplacian), and ridges (maximum elevation). By calculating best fit lines through the nine point clouds on either side of the fault, the code computes the lateral and vertical offsets between the piercing points of these lines onto the fault plane, providing nine lateral and nine vertical offset measures per marker. Through a Monte Carlo approach, the code calculates the total uncertainty on each offset. It then provides tools to statistically analyze the dense collection of measures and to reconstruct the prefaulted marker geometry in the horizontal and vertical planes. We applied 3D_Fault_Offsets to remeasure previously published offsets across 88 markers on the San Andreas, Owens Valley, and Hope faults. We obtained 5,454 lateral and vertical offset measures. These automatic measures compare well to prior ones, field and remote, while their rich record provides new insights on the preservation of fault displacements in the morphology.
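
    The central measurement idea in 3D_Fault_Offsets, fitting best-fit lines through marker point clouds on either side of the fault and differencing their piercing points on the fault plane, can be illustrated with a much-reduced 2-D analogue. The Python sketch below is not the authors' Matlab code; the marker geometry and noise are synthetic.

    ```python
    import numpy as np

    def lateral_offset(xy_side1, xy_side2, fault_x=0.0):
        """Estimate the lateral offset of a linear marker across a fault.

        Each side of the marker (e.g., a channel thalweg) is approximated by a
        least-squares line; the offset is the separation of the two piercing
        points on the fault trace (here the vertical line x = fault_x).
        """
        def piercing_point(xy):
            a, b = np.polyfit(xy[:, 0], xy[:, 1], 1)  # line y = a*x + b
            return a * fault_x + b

        return piercing_point(xy_side2) - piercing_point(xy_side1)

    # Synthetic channel offset by ~5 m across a fault at x = 0
    rng = np.random.default_rng(1)
    x1 = np.linspace(-30, -2, 20); x2 = np.linspace(2, 30, 20)
    side1 = np.column_stack([x1, 0.2 * x1 + rng.normal(0, 0.3, 20)])
    side2 = np.column_stack([x2, 0.2 * x2 + 5.0 + rng.normal(0, 0.3, 20)])
    print(round(lateral_offset(side1, side2), 2))  # ≈ 5
    ```

    In the actual code, the fit is repeated for nine geometric characteristics of each marker in 3-D, and the uncertainty on each offset is obtained by a Monte Carlo perturbation of the fits.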

  2. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
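
    The Buffon's needle problem mentioned in the blurb is the classic entry point to Monte Carlo: a needle of length l dropped on a floor ruled with parallel lines a distance d apart (l ≤ d) crosses a line with probability 2l/(πd), so counting crossings yields an estimate of π. A minimal sketch (needle length and spacing are arbitrary choices):

    ```python
    import numpy as np

    def buffon_pi(n=1_000_000, needle=1.0, spacing=2.0, seed=0):
        """Estimate pi by Buffon's needle: P(cross) = 2*needle/(pi*spacing),
        hence pi ≈ 2*needle*n / (spacing*crossings)."""
        rng = np.random.default_rng(seed)
        y = rng.uniform(0, spacing / 2, n)      # centre distance to nearest line
        theta = rng.uniform(0, np.pi / 2, n)    # acute angle with the lines
        crossings = np.count_nonzero(y <= (needle / 2) * np.sin(theta))
        return 2 * needle * n / (spacing * crossings)

    print(buffon_pi())  # ≈ 3.14
    ```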

  3. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
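
    As a concrete instance of the MCMC methods the review covers, here is a minimal random-walk Metropolis sampler; the target density, proposal step, and chain length are illustrative choices, not details from the paper.

    ```python
    import numpy as np

    def metropolis(logp, x0, n_steps=10_000, step=1.0, seed=0):
        """Random-walk Metropolis: sample from an unnormalised log-density.

        Proposes x' = x + N(0, step^2); accepts with prob min(1, p(x')/p(x)).
        """
        rng = np.random.default_rng(seed)
        samples = np.empty(n_steps)
        x, lp = x0, logp(x0)
        for i in range(n_steps):
            x_new = x + rng.normal(0, step)
            lp_new = logp(x_new)
            if np.log(rng.uniform()) < lp_new - lp:
                x, lp = x_new, lp_new
            samples[i] = x
        return samples

    # Target: standard normal (known answer: mean ≈ 0, variance ≈ 1)
    s = metropolis(lambda x: -0.5 * x**2, x0=0.0)
    print(s.mean(), s.var())
    ```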

  4. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
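
    Of the random sampling techniques listed, inversion is the easiest to show in a few lines: if U is uniform on (0, 1), then F^-1(U) is distributed with CDF F. A sketch for the exponential distribution (our choice of target):

    ```python
    import numpy as np

    def sample_exponential(rate, n, seed=0):
        """Inversion sampling for Exp(rate): F(x) = 1 - exp(-rate*x),
        so F^-1(u) = -ln(1 - u) / rate."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(size=n)
        return -np.log1p(-u) / rate

    x = sample_exponential(rate=2.0, n=100_000)
    print(x.mean())  # ≈ 1/rate = 0.5
    ```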

  5. Application of seismic isolation for seismic strengthening of buildings damaged by the earthquake of L’Aquila

    International Nuclear Information System (INIS)

    Corsetti, Daniele

    2015-01-01

    The earthquake of 6 April 2009 destroyed the social and economic fabric of the town of L’Aquila. Since then, many buildings have been restored and some designers have taken the opportunity of rebuilding the town applying innovative technologies. In this context, despite the inevitable bureaucratic hurdles and economic constraints, compounded by the death in 2012 of Mr. Mancinelli (GLIS Member), several projects were carried out on existing buildings with the idea of applying base seismic isolation. A decade after the first application of this solution on an existing building in Fabriano by Mr. Mancinelli, the experience has proved to be a success, both in terms of achieved results and ease of management. For the L’Aquila earthquake the idea was to replicate the positive experience of the “Marche earthquake”, though the problems and obstacles to face were often substantially different. The experience outlined below is a summary of the issues faced and resolved in two projects, taking into account that any solution can be further improved and refined depending on the ability and sensitivity of the designer. We have come to the conclusion that projects for the base seismic isolation of existing buildings are “tailor-made” projects, and that solutions have to be analysed case by case, even if the main concepts are simple and applicable to a wide range of buildings.

  6. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA

    Directory of Open Access Journals (Sweden)

    Wonhee Lee

    2014-02-01

    Full Text Available A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. This is a model-free hybrid method which combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition by using a discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults. The performance of fault detection and isolation is influenced by the relative size of noise and fault. In this paper, the DWT helps to cancel the high frequency sensor noise. The proposed technique can improve the low fault detection and isolation probability by utilizing the EPSA with DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU).
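
    The parity space idea at the core of EPSA exploits measurement redundancy: any vector in the left null space of the sensor geometry matrix annihilates the true state, so the resulting parity vector is driven only by faults and noise. The bare-bones sketch below shows plain parity-space detection and isolation on a synthetic redundant sensor array; the geometry, noise level, and injected fault are invented, and the paper's DWT prefiltering and extended single/double-fault logic are not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    def parity_check(H, z, threshold):
        """Parity-space fault detection for redundant measurements z = H x + f + n.

        V spans the left null space of H (V @ H = 0), so the parity vector
        p = V @ z depends only on faults and noise, not on the true state x.
        Returns (fault detected?, index of the best-matching faulty sensor).
        """
        V = null_space(H.T).T                  # rows: parity directions
        p = V @ z
        detected = np.linalg.norm(p) > threshold
        # correlate the parity vector with each sensor's fault signature V[:, j]
        scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
        return detected, int(np.argmax(scores))

    # 6 redundant single-axis sensors measuring a 3-D quantity
    rng = np.random.default_rng(2)
    H = rng.normal(size=(6, 3))
    x = np.array([1.0, -2.0, 0.5])
    z = H @ x + rng.normal(0, 0.01, 6)
    z[4] += 0.5                                # inject a fault on sensor 4
    print(parity_check(H, z, threshold=0.1))   # (True, 4)
    ```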

  7. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA

    Science.gov (United States)

    Lee, Wonhee; Park, Chan Gook

    2014-01-01

    A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. This is a model-free hybrid method which combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition by using a discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults. The performance of fault detection and isolation is influenced by the relative size of noise and fault. In this paper, the DWT helps to cancel the high frequency sensor noise. The proposed technique can improve the low fault detection and isolation probability by utilizing the EPSA with DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU). PMID:24556675

  8. Smart City L’Aquila : An Application of the “Infostructure” Approach to Public Urban Mobility in a Post-Disaster Context

    NARCIS (Netherlands)

    Falco, E.; Malavolta, Ivano; Radzimski, Adam; Ruberto, Stefano; Iovino, Ludovico; Gallo, Francesco

    2017-01-01

    Ever since the earthquake of April 6, 2009 hit the city of L’Aquila, Italy, the city has been facing major challenges in terms of social, physical, and economic reconstruction. The system of public urban mobility, the bus network, is no exception with its old bus fleet, non-user-friendly

  9. Smart City L’Aquila : An Application of the “Infostructure” Approach to Public Urban Mobility in a Post-Disaster Context

    NARCIS (Netherlands)

    Falco, Enzo; Malavolta, Ivano; Radzimski, Adam; Ruberto, Stefano; Iovino, Ludovico; Gallo, Francesco

    2018-01-01

    Ever since the earthquake of April 6, 2009 hit the city of L’Aquila, Italy, the city has been facing major challenges in terms of social, physical, and economic reconstruction. The system of public urban mobility, the bus network, is no exception with its old bus fleet, non-user-friendly

  10. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation.

  11. [Resilience, social relations, and pedagogic intervention five years after the earthquake occurred in L'Aquila (Central Italy) in 2009: an action-research in the primary schools].

    Science.gov (United States)

    Vaccarelli, Alessandro; Ciccozzi, Chiara; Fiorenza, Arianna

    2016-01-01

    the action-research "Outdoor training and citizenship between children from L'Aquila", carried out from 2014 to 2015 in some schools of the municipality of L'Aquila, aimed to respond to the social and psychological problems that emerged among children in the period after the L'Aquila earthquake of 2009. In particular, the article documents the results of the parts concerning the study of resilience (cognitive objective) and of social relations (objective tied to the educational intervention), five years after the earthquake. The pedagogical research team, in close cooperation with the Cartography Laboratory of the University of L'Aquila and with the Grupo de Innovación Educativa Areté de la Universidad Politécnica de Madrid, worked according to the action-research methodology, collecting secondary data and data useful to check the effectiveness of the educational actions put in place to promote resilient behaviours and to activate positive group dynamics. The study was developed in 4 primary schools of L'Aquila and involved 83 children from 8 to 12 years of age. A control group of 55 subjects, homogeneous by sex and age, was identified in the primary schools of Borgorose, a small town near Rieti (Central Italy). Data about resilience and stress response were collected in the first phase of the study in order to outline the initial situation and develop an appropriate educational intervention. The comparison with the control group of 55 non-L'Aquila subjects allowed us to verify that, 5 years after the disaster, the context of life produces a meaningful difference in terms of stress response and resilience, to the clear disadvantage of the children from L'Aquila. Data on social relations, on the other hand, allowed us to verify how the educational intervention

  12. A longitudinal study of quality of life of earthquake survivors in L'Aquila, Italy.

    Science.gov (United States)

    Valenti, Marco; Masedu, Francesco; Mazza, Monica; Tiberti, Sergio; Di Giovanni, Chiara; Calvarese, Anna; Pirro, Roberta; Sconci, Vittorio

    2013-12-07

    People's well-being after loss resulting from an earthquake is a concern in countries prone to natural disasters. Most studies on post-earthquake subjective quality of life (QOL) have focused on the effects of psychological impairment and post-traumatic stress disorder (PTSD) on the psychological dimension of QOL. However, there is a need for studies focusing on QOL in populations not affected by PTSD or psychological impairment. The aim of this study was to estimate QOL changes over an 18-month period in an adult population sample after the L'Aquila 2009 earthquake. The study was designed as a longitudinal survey with four repeated measurements performed at six monthly intervals. The setting was the general population of an urban environment after a disruptive earthquake. Participants included 397 healthy adult subjects. Exclusion criteria were comorbidities such as physical, psychological, psychiatric or neurodegenerative diseases at the beginning of the study. The primary outcome measure was QOL, as assessed by the WHOQOL-BREF instrument. A generalised estimating equation model was run for each WHOQOL-BREF domain. Overall, QOL scores were observed to be significantly higher 18 months after the earthquake in all WHOQOL-BREF domains. The model detected an average increase in the physical QOL scores (from 66.6 ± 5.2 to 69.3 ± 4.7), indicating a better overall physical QOL for men. Psychological domain scores (from 64.9 ± 5.1 to 71.5 ± 6.5) were observed to be worse in men than in women. Levels at the WHOQOL domain for psychological health increased from the second assessment onwards in women, indicating higher resiliency. Men averaged higher scores than women in terms of social relationships and the environmental domain. Regarding the physical, psychological and social domains of QOL, scores in the elderly group (age > 60) were observed to be similar to each other regardless of the significant covariates used. WHOQOL-BREF scores of the psychological domain
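
    The longitudinal model described above, one generalised estimating equation per WHOQOL-BREF domain with repeated six-monthly measurements nested within subjects, can be sketched with statsmodels; the variable names, the exchangeable working correlation, and the simulated data are our assumptions, not details from the paper.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n, waves = 100, 4                      # subjects, six-monthly assessments
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n), waves),
        "wave": np.tile(np.arange(waves), n),
        "male": np.repeat(rng.integers(0, 2, n), waves),
    })
    # synthetic domain score rising over the follow-up
    df["score"] = 65 + 1.5 * df["wave"] + 2 * df["male"] + rng.normal(0, 5, n * waves)

    model = smf.gee("score ~ wave + male", "subject", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    print(model.fit().params)
    ```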

  13. The L'Aquila process and the perils of bad communication of science

    Science.gov (United States)

    Alberti, Antonio

    2013-04-01

    Responsibilities and observance of ethical behaviour by scientists have increased more than ever with the advancement of science and of the social and economic development of a country. Nowadays, geoscientists are often charged by local and/or national and international authorities with the task of providing ways to foster economic development while protecting human life and safeguarding the environment. But besides technical and scientific expertise, in a democratic country all this requires efficient ways and various channels of scientific divulgation. Geoscientists themselves should be involved in these procedures, or at least they should be called to verify that correct communication is actually released. Unfortunately, it seems that awareness of such new and ever-increasing responsibilities is not yet always realized at the needed level. The question is especially sensitive in Italy, a country in which the hydro-geological, seismological, volcanological and coastal set-up requires careful technical and scientific treatment. Given the fragility of the natural system, the role of geoscientists should not be restricted to the delivery of scientific expertise: in fact, and perhaps more than elsewhere, problems are compounded by the need of communication based on sound science not only to governing authorities, but also to the public at large, possibly including also an array of mass media. Many international organizations have wrongly interpreted the accusation, and especially the sentence at the first stage of the L'Aquila process, as a problem of the impossibility of predicting earthquakes. But the recently published motivation of the sentence seems to have brought to light the lack of a scrupulous overview of the situation prior to the disastrous seismic event, which practically left the task of public information to the judgment or perception of the national agency in charge of natural hazards. It turned out that a major outcome of the process, apart from the

  14. Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro

    2015-01-01

    Highlights: • Fuzzy probability based fault tree analysis is proposed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainty of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainty of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis that overcomes this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to the results of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible to propagate and quantify epistemic uncertainties in fault tree analysis.
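
    The fuzzy arithmetic named in the highlights can be made concrete with triangular fuzzy numbers (low, mode, high): an approximate component-wise multiplication for AND-ing basic events into minimal cut sets, and a complement rule for OR-ing independent cut sets into the top event. The sketch below is a generic illustration of those two rules, not the paper's FPFTA procedure; the event values and the tree are invented.

    ```python
    import numpy as np

    def f_and(*events):
        """Approximate product of triangular fuzzy probabilities (component-wise)."""
        return np.prod(np.vstack(events), axis=0)

    def f_not(e):
        """Fuzzy complement 1 - (low, mode, high), reversed to stay sorted."""
        return (1.0 - np.asarray(e))[::-1]

    def f_or(*events):
        """Top event from independent cut sets: 1 - prod(1 - P_i)."""
        return f_not(f_and(*(f_not(e) for e in events)))

    e1 = np.array([0.01, 0.02, 0.04])    # basic event 1 (low, mode, high)
    e2 = np.array([0.03, 0.05, 0.08])    # basic event 2
    e3 = np.array([0.001, 0.002, 0.01])  # basic event 3
    cut1 = f_and(e1, e2)                 # minimal cut set {e1 AND e2}
    top = f_or(cut1, e3)                 # top event = cut1 OR e3
    print(top)                           # spread reflects epistemic uncertainty
    ```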

  15. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  16. Analytical propagation of uncertainties through fault trees

    International Nuclear Information System (INIS)

    Hauptmanns, Ulrich

    2002-01-01

    A method is presented which enables one to propagate uncertainties described by uniform probability density functions through fault trees. The approach is analytical. It is based on calculating the expected value and the variance of the top event probability. These two parameters are then equated with the corresponding ones of a beta-distribution. An example calculation comparing the analytically calculated beta-pdf (probability density function) with the top event pdf obtained using the Monte-Carlo method shows excellent agreement at a much lower expense of computing time.
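
    The moment-matching step described in the abstract, equating the mean and variance of the top event probability with those of a beta distribution, reduces to solving two equations for the beta parameters. A minimal sketch (the example moments are invented):

    ```python
    def beta_from_moments(mean, var):
        """Return (alpha, beta) of the Beta distribution with the given moments.

        Solves mean = a/(a+b) and var = mean*(1-mean)/(a+b+1);
        requires 0 < mean < 1 and 0 < var < mean*(1-mean).
        """
        if not 0 < mean < 1 or var <= 0 or var >= mean * (1 - mean):
            raise ValueError("moments not achievable by a Beta distribution")
        common = mean * (1 - mean) / var - 1   # equals a + b
        return mean * common, (1 - mean) * common

    print(beta_from_moments(0.02, 1e-5))  # (alpha, beta) ≈ (39.2, 1919.8)
    ```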

  17. Characterization of earthquake-induced ground motion from the L'Aquila seismic sequence of 2009, Italy

    Science.gov (United States)

    Malagnini, Luca; Akinci, Aybige; Mayeda, Kevin; Munafo', Irene; Herrmann, Robert B.; Mercuri, Alessia

    2011-01-01

    Based only on weak-motion data, we carried out a combined study on region-specific source scaling and crustal attenuation in the Central Apennines (Italy). Our goal was to obtain a reappraisal of the existing predictive relationships for the ground motion, and to test them against the strong-motion data [peak ground acceleration (PGA), peak ground velocity (PGV) and spectral acceleration (SA)] gathered during the Mw 6.15 L'Aquila earthquake (2009 April 6, 01:32 UTC). The L'Aquila main shock was not part of the predictive study, and the validation test was an extrapolation to one magnitude unit above the largest earthquake of the calibration data set. The regional attenuation was determined through a set of regressions on a data set of 12 777 high-quality, high-gain waveforms with excellent S/N ratios (4259 vertical and 8518 horizontal time histories). Seismograms were selected from the recordings of 170 foreshocks and aftershocks of the sequence (the complete set of all earthquakes with ML ≥ 3.0, from 2008 October 1 to 2010 May 10). All waveforms were downloaded from the ISIDe web page, a web site maintained by the Istituto Nazionale di Geofisica e Vulcanologia (INGV). Weak-motion data were used to obtain a moment tensor solution, as well as a coda-based moment-rate source spectrum, for each one of the 170 events of the L'Aquila sequence (2.8 ≤ Mw ≤ 6.15). Source spectra were used to verify the good agreement with the source scaling of the Colfiorito seismic sequence of 1997-1998 recently described by Malagnini (2008). Finally, results on source excitation and crustal attenuation were used to produce the absolute site terms for the 23 stations located within ~80 km of the epicentral area. The complete set of spectral corrections (crustal attenuation and absolute site effects) was used to implement a fast and accurate tool for the automatic computation of moment magnitudes in the Central Apennines.
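
    The final step mentioned in the abstract, automatic computation of moment magnitudes, conventionally rests on the Hanks and Kanamori (1979) relation between seismic moment and Mw; whether the authors use exactly this calibration is our assumption. In SI units:

    ```python
    import numpy as np

    def moment_magnitude(m0_newton_metre):
        """Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N·m."""
        return (2.0 / 3.0) * (np.log10(m0_newton_metre) - 9.1)

    # A seismic moment of ~2.2e18 N·m gives Mw ≈ 6.16,
    # close to the Mw 6.15 quoted for the L'Aquila main shock
    print(moment_magnitude(2.2e18))
    ```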

  18. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 19; Issue 8. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article Volume 19 Issue 8 August 2014 pp 713-739 ...

  19. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling.

  20. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...
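
    The two growth models contrasted in the abstract trace different paths through displacement-length (D-L) space even when they end at the same final geometry. A schematic sketch (the growth constants and final D/L ratio are arbitrary choices, not values from the study):

    ```python
    import numpy as np

    steps = np.linspace(0.01, 1.0, 50)          # normalised time

    # Isolated model: D and L grow together, e.g. D = gamma * L
    L_iso = 10_000 * steps                      # fault length (m)
    D_iso = 0.03 * L_iso                        # displacement tracks length

    # Constant-length model: full length established early, then D accrues
    L_cl = np.full_like(steps, 10_000)          # length fixed from the start
    D_cl = 300 * steps                          # displacement accumulates

    for name, L, D in [("isolated", L_iso, D_iso), ("constant-length", L_cl, D_cl)]:
        print(f"{name:16s} final D/L = {D[-1] / L[-1]:.3f}")  # same end point
    ```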

  1. Characterization of leaky faults

    International Nuclear Information System (INIS)

    Shan, Chao.

    1990-05-01

    Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs
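
    The water-flow solutions referred to in this abstract build on classical well hydraulics. As general background only (not the report's own leaky-fault solutions), the Theis drawdown around a well pumping a confined aquifer can be written in a few lines; all parameter values below are invented:

    ```python
    import numpy as np
    from scipy.special import exp1  # E1(u) equals the Theis well function W(u)

    def theis_drawdown(r, t, Q, T, S):
        """Drawdown (m) at radius r (m) and time t (s): s = Q/(4*pi*T) * W(u),
        with u = r^2 * S / (4*T*t), pumping rate Q (m^3/s),
        transmissivity T (m^2/s) and storativity S (dimensionless)."""
        u = r**2 * S / (4 * T * t)
        return Q / (4 * np.pi * T) * exp1(u)

    print(theis_drawdown(r=50.0, t=3600.0, Q=0.01, T=1e-3, S=1e-4))  # ≈ 2.8 m
    ```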

  2. Solar system fault detection

    Science.gov (United States)

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  3. Golden eagle (Aquila chrysaetos) habitat selection as a function of land use and terrain, San Diego County, California

    Science.gov (United States)

    Tracey, Jeff A.; Madden, Melanie C.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N.

    2018-04-16

    Beginning in 2014, the U.S. Geological Survey, in collaboration with Bloom Biological, Inc., began telemetry research on golden eagles (Aquila chrysaetos) captured in the San Diego, Orange, and western Riverside Counties of southern California. This work was supported by the San Diego Association of Governments, California Department of Fish and Wildlife, the U.S. Fish and Wildlife Service, the Bureau of Land Management, and the U.S. Geological Survey. Since 2014, we have tracked more than 40 eagles, although this report focuses only on San Diego County eagles.An important objective of this research is to develop habitat selection models for golden eagles. Here we provide predictions of population-level habitat selection for golden eagles in San Diego County based on environmental covariates related to land use and terrain.

  4. HBIM for restoration projects: case-study on San Cipriano Church in Castelvecchio Calvisio, Province of L’Aquila, Italy.

    Directory of Open Access Journals (Sweden)

    Romolo Continenza

    2016-06-01

    Full Text Available Although there have been significant developments in research into assigning semantic content to 3D models for the purposes of documentation, conservation and architectural and archaeological heritage management, the application of 3D GIS to individual artifacts has remained rare. Where 3D GIS has been used in this context, it has not been done in a consistent or standardised way. As an alternative to the elaborate construction of 3D GIS, the international academic community has embarked on a process of investigating how HBIM (Historical BIM) might be used in the fields of historical architecture and archaeology. In this paper, we report on experiments carried out at the San Cipriano Church in Castelvecchio Calvisio, Province of L’Aquila, Italy, on the basis of the integrated survey of the church, before turning to a discussion of the planning of restoration work in a BIM environment.

  5. The environmental project of the enhancement of the fluvial area: L’Aquila and the Aterno River

    Directory of Open Access Journals (Sweden)

    Luciana Mastrolonardo

    2016-06-01

    Full Text Available This contribution is part of an interdisciplinary programme aimed at the enhancement and protection of the river through the control of flows and the activation of symbioses. Starting from an environmental-design approach, it integrates urban, landscape, technological and ecological themes in order to steer the development of the territory towards the protection and enhancement of its resources, and to mend the discontinuity that the river today represents in some urban contexts, giving it greater recognisability and potential. Specifically, the relationship between L’Aquila and the Aterno river is investigated in order to identify, at the local level, workable strategies for restoring the connections between the fluvial and urban environments, for enhancing the territory and for re-establishing the functional role of water in its life cycle.

  6. Biotelemetery data for golden eagles (Aquila chrysaetos) captured in coastal southern California, February 2016–February 2017

    Science.gov (United States)

    Tracey, Jeff A.; Madden, Melanie C.; Sebes, Jeremy B.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N.

    2017-05-12

    Because of a lack of clarity about the status of golden eagles (Aquila chrysaetos) in coastal southern California, the USGS, in collaboration with local, State, and other Federal agencies, began a multi-year survey and tracking program of golden eagles to address questions regarding habitat use, movement behavior, nest occupancy, genetic population structure, and human impacts on eagles. Golden eagle trapping and tracking efforts began in September 2014. During trapping efforts from September 29, 2014, to February 23, 2016, 27 golden eagles were captured. During trapping efforts from February 24, 2016, to February 23, 2017, an additional 10 golden eagles (7 females and 3 males) were captured in San Diego, Orange, and western Riverside Counties. Biotelemetry data for 26 of the 37 golden eagles that were transmitting data from February 24, 2016, to February 23, 2017 are presented. These eagles ranged as far north as northern Nevada and southern Wyoming, and as far south as La Paz, Baja California, Mexico.

  7. Limits on the potential accuracy of earthquake risk evaluations using the L’Aquila (Italy) earthquake as an example

    Directory of Open Access Journals (Sweden)

    John Douglas

    2015-06-01

    Full Text Available This article is concerned with attempting to ‘predict’ (hindcast) the damage caused by the L’Aquila 2009 earthquake (Mw 6.3) and, more generally, with the question of how close predicted damage can ever be to observations. Damage is hindcast using a well-established empirical approach based on vulnerability indices and macroseismic intensities, adjusted for local site effects. Using information that was available before the earthquake and assuming the same event characteristics as the L’Aquila mainshock, the overall damage is reasonably well predicted, but there are considerable differences in the damage pattern. To understand the reasons for these differences, information that was only available after the event was included within the calculation. Despite some improvement in the predicted damage, in particular through the modification of the vulnerability indices and of the parameter influencing the width of the damage distribution, these hindcasts do not match all the details of the observations. This is because of local effects: both in the ground shaking, which could only be captured by a much denser strong-motion network and a detailed microzonation, and in the building vulnerability, which cannot be modeled using a statistical approach but would require detailed analytical modeling for which calibration data are likely to be lacking. Future studies should concentrate on adjusting the generic components of the approach to make them more applicable to the location of interest. To increase the number of observations available to make these adjustments, we encourage the collection of damage states (and not just habitability classes) following earthquakes and also the installation of dense strong-motion networks in built-up areas.

  8. [Psychological distress and post-traumatic stress disorder (PTSD) in young survivors of L'Aquila earthquake].

    Science.gov (United States)

    Pollice, Rocco; Bianchini, Valeria; Roncone, Rita; Casacchia, Massimo

    2012-01-01

    The aim of the study is to evaluate the presence of a PTSD diagnosis, psychological distress and post-traumatic symptoms in a population of young earthquake survivors after the L'Aquila earthquake. Between April 2009 and January 2010, 187 young people seeking help consecutively at the Service for Monitoring and early Intervention against psychoLogical and mEntal suffering in young people (SMILE) of the L'Aquila University Psychiatric Department underwent a clinical interview with the Semi-Structured Clinical Interview for DSM-IV Axis I and II (SCID-I and SCID-II) and psychometric evaluation with the Impact of Event Scale-Revised (IES-R) and the General Health Questionnaire-12 items (GHQ-12). Of the sample, 44.2% and 37.4%, respectively, showed high and moderate levels of psychological distress; 66.7% reported significant post-traumatic symptoms (post-traumatic syndrome) with an IES-R > 28, while a diagnosis of PTSD was made in 13.8% of the sample. The obsessive-compulsive trait, female sex and a high level of distress (GHQ ≥ 20) appear to be the main risk factors for the development of PTSD, whereas among those with a post-traumatic syndrome, displacement and social disruption appear to be more associated with the post-traumatic aftermath. Our findings, in line with the recent literature, confirm that a natural disaster produces high psychological distress with long-term aftermath. Early intervention for survivors of collective or individual trauma, regardless of the presence of a PTSD diagnosis, should be a primary goal of a Public Health program.

  9. S.S. Annunziata Church (L'Aquila, Italy) unveiled by non- and micro-destructive testing techniques

    Science.gov (United States)

    Sfarra, Stefano; Cheilakou, Eleni; Theodorakeas, Panagiotis; Paoletti, Domenica; Koui, Maria

    2017-03-01

    The present research work explores the potential of an integrated inspection methodology, combining non-destructive testing and micro-destructive analytical techniques, for both the structural assessment of the S.S. Annunziata Church located in Roio Colle (L'Aquila, Italy) and the characterization of its wall paintings' pigments. The study started by applying passive thermal imaging for the structural monitoring of the church before and after the application of a consolidation treatment, while active thermal imaging was further used for assessing this consolidation procedure. After the earthquake of 2009, which seriously damaged the city of L'Aquila and its surroundings, part of the internal plaster fell off, revealing the presence of an ancient mural painting that was subsequently investigated by means of a combined analytical approach involving portable VIS-NIR fiber optics diffuse reflectance spectroscopy (FORS) and laboratory methods, such as environmental scanning electron microscopy (ESEM) coupled with energy dispersive X-ray analysis (EDX), and attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR). The results obtained from the thermographic analysis provided information concerning the two different construction phases of the Church, enabled the assessment of the consolidation treatment, and contributed to the detection of localized problems mainly related to the rising damp phenomenon and to biological attack. In addition, the results obtained from the combined analytical approach allowed the identification of the wall painting pigments (red and yellow ochre, green earth, and smalt) and provided information on the binding media and the painting technique possibly applied by the artist. From the results of the present study, it is possible to conclude that the joint use of the above stated methods into an integrated methodology can produce the complete set of useful information required for the planning of the Church's restoration.

  10. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
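
    As a hedged sketch of how such metrics might be combined numerically, the snippet below weights each fault management control loop's estimation and control effectiveness by the probability of the failure it protects against. The metric names and the simple product/weighted-average structure are assumptions for illustration, not the paper's formulas.

    ```python
    # Toy combination of fault-management effectiveness metrics across failures.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FailureMode:
        probability: float   # probability that this failure occurs
        p_estimate: float    # state estimation: detection x isolation x prognosis
        p_control: float     # state control: response determination x response

    def loop_effectiveness(fm: FailureMode) -> float:
        # A loop preserves its goal only if estimation AND control both succeed.
        return fm.p_estimate * fm.p_control

    def system_effectiveness(modes: List[FailureMode]) -> float:
        # Failure-probability-weighted effectiveness over all control loops.
        total = sum(m.probability for m in modes)
        return sum(m.probability * loop_effectiveness(m) for m in modes) / total

    modes = [FailureMode(0.02, 0.90, 0.85), FailureMode(0.01, 0.95, 0.80)]
    print(f"overall effectiveness: {system_effectiveness(modes):.3f}")
    ```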

  11. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance from vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report, the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  12. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  13. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
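
    The discrete-sampling kernel mentioned above lends itself to a short illustration: instead of looping over particles, a whole batch samples its outcomes at once against a cumulative distribution. The numpy sketch below conveys the general idea of event-batched sampling, not the paper's specific algorithm.

    ```python
    # Vectorized discrete sampling for a batch of particle events: every
    # particle picks an outcome via one binary search on the CDF, with no
    # per-particle Python loop.
    import numpy as np

    rng = np.random.default_rng(42)

    probs = np.array([0.2, 0.5, 0.3])        # e.g., per-reaction probabilities
    cdf = np.cumsum(probs)

    n_particles = 10_000                     # one batch of particle events
    xi = rng.random(n_particles)
    outcomes = np.searchsorted(cdf, xi)      # index of the sampled outcome

    print(np.bincount(outcomes, minlength=3) / n_particles)  # ~ [0.2, 0.5, 0.3]
    ```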

  14. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  15. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...
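
    As a rough illustration of the auxiliary-input idea (not the YJBK-based construction used in the paper), the sketch below probes a simple first-order plant with a known signal and monitors the residual against a nominal model: with no fault the residual stays at noise level, while a parametric fault makes the transfer from the auxiliary input to the residual nonzero. All models, parameters and noise levels are invented.

    ```python
    # Active fault diagnosis toy: inject a known auxiliary signal and compare
    # the plant output with a nominal model; a parametric fault shows up as a
    # large residual driven by the auxiliary input.
    import numpy as np

    def residual_std(a_true, a_nominal, n=2000, seed=0):
        rng = np.random.default_rng(seed)
        aux = np.sin(0.2 * np.arange(n))        # auxiliary probing input
        y = y_hat = 0.0
        res = np.empty(n)
        for k in range(n):
            y = a_true * y + aux[k] + 1e-3 * rng.standard_normal()  # plant
            y_hat = a_nominal * y_hat + aux[k]                      # nominal model
            res[k] = y - y_hat
        return res.std()

    print("no fault :", residual_std(a_true=0.8, a_nominal=0.8))  # ~ noise level
    print("fault    :", residual_std(a_true=0.6, a_nominal=0.8))  # clearly larger
    ```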

  16. FiSH: put fault data in a seismic hazard basket

    Science.gov (United States)

    Pace, Bruno; Visini, Francesco; Peruzza, Laura

    2016-04-01

    The practice of using fault sources in seismic hazard studies is growing in popularity, including in regions with moderate seismic activity, such as the European countries. In these areas, fault identification may be affected by uncertainties as large as those affecting the historical and instrumental seismic histories of areas that have not been inhabited for long periods of time. Certain studies have effectively applied a time-dependent perspective to combine historical and instrumental seismic data with geological and paleoseismological information, partially compensating for the lack of information. We present a package of Matlab® tools (called FiSH), in press at Seismological Research Letters, designed to help seismic hazard modellers analyse fault data. These tools enable the derivation of expected earthquake rates from common fault data and allow users to test the consistency between the magnitude-frequency distributions assigned to a fault and the available observations. The basic assumption of FiSH is that the geometric and kinematic features of a fault are the expression of its seismogenic potential. Three tools have been designed to integrate the variable levels of information available: (a) the first tool allows users to convert fault geometry and slip rates into a global budget of the seismic moment released in a given time frame, taking uncertainties into account; (b) the second tool computes the recurrence parameters and associated uncertainties from historical and/or paleoseismological data; (c) the third tool outputs time-independent or time-dependent earthquake rates for different magnitude-frequency distribution models. We moreover present a test case to illustrate the capabilities of FiSH, on the Paganica normal fault in Central Italy, which ruptured during the L'Aquila 2009 earthquake sequence (mainshock Mw 6.3). FiSH is available at http://fish-code.com, and the source codes are open. We encourage users to handle the scripts
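
    Tool (a) rests on the standard definition of seismic moment, M0 = mu * A * d. The sketch below shows that conversion for invented fault parameters, together with the Hanks and Kanamori moment-magnitude relation; it mirrors the textbook formulas, not FiSH's actual Matlab code, which also propagates the uncertainties.

    ```python
    # Seismic moment budget from fault geometry and slip rate (illustrative).
    from math import log10

    MU = 3.0e10   # shear modulus [Pa], a common crustal assumption

    def moment_budget(length_km, width_km, slip_rate_mm_yr, years):
        """Total seismic moment [N m] released in `years`, M0 = mu * A * d."""
        area_m2 = (length_km * 1e3) * (width_km * 1e3)
        slip_m = slip_rate_mm_yr * 1e-3 * years
        return MU * area_m2 * slip_m

    def moment_magnitude(m0):
        """Hanks & Kanamori (1979), with M0 in N m."""
        return (log10(m0) - 9.1) / 1.5

    m0 = moment_budget(length_km=20.0, width_km=12.0, slip_rate_mm_yr=0.5, years=1000)
    print(f"M0 = {m0:.2e} N m  ->  cumulative Mw equivalent = {moment_magnitude(m0):.2f}")
    ```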

  17. Quaternary Fault Lines

    Data.gov (United States)

    Department of Homeland Security — This data set contains locations and information on faults and associated folds in the United States that are believed to be sources of M>6 earthquakes during the...

  18. Fault lubrication during earthquakes.

    Science.gov (United States)

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s⁻¹) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s⁻¹. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  19. Vipava fault (Slovenia

    Directory of Open Access Journals (Sweden)

    Ladislav Placer

    2008-06-01

    Full Text Available During mapping of the already completed Razdrto – Senožeče section of the motorway and geologic surveying of construction operations on the trunk road between Razdrto and Vipava in the northwestern part of the External Dinarides, on the southwestern slope of Mt. Nanos called Rebrnice, a steep NW-SE striking fault was recognized, situated between the Predjama and the Raša faults. The fault was named the Vipava fault after the town of Vipava. An analysis of subrecent gravitational slips at Rebrnice indicates that they were probably associated with the activity of this fault. Unpublished results of a repeated levelling line along the regional road passing across the Vipava fault zone suggest its possible present activity. It would be meaningful to verify this by appropriate geodetic measurements, and to study the actual gravitational slips at Rebrnice. The association between tectonics and gravitational slips in this and in similar extreme cases in the areas of the Alps and Dinarides points to the need for comprehensive study of geologic processes.

  20. Time-dependent methodology for fault tree evaluation

    International Nuclear Information System (INIS)

    Vesely, W.B.

    1976-01-01

    Any fault tree may be evaluated by applying the method called the kinetic theory of fault trees. The basic feature of this method as presented here is that any information on a primary failure, type failure or peak failure is derived from three characteristics: probability of existence, failure intensity and failure density. The determination of these three characteristics for a given phenomenon yields the remaining probabilistic information on the individual aspects of the failure and on their totality for the whole observed period. The probabilistic characteristics are determined by applying the analysis of phenomenon probability. The total time-dependent information on the peak failure is obtained by using the type failures (critical paths) of the fault tree. By applying this process, the total time-dependent information is obtained for every primary failure and type failure of the fault tree. In the application of the method of the kinetic theory of fault trees, represented by the PREP and KITT programmes, the type failures are first obtained using the deterministic testing method or Monte Carlo simulation (PREP programme). The respective characteristics are then determined using the kinetic theory of fault trees (KITT programmes). (Oy)
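
    For the simplest case, a non-repairable component with constant failure rate, the three characteristics reduce to closed forms; the sketch below states them. This is a simplification: the KITT codes also treat repairable components and propagate the characteristics through the tree's type failures.

    ```python
    # The three characteristics for a non-repairable component with constant
    # failure rate lam: existence probability Q(t), failure density f(t), and
    # failure intensity w(t) (equal to f(t) when there is no repair).
    import math

    def characteristics(lam: float, t: float):
        q = 1.0 - math.exp(-lam * t)   # probability the failure exists at time t
        f = lam * math.exp(-lam * t)   # density of the (first) failure at time t
        w = f                          # unconditional failure intensity, no repair
        return q, w, f

    for t in (100.0, 1000.0, 10000.0):
        q, w, f = characteristics(lam=1e-4, t=t)
        print(f"t={t:>7.0f} h  Q={q:.4f}  w={w:.3e}/h")
    ```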

  1. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify the top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainty exist in MCS-based fault tree analysis. The paper focuses on quantification of the following two sources: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs, and on the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and to estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on the two example fault trees
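
    A toy example makes the approximation issue concrete: when minimal cut sets share basic events, the rare-event approximation overestimates the top event probability, which is what SDP-style corrections repair. All numbers below are invented.

    ```python
    # Rare-event approximation versus exact top-event probability for two
    # minimal cut sets that share basic event A (toy-sized, exhaustive).
    from itertools import product

    p = {"A": 0.1, "B": 0.2, "C": 0.3}       # basic event probabilities
    mcs = [("A", "B"), ("A", "C")]           # minimal cut sets

    rare = sum(p[a] * p[b] for a, b in mcs)  # sum of cut-set probabilities

    exact = 0.0
    for states in product([0, 1], repeat=len(p)):
        s = dict(zip(p, states))
        weight = 1.0
        for event, failed in s.items():
            weight *= p[event] if failed else 1.0 - p[event]
        if any(s[a] and s[b] for a, b in mcs):
            exact += weight

    print(f"rare-event: {rare:.4f}  exact: {exact:.4f}")   # 0.0500 vs 0.0440
    ```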

  2. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on a detailed topographic map. The results of this paper are summarized as follows; 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  3. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo – Examples. Arnab Chakraborty. General Article, Volume 7, Issue 3, March 2002, pp 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034

  4. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.

  5. Fault tree analysis. Implementation of the WAM-codes

    International Nuclear Information System (INIS)

    Bento, J.P.; Poern, K.

    1979-07-01

    The report describes work under way at Studsvik on the implementation of the WAM code package for fault tree analysis. These codes, originally developed under EPRI contract by Science Applications Inc., allow, in contrast with other fault tree codes, all Boolean operations, thus permitting the modeling of ''NOT'' conditions and dependent components. To concretize the implementation of these codes, the auxiliary feed-water system of the Swedish BWR Oskarshamn 2 was chosen for the reliability analysis. For this system, both the mean unavailability and the probability density function of the top event - the undesired event - of the system fault tree were calculated, the latter using a Monte Carlo simulation technique. The present study is the first part of a work performed under contract with the Swedish Nuclear Power Inspectorate. (author)

  6. Lessons from the conviction of the L'Aquila seven: The standard probabilistic earthquake hazard and risk assessment is ineffective

    Science.gov (United States)

    Wyss, Max

    2013-04-01

    An earthquake of M6.3 killed 309 people in L'Aquila, Italy, on 6 April 2009. Subsequently, a judge in L'Aquila convicted seven who had participated in an emergency meeting on March 30, assessing the probability of a major event to follow the ongoing earthquake swarm. The sentence was six years in prison, a combined fine of 2 million Euros, loss of job, loss of retirement pension, and lawyers' costs. The judge followed the prosecution's accusation that the review by the Commission of Great Risks had conveyed a false sense of security to the population, which consequently did not take their usual precautionary measures before the deadly earthquake. He did not consider the facts that (1) one of the convicted was not a member of the commission and had merely obeyed orders to bring the latest seismological facts to the discussion, (2) another was an engineer who was not required to have any expertise regarding the probability of earthquakes, and (3) two others were seismologists not invited to speak to the public at a TV interview and a press conference. This exaggerated judgment was the consequence of an uproar in the population, who felt misinformed and even misled. Faced with a population worried by an earthquake swarm, the head of the Italian Civil Defense is on record ordering that the population be calmed, and the vice head executed this order in a TV interview one hour before the meeting of the Commission by stating "the scientific community continues to tell me that the situation is favorable and that there is a discharge of energy." The first lesson to be learned is that communications to the public about earthquake hazard and risk must not be left in the hands of someone who has gross misunderstandings about seismology. They must be carefully prepared by experts. The more significant lesson is that the approach of calming the population and the standard probabilistic hazard and risk assessment, as practiced by GSHAP, are misleading. The latter has been criticized as

  7. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    Science.gov (United States)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California, earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred: slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional kinematic smoothing constraints on slip and a simple physical condition of uniform stress
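
    The Monte Carlo ingredient can be sketched compactly: a Metropolis random walk over one nonlinear geometry parameter (here a stand-in "dip") under a Gaussian likelihood. The real method additionally solves for the linear slip parameters analytically at each step and handles data-set weights; the forward model and all numbers below are invented.

    ```python
    # Metropolis sampling of a nonlinear fault-geometry parameter.
    import numpy as np

    rng = np.random.default_rng(1)

    def forward(dip_deg, x):
        # Toy forward model standing in for elastic Green's functions.
        return np.sin(np.radians(dip_deg)) * np.exp(-x**2)

    x = np.linspace(-2.0, 2.0, 50)
    data = forward(45.0, x) + 0.02 * rng.standard_normal(x.size)  # synthetic data

    def log_like(dip_deg):
        r = data - forward(dip_deg, x)
        return -0.5 * np.sum((r / 0.02) ** 2)

    dip, ll = 30.0, log_like(30.0)
    chain = []
    for _ in range(20_000):
        prop = dip + 2.0 * rng.standard_normal()
        if 0.0 < prop < 90.0:                        # uniform prior on (0, 90) deg
            ll_prop = log_like(prop)
            if np.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance
                dip, ll = prop, ll_prop
        chain.append(dip)

    post = np.array(chain[5_000:])                   # discard burn-in
    print(f"dip = {post.mean():.1f} +/- {post.std():.1f} deg")
    ```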

  8. Active Fault Isolation in MIMO Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation, where a group of faults is isolated from other possible faults in the system. The group-wise isolation is based directly on the input/output signals applied for the fault detection. It is guaranteed that the fault group includes the fault that had occurred in the system. The second step is individual fault isolation within the fault group. Both types of isolation are obtained by applying dedicated...

  9. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

    Full Text Available A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is built on it. The method decomposes both the KPCA principal space and the residual space into two subspaces each. Compared with traditional statistical techniques, the fault subspace is separated on the basis of the fault-relevant influence, so that fault-relevant principal directions and principal components can be found in both the systematic and the residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process; the simulation results show the effectiveness of the proposed method.
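
    A hedged sketch of the generic KPCA-monitoring skeleton (plain KPCA with a Hotelling-style statistic, not the paper's fault-relevant decomposition): fit on normal operating data, set an empirical control limit, and flag abnormal samples. The data and kernel parameters are synthetic.

    ```python
    # KPCA-based process monitoring skeleton with a simple T^2-like statistic.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    normal = rng.standard_normal((200, 4))                  # normal operating data
    faulty = rng.standard_normal((50, 4)) + [3, 0, 0, 0]    # mean-shifted fault

    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1).fit(normal)
    var = kpca.transform(normal).var(axis=0)                # score variances

    def t2(samples):
        scores = kpca.transform(samples)
        return np.sum(scores**2 / var, axis=1)              # Hotelling-style T^2

    limit = np.percentile(t2(normal), 99)                   # empirical 99% limit
    print("fraction of fault samples flagged:", np.mean(t2(faulty) > limit))
    ```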

  10. Fault tree analysis

    International Nuclear Information System (INIS)

    1981-09-01

    Suggestions are made concerning the method of fault tree analysis and the use of certain symbols in the examination of system failures. The purpose of the fault tree analysis is to find logical connections of component or subsystem failures leading to undesirable occurrences. The results of these examinations are part of the system assessment concerning operation and safety. The objectives of the analysis are: systematic identification of all possible failure combinations (causes) leading to a specific undesirable occurrence, and determination of reliability parameters such as the frequency of failure combinations, the frequency of the undesirable occurrence, or the non-availability of the system when required. The fault tree analysis provides a clear and reconstructable documentation of the examination. (orig./HP) [de]

  11. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.

  12. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.

  13. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  14. Fundamentals of Monte Carlo

    International Nuclear Information System (INIS)

    Wollaber, Allan Benton

    2016-01-01

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
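
    The "simple example" in the outline is easy to reproduce: estimate π by sampling points uniformly in the unit square and counting those inside the quarter circle, with the Central Limit Theorem supplying the error bar. A minimal version:

    ```python
    # Monte Carlo estimate of pi with a CLT-based 1-sigma error bar.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 1_000_000
    x, y = rng.random(n), rng.random(n)
    inside = (x**2 + y**2) <= 1.0          # point falls in the quarter circle

    pi_hat = 4.0 * inside.mean()
    err = 4.0 * inside.std() / np.sqrt(n)  # statistical error shrinks as 1/sqrt(n)
    print(f"pi ~= {pi_hat:.4f} +/- {err:.4f}")
    ```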

  15. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
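
    The algorithm is compact enough to sketch for a 1D Ising chain: a single "demon" with non-negative energy trades energy with the spins, so the total system-plus-demon energy is conserved and the accept step needs no random numbers. Parameters below are illustrative.

    ```python
    # Creutz's microcanonical demon algorithm for a 1D Ising chain
    # (J = 1, periodic boundaries).
    import numpy as np

    n = 1000
    spins = np.ones(n, dtype=int)   # cold start, all spins up
    demon = 40.0                    # initial demon energy fixes the total energy
    trace = []

    for sweep in range(200):
        for i in range(n):
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])  # flip cost
            if dE <= demon:         # demon pays for (or absorbs) the flip
                spins[i] = -spins[i]
                demon -= dE
            trace.append(demon)

    # The demon energy is Boltzmann-distributed; its mean probes the temperature.
    print("magnetization:", spins.mean(), " mean demon energy:", np.mean(trace))
    ```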

  16. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.

  17. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  18. Fault Tolerant Computer Architecture

    CERN Document Server

    Sorin, Daniel

    2009-01-01

    For many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes

  19. Fault tolerant linear actuator

    Science.gov (United States)

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  20. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
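
    The "direct approach" described above can be caricatured in a few lines: follow an exponentially growing population and take the logarithmic derivative. The branching-process details are elided and all parameters are invented; the sketch only shows the regression idea, and why the estimate degrades near critical (the slope vanishes as alpha approaches zero).

    ```python
    # Estimate alpha from the logarithmic derivative of a noisy population N(t).
    import numpy as np

    rng = np.random.default_rng(11)
    alpha_true, n0, dt = 50.0, 1.0e4, 1.0e-4   # alpha [1/s], initial pop., step [s]

    t = np.arange(100) * dt
    n = rng.poisson(n0 * np.exp(alpha_true * t))  # Poisson noise on exp. growth

    slope, _ = np.polyfit(t, np.log(n), 1)        # d(ln N)/dt ~ alpha
    print(f"alpha estimate: {slope:.1f} 1/s (true value {alpha_true})")
    ```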

  1. Purires and Picagres faults and its relationship with the 1990 Puriscal seismic sequence

    International Nuclear Information System (INIS)

    Montero, Walter; Rojas, Wilfredo

    2014-01-01

    The system of active faults in the region between the southern flank of the Montes del Aguacate and the northwest flank of the Talamanca mountain range was re-evaluated and defined in relation to the seismic activity that occurred between the end of March 1990 and the beginning of 1991. Aerial photographs at different scales from the Instituto Geografico Nacional de Costa Rica, 1:40000-scale aerial photographs from the TERRA project of the Centro Nacional Geoambiental, and 1:40000-scale infrared photos from the CARTA 2003 mission of the Programa Nacional de Investigaciones Aerotransportadas y Sensores Remotos (PRIAS) were reviewed. Morphotectonic, structural and geological information related to the various faults was obtained through field work. A set of faults within the study area was determined through the neotectonic investigation. Several of these faults continue outside the zone, both to the northwest within the Montes del Aguacate and to the southeast towards the NW foothills of the Cordillera de Talamanca. The shallow-focus seismicity (<20 km) that occurred in the Puriscal area during 1990 was revised from previous studies, whose base information comes from the Red Sismologica Nacional (RSN, UCR-ICE). The relationship between the shallow seismic sequence and the defined faults was determined, allowing the conclusion that the main seismic sources of the seismicity were the Purires and Picagres faults. Minor seismicity was related to the Jaris, Bajos de Jorco, Zapote and Junquillo faults [es]

  2. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  3. From Colfiorito to L'Aquila Earthquake: learning from the past to communicating the risk of the present

    Science.gov (United States)

    Lanza, T.; Crescimbene, M.; La Longa, F.

    2012-04-01

    Italy is a country at risk of an impending earthquake in the near future. Very probably, as already happened in the 13 years between the last two important seismic events (Colfiorito 1997 - L'Aquila 2009), there won't be enough time to solve all the problems connected to seismic risk: first of all, the corruption affecting building policies; the lack of the money necessary to strengthen existing buildings, historical centres, monuments and masterpieces of Art; the difficult relations of the Institutions with the traditional media (newspapers, radio and TV) and, at the same time, the new media (web); the difficulties for scientists to reach important results in the immediate future due to the lack of funding; and, last but not least, the conflicting relationships inside the scientific community itself. In this scenario, communication and education play a crucial role in minimizing the risk to the population. In the present work we reconsider the past with the intent of starting to trace a path for a future strategy of risk communication in which everybody involved, including the population, should do their best in order to face the next emergency.

  4. [Ulna of Aquila chrysaetos found in a ceremonial burial of the Middle Formative period in Mascota, Jalisco, Mexico]

    Directory of Open Access Journals (Sweden)

    Fabio Germán Cupul-Magaña

    2017-04-01

    Full Text Available The identification and analysis of bird remains from archaeological sites can provide information on what the birds meant and how they were used. In prehispanic Mexico, birds served as food, as raw material for making tools, and in religious rituals. In this note we report the discovery of the left ulna of an adult golden eagle, Aquila chrysaetos, at the Los Tanques archaeological site (ca. 800 B.C.) in Mascota, Jalisco, Mexico. The ulna was found inside the tightly wrapped bundle of the burial of a young man between 19 and 25 years of age. Its presence in the burial indicates the high social status of the individual and forms part of a mortuary ritual code.

  5. The Smart Ring Experience in l’Aquila (Italy: Integrating Smart Mobility Public Services with Air Quality Indexes

    Directory of Open Access Journals (Sweden)

    Maria-Gabriella Villani

    2016-12-01

    Full Text Available This work presents the “City Dynamics and Smart Environment” activities of the Smart Ring project, a model for the smart city based on the integration of sustainable urban transport services and environmental monitoring over a 4–5-km circular path, the “Smart Ring”, around the historical center of l’Aquila (Italy). We describe our pilot experience with an experimental on-demand public electric bus service, the “SmartBus”, which was equipped with a multi-parametric low-cost electrochemical gas sensor platform, “NASUS IV”. For five days (28–29 August 2014 and 1–3 September 2014), the sensor platform was installed inside the SmartBus and measured air quality gas compounds (nitrogen dioxide, carbon monoxide, sulfur dioxide, hydrogen sulfide) during the service. Data were collected and analyzed on the basis of an air quality index, which provided qualitative insights into the air status potentially experienced by the users. The results obtained are in agreement with the synoptic meteorological conditions, the urban background air quality reference measurements and the potential traffic flow variations. Furthermore, they indicated that the air quality status was influenced mainly by the gas component NO2, followed by H2S, SO2 and CO. We discuss the features of our campaign, and we highlight the potential, limitations and key factors to consider for future project designs.

  6. Bowen emission from Aquila X-1: evidence for multiple components and constraint on the accretion disc vertical structure

    Science.gov (United States)

    Jiménez-Ibarra, F.; Muñoz-Darias, T.; Wang, L.; Casares, J.; Mata Sánchez, D.; Steeghs, D.; Armas Padilla, M.; Charles, P. A.

    2018-03-01

    We present a detailed spectroscopic study of the optical counterpart of the neutron star X-ray transient Aquila X-1 during its 2011, 2013 and 2016 outbursts. We use 65 intermediate-resolution GTC-10.4 m spectra with the aim of detecting irradiation-induced Bowen blend emission from the donor star. While Gaussian fitting does not yield conclusive results, our full phase coverage allows us to exploit Doppler mapping techniques to independently constrain the donor star radial velocity. By using the component N III 4640.64/4641.84 Å, we measure Kem = 102 ± 6 km s⁻¹. This highly significant detection (≳13σ) is fully compatible with the true companion star radial velocity obtained from near-infrared spectroscopy during quiescence. Combining these two velocities we determine, for the first time, the accretion disc opening angle and its associated error from direct spectroscopic measurements and detailed modelling, obtaining α = 15.5 (+2.5, −5.0) deg. This value is consistent with theoretical work if significant X-ray irradiation is taken into account and is important in the light of recent observations of GX339-4, where discrepant results were obtained between the donor's intrinsic radial velocity and the Bowen-inferred value. We also discuss the limitations of the Bowen technique when complete phase coverage is not available.

  7. Estimation of occupancy, breeding success, and predicted abundance of golden eagles (Aquila chrysaetos) in the Diablo Range, California, 2014

    Science.gov (United States)

    Wiens, J. David; Kolar, Patrick S.; Fuller, Mark R.; Hunt, W. Grainger; Hunt, Teresa

    2015-01-01

    We used a multistate occupancy sampling design to estimate occupancy, breeding success, and abundance of territorial pairs of golden eagles (Aquila chrysaetos) in the Diablo Range, California, in 2014. This method uses the spatial pattern of detections and non-detections over repeated visits to survey sites to estimate probabilities of occupancy and successful reproduction while accounting for imperfect detection of golden eagles and their young during surveys. The estimated probability of detecting territorial pairs of golden eagles and their young was less than 1 and varied with time of the breeding season, as did the probability of correctly classifying a pair’s breeding status. Imperfect detection and breeding classification led to a sizeable difference between the uncorrected, naïve estimate of the proportion of occupied sites where successful reproduction was observed (0.20) and the model-based estimate (0.30). The analysis further indicated a relatively high overall probability of landscape occupancy by pairs of golden eagles (0.67, standard error = 0.06), but that areas with the greatest occupancy and reproductive potential were patchily distributed. We documented a total of 138 territorial pairs of golden eagles during surveys completed in the 2014 breeding season, which represented about one-half of the 280 pairs we estimated to occur in the broader 5,169-square kilometer region sampled. The study results emphasize the importance of accounting for imperfect detection and spatial heterogeneity in studies of site occupancy, breeding success, and abundance of golden eagles.

  8. Genetic structure and viability selection in the golden eagle (Aquila chrysaetos), a vagile raptor with a Holarctic distribution

    Science.gov (United States)

    Doyle, Jacqueline M.; Katzner, Todd E.; Roemer, Gary; Cain, James W.; Millsap, Brian; McIntyre, Carol; Sonsthagen, Sarah A.; Fernandez, Nadia B.; Wheeler, Maria; Bulut, Zafer; Bloom, Peter; DeWoody, J. Andrew

    2016-01-01

    Molecular markers can reveal interesting aspects of organismal ecology and evolution, especially when surveyed in rare or elusive species. Herein, we provide a preliminary assessment of golden eagle (Aquila chrysaetos) population structure in North America using novel single nucleotide polymorphisms (SNPs). These SNPs included one molecular sexing marker, two mitochondrial markers, 85 putatively neutral markers that were derived from noncoding regions within large intergenic intervals, and 74 putatively nonneutral markers found in or very near protein-coding genes. We genotyped 523 eagle samples at these 162 SNPs and quantified genotyping error rates and variability at each marker. Our samples corresponded to 344 individual golden eagles as assessed by unique multilocus genotypes. Observed heterozygosity of known adults was significantly higher than of chicks, as was the number of heterozygous loci, indicating that mean zygosity measured across all 159 autosomal markers was an indicator of fitness as it is associated with eagle survival to adulthood. Finally, we used chick samples of known provenance to test for population differentiation across portions of North America and found pronounced structure among geographic sampling sites. These data indicate that cryptic genetic population structure is likely widespread in the golden eagle gene pool, and that extensive field sampling and genotyping will be required to more clearly delineate management units within North America and elsewhere.

  9. Biotelemetry data for golden eagles (Aquila chrysaetos) captured in coastal southern California, November 2014–February 2016

    Science.gov (United States)

    Tracey, Jeff A.; Madden, Melanie C.; Sebes, Jeremy B.; Bloom, Peter H.; Katzner, Todd E.; Fisher, Robert N.

    2016-04-21

    The status of golden eagles (Aquila chrysaetos) in coastal southern California is unclear. To address this knowledge gap, the U.S. Geological Survey (USGS) in collaboration with local, State, and other Federal agencies began a multi-year survey and tracking program of golden eagles to address questions regarding habitat use, movement behavior, nest occupancy, genetic population structure, and human impacts on eagles. Golden eagle trapping and tracking efforts began in October 2014 and continued until early March 2015. During the first trapping season that focused on San Diego County, we captured 13 golden eagles (8 females and 5 males). During the second trapping season that began in November 2015, we focused on trapping sites in San Diego, Orange, and western Riverside Counties. By February 23, 2016, we captured an additional 14 golden eagles (7 females and 7 males). In this report, biotelemetry data were collected between November 22, 2014, and February 23, 2016. The location data for eagles ranged as far north as San Luis Obispo, California, and as far south as La Paz, Baja California, Mexico.

  10. Fault management and systems knowledge

    Science.gov (United States)

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  11. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2002-03-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from the surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.
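
    The age equation quoted above, and the grain-size plateau test, fit in a few lines; all numbers below are illustrative, not values from the study.

    ```python
    # ESR age = equivalent dose / dose rate, evaluated per grain-size fraction;
    # a plateau across the finest fractions dates the last fault movement.
    dose_rate = 2.5                      # Gy/ka, environmental dose rate
    equivalent_dose = {                  # Gy, by grain-size fraction (micrometres)
        "100-150": 2200.0,
        "45-100": 1600.0,
        "25-45": 620.0,
        "<25": 610.0,
    }

    ages_ka = {frac: de / dose_rate for frac, de in equivalent_dose.items()}
    print(ages_ka)   # the ~245-250 ka plateau of the two finest fractions dates the event
    ```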

  12. Fault diagnosis of induction motors

    CERN Document Server

    Faiz, Jawad; Joksimović, Gojko

    2017-01-01

    This book is a comprehensive, structural approach to fault diagnosis strategy. The different fault types, signal processing techniques, and loss characterisation are addressed in the book. This is essential reading for work with induction motors for transportation and energy.

  13. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2002-03-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from the surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.

  14. The 2016-2017 central Italy coseismic surface ruptures and their meaning with respect to foreseen active fault systems segmentation

    Science.gov (United States)

    De Martini, P. M.; Pucci, S.; Villani, F.; Civico, R.; Del Rio, L.; Cinti, F. R.; Pantosti, D.

    2017-12-01

    In 2016-2017 a series of moderate to large normal-faulting earthquakes struck central Italy, producing severe damage in many towns, including Amatrice, Norcia and Visso, and resulting in 299 casualties and >20,000 homeless. The complex seismic sequence reflects a multiple activation of the Mt. Vettore-Mt. Bove (VBFS) and the Laga Mts. fault systems, which were considered in the literature as independent segments characterizing a recent seismic gap between two modern seismic sequences: the 1997-1998 Colfiorito and the 2009 L'Aquila. We mapped in detail the coseismic surface ruptures following the three mainshocks (Mw 6.0 on 24th August, Mw 5.9 and Mw 6.5 on 26th and 30th October, 2016, respectively). Primary surface ruptures were observed and recorded for total lengths of 5.2 km, ≅10 km and ≅25 km, respectively, along closely-spaced, parallel or subparallel, overlapping or step-like synthetic and antithetic fault splays of the activated fault systems, in some cases rupturing the same location repeatedly. Some coseismic ruptures were also mapped along the Norcia Fault System, paralleling the VBFS about 10 km to the west. We recorded the geometric and kinematic characteristics of the normal-faulting ruptures in unprecedented detail thanks to almost 11,000 oblique photographs taken from helicopter flights soon after the mainshocks, verified and integrated with field data (more than 7000 measurements). We analyze the along-strike distribution of coseismic slip and slip vectors in the context of the geomorphic expression of the disrupted slopes and their depositional and erosive processes. Moreover, we constructed 1:10,000-scale geologic cross-sections based on updated maps, and we reconstructed the net offset distribution of the activated fault system for comparison with the morphologic throws and to test a cause-effect relationship between faulting and first-order landforms. We provide a reconstruction of the 2016 coseismic rupture pattern as

  15. Seismic Supercycles of Normal Faults in Central Italy over Various Time Scales Revealed by 36Cl Cosmogenic Dating

    Science.gov (United States)

    Benedetti, L. C.; Tesson, J.; Perouse, E.; Puliti, I.; Fleury, J.; Rizza, M.; Billant, J.; Pace, B.

    2017-12-01

    The use of the 36Cl cosmogenic nuclide as a paleoseismological tool for normal faults in the Mediterranean has revolutionized our understanding of their seismic cycle (Gran Mitchell et al. 2001, Benedetti et al. 2002). Here we synthesize results obtained on 13 faults in Central Italy. Those records cover a period of 8 to 45 ka. The mean recurrence time of retrieved seismic events is 5.5 ± 6 ka, with a mean slip per event of 2.5 ± 1.8 m and mean slip-rates from 0.1 to 2.4 mm/yr. Most retrieved events correspond to single events according to scaling relationships. This is also supported by the 2 m-high coseismic slip observed on the Mt Vettore fault after the October 30, 2016 M6.5 earthquake in Central Italy (EMERGEO working group). Our results suggest that all faults have experienced one or several periods of slip acceleration with bursts of seismic activity, associated with very high slip-rates of 1.7-9 mm/yr, corresponding to 2-20 times their long-term slip-rate. The duration of those bursts varies from one fault to another. This might suggest that the seismic activity of those faults could be controlled by their intrinsic properties (e.g. long-term slip-rate, fault length, state of structural maturity). Our results also show event clustering, with several faults rupturing in less than 500 yrs on adjacent or distant faults within the studied area. The Norcia-Amatrice seismic sequence, ≈50 km north of our study area, also evidenced this clustering behaviour, with several successive events of Mw 5 to 6.5 over the last 20 yrs (from north to south: Colfiorito 1997 Mw 6.0, Norcia 2016 Mw 6.5, L'Aquila 2009 Mw 6.3), rupturing various fault systems over a total length of ≈100 km. This sequence will allow us to better understand earthquake kinematics and spatiotemporal slip distribution during those seismic bursts.

  16. Introduction to fault tree analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Lambert, H.E.

    1975-01-01

    An elementary, engineering-oriented introduction to fault tree analysis is presented. The basic concepts, techniques and applications of fault tree analysis, FTA, are described. The two major steps of FTA are identified as (1) the construction of the fault tree and (2) its evaluation. The evaluation of the fault tree can be qualitative or quantitative depending upon the scope, extensiveness and use of the analysis. The advantages, limitations and usefulness of FTA are discussed.
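
    To make the quantitative evaluation step concrete, the following minimal Python sketch computes a top-event probability by inclusion-exclusion over minimal cut sets of independent basic events; the cut sets and probabilities are hypothetical, not taken from the report:

        from itertools import combinations

        # Hypothetical minimal cut sets over independent basic events A..D
        cut_sets = [{"A", "B"}, {"C"}, {"B", "D"}]
        p = {"A": 0.01, "B": 0.02, "C": 0.001, "D": 0.05}

        def event_prob(events):
            # Probability that every basic event in `events` occurs (independence assumed)
            prob = 1.0
            for e in events:
                prob *= p[e]
            return prob

        def top_event_prob(cut_sets):
            # Exact top-event probability by inclusion-exclusion over unions of cut sets
            total = 0.0
            for r in range(1, len(cut_sets) + 1):
                for combo in combinations(cut_sets, r):
                    union = set().union(*combo)
                    total += (-1) ** (r + 1) * event_prob(union)
            return total

        print(top_event_prob(cut_sets))   # ~0.00219 for the numbers above

    A qualitative evaluation would stop at the cut sets themselves; the quantitative step above adds probabilities, which is exactly the scope/extensiveness trade-off the abstract mentions.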

  17. Fault Tolerant Wind Farm Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2013-01-01

    In recent years the wind turbine industry has focused on optimizing the cost of energy. One of the important factors in this is to increase the reliability of the wind turbines. Advanced fault detection, isolation and accommodation are important tools in this process. Clearly most faults are deal...... scenarios. This benchmark model is used in an international competition dealing with Wind Farm fault detection and isolation and fault tolerant control....

  18. Orion GN&C Fault Management System Verification: Scope And Methodology

    Science.gov (United States)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
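
    The rare-event sequential Monte Carlo mentioned above belongs to the splitting family of estimators. The sketch below shows the idea on a toy Gaussian random walk; the Orion models, levels, and thresholds are not public, so every quantity here is an illustrative stand-in:

        import random

        def walk_until(x, lo, hi):
            # Drive a Gaussian random walk from x until it exits the interval (lo, hi)
            while lo < x < hi:
                x += random.gauss(0.0, 0.2)
            return x

        def splitting_probability(levels, n_per_stage=2000, lo=-1.0, seed=1):
            # Estimate P(reach levels[-1] before lo) as a product of stage-wise ratios
            random.seed(seed)
            starts = [0.0] * n_per_stage                  # all walkers begin at the origin
            p_hat = 1.0
            for lvl in levels:
                hits = [x for x in (walk_until(s, lo, lvl) for s in starts) if x >= lvl]
                if not hits:
                    return 0.0                            # no walker survived this stage
                p_hat *= len(hits) / len(starts)
                starts = [random.choice(hits) for _ in range(n_per_stage)]  # resample survivors
            return p_hat

        print(splitting_probability(levels=[1.0, 2.0, 3.0]))

    Each stage conditions on having reached the previous level, so failure probabilities far too small for direct sampling become products of moderate, well-estimated factors.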

  19. Row fault detection system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2008-10-14

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine the presence of a faulty node or connection.

  20. Fault isolation techniques

    Science.gov (United States)

    Dumas, A.

    1981-01-01

    Three major areas that are considered in the development of an overall maintenance scheme for computer equipment are described. The areas of concern related to fault isolation techniques are: the programmer (or user), the company and its policies, and the manufacturer of the equipment.

  1. Fault Tolerant Control Systems

    DEFF Research Database (Denmark)

    Bøgh, S. A.

    This thesis considered the development of fault tolerant control systems. The focus was on the category of automated processes that do not necessarily comprise a high number of identical sensors and actuators to maintain safe operation, but still have a potential for improving immunity to component...

  2. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault or cause of beam shutdown while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required.

  3. Monts Jura Jazz Festival

    CERN Multimedia

    Jazz Club

    2012-01-01

    The 5th edition of the "Monts Jura Jazz Festival" will take place on September 21 and 22, 2012, at the Esplanade du Lac in Divonne-les-Bains. This festival is organized by the "CERN Jazz Club" with the support of the "CERN Staff Association". This festival is a major musical event in the French/Swiss area and offers a world-class program with jazz artists such as D. Lockwood and D. Reinhardt. More information on http://www.jurajazz.com.

  4. Monts Jura Jazz Festival

    CERN Document Server

    2012-01-01

    The 5th edition of the "Monts Jura Jazz Festival" will take place at the Esplanade du Lac in Divonne-les-Bains, France, on September 21 and 22. This festival, organized by the CERN Jazz Club and supported by the CERN Staff Association, is becoming a major musical event in the Geneva region. International jazz artists like Didier Lockwood and David Reinhardt are part of this year's outstanding program. The full program and e-tickets are available on the festival website. Don't miss this great festival!

  5. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference included reliability, availability, safety and security issues in software and hardware systems as well. The sessions organized for the conference, which was completed by an industrial presentation, were: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  6. Fault rocks and uranium mineralization

    International Nuclear Information System (INIS)

    Tong Hangshou.

    1991-01-01

    The types of fault rocks, the microstructural characteristics of fault tectonites, and their relationship with uranium mineralization in uranium-productive granite areas are discussed. According to a synthetic analysis of the nature of stress, the extent of cracking, and the microstructural characteristics of fault rocks, they can be classified into five groups and sixteen subgroups. The author especially emphasizes the control exerted by the cataclasite group and the fault breccia group over uranium mineralization in uranium-productive granite areas. More effective study should be made of the macrostructure and microstructure of fault rocks, which is of important practical significance in uranium exploration.

  7. Network Fault Diagnosis Using DSM

    Institute of Scientific and Technical Information of China (English)

    Jiang Hao; Yan Pu-liu; Chen Xiao; Wu Jing

    2004-01-01

    The difference similitude matrix (DSM) is effective in reducing an information system, with a high reduction rate and high validity. We use the DSM method to analyze the fault data of computer networks and obtain fault diagnosis rules. By discretizing the relative values of the fault data, we obtain the information system of the fault data; the DSM method then reduces this information system and yields the diagnosis rules. Simulation with an actual scenario shows that DSM-based fault diagnosis can obtain a small set of effective rules.

  8. [Between perception and reality: towards an assessment of socio-territorial discomfort in L'Aquila (Central Italy) after the earthquak].

    Science.gov (United States)

    Calandra, Lina Maria

    2016-01-01

    To consider the perceptions and narratives of the inhabitants of L'Aquila about their context of life, in order to point out what kind of relationship exists in L'Aquila between the territory and its inhabitants after the earthquake, and to evaluate how and where symptomatic attitudes of a widespread discomfort in social interactions have become generalized. Since 2010, the joint work of the research team of the "Cartolab" laboratory and the pedagogy area (Department of Human Studies, University of L'Aquila) has developed and applied a participatory research methodology. This methodology is both an inquiry used by experts to increase the participation of people who experience everyday life in L'Aquila, and a tool to draw moral, ethical and political considerations in order to activate change in social and political dynamics at the urban scale. During 2013, the methodology of Participatory-Participating Research Action (PPRA) was implemented through cycles of territorial meetings involving citizens and municipal administrators. These meetings were promoted and organized with the Office of Participation of the Municipality of L'Aquila. The PPRA aimed to assess: 1. the social, political and economic quality of the territory as evaluated by the people involved in the survey, with reference to life conditions, living context, and future projections of self and of the territory; 2. the perception of security. Through a qualitative/quantitative approach, the data collected through questionnaires and public meetings involved 309 young people (16-30 years old) and 227 adults (31-85 years old) for the first aspect, and 314 citizens (16-80 years old) for the second aspect, respectively. The results highlight a socio-territorial discomfort emerging in L'Aquila for a relevant part of the population. This discomfort is shaped by a negative rating of life conditions and context: adults provide poor quality evaluations of the present and cannot figure out some kind of vision for

  9. Surface Temperature and Precipitation Affecting GPS Signals Before the 2009 L'Aquila Earthquake (Central Italy).

    Science.gov (United States)

    Crescentini, L.; Amoruso, A.; Chiaraluce, L.

    2017-12-01

    This work focuses on GPS time series recorded before the Mw 6.1 earthquake which struck Central Italy in April 2009. It shows how environmental noise effects may be subtle and relevant when investigating relatively small strain signals, and how the availability of data from weather stations and water level sensors co-located with GPS stations may provide critical information which must be taken into consideration when dealing with deformation signals. The preparatory phase of a large earthquake may include both seismic (foreshocks) and aseismic (slow slip event, SSE) deforming episodes but, unlike afterslip, no slow event had yet been recorded before moderate earthquakes, even when they occurred close to high-sensitivity strainmeters. An exception to this seems to be represented by the 2009 earthquake. The main shock was preceded by a foreshock sequence lasting 6 months; it has been claimed that an analysis of continuous GPS data shows that during the foreshock sequence a Mw 5.9 SSE occurred along a decollement located beneath the reactivated normal fault system. This hypothesized SSE, which started in the middle of February 2009 and lasted for almost two weeks, would have eventually loaded the largest foreshock and the main shock. We show that the strain signal that the SSE would have generated at two laser strainmeters operating about 20 km NE of the SSE source was essentially undetected. On the contrary, a transient signal is present in the temperature and precipitation time series recorded close to the GPS station, MTTO, that shows the largest signal attributed to the SSE, implying that these contaminated the GPS record. This interpretation is corroborated by the strong similarity, during the coldest winter months, between the displacement data of MTTO and a linear combination of filtered temperature and precipitation data, mimicking simple heat conduction and snow accumulation/removal processes. Such a correlation between displacement and environmental data is missing

  10. Why local people did not present a problem in the 2016 Kumamoto earthquake, Japan though people accused in the 2009 L'Aquila earthquake?

    Science.gov (United States)

    Sugimoto, M.

    2016-12-01

    Risk communication has been a big issue among seismologists all over the world since the 2009 L'Aquila earthquake. Many people remember the seven researchers, the "L'Aquila 7", who were prosecuted in Italy. Seismologists have said that it is impossible to predict an earthquake with today's science and technology, and have joined more outreach activities. "In a subsequent inquiry of the handling of the disaster, seven members of the Italian National Commission for the Forecast and Prevention of Major Risks were accused of giving "inexact, incomplete and contradictory" information about the danger of the tremors prior to the main quake. On 22 October 2012, six scientists and one ex-government official were convicted of multiple manslaughter for downplaying the likelihood of a major earthquake six days before it took place. They were each sentenced to six years' imprisonment (Wikipedia)". Ultimately, the six scientists were found not guilty. The 2016 Kumamoto earthquake hit Kyushu, Japan, in April. The seismological situations of the 2016 Kumamoto earthquake and the 2009 L'Aquila earthquake are very similar. The foreshock on 14 April 2016 was Mj 6.5 and Mw 6.2. The main shock was Mj 7.3 and Mw 7.0. The Japan Meteorological Agency (JMA) mistook the foreshock for the main shock before the main shock occurred. 41 people died in the main shock in Japan. However, local people did not accuse scientists in Japan. Kumamoto had experienced few large earthquakes in the preceding 100 years, and people in Kyushu were not accustomed to handling earthquake information. What explains the differences between Japan and Italy? This case offers lessons about outreach activities for scientists.

  11. Pediatric Epidemic of Salmonella enterica Serovar Typhimurium in the Area of L’Aquila, Italy, Four Years after a Catastrophic Earthquake

    Directory of Open Access Journals (Sweden)

    Giovanni Nigro

    2016-05-01

    Background: A Salmonella enterica epidemic occurred in children of the area of L'Aquila (Central Italy, Abruzzo region) between June 2013 and October 2014, four years after the catastrophic earthquake of 6 April 2009. Methods: Clinical and laboratory data were collected from hospitalized and ambulatory children. Routine investigations for Salmonella infection were carried out on numerous alimentary matrices of animal origin and on sampling sources for drinking water of the L'Aquila district, including pickup points of the two main aqueducts. Results: Salmonella infection occurred in 155 children (83 females: 53%), aged 1 to 15 years (mean 2.10). Of these, 44 children (28.4%) were hospitalized because of severe dehydration, electrolyte abnormalities, and fever resistant to oral antipyretic and antibiotic drugs. Three children (1.9%) were reinfected within four months after primary infection by the same Salmonella strain. Four children (2.6%), aged one to two years, were coinfected by rotavirus. A seven-year-old child had a concomitant right hip joint arthritis. The isolated strains, as confirmed in about half of the cases or probable/possible in the remaining ones, were identified as S. enterica serovar Typhimurium [4,5:i:-], monophasic variant. The Aterno river, bordering the L'Aquila district, was recognized as the main source of the contamination of local crops and of vegetables derived from polluted crops. Conclusions: The high rate of hospitalized children underlines the emergence of a highly pathogenic S. enterica strain, probably subsequent to the contamination of the spring water sources after geological changes that occurred during the catastrophic earthquake.

  12. Pediatric Epidemic of Salmonella enterica Serovar Typhimurium in the Area of L'Aquila, Italy, Four Years after a Catastrophic Earthquake.

    Science.gov (United States)

    Nigro, Giovanni; Bottone, Gabriella; Maiorani, Daniela; Trombatore, Fabiana; Falasca, Silvana; Bruno, Gianfranco

    2016-05-06

    A Salmonella enterica epidemic occurred in children of the area of L'Aquila (Central Italy, Abruzzo region) between June 2013 and October 2014, four years after the catastrophic earthquake of 6 April 2009. Clinical and laboratory data were collected from hospitalized and ambulatory children. Routine investigations for Salmonella infection were carried out on numerous alimentary matrices of animal origin and on sampling sources for drinking water of the L'Aquila district, including pickup points of the two main aqueducts. Salmonella infection occurred in 155 children (83 females: 53%), aged 1 to 15 years (mean 2.10). Of these, 44 children (28.4%) were hospitalized because of severe dehydration, electrolyte abnormalities, and fever resistant to oral antipyretic and antibiotic drugs. Three children (1.9%) were reinfected within four months after primary infection by the same Salmonella strain. Four children (2.6%), aged one to two years, were coinfected by rotavirus. A seven-year-old child had a concomitant right hip joint arthritis. The isolated strains, as confirmed in about half of the cases or probable/possible in the remaining ones, were identified as S. enterica serovar Typhimurium [4,5:i:-], monophasic variant. The Aterno river, bordering the L'Aquila district, was recognized as the main source of the contamination of local crops and of vegetables derived from polluted crops. The high rate of hospitalized children underlines the emergence of a highly pathogenic S. enterica strain, probably subsequent to the contamination of the spring water sources after geological changes that occurred during the catastrophic earthquake.

  13. [The hazards of reconstruction: anthropology of dwelling and social health risk in the L'Aquila (Central Italy) post-earthquake].

    Science.gov (United States)

    Ciccozzi, Antonello

    2016-01-01

    Even when starting from the purpose of restoring the damage caused by a natural disaster, post-earthquake reconstructions imply the risk of triggering a set of social disasters that may affect the public health sphere. In the case of the L'Aquila earthquake, this risk seems to emerge within urban planning on two levels of dwelling: at the landscape level, where there has been a change in the shape of the city towards a sprawling-sprinkling process; and at the architectural level, in the problematic relationship between the politics and poetics of cultural heritage protection and the goal of achieving restoration works capable of ensuring the citizens' seismic safety.

  14. [Sleep disturbances and spatial memory deficits in post-traumatic stress disorder: the case of L'Aquila (Central Italy)].

    Science.gov (United States)

    Ferrara, Michele; Mazza, Monica; Curcio, Giuseppe; Iaria, Giuseppe; De Gennaro, Luigi; Tempesta, Daniela

    2016-01-01

    Altered sleep is a common and central symptom of post-traumatic stress disorder (PTSD). In fact, sleep disturbances are included in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) diagnostic criteria for PTSD. However, it has been hypothesized that sleep disturbances are crucially involved in the aetiology of PTSD, rather than being solely a symptom arising secondarily from this disorder. Therefore, knowing the long-term effects of a trauma can be essential to establish the need for specific interventions for the prevention and treatment of mental disorders that may persist years after a traumatic experience. In one study we showed, for the first time, that even after a period of two years people exposed to a catastrophic disaster such as the L'Aquila earthquake continue to suffer from reduced sleep quality. Moreover, we observed that sleep quality scores decreased as a function of proximity to the epicentre, suggesting that the psychological effects of an earthquake may be pervasive and long-lasting. It has been widely shown that disruption of sleep by acute stress may lead to deterioration in memory processing. In fact, in a recent study we observed alterations in spatial memory in PTSD subjects. Our findings indicated that PTSD is accompanied by a striking deficit in forming a cognitive map of the environment, as well as in sleep-dependent memory consolidation. The fact that this deterioration was correlated with the subjective sleep disturbances in our PTSD group demonstrates the existence of an intimate relationship between sleep, memory consolidation, and stress.

  15. Summer and winter space use and home range characteristics of Golden Eagles (Aquila chrysaetos) in eastern North America

    Science.gov (United States)

    Miller, Tricia A.; Brooks, Robert P.; Lanzone, Michael J.; Cooper, Jeff; O'Malley, Kieran; Brandes, David; Duerr, Adam E.; Katzner, Todd

    2017-01-01

    Movement behavior and its relationship to habitat provide critical information toward understanding the effects of changing environments on birds. The eastern North American population of Golden Eagles (Aquila chrysaetos) is a genetically distinct and small population of conservation concern. To evaluate the potential responses of this population to changing landscapes, we calculated the home range and core area sizes of 52 eagles of 6 age–sex classes during the summer and winter seasons. Variability in range size was related to variation in topography and open cover, and to age and sex. In summer, eagle ranges that were smaller had higher proportions of ridge tops and open cover and had greater topographic roughness than did larger ranges. In winter, smaller ranges had higher proportions of ridge tops, hillsides and cliffs, and open cover than did larger ranges. All age and sex classes responded similarly to topography and open cover in both seasons. Not surprisingly, adult eagles occupied the smallest ranges in both seasons. Young birds used larger ranges than adults, and subadults in summer used the largest ranges (>9,000 km2). Eastern adult home ranges in summer were 2–10 times larger than those reported for other populations in any season. Golden Eagles in eastern North America may need to compensate for generally lower-quality habitat in the region by using larger ranges that support access to adequate quantities of resources (prey, updrafts, and nesting, perching, and roosting sites) associated with open cover and diverse topography. Our results suggest that climate change–induced afforestation on the breeding grounds and ongoing land cover change from timber harvest and energy development on the wintering grounds may affect the amount of suitable habitat for Golden Eagles in eastern North America.

  16. Faults in Linux

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

    In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been...

  17. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  18. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2003-02-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.

  19. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  20. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2003-02-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.

  1. Real-time fault diagnosis and fault-tolerant control

    OpenAIRE

    Gao, Zhiwei; Ding, Steven X.; Cecati, Carlo

    2015-01-01

    This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the mo...

  2. MONTE and ANAL1

    International Nuclear Information System (INIS)

    Lupton, L.R.; Keller, N.A.

    1982-09-01

    The design of a positron emission tomography (PET) ring camera involves trade-offs between such things as sensitivity, resolution and cost. As a design aid, a Monte Carlo simulation of a single-ring camera system has been developed. The model includes a source-filled phantom, collimators, detectors, and optional shadow shields and inter-crystal septa. Individual gamma rays are tracked within the system materials until they escape, are absorbed, or are detected. Compton and photoelectric interactions are modelled. All system dimensions are variable within the computation. Coincidence and singles data are recorded according to type (true or scattered), annihilation origin, and detected energy. Photon fluxes at various points of interest, such as the edge of the phantom and the collimator, are available. This report reviews the basics of PET, describes the physics involved in the simulation, and provides detailed outlines of the routines.

  3. Frost in Charitum Montes

    Science.gov (United States)

    2003-01-01

    MGS MOC Release No. MOC2-387, 10 June 2003. This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.

  4. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan

    2017-05-31

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  5. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan; Hanafy, Sherif; Guo, Bowen; Kosmicki, Maximillian Sunflower

    2017-01-01

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  6. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of various methods in generating random numbers. To account for the weight function involved in the Monte Carlo integration, the Metropolis method is used. The results of the experiment show no regular patterns in the numbers generated, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distributions. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that good agreement has been obtained for the models considered.
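
    Since the abstract leans on the Metropolis method to handle the weight function, a minimal sketch may help; the target weight and step size below are illustrative choices, not taken from the paper:

        import math, random

        def metropolis(log_weight, step, x0, n_samples, seed=0):
            # Generic 1-D Metropolis sampler for an unnormalized weight function w(x)
            random.seed(seed)
            x, samples = x0, []
            for _ in range(n_samples):
                proposal = x + random.uniform(-step, step)    # symmetric proposal
                # accept with probability min(1, w(proposal) / w(x))
                if math.log(random.random()) < log_weight(proposal) - log_weight(x):
                    x = proposal
                samples.append(x)
            return samples

        # Example: sample the standard normal weight w(x) = exp(-x**2 / 2)
        draws = metropolis(lambda x: -0.5 * x * x, step=1.0, x0=0.0, n_samples=50000)
        print(sum(draws) / len(draws))                    # close to 0, the true mean
        print(sum(d * d for d in draws) / len(draws))     # close to 1, the true variance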

  7. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  8. What is Fault Tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Frei, C. W.; Kraus, K.

    2000-01-01

    Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to the plant, to personnel or the environment. Fault-tolerant control is the synonym for a set of recent techniques that were developed to increase plant availability and reduce the risk of safety hazards. Its aim is to prevent simple faults from developing into serious failures. Fault-tolerant control merges several disciplines to achieve this goal, including on-line fault diagnosis, automatic condition assessment and calculation of remedial actions when a fault is detected. The envelope of the possible remedial actions is wide. This paper introduces tools to analyze and explore the structure and other fundamental properties of an automated system such that any redundancy in the process can be fully utilized to enhance safety and availability.

  9. Remote Sensing of Urban Microclimate Change in L’Aquila City (Italy after Post-Earthquake Depopulation in an Open Source GIS Environment

    Directory of Open Access Journals (Sweden)

    Valerio Baiocchi

    2017-02-01

    This work reports a first attempt to use Landsat satellite imagery to identify possible urban microclimate changes in a city center after a seismic event, namely the earthquake that struck L'Aquila City (Abruzzo Region, Italy) on 6 April 2009. After the main seismic event, the collapse of part of the buildings and the damage to most of the others, with the consequent almost total depopulation of the historic city center, may have caused alterations to the microclimate. This work develops an inexpensive workflow, using Landsat Enhanced Thematic Mapper Plus (ETM+) scenes, to reconstruct the evolution of urban land use after the catastrophic main seismic event that hit L'Aquila. We hypothesized that, before the event, the temperature was higher in the city center due to the presence of inhabitants (and thus home heating), while the opposite occurred in the surrounding areas, where new settlements of inhabitants grew over a period of a few months. We decided not to look at independent meteorological data in order to avoid bias in our investigation; thus, only a minimal dataset of Landsat ETM+ scenes was considered as input data to describe the thermal evolution of the land surface after the earthquake. We managed to use the Landsat archive images to provide indications of thermal change, useful for understanding the urban changes induced by catastrophic events, setting up an easy-to-implement, robust, reproducible, and fast procedure.

  10. Multi-Directional Seismic Assessment of Historical Masonry Buildings by Means of Macro-Element Modelling: Application to a Building Damaged during the L’Aquila Earthquake (Italy

    Directory of Open Access Journals (Sweden)

    Francesco Cannizzaro

    2017-11-01

    The experience of the recent earthquakes in Italy caused a shocking impact in terms of loss of human life and damage to buildings. In particular, when it comes to ancient constructions, their cultural and historical value overlaps with the economic and social one. Among historical structures, churches have been the object of several studies which identified the main characteristics of their seismic response and the most probable collapse mechanisms. More rarely, academic studies have been devoted to ancient palaces, since they often exhibit irregular and complicated arrangements of the resisting elements, which makes their response very difficult to predict. In this paper, a palace located in L'Aquila, severely damaged by the seismic event of 2009, is the object of an accurate study. A historical reconstruction of the past strengthening interventions as well as a detailed geometric survey is performed to implement detailed numerical models of the structure. Both global and local models are considered, and static nonlinear analyses are performed considering the influence of the input direction on the seismic vulnerability of the building. The damage pattern predicted by the numerical models is compared with that observed after the earthquake. The seismic vulnerability assessments are performed in terms of ultimate peak ground acceleration (PGA) using capacity curves and the Italian code spectrum. The results are compared in terms of ultimate ductility demand evaluated by performing nonlinear dynamic analyses considering the actual recorded seismic input of the L'Aquila earthquake.

  11. Advanced cloud fault tolerance system

    Science.gov (United States)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism that abstracts faults away from users. We have proposed a fault-tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  12. Final Technical Report: PV Fault Detection Tool.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Christian Birk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

  13. Fault current limiter

    Science.gov (United States)

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in such a manner that the senses of the magnetic field produced by each AC coil in the corresponding high-permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

  14. A Collision Risk Model to Predict Avian Fatalities at Wind Facilities: An Example Using Golden Eagles, Aquila chrysaetos.

    Science.gov (United States)

    New, Leslie; Bjerre, Emily; Millsap, Brian; Otto, Mark C; Runge, Michael C

    2015-01-01

    Wind power is a major candidate in the search for clean, renewable energy. Beyond the technical and economic challenges of wind energy development are environmental issues that may restrict its growth. Avian fatalities due to collisions with rotating turbine blades are a leading concern, and there is considerable uncertainty surrounding avian collision risk at wind facilities. This uncertainty is not reflected in many models currently used to predict the avian fatalities that would result from proposed wind developments. We introduce a method to predict fatalities at wind facilities, based on pre-construction monitoring. Our method can directly incorporate uncertainty into the estimates of avian fatalities and can be updated if information on the true number of fatalities becomes available from post-construction carcass monitoring. Our model considers only three parameters: hazardous footprint, bird exposure to turbines, and collision probability. By using a Bayesian analytical framework we account for uncertainties in these values, which are then reflected in our predictions and can be reduced through subsequent data collection. The simplicity of our approach makes it accessible to ecologists concerned with the impact of wind development, as well as to managers, policy makers, and industry interested in its implementation in real-world decision contexts. We demonstrate the utility of our method by predicting golden eagle (Aquila chrysaetos) fatalities at a wind installation in the United States. Using pre-construction data, we predicted 7.48 eagle fatalities per year (95% CI: (1.1, 19.81)). The U.S. Fish and Wildlife Service uses the 80th quantile (11.0 eagle fatalities per year) in their permitting process to ensure there is only a 20% chance a wind facility exceeds the authorized fatalities. Once data were available from two years of post-construction monitoring, we updated the fatality estimate to 4.8 eagle fatalities per year (95% CI: (1.76, 9.4); 80th quantile, 6
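
    A minimal sketch of the three-parameter structure described above, with prior uncertainty propagated by Monte Carlo; every prior and constant below is a hypothetical placeholder rather than the values used by the authors or the U.S. Fish and Wildlife Service:

        import random

        def predicted_annual_fatalities(n_draws=100_000, seed=42):
            # Prior-predictive draws for: fatalities = exposure * footprint * collision probability
            random.seed(seed)
            draws = []
            for _ in range(n_draws):
                exposure = random.gammavariate(2.0, 1.5)       # eagle flight hours/yr in the
                                                               # hazardous volume (assumed prior)
                footprint = 10.0                               # hazardous footprint scaling (assumed)
                collision_p = random.betavariate(2.0, 200.0)   # collision prob. per exposure hour (assumed)
                draws.append(exposure * footprint * collision_p)
            draws.sort()
            return draws[n_draws // 2], draws[int(0.8 * n_draws)]   # median and 80th quantile

        median, q80 = predicted_annual_fatalities()
        # A permitting rule in the spirit of the abstract would authorize q80 fatalities per year,
        # leaving only a 20% chance that the facility exceeds the authorized level.

    Updating with post-construction carcass counts would replace these priors with posteriors and shrink the predictive interval, mirroring the two-year update reported in the abstract.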

  15. Facebook, quality of life, and mental health outcomes in post-disaster urban environments: the l'aquila earthquake experience.

    Science.gov (United States)

    Masedu, Francesco; Mazza, Monica; Di Giovanni, Chiara; Calvarese, Anna; Tiberti, Sergio; Sconci, Vittorio; Valenti, Marco

    2014-01-01

    An understudied area of interest in post-disaster public health is individuals' use of social networks as a potential determinant of quality of life (QOL) and mental health outcomes. A population-based cross-sectional study was carried out to examine whether continual use of online social networking (Facebook) in an adult population following a massive earthquake was correlated with the prevalence of depression and post-traumatic stress disorder (PTSD) and with QOL outcomes. Participants were a sample of 890 adults aged 25-54 who had been exposed to the L'Aquila earthquake of 2009. The definition of "user" required a daily connection to the Facebook online social network for more than 1 h per day for at least 2 years. Depression and PTSD were assessed using the Screening Questionnaire for Disaster Mental Health. QOL outcomes were measured using the World Health Organisation Quality of Life BREF (WHOQOL-BREF) instrument. Logistic regression was carried out to calculate the prevalence odds ratios (POR) for social network use and other covariates. Two hundred and twenty-one of 423 (52.2%) men, and 195 of 383 (50.9%) women, had been using Facebook as a social network for at least 2 years prior to our assessment. Social network use correlated with both depression and PTSD, after adjusting for gender. A halved risk of depression was found in users vs. non-users (POR 0.50 ± 0.16). Similarly, a halved risk of PTSD in users vs. non-users (POR 0.47 ± 0.14) was found. Both men and women using online social networks had significantly higher QOL scores in the psychological and social domains of the WHOQOL-BREF. Social network use among adults 25-54 years old has a positive impact on mental health and QOL outcomes in the years following a disaster. The use of social networks may be an important tool for coping with the mental health outcomes of disruptive natural disasters, helping to maintain, if not improve, QOL in terms of social relationships and psychological distress.

  16. Facebook, quality of life and mental health outcomes in post-disaster urban environments: the L’Aquila earthquake experience

    Directory of Open Access Journals (Sweden)

    Francesco eMasedu

    2014-12-01

    Background: An understudied area of interest in post-disaster public health is individuals' use of social networks as a potential determinant of quality of life (QOL) and mental health outcomes. A population-based cross-sectional study was carried out to examine whether continual use of online social networking (Facebook) in an adult population following a massive earthquake was correlated with the prevalence of depression and PTSD and with QOL outcomes. Methods: Participants were a sample of 890 adults aged 25 to 54 who had been exposed to the L'Aquila earthquake of 2009. The definition of user required a daily connection to the Facebook online social network for more than one hour per day for at least two years. Depression and PTSD were assessed using the Screening Questionnaire for Disaster Mental Health (SQD). QOL outcomes were measured using the WHOQOL-BREF instrument. Logistic regression was carried out to calculate the prevalence odds ratios (POR) for social network use and other covariates. Results: Two hundred and twenty-one of 423 (52.2%) men, and 195 of 383 (50.9%) women, had been using Facebook as a social network for at least two years prior to our assessment. Social network use correlated with both depression and PTSD, after adjusting for gender. A halved risk of depression was found in users vs. non-users (POR 0.50 ± 0.16). Similarly, a halved risk of PTSD in users vs. non-users (POR 0.47 ± 0.14) was found. Both men and women using online social networks had significantly higher QOL scores in the psychological and social domains of the WHOQOL-BREF. Conclusions: Social network use among adults 25 to 54 years old has a positive impact on mental health and QOL outcomes in the years following a disaster. The use of social networks may be an important tool for coping with the mental health outcomes of disruptive natural disasters, helping to maintain, if not improve, QOL in terms of social relationships and psychological distress.

  17. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  18. Fault Management Design Strategies

    Science.gov (United States)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM) and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for the determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  19. Accelerometer having integral fault null

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-08-01

    An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration, and a decoding circuit responsive to the transducer signal, which produces a first fault signal and processes it to produce a second fault signal in which ground shift effects are nullified.

  20. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
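
    The telescoping idea is compact enough to sketch. The illustration below (ours, not the authors' code) applies plain MLMC, in the exact-coupling setting described above as the classical case, to a geometric Brownian motion discretized with Euler steps h_l = T/2^l, coupling each level to the next coarser one through shared Brownian increments; the payoff, parameters and per-level sample counts are arbitrary choices.

      # MLMC estimate of E[X_T] for dX = mu*X dt + sigma*X dW via the
      # telescoping sum E[g(X^L)] = E[g(X^0)] + sum_l E[g(X^l) - g(X^{l-1})].
      import numpy as np

      rng = np.random.default_rng(2)
      T, x0, mu, sigma = 1.0, 1.0, 0.05, 0.2

      def euler_paths(n, steps, dW):
          x = np.full(n, x0)
          for k in range(steps):
              x = x + mu * x * (T / steps) + sigma * x * dW[:, k]
          return x

      def level_estimator(l, n):
          # Mean of g(X^l) - g(X^{l-1}) on coupled paths (same Brownian motion).
          steps = 2 ** l
          dW = rng.normal(0.0, np.sqrt(T / steps), size=(n, steps))
          fine = euler_paths(n, steps, dW)
          if l == 0:
              return fine.mean()
          coarse = euler_paths(n, steps // 2, dW[:, 0::2] + dW[:, 1::2])
          return (fine - coarse).mean()

      L = 6
      samples = [200_000, 100_000, 50_000, 25_000, 12_000, 6_000, 3_000]
      mlmc = sum(level_estimator(l, samples[l]) for l in range(L + 1))
      print(f"MLMC estimate: {mlmc:.4f}  (exact {x0 * np.exp(mu * T):.4f})")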

  1. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  2. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA, including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs, primarily from the experience of the authors.

  3. Geology of Maxwell Montes, Venus

    Science.gov (United States)

    Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.

    1984-01-01

    Maxwell Montes represent the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20 deg West for approximately 1000 km, and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.

  4. Slope instability mapping around L'Aquila (Abruzzo, Italy) with Persistent Scatterers Interferometry from ERS, ENVISAT and RADARSAT datasets

    Science.gov (United States)

    Righini, Gaia; Del Conte, Sara; Cigna, Francesca; Casagli, Nicola

    2010-05-01

    In the last decade Persistent Scatterers Interferometry (PSI) was used in natural hazards investigations with significant results, and it is considered a helpful tool for detecting and mapping ground deformations (Berardino et al., 2003; Colesanti et al., 2003; Colesanti & Wasowski, 2006; Hilley et al., 2004). In this work, results of PSI processing were interpreted after the main seismic shock that affected the Abruzzo region (Central Italy) on 6 April 2009, in order to carry out slope instability mapping according to the requirements of the National Department of Civil Protection and in the framework of the Landslides thematic services of the EU FP7 project 'SAFER' (Services and Applications For Emergency Response - Grant Agreement n° 218802). The area of interest covers almost 460 km2 around L'Aquila, chosen according to the highest probability of reactivation of landslides, which depends on the local geological conditions, on the epicenter location and on other seismic parameters (Keefer, 1984). The radar image datasets were collected in order to provide estimates of the mean yearly velocity referred to two distinct time intervals: historic ERS (1992-2000) and recent ENVISAT (2002-2009) and RADARSAT (2003-2009); the ERS and RADARSAT images were processed by Tele-Rilevamento Europa (TRE) using the PS-InSAR(TM) technique, while the ENVISAT images were processed by e-GEOS using the PSP-DIFSAR technique. A pre-existing landslide inventory map was updated through the integration of conventional photo interpretation and the radar-interpretation chain, as defined by Farina et al. (2008) and reported in the literature (Farina et al. 2006, Meisina et al. 2007, Pancioli et al., 2008; Righini et al., 2008, Casagli et al., 2008, Herrera et al., 2009). The data were analyzed and interpreted in a Geographic Information System (GIS) environment. Main updates of the pre-existing landslide inventory focus on the identification of new landslides and the modification of boundaries through the spatial

  5. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent with the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which...

  6. ESR dating of the fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2005-01-01

    We carried out ESR dating of fault rocks collected near the nuclear reactor. The Upcheon fault zone is exposed close to the Ulzin nuclear reactor. The space-time pattern of fault activity on the Upcheon fault deduced from ESR dating of fault gouge can be summarised as follows: this fault zone was reactivated between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 2 Ma, 1.5 Ma and 1 Ma ago. After those movements, the Upcheon fault was reactivated between Cretaceous sandstone and the fault breccia zone about 800 ka ago. This fault zone was reactivated again between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 650 ka and after 125 ka ago. These data suggest that the long-term (200-500 k.y.) cyclic fault activity of the Upcheon fault zone continued into the Pleistocene. In the Ulzin area, ESR dates from the NW and EW trend faults range from 800 ka to 600 ka; NE and EW trend faults were reactivated between about 200 ka and 300 ka ago. On the other hand, ESR dates of the NS trend fault are about 400 ka and 50 ka. Results of this research suggest that fault activity near the Ulzin nuclear reactor continued into the Pleistocene. One ESR date near the Youngkwang nuclear reactor is 200 ka.

  7. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  8. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The author shows how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
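
    A one-line instance of this equivalence: the deterministic number pi can be recast as the expected behaviour of a stochastic system, namely the probability that a uniform random point in the unit square lands inside the quarter circle. The sketch below is illustrative only.

      # Hit-or-miss Monte Carlo: P(x^2 + y^2 < 1) = pi/4 for uniform (x, y).
      import numpy as np

      rng = np.random.default_rng(3)
      n = 1_000_000
      x, y = rng.random(n), rng.random(n)
      estimate = 4.0 * ((x * x + y * y) < 1.0).mean()
      print(f"pi is approximately {estimate:.4f}; error shrinks like 1/sqrt(N)")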

  9. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional-integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is also analyzed. Based on this, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful for meeting the requirements of practical short-circuit calculations and for the construction of a relaying protection system for a power grid with penetration of DFIGs.

  10. COMMUNALITY AND BUEN VIVIR AS INDIGENOUS STRATEGIES TO FACE VIOLENCE IN MICHOACAN: THE CASES OF CHERÁN AND SAN MIGUEL DE AQUILA

    Directory of Open Access Journals (Sweden)

    Josefina María Cendejas

    2015-06-01

    Full Text Available The escalation of violence experienced since 2006 in the western state of Michoacan, Mexico, has affected with particular virulence the regions of Tierra Caliente, Sierra Costa and Meseta Purépecha. This article addresses two cases of indigenous communities, a Purepecha community in Cherán and a Nahua community in San Miguel de Aquila. The collective responses of these two communities to the attacks of organized crime are described and compared, in search of the elements that explain the dramatically different results obtained from their respective initiatives of collective response to violence. An approach from the perspective of political ecology allows for an analysis of the issues faced by each of them as a result of the «global assault on common goods». The notions of comunalidad and buen vivir 'good living' are germane to an identification of strengths, weaknesses and possible future consequences of the social movements.

  11. Arc fault detection system

    Science.gov (United States)

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.

  12. Arc fault detection system

    Science.gov (United States)

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  13. Probabilistic assessment of faults

    International Nuclear Information System (INIS)

    Foden, R.W.

    1987-01-01

    Probabilistic safety analysis (PSA) is the process by which the probability (or frequency of occurrence) of reactor fault conditions which could lead to unacceptable consequences is assessed. The basic objective of a PSA is to allow a judgement to be made as to whether or not the principal probabilistic requirement is satisfied. It also gives insights into the reliability of the plant which can be used to identify possible improvements. This is explained in the article. The scope of a PSA and the PSA performed by the National Nuclear Corporation (NNC) for the Heysham II and Torness AGRs and Sizewell-B PWR are discussed. The NNC methods for hazards, common cause failure and operator error are mentioned. (UK)

  14. Fault-tolerant linear optical quantum computing with small-amplitude coherent states.

    Science.gov (United States)

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads, but it has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that, using error correction, only small amplitudes (alpha > 1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The resources required at the first encoding level are orders of magnitude lower than in the best single-photon scheme.

  15. Absolute age determination of quaternary faults

    International Nuclear Information System (INIS)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik

    2000-03-01

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to the fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area and the southeastern coastal area. Rb-Sr and K-Ar data imply that the fault movement of the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault in the Yeongdeog area is around 70 Ma. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results

  16. Absolute age determination of quaternary faults

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik [Korea Basic Science Institute, Seoul (Korea, Republic of)] (and others)

    2000-03-15

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to the fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area and the southeastern coastal area. Rb-Sr and K-Ar data imply that the fault movement of the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault in the Yeongdeog area is around 70 Ma. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.

  17. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of "not capable" reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coastal Fault Province and the related applicability of the "not capable" conclusion.

  18. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome these limitations, dynamic fault tree (DFT) analysis is suggested in the literature, together with different approaches for solving DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
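
    The gate-level Monte Carlo treatment is brief to sketch. In the example below (an illustration of standard fault tree simulation, not the paper's water supply models), component lifetimes are sampled, an AND-gate fails at the latest of its inputs, an OR-gate at the earliest, and a cold spare crudely stands in for the compensation behaviour the added gates are designed to capture; all rates and the tree structure are invented.

      # P(top event within mission time) for TOP = OR(AND(A, B), C-with-spare).
      import numpy as np

      rng = np.random.default_rng(4)
      n, t_mission = 1_000_000, 1000.0           # trials, hours
      lam_a, lam_b, lam_c, lam_s = 1e-3, 2e-3, 5e-4, 5e-4   # failure rates (1/h)

      t_a = rng.exponential(1 / lam_a, n)
      t_b = rng.exponential(1 / lam_b, n)
      t_c = rng.exponential(1 / lam_c, n)
      t_s = rng.exponential(1 / lam_s, n)

      t_and = np.maximum(t_a, t_b)       # AND gate: fails when both inputs failed
      t_c_sp = t_c + t_s                 # cold spare takes over when C fails
      t_top = np.minimum(t_and, t_c_sp)  # OR gate: fails at first branch failure
      p_top = (t_top <= t_mission).mean()
      se = np.sqrt(p_top * (1 - p_top) / n)      # binomial standard error
      print(f"P(top within {t_mission:.0f} h) = {p_top:.4f} +/- {se:.4f}")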

  19. Subaru FATS (fault tracking system)

    Science.gov (United States)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  20. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple, with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, were one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  1. Architecture of thrust faults with alongstrike variations in fault-plane dip: anatomy of the Lusatian Fault, Bohemian Massif

    Czech Academy of Sciences Publication Activity Database

    Coubal, Miroslav; Adamovič, Jiří; Málek, Jiří; Prouza, V.

    2014-01-01

    Vol. 59, No. 3 (2014), pp. 183-208. ISSN 1802-6222. Institutional support: RVO:67985831 ; RVO:67985891. Keywords: fault architecture * fault plane geometry * drag structures * thrust fault * sandstone * Lusatian Fault. Subject RIV: DB - Geology ; Mineralogy. Impact factor: 1.405, year: 2014

  2. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it has been criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, a DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFTs has attracted rising attention, because it can model the authentic behaviors of systems and avoids the limitations of the analytical method. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rules for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To improve on this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov chain Monte Carlo (MCMC) technique, the SS method is able to explore the failure region efficiently. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
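
    The mechanics of subset simulation are worth sketching: the rare event is reached through a sequence of nested intermediate failure regions, and each conditional level is populated by short Markov chains started from the survivors of the previous one. The compact example below (ours, with an artificial limit-state function and the typical tuning value p0 = 0.1, not the paper's test cases) estimates a tail probability that plain MCS would need millions of samples to resolve.

      # Subset simulation with a component-wise (modified) Metropolis sampler,
      # estimating P(sum(X) > b*) for X ~ N(0, I_d); the exact answer is known.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      d, b_star, p0, n = 10, 12.0, 0.1, 2000

      def g(x):
          return x.sum(axis=-1)

      x = rng.normal(size=(n, d))
      y = g(x)
      prob, level = 1.0, 0

      while True:
          b = np.quantile(y, 1.0 - p0)           # intermediate threshold
          if b >= b_star or level > 20:
              prob *= (y > b_star).mean() if b >= b_star else 0.0
              break
          prob *= p0                             # P(next level | current level)
          chains = []
          for seed in x[y > b]:                  # survivors seed the chains
              cur = seed.copy()
              chains.append(cur.copy())
              for _ in range(int(1 / p0) - 1):
                  cand = cur + rng.normal(size=d)
                  ratio = np.exp(-0.5 * (cand ** 2 - cur ** 2))
                  accept = rng.random(d) < np.minimum(1.0, ratio)
                  prop = np.where(accept, cand, cur)
                  if g(prop) > b:                # stay inside current subset
                      cur = prop
                  chains.append(cur.copy())
          x = np.array(chains)
          y = g(x)
          level += 1

      print(f"subset simulation: {prob:.2e}  exact: {norm.sf(b_star / np.sqrt(d)):.2e}")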

  3. Quantile arithmetic methodology for uncertainty propagation in fault trees

    International Nuclear Information System (INIS)

    Abdelhai, M.; Ragheb, M.

    1986-01-01

    A methodology based on quantile arithmetic, the probabilistic analog to interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdf's) are represented by equivalent discrete distributions by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology as compared with the widely used Monte Carlo method was demonstrated for the cases of summation of M normal variables through the efficiency ratio defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event
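
    The idea is easy to illustrate: each input pdf is replaced by N equally probable quantile points, the binary operation is applied to all N x N combinations, and the result is condensed back to N quantiles. The sketch below is our toy reconstruction (the condensation rule and the value of N are assumptions, and independence of the inputs is taken for granted); it adds two normal variables and checks the result against the known exact sum.

      # Quantile arithmetic for Z = X + Y with N-point discrete representations.
      import numpy as np

      def to_quantiles(samples, n):
          # n equally probable support points (midpoints of n equal slices)
          return np.quantile(samples, (np.arange(n) + 0.5) / n)

      def q_add(qx, qy, n):
          z = (qx[:, None] + qy[None, :]).ravel()   # n*n equally likely outcomes
          return to_quantiles(z, n)                 # condense back to n points

      rng = np.random.default_rng(6)
      n = 25
      qx = to_quantiles(rng.normal(2.0, 1.0, 200_000), n)
      qy = to_quantiles(rng.normal(3.0, 2.0, 200_000), n)
      qz = q_add(qx, qy, n)

      # The exact sum is N(5, sqrt(5)); compare location and spread.
      print(f"median ~ {np.median(qz):.3f}  (exact 5.000)")
      print(f"std    ~ {qz.std():.3f}  (exact {np.sqrt(5):.3f})")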

  4. Ring faults and ring dikes around the Orientale basin on the Moon.

    Science.gov (United States)

    Andrews-Hanna, Jeffrey C; Head, James W; Johnson, Brandon; Keane, James T; Kiefer, Walter S; McGovern, Patrick J; Neumann, Gregory A; Wieczorek, Mark A; Zuber, Maria T

    2018-08-01

    The Orientale basin is the youngest and best-preserved multiring impact basin on the Moon, having experienced only modest modification by subsequent impacts and volcanism. Orientale is often treated as the type example of a multiring basin, with three prominent rings outside of the inner depression: the Inner Rook Montes, the Outer Rook Montes, and the Cordillera. Here we use gravity data from NASA's Gravity Recovery and Interior Laboratory (GRAIL) mission to reveal the subsurface structure of Orientale and its ring system. Gradients of the gravity data reveal a continuous ring dike intruded into the Outer Rook along the plane of the fault associated with the ring scarp. The volume of this ring dike is ~18 times greater than the volume of all extrusive mare deposits associated with the basin. The gravity gradient signature of the Cordillera ring indicates an offset along the fault across a shallow density interface, interpreted to be the base of the low-density ejecta blanket. Both gravity gradients and crustal thickness models indicate that the edge of the central cavity is shifted inward relative to the equivalent Inner Rook ring at the surface. Models of the deep basin structure show inflections along the crust-mantle interface at both the Outer Rook and Cordillera rings, indicating that the basin ring faults extend from the surface to at least the base of the crust. Fault dips range from 13-22° for the Cordillera fault in the northeastern quadrant, to 90° for the Outer Rook in the northwestern quadrant. The fault dips for both outer rings are lowest in the northeast, possibly due to the effects of either the direction of projectile motion or regional gradients in pre-impact crustal thickness. Similar ring dikes and ring faults are observed around the majority of lunar basins.

  5. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang L Y; Wang J G; HU, J

    2017-01-01

    For the fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy is proposed, based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine, to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes based on variational mode decomposition. Then, the fault features of each mode are formed into a high-dimensional feature vector set based on permutation entropy. Second, the ELM output function is expressed by the inner product of the Gaussian kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
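
    Permutation entropy, the feature used above, is simple to implement: each short window of the signal is reduced to the ordinal pattern of its values, and the Shannon entropy of the pattern distribution is computed. The sketch below is a generic Bandt-Pompe implementation with typical order/delay choices, not necessarily the paper's settings.

      # Normalized permutation entropy of a 1-D signal.
      import numpy as np
      from math import factorial

      def permutation_entropy(x, order=3, delay=1):
          x = np.asarray(x)
          n = len(x) - (order - 1) * delay
          # Map each embedded window to its ordinal pattern.
          patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                               for i in range(n)])
          _, counts = np.unique(patterns, axis=0, return_counts=True)
          p = counts / counts.sum()
          return -(p * np.log2(p)).sum() / np.log2(factorial(order))

      # A noisy periodic signal is more ordered than white noise:
      rng = np.random.default_rng(7)
      t = np.arange(4096)
      tone = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
      print(f"PE(sine + noise) = {permutation_entropy(tone):.3f}")
      print(f"PE(white noise)  = {permutation_entropy(rng.normal(size=t.size)):.3f}")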

  6. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  7. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  8. 20 CFR 410.561b - Fault.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see § 410...

  9. Fault Detection for Diesel Engine Actuator

    DEFF Research Database (Denmark)

    Blanke, M.; Bøgh, S.A.; Jørgensen, R.B.

    1994-01-01

    Feedback control systems are vulnerable to faults in control loop sensors and actuators, because feedback actions may cause abrupt responses and process damage when faults occur.

  10. 22 CFR 17.3 - Fault.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the individual...

  11. Active fault diagnosis by temporary destabilization

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    An active fault diagnosis method for parametric or multiplicative faults is proposed. The method periodically adds a term to the controller that for a short period of time renders the system unstable if a fault has occurred, which facilitates rapid fault detection. An illustrative example is given....

  12. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  13. Differential Fault Analysis on CLEFIA

    Science.gov (United States)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

    CLEFIA is a new 128-bit block cipher proposed by SONY Corporation recently. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against the differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. By randomly inducing a one-byte fault in one round, four bytes of faults can be obtained simultaneously in the next round, which efficiently reduces the total number of fault inductions needed in the attack. After attacking the last several rounds' encryptions, the original secret key can be recovered based on some analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.

  14. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  15. Cell boundary fault detection system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  16. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the earth's crust occur over a large range of scales, from the microscale through mesoscopic to large basin-scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but rather forms a combination with continuous near-field deformation in the wall rock, a phenomenon that is generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as for prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. For an outcrop-scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that a listric shape can be excluded for this normal fault, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large scale normal fault

  17. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the programs SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  18. Qademah Fault Passive Data

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    OBJECTIVE: In this field trip we collect passive data to 1. Convert passive data to surface waves 2. Locate the Qademah fault using surface wave migration INTRODUCTION: In this field trip we collected passive data for several days. These data will be used to find the surface waves using interferometry and will then be compared to active-source seismic data collected at the same location. A total of 288 receivers were used. A 3D layout with 5 m inline intervals and 10 m crossline intervals was used, with 12 lines of 24 receivers each. You will need to download the file (rec_times.mat), which contains important information about 1. Field record no 2. Record day 3. Record month 4. Record hour 5. Record minute 6. Record second 7. Record length P.S. 1. All files are converted from the original format (SEG-2) to matlab format P.S. 2. Overlaps between records (10 to 1.5 sec.) are already removed from these files
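
    Since the files are converted to matlab format, they can be read from Python with scipy.io.loadmat; the sketch below is a guess at such a reader. The variable name 'rec_times' and the column order are assumptions inferred from the seven-field description above, not a documented interface.

      # Hypothetical reader for the record-time table described above.
      from scipy.io import loadmat

      data = loadmat("rec_times.mat")
      rec = data["rec_times"]            # assumed: one row per field record
      for row in rec[:5]:
          rec_no, day, month, hour, minute, sec, length = row[:7]
          print(f"record {int(rec_no):4d}: {int(day):02d}/{int(month):02d} "
                f"{int(hour):02d}:{int(minute):02d}:{float(sec):05.2f}, "
                f"length {float(length):.1f} s")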

  19. Exposing the faults

    International Nuclear Information System (INIS)

    Richardson, P.J.

    1989-01-01

    UK NIREX, the body with responsibility for finding an acceptable strategy for deposition of radioactive waste has given the impression throughout its recent public consultation that the problem of nuclear waste is one of public and political acceptability, rather than one of a technical nature. However the results of the consultation process show that it has no mandate from the British public to develop a single, national, deep repository for the burial of radioactive waste. There is considerable opposition to this method of managing radioactive waste and suspicion of the claims by NIREX concerning the supposed integrity and safety of this deep burial option. This report gives substance to those suspicions and details the significant areas of uncertainty in the concept of effective geological containment of hazardous radioactive elements, which remain dangerous for tens of thousands of years. Because the science of geology is essentially retrospective rather than predictive, NIREX's plans for a single, national, deep 'repository' depend heavily upon a wide range of assumptions about the geological and hydrogeological regimes in certain areas of the UK. This report demonstrates that these assumptions are based on a limited understanding of UK geology and on unvalidated and simplistic theoretical models of geological processes, the performance of which can never be directly tested over the long time-scales involved. NIREX's proposals offer no guarantees for the safe and effective containment of radioactivity. They are deeply flawed. This report exposes the faults. (author)

  20. Fault-tolerant rotary actuator

    Science.gov (United States)

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  1. Static Decoupling in fault detection

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    1998-01-01

    An algebraic approach is given for a design of a static residual weighting factor in connection with fault detection. A complete parameterization is given of the weighting factor which will minimize a given performance index.

  2. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models and for discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant control.

  3. The relation of catastrophic flooding of Mangala Valles, Mars, to faulting of Memnonia Fossae and Tharsis volcanism

    International Nuclear Information System (INIS)

    Tanaka, K.L.; Chapman, M.G.

    1990-01-01

    Detailed stratigraphic relations indicate two coeval periods of catastrophic flooding and Tharsis-centered faulting (producing Memnonia Fossae) in the Mangala Valles region of Mars. Major sequences of lava flows of the Tharsis Montes Formation and local, lobate plains flows were erupted during and between these channeling and faulting episodes. First, Late Hesperian channel development overlapped in time with the Tharsis-centered faulting that trends N 75°-90° E. Next, Late Hesperian/Early Amazonian flooding was coeval with faulting that trends N 55°-70° E. In some reaches, resistant lava flows filled the early channels, resulting in inverted channel topography after the later flooding swept through. Both floods likely originated from the same graben, which probably was activated during each episode of faulting. Faulting broke through groundwater barriers and tapped confined aquifers in higher regions west and east of the point of discharge. The minimum volume of water required to erode Mangala Valles (about 5 × 10^12 m^3) may have been released through two floods that drained a few percent pore volume from a relatively permeable aquifer. The peak discharges of the floods may have lasted from days to weeks. The perched water discharged from the aquifer may have been produced by hydrothermal groundwater circulation induced by Tharsis magmatism, tectonic uplift centered at Tharsis Montes, and compaction of saturated crater ejecta due to loading by lava flows

  4. Aeromagnetic anomalies over faulted strata

    Science.gov (United States)

    Grauch, V.J.S.; Hudson, Mark R.

    2011-01-01

    High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

  5. Passive fault current limiting device

    Science.gov (United States)

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage drop during a fault condition is across the coils wound on the common core in a preferred embodiment.

  6. Permeability - Fluid Pressure - Stress Relationship in Fault Zones in Shales

    Science.gov (United States)

    Henry, P.; Guglielmi, Y.; Morereau, A.; Seguy, S.; Castilla, R.; Nussbaum, C.; Dick, P.; Durand, J.; Jaeggi, D.; Donze, F. V.; Tsopela, A.

    2016-12-01

    Fault permeability is known to depend strongly on stress and fluid pressure. Exponential relationships between permeability and effective pressure have been proposed to approximate fault response to fluid pressure variations. However, the applicability of these largely empirical laws remains questionable, as they do not take into account shear stress and shear strain. A series of experiments using mHPP probes has been performed within fault zones in very low permeability (less than 10^-19 m^2) Lower Jurassic shale formations at the Tournemire (France) and Mont Terri (Switzerland) underground laboratories. These probes allow monitoring of the 3D displacement between two points anchored to the borehole walls at the same time as fluid pressure and flow rate. In addition, in the Mont Terri experiment, passive pressure sensors were installed in observation boreholes. Fracture transmissivity was estimated from single-borehole pulse tests, constant-pressure injection tests, and cross-hole tests. It is found that the transmissivity-pressure dependency can be approximated with an exponential law, but only above a pressure threshold that we call the Fracture Opening Threshold (F.O.P.). The displacement data show a change of the mechanical response across the F.O.P. The displacement below the F.O.P. is dominated by the borehole response, which is mostly elastic. Above the F.O.P., the poro-elasto-plastic response of the fractures dominates. Stress determinations based on previous work and on the analysis of slip data from the mHPP probe indicate that the F.O.P. is lower than the least principal stress. Below the F.O.P., uncemented fractures retain some permeability, as pulse tests performed at low pressures yield diffusivities in the range 10^-2 to 10^-5 m^2/s. Overall, this dual behavior appears consistent with the results of CORK experiments performed in accretionary wedge decollements. Results suggest (1) that fault zones become highly permeable when approaching the critical Coulomb threshold (2
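
    As a sketch of the dual behavior described above, the toy model below holds transmissivity constant below the Fracture Opening Threshold and lets it grow exponentially with fluid pressure above it. Parameter values are illustrative assumptions, not fits to the Tournemire or Mont Terri data.

    import numpy as np

    # Toy transmissivity-pressure law with a Fracture Opening Threshold (FOP).
    # All parameter values are illustrative, not fitted to the field data.
    T0 = 1e-10       # baseline transmissivity below the FOP, m^2/s
    p_fop = 2.0e6    # fracture opening threshold, Pa
    alpha = 1.5e-6   # exponential pressure sensitivity above the FOP, 1/Pa

    def transmissivity(p):
        """Constant below the FOP, exponential in (p - p_fop) above it."""
        p = np.asarray(p, dtype=float)
        return np.where(p <= p_fop, T0, T0 * np.exp(alpha * (p - p_fop)))

    for p in (1.0e6, 2.0e6, 3.0e6, 4.0e6):
        print(f"p = {p/1e6:.1f} MPa -> T = {transmissivity(p):.2e} m^2/s")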

  7. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    Full Text Available Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones which were collected by long-term detailed geodetic surveys using field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. (5 to 7)·10^-5 per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account in defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. Instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones show that deformations 'paradoxically' deviate from the inherited movements of past geological periods. In terms of recent geodynamics, the 'paradoxes' of high and low strain velocities are related to a reliable empirical fact: extremely high local deformation velocities in fault zones (about 10^-5 per year and above) occur against a background of slow regional deformations whose velocities are two to three orders of magnitude lower. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the earth tidal deformations per year. A 'fault

  8. Monte Carlo tree search strategies

    OpenAIRE

    VODOPIVEC, TOM

    2018-01-01

    After the breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...

  9. Field characterization of elastic properties across a fault zone reactivated by fluid injection

    Science.gov (United States)

    Jeanne, Pierre; Guglielmi, Yves; Rutqvist, Jonny; Nussbaum, Christophe; Birkholzer, Jens

    2017-08-01

    We studied the elastic properties of a fault zone intersecting the Opalinus Clay formation at 300 m depth in the Mont Terri Underground Research Laboratory (Switzerland). Four controlled water injection experiments were performed in borehole straddle intervals set at successive locations across the fault zone. A three-component displacement sensor, which allowed capturing the borehole wall movements during injection, was used to estimate the elastic properties of representative locations across the fault zone, from the host rock to the damage zone to the fault core. Young's moduli were estimated by both an analytical approach and numerical finite difference modeling. Results show a decrease in Young's modulus from the host rock to the damage zone by a factor of 5 and from the damage zone to the fault core by a factor of 2. In the host rock, our results are in reasonable agreement with laboratory data showing a strong elastic anisotropy characterized by the direction of the plane of isotropy parallel to the laminar structure of the shale formation. In the fault zone, strong rotations of the direction of anisotropy can be observed. The plane of isotropy can be oriented parallel to bedding (when few discontinuities are present), parallel to the direction of the main fracture family intersecting the zone, or parallel or perpendicular to the fractures critically oriented for shear reactivation (when repeated past rupture along this plane has created a zone).

  10. Fault Management Guiding Principles

    Science.gov (United States)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type (deep space or low Earth orbit, robotic or human spaceflight), Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the immaturity of FM as an engineering discipline, which lags behind the maturity of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While it currently concentrates primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  11. Data-driven fault mechanics: Inferring fault hydro-mechanical properties from in situ observations of injection-induced aseismic slip

    Science.gov (United States)

    Bhattacharya, P.; Viesca, R. C.

    2017-12-01

    In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) the pore-pressure increase activates slip on a pre-stressed planar fault due to reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent
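
    As a sketch of this kind of inference (not the authors' actual code), the example below fits a radial pressure-diffusion model to synthetic injection data with a random-walk Metropolis sampler. The line-source well function, the Gaussian likelihood, and all parameter and noise values are simplifying assumptions.

    import numpy as np
    from scipy.special import exp1  # exponential integral E1

    rng = np.random.default_rng(0)

    # Line-source (Theis-type) pressure head change at radius r and time t for
    # constant injection rate Q into a layer of transmissivity T, storativity S.
    def dp(t, T, S, Q=1e-4, r=10.0):
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)   # head change, m

    # Synthetic "observed" pressures from assumed true parameters plus noise.
    t_obs = np.linspace(600.0, 6000.0, 30)                       # s
    p_obs = dp(t_obs, T=1e-6, S=1e-5) + rng.normal(0.0, 0.5, 30)

    def log_posterior(theta):
        logT, logS = theta
        if not (-9 < logT < -3 and -8 < logS < -2):   # flat priors in log space
            return -np.inf
        resid = p_obs - dp(t_obs, 10.0**logT, 10.0**logS)
        return -0.5 * np.sum((resid / 0.5) ** 2)      # Gaussian likelihood

    # Random-walk Metropolis over (log10 T, log10 S).
    theta = np.array([-6.5, -4.5])
    lp = log_posterior(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.05, 2)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    chain = np.array(chain[5000:])                    # drop burn-in
    print("posterior medians (log10 T, log10 S):", np.median(chain, axis=0))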

  12. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
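
    A quick check of the quoted ratio. This is a back-of-the-envelope sketch with assumed crustal density and shear modulus (typical values, not taken from the paper):

    # Lithostatic confining pressure at 10 km depth versus crustal shear modulus.
    # Density and shear modulus are typical crustal values assumed for the check.
    rho = 2700.0       # kg/m^3
    g = 9.81           # m/s^2
    depth = 10_000.0   # m
    mu = 3.0e10        # Pa, typical crustal shear modulus

    P = rho * g * depth                            # ~2.6e8 Pa
    print(f"P = {P:.2e} Pa, P/mu = {P/mu:.3f}")    # ~0.009, consistent with ~0.01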

  13. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
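
    The current-limiting behavior can be illustrated with a single-diode module model. This is a sketch with assumed module parameters and an assumed fuse rating, not the thesis benchmark system: even a bolted short draws little more than the short-circuit current, which can sit well below the fuse rating.

    import numpy as np
    from scipy.optimize import brentq

    # Single-diode model of one PV module; all parameters are illustrative.
    Iph, I0 = 8.0, 1e-9          # photocurrent (A), diode saturation current (A)
    Rs, Rsh = 0.3, 300.0         # series / shunt resistance (ohm)
    n, Ns, Vt = 1.3, 60, 0.0257  # ideality, cells in series, thermal voltage (V)

    def module_current(V):
        """Solve the implicit single-diode equation for current at voltage V."""
        f = lambda I: Iph - I0 * (np.exp((V + I * Rs) / (n * Ns * Vt)) - 1) \
                      - (V + I * Rs) / Rsh - I
        return brentq(f, -2.0, Iph + 2.0)

    I_fault = module_current(0.0)        # bolted (short-circuit) fault current
    fuse_rating = 15.0                   # A, an assumed typical series-fuse rating
    print(f"fault current ~ {I_fault:.2f} A vs fuse rating {fuse_rating:.0f} A "
          f"-> fuse {'blows' if I_fault > fuse_rating else 'never trips'}")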

  14. Radial basis function neural network in fault detection of automotive ...

    African Journals Online (AJOL)

    Radial basis function neural network in fault detection of automotive engines. ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults ... Keywords: Automotive engine, independent RBFNN model, RBF neural network, fault detection
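
    A minimal residual-based sketch of the idea, using a generic RBF regression on synthetic data; the MVEM itself, its signals, and its thresholds are not reproduced here. An RBF network is trained on healthy data to predict a sensor signal, and a fault is flagged when the prediction residual exceeds a threshold.

    import numpy as np

    rng = np.random.default_rng(1)

    # Healthy training data: a generic nonlinear sensor map y = f(x) + noise.
    x = rng.uniform(-1, 1, (200, 1))
    y = np.sin(3 * x[:, 0]) + 0.05 * rng.normal(size=200)

    # RBF network: Gaussian basis functions on fixed centers, linear readout
    # weights fitted by least squares.
    centers = np.linspace(-1, 1, 15).reshape(-1, 1)
    width = 0.25

    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width**2))

    w, *_ = np.linalg.lstsq(design(x), y, rcond=None)

    # Detection: residual between measured and predicted signal vs. threshold.
    threshold = 3 * 0.05
    x_test = np.array([[0.2], [0.2]])
    y_meas = np.array([np.sin(0.6), np.sin(0.6) + 0.4])  # healthy, faulty (bias)
    resid = np.abs(y_meas - design(x_test) @ w)
    print(["FAULT" if r > threshold else "ok" for r in resid])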

  15. Active faulting in the central Betic Cordillera (Spain): Palaeoseismological constraint of the surface-rupturing history of the Baza Fault (Central Betic Cordillera, Iberian Peninsula)

    Science.gov (United States)

    Castro, J.; Martin-Rojas, I.; Medina-Cascales, I.; García-Tortosa, F. J.; Alfaro, P.; Insua-Arévalo, J. M.

    2018-06-01

    This paper on the Baza Fault provides the first palaeoseismic data from trenches in the central sector of the Betic Cordillera (S Spain), one of the most tectonically active areas of the Iberian Peninsula. With the palaeoseismological data we constructed time-stratigraphic OxCal models that yield probability density functions (PDFs) of individual palaeoseismic event timing. We analysed PDF overlap to quantitatively correlate the walls and site events into a single earthquake chronology. We assembled a surface-rupturing history of the Baza Fault for the last ca. 45,000 years. We postulated six alternative surface-rupturing histories including 8-9 fault-wide earthquakes. We calculated fault-wide earthquake recurrence intervals using a Monte Carlo approach; this analysis yielded a 4750-5150 yr recurrence interval. Finally, we compared our results with those obtained from empirical relationships. Our results will provide a basis for future analyses of other active normal faults in this region. Moreover, our results will be essential for improving earthquake-probability assessments in Spain, where palaeoseismic data are scarce.
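
    The recurrence-interval calculation can be sketched as follows, with synthetic event-timing PDFs; the actual OxCal posteriors and the 8-9-event chronologies are not reproduced. Sample an age for each palaeoearthquake from its PDF, keep stratigraphically ordered draws, and accumulate the distribution of mean inter-event times.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical event-age PDFs (mean, sigma in years BP) standing in for the
    # OxCal posteriors of a fault-wide earthquake chronology (oldest first).
    event_pdfs = [(44000, 1500), (38500, 1200), (33000, 1000), (27500, 900),
                  (22000, 800), (16500, 700), (11000, 600), (5500, 500)]

    intervals = []
    while len(intervals) < 10000:
        ages = np.array([rng.normal(m, s) for m, s in event_pdfs])
        if np.all(np.diff(ages) < 0):        # keep stratigraphically ordered draws
            elapsed = ages[0] - ages[-1]
            intervals.append(elapsed / (len(ages) - 1))
    lo, mid, hi = np.percentile(intervals, [5, 50, 95])
    print(f"recurrence interval: {mid:.0f} yr (5-95%: {lo:.0f}-{hi:.0f} yr)")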

  16. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
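
    A minimal sketch of the per-cycle rendezvous described above, using mpi4py with a toy stand-in for the transport physics: at the end of every cycle, each rank's fission production must be reduced across all ranks before the next cycle can start, both for the k_eff estimate and for population control, which serializes all ranks at that point.

    # Toy illustration of the per-cycle rendezvous in parallel Monte Carlo
    # criticality calculations (run with: mpiexec -n 4 python keff_cycles.py).
    # The "transport" here is a stand-in random number, not real physics.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rng = np.random.default_rng(comm.rank)

    n_local = 10000          # histories per rank per cycle
    for cycle in range(100):
        # Each rank tracks its share of histories independently...
        local_fission_neutrons = rng.poisson(1.0, n_local).sum()

        # ...then ALL ranks must synchronize: the global fission production
        # is needed both for k_eff and for population control next cycle.
        global_fission = comm.allreduce(local_fission_neutrons, op=MPI.SUM)
        k_cycle = global_fission / (n_local * comm.size)

        if comm.rank == 0 and cycle % 20 == 0:
            print(f"cycle {cycle:3d}  k_eff estimate = {k_cycle:.4f}")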

  17. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  18. Exact Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.; Reynolds, P.J.

    1985-03-01

    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs

  19. A data-driven multiplicative fault diagnosis approach for automation processes.

    Science.gov (United States)

    Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo

    2014-09-01

    This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of the multiplicative fault are extracted. To identify the root cause, the impact of the fault on each process variable is evaluated in the sense of its contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method, and Monte-Carlo simulation is performed to demonstrate its effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.
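
    A toy version of the multiplicative-fault idea on synthetic data; the paper's actual feature extraction and the Tennessee Eastman case are not reproduced. A multiplicative fault scales the noise of one process variable rather than shifting its mean, so comparing per-variable variance between reference and faulty data points to the root cause.

    import numpy as np

    rng = np.random.default_rng(7)
    n, p = 2000, 5

    # Reference (healthy) operation: p process variables with unit-variance noise.
    X_ref = rng.normal(size=(n, p))

    # Faulty operation: a multiplicative fault on variable 2 inflates its
    # variability without shifting its mean (unlike an additive fault).
    X_fault = rng.normal(size=(n, p))
    X_fault[:, 2] *= 1.8

    # Contribution index: ratio of per-variable variance, faulty vs reference.
    ratio = X_fault.var(axis=0) / X_ref.var(axis=0)
    print("variance ratios:", np.round(ratio, 2))
    print("suspected root cause: variable", int(np.argmax(ratio)))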

  20. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, and they bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  1. Monte Carlo simulations and benchmark studies at CERN's accelerator chain

    CERN Document Server

    AUTHOR|(CDS)2083190; Brugger, Markus

    2016-01-01

    Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures of electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and the radiation exposure of personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along CERN's accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are hereby presented. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.

  2. Application of fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, A.

    2007-11-30

    This report presents the results of a study commissioned by the Department for Business, Enterprise and Regulatory Reform (BERR; formerly the Department of Trade and Industry) into the application of fault current limiters in the UK. The study reviewed the current state of fault current limiter (FCL) technology and the regulatory position in relation to all types of current limiters. It identified significant research and development work with respect to medium-voltage FCLs and a move to high voltage. Appropriate FCL technologies being developed include: solid state breakers; superconducting FCLs (including superconducting transformers); magnetic FCLs; and active network controllers. Commercialisation of these products depends on successful field tests and experience, plus material development in the case of high-temperature superconducting FCL technologies. The report describes FCL techniques, the current state of FCL technologies, practical applications and the future outlook for FCL technologies, distribution fault level analysis, and an outline methodology for assessing the materiality of the fault level problem. A roadmap is presented that provides an 'action agenda' to advance the fault level issues associated with low carbon networks.

  3. Fault trees for diagnosis of system fault conditions

    International Nuclear Information System (INIS)

    Lambert, H.E.; Yadigaroglu, G.

    1977-01-01

    Methods for generating repair checklists on the basis of fault tree logic and probabilistic importance are presented. A one-step-ahead optimization procedure, based on the concept of component criticality and minimizing the expected time to diagnose system failure, is outlined. Options available to the operator of a nuclear power plant when system fault conditions occur are addressed. A low-pressure emergency core cooling injection system, a standby safeguard system of a pressurized water reactor power plant, is chosen as an example illustrating the methods presented
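
    A minimal sketch of a criticality-ordered repair checklist in the spirit of the above, using hypothetical cut sets and failure probabilities rather than the paper's ECCS model: given minimal cut sets and component failure probabilities, components are ranked by the share of system failure probability carried by cut sets containing them (a Fussell-Vesely-style importance, not necessarily the paper's exact one-step-ahead procedure).

    from itertools import chain

    # Hypothetical minimal cut sets (system fails if every component in some
    # cut set has failed) and component failure probabilities.
    cut_sets = [{"pump_A", "pump_B"}, {"valve_1"}, {"pump_A", "valve_2"}]
    p_fail = {"pump_A": 0.05, "pump_B": 0.10, "valve_1": 0.01, "valve_2": 0.20}

    def cut_set_prob(cs):
        prob = 1.0
        for c in cs:
            prob *= p_fail[c]
        return prob

    # Rare-event approximation: P(system failure) ~ sum of cut-set probabilities.
    p_sys = sum(cut_set_prob(cs) for cs in cut_sets)

    # Criticality of a component: share of system failure probability carried
    # by the cut sets containing it. Inspect components in decreasing order.
    components = set(chain.from_iterable(cut_sets))
    crit = {c: sum(cut_set_prob(cs) for cs in cut_sets if c in cs) / p_sys
            for c in components}
    checklist = sorted(crit, key=crit.get, reverse=True)
    print("inspect in this order:", checklist)
    print({c: round(crit[c], 3) for c in checklist})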

  4. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    Science.gov (United States)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high-resolution 1 m bathymetric imaging in combination with high-resolution seismic imaging. Through use of these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it exists approximately 1 km below the seafloor. These have typical PFS attributes with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within the typical PFS structure but are

  5. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
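
    In the cook-book spirit, here is a minimal Monte Carlo treatment of the simplest transport problem (a sketch with made-up cross sections, not taken from the handbook): mono-energetic particles in a 1-D slab, sampling free-flight lengths from the total cross section and absorption or scattering at each collision.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up material data: total and absorption macroscopic cross sections.
    sigma_t, sigma_a = 1.0, 0.3     # 1/cm
    slab_thickness = 5.0            # cm
    n_histories = 100_000

    transmitted = absorbed = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                       # start at left face, heading right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # sample flight length
            if x < 0.0:                        # leaked back out the left face
                break
            if x > slab_thickness:             # transmitted through the slab
                transmitted += 1
                break
            if rng.random() < sigma_a / sigma_t:       # collision: absorbed?
                absorbed += 1
                break
            mu = rng.uniform(-1.0, 1.0)        # isotropic scatter (1-D model)

    print(f"transmission ~ {transmitted / n_histories:.4f}, "
          f"absorption ~ {absorbed / n_histories:.4f}")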

  6. Fault-tolerant architecture: Evaluation methodology

    International Nuclear Information System (INIS)

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems
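
    A toy illustration of the triple-modular-redundant idea evaluated above (generic majority voting, not one of the report's specific architectures): three redundant channels feed a voter, and a single faulty channel is out-voted while total disagreement is at least detected.

    from collections import Counter

    def tmr_vote(a, b, c):
        """Majority vote over three redundant channel outputs.
        Masks any single channel fault; disagreement of all three is detected."""
        counts = Counter([a, b, c])
        value, votes = counts.most_common(1)[0]
        if votes == 1:
            raise RuntimeError("all three channels disagree: unmaskable fault")
        return value

    print(tmr_vote(42, 42, 42))   # healthy channels: 42
    print(tmr_vote(42, 7, 42))    # one faulty channel masked: 42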

  7. Fault Isolation for Shipboard Decision Support

    DEFF Research Database (Denmark)

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2010-01-01

    Fault detection and fault isolation for in-service decision support systems for marine surface vehicles will be presented in this paper. The stochastic wave elevation and the associated ship responses are modeled in the frequency domain. The paper takes as an example fault isolation of a containe... ... to the quality of decisions given to navigators.

  8. An architecture for fault tolerant controllers

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2005-01-01

    A general architecture for fault tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault tolerant system. The approach suggested can be applied ... degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modeling point of view. The method is illustrated on a servo example including ...

  9. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems. Copyright

  10. Integrated fault tree development environment

    International Nuclear Information System (INIS)

    Dixon, B.W.

    1986-01-01

    Probabilistic Risk Assessment (PRA) techniques are utilized in the nuclear industry to perform safety analyses of complex defense-in-depth systems. A major effort in PRA development is fault tree construction. The Integrated Fault Tree Environment (IFTREE) is an interactive, graphics-based tool for fault tree design. IFTREE provides integrated building, editing, and analysis features on a personal workstation. The design philosophy of IFTREE is presented, and the interface is described. IFTREE utilizes a unique rule-based solution algorithm founded in artificial intelligence (AI) techniques. The impact of the AI approach on the program design is stressed. IFTREE has been developed to handle the design and maintenance of full-size living PRAs and is currently in use

  11. Update: San Andreas Fault experiment

    Science.gov (United States)

    Christodoulidis, D. C.; Smith, D. E.

    1984-01-01

    Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas Fault System. The San Andreas Fault Experiment (SAFE) has progressed through upgrades made to laser system hardware and an improvement in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamics Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected onto this 896 km measured line over the past eleven years are summarized and intercompared.

  12. Stress and Burnout in Health-Care Workers after the 2009 L’Aquila Earthquake: A Cross-Sectional Observational Study

    Science.gov (United States)

    Mattei, Antonella; Fiasca, Fabiana; Mazzei, Mariachiara; Necozione, Stefano; Bianchini, Valeria

    2017-01-01

    Burnout is a work-related mental health impairment, which is now recognized as a real problem in the context of the helping professions due to its adverse health outcomes on efficiency. To our knowledge, the literature on the postdisaster scenario in Italy is limited by a focus on mental health professionals rather than other health-care workers. Our cross-sectional study aims to evaluate the prevalence of burnout and psychopathological distress in different categories of health-care workers, i.e., physicians, nurses, and health-care assistants, working in different departments of L’Aquila St. Salvatore General Hospital 6 years after the 2009 earthquake in order to prevent and reduce work-related burnout. With a two-stage cluster sampling, a total of 8 departments out of a total of 28 departments were selected and the total sample included 300 health-care workers. All the participants completed the following self-reporting questionnaires: a sociodemographic data form, a Maslach Burnout Inventory and a General Health Questionnaire 12 Items (GHQ-12). Statistically significant differences emerged between the total scores of the GHQ-12: post hoc analysis showed that the total average scores of the GHQ-12 were significantly higher in doctors than in health-care assistants. A high prevalence of burnout among doctors (25.97%) emerged. Using multivariate analysis, we identified a hostile relationship with colleagues, direct exposure to the L’Aquila earthquake and moderate to high levels of distress as being burnout predictors. Investigating the prevalence of burnout and distress in health-care staff in a postdisaster setting and identifying predictors of burnout development such as stress levels, time-management skills and work-life balance will contribute to the development of preventative strategies and better organization at work with a view to improving public health efficacy and reducing public health costs, given that these workers live in the disaster

  13. Stress and Burnout in Health-Care Workers after the 2009 L'Aquila Earthquake: A Cross-Sectional Observational Study.

    Science.gov (United States)

    Mattei, Antonella; Fiasca, Fabiana; Mazzei, Mariachiara; Necozione, Stefano; Bianchini, Valeria

    2017-01-01

    Burnout is a work-related mental health impairment, which is now recognized as a real problem in the context of the helping professions due to its adverse health outcomes on efficiency. To our knowledge, the literature on the postdisaster scenario in Italy is limited by a focus on mental health professionals rather than other health-care workers. Our cross-sectional study aims to evaluate the prevalence of burnout and psychopathological distress in different categories of health-care workers, i.e., physicians, nurses, and health-care assistants, working in different departments of L'Aquila St. Salvatore General Hospital 6 years after the 2009 earthquake in order to prevent and reduce work-related burnout. With a two-stage cluster sampling, a total of 8 departments out of a total of 28 departments were selected and the total sample included 300 health-care workers. All the participants completed the following self-reporting questionnaires: a sociodemographic data form, a Maslach Burnout Inventory and a General Health Questionnaire 12 Items (GHQ-12). Statistically significant differences emerged between the total scores of the GHQ-12: post hoc analysis showed that the total average scores of the GHQ-12 were significantly higher in doctors than in health-care assistants. A high prevalence of burnout among doctors (25.97%) emerged. Using multivariate analysis, we identified a hostile relationship with colleagues, direct exposure to the L'Aquila earthquake and moderate to high levels of distress as being burnout predictors. Investigating the prevalence of burnout and distress in health-care staff in a postdisaster setting and identifying predictors of burnout development such as stress levels, time-management skills and work-life balance will contribute to the development of preventative strategies and better organization at work with a view to improving public health efficacy and reducing public health costs, given that these workers live in the disaster

  14. Stress and Burnout in Health-Care Workers after the 2009 L’Aquila Earthquake: A Cross-Sectional Observational Study

    Directory of Open Access Journals (Sweden)

    Antonella Mattei

    2017-06-01

    Full Text Available Burnout is a work-related mental health impairment, which is now recognized as a real problem in the context of the helping professions due to its adverse health outcomes on efficiency. To our knowledge, the literature on the postdisaster scenario in Italy is limited by a focus on mental health professionals rather than other health-care workers. Our cross-sectional study aims to evaluate the prevalence of burnout and psychopathological distress in different categories of health-care workers, i.e., physicians, nurses, and health-care assistants, working in different departments of L’Aquila St. Salvatore General Hospital 6 years after the 2009 earthquake in order to prevent and reduce work-related burnout. With a two-stage cluster sampling, a total of 8 departments out of a total of 28 departments were selected and the total sample included 300 health-care workers. All the participants completed the following self-reporting questionnaires: a sociodemographic data form, a Maslach Burnout Inventory and a General Health Questionnaire 12 Items (GHQ-12. Statistically significant differences emerged between the total scores of the GHQ-12: post hoc analysis showed that the total average scores of the GHQ-12 were significantly higher in doctors than in health-care assistants. A high prevalence of burnout among doctors (25.97% emerged. Using multivariate analysis, we identified a hostile relationship with colleagues, direct exposure to the L’Aquila earthquake and moderate to high levels of distress as being burnout predictors. Investigating the prevalence of burnout and distress in health-care staff in a postdisaster setting and identifying predictors of burnout development such as stress levels, time-management skills and work-life balance will contribute to the development of preventative strategies and better organization at work with a view to improving public health efficacy and reducing public health costs, given that these workers live in the

  15. Developing Engineered Fuel (Briquettes) Using Fly Ash from the Aquila Coal-Fired Power Plant in Canon City and Locally Available Biomass Waste

    Energy Technology Data Exchange (ETDEWEB)

    H. Carrasco; H. Sarper

    2006-06-30

    The objective of this research is to explore the feasibility of producing engineered fuels from a combination of renewable and non-renewable energy sources. The components are fly ash (containing coal fines) and locally available biomass waste. The constraints were such that no other binder additives were to be added. Listed below are the main accomplishments of the project: (1) Determination of the carbon content of the fly ash sample from the Aquila plant; it was found to be around 43%. (2) Experiments were carried out using a model which simulates the press process of a wood pellet machine, i.e. a bench press machine with a closed chamber, to find the ideal ratio of wood and fly ash to be mixed to obtain the desired briquette. The ideal ratio was found to be 60% wood and 40% fly ash. (3) The moisture content required to produce the briquettes was found to be anything below 5.8%. (4) The most suitable pressure required to extract the lignin from the wood and cause the binding of the mixture was determined to be 3000 psi. At this pressure, the briquettes withstood an average of 150 psi on their lateral sides. (5) An energy content analysis was performed and the BTU content was determined to be approximately 8912 BTU/lb. (6) An environmental analysis was carried out and no abnormalities were noted. (7) Industrial visits were made to pellet manufacturing plants to investigate the most suitable manufacturing process for the briquettes. (8) A simulation model of the extrusion process was developed to explore the possibility of using a cattle feed plant operating on an extrusion process to produce briquettes. (9) An attempt to produce 2 tons of briquettes was not successful. The research team conducted a trial production run at a feed mill in La Junta, CO to produce two (2) tons of briquettes using the extrusion process in place. The goal was to, immediately after producing the briquettes, send them through Aquila's current system to test the ability of the briquettes to flow

  16. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    Science.gov (United States)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  17. Geomechanical behaviour of Opalinus Clay at multiple scales: results from Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Amann, F.; Wild, K.M.; Loew, S. [Institute of Geology, Engineering Geology, Swiss Federal Institute of Technology, Zurich (Switzerland); Yong, S. [Knight Piesold Ltd, Vancouver (Canada); Thoeny, R. [Grundwasserschutz und Entsorgung, AF-Consult Switzerland AG, Baden (Switzerland); Frank, E. [Sektion Geologie (GEOL), Eidgenössisches Nuklear-Sicherheitsinspektorat (ENSI), Brugg (Switzerland)

    2017-04-15

    This paper summarizes our research projects conducted between 2003 and 2015 on the mechanical behaviour of Opalinus Clay at Mont Terri. The research summarized covers a series of laboratory and field tests that address the brittle failure behaviour of Opalinus Clay, its undrained and effective strength, the dependency of petro-physical and mechanical properties on total suction, hydro-mechanically coupled phenomena, and the development of a damage zone around excavations. On the laboratory scale, even simple laboratory tests are difficult to interpret, and uncertainties remain regarding the representativeness of the results. We show that suction may develop rapidly after core extraction and substantially modifies the strength, stiffness, and petro-physical properties of Opalinus Clay. Consolidated undrained tests performed on fully saturated specimens revealed a relatively small true cohesion and confirmed the strongly hydro-mechanically coupled behaviour of this material. Strong hydro-mechanically coupled processes may explain the stability of cores and tunnel excavations in the short term. Pore-pressure effects may cause effective stress states that favour stability in the short term but may cause longer-term deformations and damage as the pore pressure dissipates. In-situ observations show that macroscopic fracturing is strongly influenced by bedding planes and fault planes. In tunnel sections where opening or shearing along bedding planes or fault planes is kinematically free, the induced fracture type is strongly dependent on the fault plane frequency and orientation. A transition from extensional macroscopic failure to shearing can be observed with increasing fault plane frequency. In zones around the excavation where bedding plane shearing/shearing along tectonic fault planes is kinematically restrained, primary extensional-type fractures develop. In addition, heterogeneities such as single tectonic fault planes or fault zones

  18. Fault-tolerant system for catastrophic faults in AMR sensors

    NARCIS (Netherlands)

    Zambrano Constantini, A.C.; Kerkhoff, Hans G.

    Anisotropic magnetoresistance (AMR) angle sensors are widely used in automotive applications that are considered safety-critical. Therefore, dependability is an important requirement, and fault-tolerant strategies must be used to guarantee the correct operation of the sensors even in case of

  19. New Insights on the Uncertainties in Finite-Fault Earthquake Source Inversion

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-01

    Earthquake source inversion is a non-linear problem that leads to non-unique solutions. The aim of this dissertation is to understand the uncertainty and reliability in earthquake source inversion, as well as to quantify variability in earthquake rupture models. The source inversion is performed using Bayesian inference. This technique augments optimization approaches through its ability to image the entire solution space that is consistent with the data and prior information. In this study, the uncertainty related to the choice of source-time function and crustal structure is investigated. Three predefined analytical source-time functions are analyzed: the isosceles triangle, and the Yoffe function with acceleration times of 0.1 and 0.3 s. The use of the isosceles triangle as source-time function is found to bias the finite-fault source inversion results: it accelerates the rupture to propagate faster compared to the Yoffe function. Moreover, it generates an artificial linear correlation between parameters that does not exist for the Yoffe source-time functions. The effect of inadequate knowledge of Earth's crustal structure on earthquake rupture models is subsequently investigated. The results show that one-dimensional structure variability leads to changes in parameter resolution, with a broadening of the posterior PDFs and shifts in the peak location. These changes in the PDFs of kinematic parameters are associated with the blurring effect of using an incorrect Earth structure. As an application to a real earthquake, finite-fault source models for the 2009 L'Aquila earthquake are examined using one- and three-dimensional crustal structures. One-dimensional structure is found to degrade the data fitting. However, there is no significant effect on the rupture parameters aside from differences in the spatial slip extension. Stable features are maintained for both

  20. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
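
    A toy version of the combination described above, with one isotope, one energy group, and made-up numbers: a crude Monte Carlo estimate of the scalar flux is plugged into the analytic solution of the depletion equation dN/dt = -σφN over each time step, so the density update is exact given the flux estimate and can never go negative.

    import numpy as np

    rng = np.random.default_rng(3)

    # Made-up one-group data for a single absorber isotope.
    sigma_a = 1.0e-24        # microscopic absorption cross section, cm^2
    N = 1.0e22               # initial number density, 1/cm^3
    dt = 86_400.0            # depletion time step, s

    for step in range(5):
        # Stand-in "Monte Carlo" flux estimate: mean of noisy history tallies
        # (a real code would tally track lengths per unit volume).
        tallies = rng.normal(1.0e14, 2.0e13, size=1000)   # n/cm^2/s
        phi = tallies.mean()

        # Analytic solution of dN/dt = -sigma_a * phi * N over the step:
        # exact given phi, and guaranteed non-negative.
        N *= np.exp(-sigma_a * phi * dt)
        print(f"step {step}: phi ~ {phi:.3e}, N = {N:.4e}")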

  1. Monte Carlo Methods in ICF

    Science.gov (United States)

    Zimmerman, George B.

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  2. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics

  3. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, George B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials

  4. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  5. A contribution Monte Carlo method

    International Nuclear Information System (INIS)

    Aboughantous, C.H.

    1994-01-01

    A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in the direction cosine and azimuthal angle variables as well as in the position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method, with a very small standard deviation, with as few as 1,000 Contribution particles in both analog and nonabsorption biasing modes, and with only a few minutes of CPU time

  6. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  7. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  8. Elements of Monte Carlo techniques

    International Nuclear Information System (INIS)

    Nagarajan, P.S.

    2000-01-01

    The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, there is widespread use of the method for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, the extension to multidimensional integration, etc.
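
    As a simple instance of the multidimensional-integration use mentioned above, here is a sketch estimating the volume of the unit ball in 5 dimensions by uniform sampling (the dimension and sample count are arbitrary choices):

    import math
    import numpy as np

    rng = np.random.default_rng(0)

    # Monte Carlo estimate of the volume of the unit ball in d dimensions:
    # sample points uniformly in the cube [-1, 1]^d and count the hits.
    d, n = 5, 1_000_000
    pts = rng.uniform(-1.0, 1.0, size=(n, d))
    hit_fraction = (np.sum(pts**2, axis=1) <= 1.0).mean()
    estimate = hit_fraction * 2.0**d          # cube volume times hit fraction

    exact = math.pi**(d / 2) / math.gamma(d / 2 + 1)
    print(f"MC estimate: {estimate:.4f}   exact: {exact:.4f}")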

  9. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) using a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
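
    The level-wise structure can be made concrete with a short sketch: a uniform-time-step multilevel Monte Carlo estimator in the spirit of Giles (2008) applied to geometric Brownian motion. The adaptive, path-dependent time stepping that is this paper's actual contribution is not reproduced, and all parameters below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      a, b, T, x0 = 0.05, 0.2, 1.0, 1.0      # drift, volatility, horizon, start (invented)

      def level_term(l, n_paths, m0=2):
          """Sample mean of P_l - P_{l-1} using coupled Brownian increments."""
          nf = m0 ** (l + 1)                 # fine-level time steps
          dtf = T / nf
          dW = rng.normal(0.0, np.sqrt(dtf), size=(n_paths, nf))
          xf = np.full(n_paths, x0)
          for k in range(nf):                # forward Euler, fine level
              xf = xf + a * xf * dtf + b * xf * dW[:, k]
          if l == 0:
              return xf.mean()               # coarsest level: plain estimator
          nc = nf // m0
          dtc = T / nc
          dWc = dW.reshape(n_paths, nc, m0).sum(axis=2)  # same Brownian path, coarsened
          xc = np.full(n_paths, x0)
          for k in range(nc):                # forward Euler, coarse level
              xc = xc + a * xc * dtc + b * xc * dWc[:, k]
          return (xf - xc).mean()            # control-variate difference term

      estimate = sum(level_term(l, 20_000) for l in range(5))  # telescoping sum
      print("MLMC estimate of E[X_T]:", estimate)
      print("exact E[X_T]           :", x0 * np.exp(a * T))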

  10. Geometrical splitting in Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Elperin, T.; Dudziak, D.J.

    1982-01-01

A statistical model is presented by which a direct statistical approach yields an analytic expression for the second moment, the variance ratio, and the benefit function in a model of an n-surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters, the main importance of the expressions developed lies in their potential to become the basis for in-code optimization of splitting through a general algorithm. Refs
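
    To make the n-surface-splitting game concrete, here is a hedged toy sketch: a purely absorbing 1D slab where each history is split 2-for-1 (with halved weight) at each interior surface, so the deep-penetration tally collects many low-weight scores instead of rare unit scores. Geometry, cross section, and splitting ratio are invented and are not the paper's model.

      import numpy as np

      rng = np.random.default_rng(1)
      L, surfaces = 5.0, [1.0, 2.0, 3.0, 4.0]  # slab depth (mfp) and splitting surfaces

      def transmission(n, split=True):
          """Per-history transmission scores through a purely absorbing slab."""
          scores = np.empty(n)
          for i in range(n):
              stack, tally = [(0.0, 1.0)], 0.0  # (position, weight) of live branches
              while stack:
                  x, w = stack.pop()
                  x_end = x + rng.exponential(1.0)     # flight to absorption (sigma_t = 1)
                  for s in surfaces:
                      if split and x < s <= x_end:
                          stack.append((s, w / 2))     # bank one half-weight copy
                          w /= 2                       # the other copy continues...
                          x, x_end = s, s + rng.exponential(1.0)  # ...memoryless restart
                  if x_end >= L:
                      tally += w                       # crossed the far face
              scores[i] = tally
          return scores

      for name, sc in [("analog", transmission(20_000, split=False)),
                       ("split ", transmission(20_000, split=True))]:
          print(name, "mean %.3e  rel. std. err. %.3f"
                % (sc.mean(), sc.std(ddof=1) / np.sqrt(len(sc)) / sc.mean()))
      print("exact ", "%.3e" % np.exp(-L))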

  11. Extending canonical Monte Carlo methods

    International Nuclear Information System (INIS)

    Velazquez, L; Curilef, S

    2010-01-01

In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities, C_α with α ≈ 0.2, for the particular case of the 2D ten-state Potts model

  12. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

We have shown that the transport equation can be solved with particles, as in the Monte-Carlo method, but without random numbers. In the Monte-Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source, with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source, and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte-Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to make more particles go there. It has the same kinds of applications: problems where streaming is dominant, rather than collision-dominated problems
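
    A toy rendering of the idea may help: weighted particles are tracked deterministically (no random numbers anywhere), depositing an expected collision density that, scaled by the scattering ratio, becomes the source for the next generation. This 1D, two-direction sketch is an invented illustration of the successive-collision structure described above, not the author's code.

      import numpy as np

      nx, slab = 50, 5.0
      dx = slab / nx
      sigma_t, c = 1.0, 0.5                    # total cross section and scattering ratio
      source = np.zeros(nx); source[0] = 1.0   # external source in the first cell
      phi = np.zeros(nx)

      for generation in range(200):            # one pass per collision generation
          coll = np.zeros(nx)                  # expected collision density this pass
          for i in range(nx):
              if source[i] == 0.0:
                  continue
              for sgn in (+1, -1):             # emit half the weight in each direction
                  w, j = 0.5 * source[i], i
                  p_col = 1.0 - np.exp(-sigma_t * dx)
                  while 0 <= j < nx and w > 1e-12:
                      coll[j] += w * p_col     # expected collisions in cell j
                      w *= 1.0 - p_col         # uncollided remainder streams on
                      j += sgn
          phi += coll / (sigma_t * dx)         # collision density -> scalar flux
          source = c * coll                    # scattered part feeds the next generation
          if coll.sum() < 1e-10:
              break

      print("scalar flux, first cells:", np.round(phi[:5], 4))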

  13. BREM5 electroweak Monte Carlo

    International Nuclear Information System (INIS)

    Kennedy, D.C. II.

    1987-01-01

    This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures

  14. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver; note that the burnup solver is a deterministic module. The statistical errors in the Monte Carlo solutions are introduced into the nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications of Monte Carlo depletion, including both the statistical bias and the statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to the statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increase the number of individual Monte Carlo histories; 2) increase the number of time steps; 3) run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed; it consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
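
    The batch depletion idea can be sketched in a few lines: run several independent "Monte Carlo" depletion calculations (here a single nuclide whose one-group cross section carries synthetic per-step noise standing in for transport statistics) and take the overall statistical error from the spread of the independent batches. All numbers are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      sigma, phi, dt, nsteps = 2.0, 1.0, 0.5, 20  # cross section, flux, step (invented units)
      rel_noise = 0.01                            # per-step statistical noise of the "tally"

      def one_depletion_run():
          """One independent Monte Carlo depletion: noisy tally, deterministic burnup."""
          N = 1.0                                 # initial number density (normalized)
          for _ in range(nsteps):
              sig_hat = sigma * (1.0 + rel_noise * rng.normal())  # noisy transport tally
              N *= np.exp(-sig_hat * phi * dt)                    # deterministic burnup step
          return N

      batches = np.array([one_depletion_run() for _ in range(30)])
      print("mean final number density:", batches.mean())
      print("overall statistical error :", batches.std(ddof=1) / np.sqrt(len(batches)))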

  15. Fault Management Assistant (FMA), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — S&K Aerospace (SKA) proposes to develop the Fault Management Assistant (FMA) to aid project managers and fault management engineers in developing better and more...

  16. SDEM modelling of fault-propagation folding

    DEFF Research Database (Denmark)

    Clausen, O.R.; Egholm, D.L.; Poulsen, Jane Bang

    2009-01-01

and variations in Mohr-Coulomb parameters including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset...... Understanding the dynamics and kinematics of fault-propagation-folding is important for evaluating the associated hydrocarbon play, for accomplishing reliable section balancing (structural reconstruction), and for assessing seismic hazards. Accordingly, the deformation style of fault-propagation...... a precise indication of when faults develop and hence also the sequential evolution of secondary faults. Here we focus on the generation of a fault-propagated fold with a reverse sense of motion at the master fault, and varying only the dip of the master fault and the mechanical behaviour of the deformed...

  17. A summary of the active fault investigation in the extension sea area of the Kikugawa fault and the Nishiyama fault, N-S direction faults in southwest Japan

    Science.gov (United States)

    Abe, S.

    2010-12-01

In this study, we carried out two active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the offshore extensions of the Kikugawa fault and the Nishiyama fault. Based on those results, we want to clarify the following matters about both active faults: (1) the fault continuity between land and sea; (2) the length of the active fault; (3) the division into segments; (4) the activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to resolve the detailed structure of the shallow strata. Furthermore, vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of lateral (strike-slip) faulting, such as flower structures and the dispersion of the active fault, were recognized. In addition, age analysis of the strata showed that the Holocene sediment cover on the continental shelf in this sea area is extremely thin. This investigation confirmed that the Kikugawa fault extends further offshore than indicated by existing research. In addition, the width of the fault zone appears to widen seaward while dispersing. At present, we think that the Kikugawa fault can be divided into several segments based on its distribution pattern. As for the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographic research on Ooshima, the Nishiyama fault and the Ooshima offshore active fault are thought to form a continuous structure. Along the Ooshima offshore active fault, the upthrown side changes, and the trend changes too. Therefore, we

  18. 31 CFR 29.522 - Fault.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at fault...

  19. H infinity Integrated Fault Estimation and Fault Tolerant Control of Discrete-time Piecewise Linear Systems

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Bak, Thomas

    2012-01-01

    In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...

  20. Cell boundary fault detection system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  1. RESULTS, RESPONSIBILITY, FAULT AND CONTROL

    Directory of Open Access Journals (Sweden)

    Evgeniy Stoyanov

    2016-09-01

    Full Text Available The paper focuses on the responsibility arising from the registered financial results. The analysis of this responsibility presupposes its evaluation and determination of the role of fault in the formation of negative results. The search for efficiency in this whole process is justified by the understanding of the mechanisms that regulate the behavior of economic actors.

  2. Fault detection using (PI) observers

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.; Shafai, B.

    1997-01-01

The fault detection and isolation (FDI) problem in connection with Proportional Integral (PI) Observers is considered in this paper. A compact formulation of the FDI design problem using PI observers is given. An analysis of the FDI design problem is derived with respect to the time domain...

  3. Exact, almost and delayed fault detection

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Saberi, Ali; Stoorvogel, Anton A.

    1999-01-01

    Considers the problem of fault detection and isolation while using zero or almost zero threshold. A number of different fault detection and isolation problems using exact or almost exact disturbance decoupling are formulated. Solvability conditions are given for the formulated design problems....... The l-step delayed fault detection problem is also considered for discrete-time systems....

  4. 5 CFR 831.1402 - Fault.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The fact...

  5. 40 CFR 258.13 - Fault areas.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in Holocene...

  6. On the "stacking fault" in copper

    NARCIS (Netherlands)

Fransens, J.R.; Pleiter, F.

    2003-01-01

    The results of a perturbed gamma-gamma angular correlations experiment on In-111 implanted into a properly cut single crystal of copper show that the defect known in the literature as "stacking fault" is not a planar faulted loop but a stacking fault tetrahedron with a size of 10-50 Angstrom.

  7. 20 CFR 255.11 - Fault.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than the...

  8. 5 CFR 845.302 - Fault.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission that...

  9. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin of clays in fault rocks and their distribution is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that the formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  10. Architecting Fault-Tolerant Software Systems

    NARCIS (Netherlands)

    Sözer, Hasan

    2009-01-01

    The increasing size and complexity of software systems makes it hard to prevent or remove all possible faults. Faults that remain in the system can eventually lead to a system failure. Fault tolerance techniques are introduced for enabling systems to recover and continue operation when they are

  11. Machine Learning of Fault Friction

    Science.gov (United States)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025

  12. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
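
    As a hedged sketch of the exhaustive search described above (for the maximum a posteriori estimator only), the following enumerates candidate sensor suites and ranks them by the trace of the MAP error covariance of the health parameters. The sensitivity matrix, noise levels, prior, and baseline suite are random stand-ins, not the engine model from the paper.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(3)
      n_health, n_sensors = 4, 8
      H = rng.normal(size=(n_sensors, n_health))     # assumed sensor sensitivities
      R = np.diag(rng.uniform(0.5, 2.0, n_sensors))  # assumed sensor noise variances
      P0 = np.eye(n_health)                          # prior covariance of health params

      baseline = (0, 1)                              # sensors always in the suite
      optional = tuple(range(2, n_sensors))          # candidate optional sensors

      def sse_metric(idx):
          """Theoretical sum of squared MAP estimation errors for sensor set idx."""
          Hs = H[list(idx), :]
          Rs_inv = np.linalg.inv(R[np.ix_(list(idx), list(idx))])
          P = np.linalg.inv(np.linalg.inv(P0) + Hs.T @ Rs_inv @ Hs)
          return np.trace(P)

      suites = (baseline + extra
                for k in range(len(optional) + 1)
                for extra in combinations(optional, k))
      best = min(suites, key=sse_metric)             # exhaustive search
      print("optimal sensor suite:", best, " metric:", round(sse_metric(best), 4))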

  13. Fault diagnosis and fault-tolerant control based on adaptive control approach

    CERN Document Server

    Shen, Qikun; Shi, Peng

    2017-01-01

This book provides recent theoretical developments in and practical applications of fault diagnosis and fault-tolerant control for complex dynamical systems, including uncertain systems and linear and nonlinear systems. Combining adaptive control techniques with other control methodologies, it investigates the problems of fault diagnosis and fault-tolerant control for uncertain dynamic systems with or without time delay. As such, the book provides readers with a solid understanding of fault diagnosis and fault-tolerant control based on adaptive control technology. Given its depth and breadth, it is well suited for undergraduate and graduate courses on linear system theory, nonlinear system theory, fault diagnosis and fault-tolerant control techniques. Further, it can be used as a reference source for academic research on fault diagnosis and fault-tolerant control, and for postgraduates in the field of control theory and engineering.

  14. MCNP load balancing and fault tolerance with PVM

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1995-01-01

Version 4A of the Monte Carlo neutron, photon, and electron transport code MCNP, developed by LANL (Los Alamos National Laboratory), supports distributed-memory multiprocessing through the software package PVM (Parallel Virtual Machine, version 3.1.4). Using PVM for interprocessor communication, MCNP can simultaneously execute a single problem on a cluster of UNIX-based workstations. This capability provided system efficiencies that exceeded 80% on dedicated workstation clusters; however, on heterogeneous or multiuser systems, the performance was limited by the slowest processor (i.e., equal work was assigned to each processor). The next public release of MCNP will provide multiprocessing enhancements that include load balancing and fault tolerance, which are shown to dramatically increase multiuser system efficiency and reliability.
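
    The load-balancing idea (self-scheduling, so that fast nodes pull more work instead of receiving static equal shares) can be illustrated generically. This is not MCNP or PVM code; it is a toy Python "bag of tasks" sketch using a process pool, with the per-history score and chunk sizes invented.

      import multiprocessing as mp
      import random

      def run_chunk(n_histories):
          """Stand-in for transporting a chunk of histories; returns a partial tally."""
          acc = 0.0
          for _ in range(n_histories):
              acc += random.random()            # pretend per-history score
          return acc

      if __name__ == "__main__":
          total, chunk = 200_000, 5_000         # histories handed out in small chunks,
          tasks = [chunk] * (total // chunk)    # so idle (fast) workers pull more work
          with mp.Pool(processes=4) as pool:
              tally = sum(pool.imap_unordered(run_chunk, tasks))
          print("combined mean score:", tally / total)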

  15. Probability intervals for the top event unavailability of fault trees

    International Nuclear Information System (INIS)

    Lee, Y.T.; Apostolakis, G.E.

    1976-06-01

    The evaluation of probabilities of rare events is of major importance in the quantitative assessment of the risk from large technological systems. In particular, for nuclear power plants the complexity of the systems, their high reliability and the lack of significant statistical records have led to the extensive use of logic diagrams in the estimation of low probabilities. The estimation of probability intervals for the probability of existence of the top event of a fault tree is examined. Given the uncertainties of the primary input data, a method is described for the evaluation of the first four moments of the top event occurrence probability. These moments are then used to estimate confidence bounds by several approaches which are based on standard inequalities (e.g., Tchebycheff, Cantelli, etc.) or on empirical distributions (the Johnson family). Several examples indicate that the Johnson family of distributions yields results which are in good agreement with those produced by Monte Carlo simulation
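
    A hedged sketch of the ingredients: sample lognormal basic-event unavailabilities, push them through the Boolean-equivalent algebraic expression of a three-event tree, and form the first four moments of the top-event distribution, which the paper then feeds into Tchebycheff/Johnson-type bounds. The tree and all parameters below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 100_000
      medians = np.array([1e-3, 5e-4, 2e-3])  # basic-event median unavailabilities (invented)
      ef = np.array([3.0, 3.0, 10.0])         # lognormal error factors EF = q95/q50
      sigmas = np.log(ef) / 1.645             # lognormal sigma recovered from EF
      q = medians * np.exp(sigmas * rng.normal(size=(n, 3)))  # sampled unavailabilities

      # top event of the toy tree: (A AND B) OR C, with independent basic events
      q_ab = q[:, 0] * q[:, 1]
      top = q_ab + q[:, 2] - q_ab * q[:, 2]

      mean = top.mean()
      central = [((top - mean) ** k).mean() for k in (2, 3, 4)]  # moments 2-4
      print("mean:", mean)
      print("central moments 2-4:", central)
      print("5/50/95 percentiles:", np.percentile(top, [5, 50, 95]))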

  16. Gender differences in the relationship between maladaptive behaviours and post-traumatic stress disorder. A study on 900 L’Aquila 2009 earthquake survivors.

    Directory of Open Access Journals (Sweden)

Liliana Dell'Osso

    2013-01-01

Full Text Available Background: Post-traumatic stress disorder (PTSD) represents one of the most frequent psychiatric sequelae of earthquake exposure. Increasing evidence suggests the onset of maladaptive behaviors among veterans and adolescents with PTSD, with specific gender differences emerging in the latter. The aims of the present study were to investigate the relationships between maladaptive behaviours and PTSD in earthquake survivors, as well as the gender differences in the type and prevalence of maladaptive behaviours and their association with PTSD. Methods: 900 residents of the town of L'Aquila who experienced the earthquake of April 6th 2009 (Richter Magnitude 6.3) were assessed by means of the Trauma and Loss Spectrum Self Report (TALS-SR). Results: Significantly higher maladaptive behaviour prevalence rates were found among subjects with PTSD. A statistically significant association was found between male gender and the presence of at least one maladaptive behaviour among PTSD survivors. In the latter, significant correlations emerged between maladaptive coping and symptoms of re-experiencing, avoidance and numbing, and arousal in women, while only between maladaptive coping and avoidance and numbing in men. Conclusions: Our results show high rates of maladaptive behaviours among earthquake survivors with PTSD, suggesting a greater severity among men. Interestingly, post-traumatic stress symptomatology appears to be a better correlate of these behaviours among women than among men, suggesting the need for further studies based on a gender approach.

  17. Gender Differences in the Relationship between Maladaptive Behaviors and Post-Traumatic Stress Disorder. A Study on 900 L'Aquila 2009 Earthquake Survivors.

    Science.gov (United States)

    Dell'osso, Liliana; Carmassi, Claudia; Stratta, Paolo; Massimetti, Gabriele; Akiskal, Kareen K; Akiskal, Hagop S; Maremmani, Icro; Rossi, Alessandro

    2012-01-01

Post-traumatic stress disorder (PTSD) represents one of the most frequent psychiatric sequelae of earthquake exposure. Increasing evidence suggests the onset of maladaptive behaviors among veterans and adolescents with PTSD, with specific gender differences emerging in the latter. The aims of the present study were to investigate the relationships between maladaptive behaviors and PTSD in earthquake survivors, as well as the gender differences in the type and prevalence of maladaptive behaviors and their association with PTSD. 900 residents of the town of L'Aquila who experienced the earthquake of April 6th 2009 (Richter Magnitude 6.3) were assessed by means of the Trauma and Loss Spectrum-Self Report (TALS-SR). Significantly higher maladaptive behavior prevalence rates were found among subjects with PTSD. A statistically significant association was found between male gender and the presence of at least one maladaptive behavior among PTSD survivors. Further, among survivors with PTSD significant correlations emerged between maladaptive coping and symptoms of re-experiencing, avoidance and numbing, and arousal in women, while only between maladaptive coping and avoidance and numbing in men. Our results show high rates of maladaptive behaviors among earthquake survivors with PTSD, suggesting a greater severity among men. Interestingly, post-traumatic stress symptomatology appears to be a better correlate of these behaviors among women than among men, suggesting the need for further studies based on a gender approach.

  18. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

Full Text Available The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. By considering the fact that incipient faults lie in the low-frequency domain, fault estimation of such faults is proposed for discrete-time multiagent systems based on a finite-frequency technique. Moreover, using a decomposition design, an equivalent conclusion is given. Simulation results of a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  19. Active fault traces along Bhuj Fault and Katrol Hill Fault, and ...

    Indian Academy of Sciences (India)

    face, passing through the alluvial-colluvial fan at location 2. The gentle warping of the surface was completely modified because of severe cultivation practice. Therefore, it was difficult to confirm it in field. To the south ... scarp has been modified by present day farming. At location 5 near Wandhay village, an active fault trace ...

  20. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance in the system state variables and control input. The useful property of the basis functions of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with a fault alarm is proposed. Compared with results in the literature, the fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation, called the time delay due to fault diagnosis, and reduce the adverse effect on system performance. In addition, the FTC scheme has the advantages of a passive fault-tolerant control scheme as well as the properties of the traditional active fault-tolerant control scheme. Furthermore, the fault-tolerant control scheme requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.

  1. Managing Space System Faults: Coalescing NASA's Views

    Science.gov (United States)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

  2. Synthesis of Fault-Tolerant Embedded Systems

    DEFF Research Database (Denmark)

    Eles, Petru; Izosimov, Viacheslav; Pop, Paul

    2008-01-01

    This work addresses the issue of design optimization for fault- tolerant hard real-time systems. In particular, our focus is on the handling of transient faults using both checkpointing with rollback recovery and active replication. Fault tolerant schedules are generated based on a conditional...... process graph representation. The formulated system synthesis approaches decide the assignment of fault-tolerance policies to processes, the optimal placement of checkpoints and the mapping of processes to processors, such that multiple transient faults are tolerated, transparency requirements...

  3. Diagnosis and Fault-tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Kinnaert, Michel; Lunze, Jan

    the applicability of the presented methods. The theoretical results are illustrated by two running examples which are used throughout the book. The book addresses engineering students, engineers in industry and researchers who wish to get a survey over the variety of approaches to process diagnosis and fault......The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process...

  4. Computer aided construction of fault tree

    International Nuclear Information System (INIS)

    Kovacs, Z.

    1982-01-01

Computer code CAT for the automatic construction of fault trees is briefly described. Code CAT makes possible simple modelling of components using decision tables, accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with the programs PREP and KITT 1,2 for fault tree analysis. The efficiency of program CAT, and thus the accuracy and completeness of the fault trees constructed, significantly depends on the compilation and sophistication of the decision tables. Currently, program CAT is used in co-operation with the programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)

  5. Active fault detection in MIMO systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

The focus in this paper is on active fault detection (AFD) for MIMO systems with parametric faults. The problem of designing auxiliary inputs with respect to detection of parametric faults is investigated. An analysis of the design of auxiliary inputs is given based on analytic transfer functions...... from auxiliary input to residual outputs. The analysis is based on a singular value decomposition of these transfer functions. Based on this analysis, it is possible to design the auxiliary input as well as the associated residual vector with respect to every single parametric fault in the system...... such that it is possible to detect these faults....

  6. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by interferometric synthetic aperture radar (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  7. Mesoscopic Structural Observations of Cores from the Chelungpu Fault System, Taiwan Chelungpu-Fault Drilling Project Hole-A, Taiwan

    Directory of Open Access Journals (Sweden)

    Hiroki Sone

    2007-01-01

Full Text Available Structural characteristics of fault rocks distributed within major fault zones provide basic information for understanding the physical aspects of faulting. Mesoscopic structural observations of the drilled cores from Taiwan Chelungpu-fault Drilling Project Hole-A are reported in this article to describe and reveal the distribution of fault rocks within the Chelungpu Fault System.

  8. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  9. Development of methods for evaluating active faults

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

The report on the long-term evaluation of active faults was published by the Headquarters for Earthquake Research Promotion in November 2010. After the occurrence of the 2011 Tohoku-oki earthquake, the safety review guide with regard to the geology and ground of sites was revised by the Nuclear Safety Commission in March 2012 to reflect scientific knowledge of the earthquake. The Nuclear Regulation Authority, established in September 2012, is newly planning the New Safety Design Standard related to Earthquakes and Tsunamis of Light Water Nuclear Power Reactor Facilities. With respect to those guides and standards, our investigations for developing methods of evaluating active faults are as follows: (1) For better evaluation of offshore fault activity, we proposed a workflow for dating marine terraces (indicators of offshore fault activity) over the last 400,000 years. We also developed fault-related fold analysis for evaluating blind faults. (2) To clarify the activity of active faults lacking overlying strata, we carried out color analysis of fault gouge and classified fault activity into timescales of thousands and tens of thousands of years. (3) To reduce uncertainties in fault activity and earthquake frequency, we compiled the survey data and their possible errors. (4) To improve seismic hazard analysis, we compiled the activity of the Yunotake and Itozawa faults, induced by the 2011 Tohoku-oki earthquake. (author)

  10. ESR dating of the fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2004-01-15

Past movement on faults can be dated by measuring the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from the surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Ulzin nuclear reactor. The ESR signals of quartz grains separated from fault rocks collected from the E-W-trending fault are saturated. This indicates that the last movement of these faults occurred before the Quaternary period. ESR dates from the NW-trending faults range from 300 ka to 700 ka. On the other hand, the ESR date of the N-S-trending fault is about 50 ka. The results of this research suggest that long-term cyclic fault activity near the Ulzin nuclear reactor continued into the Pleistocene.

  11. Monte Carlo Particle Lists: MCPL

    DEFF Research Database (Denmark)

    Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik

    2017-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular...... simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...
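
    For orientation, reading particles back in Python might look like the sketch below. The module and attribute names (mcpl.MCPLFile, .nparticles, .particles, p.ekin, p.x/.y/.z, p.weight, p.pdgcode) are quoted from the MCPL documentation as best recalled and should be verified against the manual; "output.mcpl" is a placeholder filename.

      # Hedged sketch; verify all names against the MCPL documentation.
      import mcpl

      f = mcpl.MCPLFile("output.mcpl")          # placeholder filename
      print("particles in file:", f.nparticles)
      for p in f.particles:
          # kinetic energy (MeV), position (cm), statistical weight, PDG code
          print(p.pdgcode, p.ekin, (p.x, p.y, p.z), p.weight)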

  12. Fault Recoverability Analysis via Cross-Gramian

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza

    2016-01-01

    Engineering systems are vulnerable to different kinds of faults. Faults may compromise safety, cause sub-optimal operation and decline in performance if not preventing the whole system from functioning. Fault tolerant control (FTC) methods ensure that the system performance maintains within...... with feedback control. Fault recoverability provides important and useful information which could be used in analysis and design. However, computing fault recoverability is numerically expensive. In this paper, a new approach for computation of fault recoverability for bilinear systems is proposed...... approach for computation of fault recoverability is proposed which reduces the computational burden significantly. The proposed results are used for an electro-hydraulic drive to reveal the redundant actuating capabilities in the system....

  13. Active fault diagnosis by controller modification

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    2010-01-01

    Two active fault diagnosis methods for additive or parametric faults are proposed. Both methods are based on controller reconfiguration rather than on requiring an exogenous excitation signal, as it is otherwise common in active fault diagnosis. For the first method, it is assumed that the system...... considered is controlled by an observer-based controller. The method is then based on a number of alternate observers, each designed to be sensitive to one or more additive faults. Periodically, the observer part of the controller is changed into the sequence of fault sensitive observers. This is done...... in a way that guarantees the continuity of transition and global stability using a recent result on observer parameterization. An illustrative example inspired by a field study of a drag racing vehicle is given. For the second method, an active fault diagnosis method for parametric faults is proposed...

  14. Improved DFIG Capability during Asymmetrical Grid Faults

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede

    2015-01-01

In the wind power application, different asymmetrical types of grid fault can be categorized after the Y/d transformer, and the positive and negative sequence components of a single-phase fault, phase-to-phase fault, and two-phase fault can be summarized. Due to the newly introduced negative and even...... the natural component of the Doubly-Fed Induction Generator (DFIG) stator flux during the fault period, their effects on the rotor voltage can be investigated. It is concluded that the phase-to-phase fault is the worst case, as it introduces the largest negative-sequence stator flux. Afterwards......, the capability of a 2 MW DFIG to ride through asymmetrical grid faults can be estimated for the existing design of the power electronics converter. Finally, a control scheme aimed at improving the DFIG capability is proposed, and the simulation results validate its feasibility....

  15. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    Science.gov (United States)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  16. Mont Terri project, cyclic deformations in the Opalinus Clay

    International Nuclear Information System (INIS)

    Moeri, A.; Bossart, P.; Matray, J.M.; Mueller, H.; Frank, E.

    2010-01-01

    Document available in extended abstract form only. Shrinkage structures in the Opalinus Clay, related to seasonal changes in temperature and humidity, are observed on the tunnel walls of the Mont Terri Rock Laboratory. The structures open in winter, when relative humidity in the tunnel decreases to 65%. In summer the cracks close again because of the increase in the clay volume when higher humidity causes rock swelling. Shrinkage structures are monitored in the Mont Terri Rock Laboratory at two different sites within the undisturbed rock matrix and a major fault zone. The relative movements of the rock on both sides of the cracks are monitored in three directions and compared to the fluctuations in ambient relative humidity and temperature. The cyclic deformations (CD) experiment aims to quantify the variations in crack opening in relation to the evolution of climatic conditions and to identify the processes underlying these swell and shrinkage cycles. It consists of the following tasks: - Measuring and quantifying the long-term (now up to three yearly cycles) opening and closing and, if present, the associated shear displacements of selected shrinkage cracks along an undisturbed bedding plane as well as within a major fault zone ('Main Fault'). The measurements are accompanied by temperature and humidity records as well as by a long-term monitoring of tunnel convergence. - Analysing at the micro-scale the surfaces of the crack planes to identify potential relative movements, changes in the rock fabric on the crack surfaces and the formation of fault gouge material as observed in closed cracks. - Processing and analysing measured fluctuations of crack apertures and rock deformation in the time series as well as in the hydro-meteorological variables, in particular relative humidity Hr(t) and air temperature. - Studying and reconstructing the opening cycles on a drill-core sample under well-known laboratory conditions and observing potential propagation of

  17. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  18. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  19. Comparison of methods for uncertainty analysis of nuclear-power-plant safety-system fault-tree models

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.; Campbell, K.; Whiteman, D.E.; Booker, J.M.

    1983-04-01

A comparative evaluation is made of several methods for propagating uncertainties in actual coupled nuclear power plant safety-system fault tree models. The methods considered are Monte Carlo simulation, the method of moments, a discrete distribution method, and a bootstrap method. The Monte Carlo method is found to be superior. The sensitivity of the system unavailability distribution to the choice of basic event unavailability distribution is also investigated; the system distribution is especially sensitive to the choice of symmetric versus asymmetric basic event distributions. A quick-and-dirty method for estimating percentiles of the system unavailability distribution is developed. The method identifies the appropriate basic event distribution percentiles that should be used in evaluating the Boolean system equivalent expression for a given fault tree model to arrive directly at the 5th, 10th, 50th, 90th, and 95th percentiles of the system unavailability distribution
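
    The quick-and-dirty idea can be sketched as follows: for a monotone system expression, evaluate the Boolean-equivalent formula at matched basic-event percentiles and compare against full Monte Carlo percentiles. The two-out-of-three system and lognormal parameters below are invented for illustration, and the naive matched-percentile choice is only a stand-in for the paper's calibrated percentiles.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 200_000
      sig = np.log(3.0) / 1.645           # common lognormal error factor of 3 (invented)
      med = np.array([1e-3, 2e-3, 5e-4])  # basic-event medians (invented)
      q = med * np.exp(sig * rng.normal(size=(n, 3)))

      def system(qv):
          """2-out-of-3 system unavailability from basic-event unavailabilities."""
          a, b, c = qv[..., 0], qv[..., 1], qv[..., 2]
          return a * b + a * c + b * c - 2 * a * b * c

      mc = np.percentile(system(q), [5, 50, 95])
      quick = [system(med * np.exp(sig * z)) for z in (-1.645, 0.0, 1.645)]
      print("Monte Carlo 5/50/95:", mc)
      print("percentile plug-in :", quick)
      # For independent events the matched-percentile plug-in overstates the tails;
      # the paper's method identifies which basic-event percentiles to use so the
      # plug-in lands on the true system percentiles.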

  20. Superconducting dc fault current limiter

    International Nuclear Information System (INIS)

    Cointe, Y.

    2007-12-01

Within the framework of the liberalization of the electric power market, DC networks have many advantages compared with AC networks, but their protection requires new systems. Superconducting fault current limiters exploit the quench that occurs when the critical current is exceeded to limit the fault current to a preset value, lower than the theoretical short-circuit current. For these applications, coated conductors offer excellent opportunities. We worked on the implementation of these materials and built a test bench. We carried out limiting experiments to estimate the quench homogeneity for various short-circuit parameters. An important point is the temperature measurement by sensors deposited on the ribbon; the results correlate well with the theoretical models. Improved quench behaviour at temperatures close to the critical temperature has been confirmed. Our results enable a better understanding of the limitation mechanisms of coated conductors. (author)

  1. Perspective View, San Andreas Fault

    Science.gov (United States)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour

  2. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun

    1994-02-01

In this work, the Fuzzy Signed Digraph (FSD) method which has been researched for the fault diagnosis of industrial process plant systems is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious faults elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose the multi-faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independency of fault pattern, direct use of sensor values, and transparency of the fault propagation to the operators

  3. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1994-01-01

    In this work, the Fuzzy Signed Digraph (FSD) method, which has been researched for the fault diagnosis of industrial process plant systems, is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious fault elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose multi-faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independence of fault patterns, direct use of sensor values, and transparency of the fault propagation to the operators. (Author)

  4. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  5. Monte Carlo surface flux tallies

    International Nuclear Information System (INIS)

    Favorite, Jeffrey A.

    2010-01-01

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
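
    The "divide by half of the cutoff" prescription is easy to check numerically. The toy tally below (an assumed isotropic angular flux on a surface, not the implementation from the paper) shows that the naive 1/|mu| score is unbiased but has unbounded variance, while the cutoff-substituted score keeps the correct mean with finite variance:

```python
# Toy demonstration of the grazing-angle problem in surface-flux tallies
# (illustrative only; not the production-code implementation).
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
mu = np.sqrt(rng.random(N))     # crossing cosines, pdf = 2*mu (isotropic flux)

naive = 1.0 / mu                # exact score 1/mu: infinite-variance estimator
mc = 0.05                       # cosine cutoff
fixed = np.where(mu < mc, 2.0 / mc, 1.0 / mu)  # substitute 1/(mc/2) when grazing

print("exact flux/current ratio: 2.0")
print("naive  :", naive.mean(), "+/-", naive.std() / np.sqrt(N))
print("cutoff :", fixed.mean(), "+/-", fixed.std() / np.sqrt(N))
```

    For an angular flux that is linear in the grazing band, the cutoff estimator recovers the exact mean of 2 for this isotropic case, while the naive sample standard deviation badly understates the true (divergent) variance.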

  6. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration
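
    The reuse of samples from another sampler for integration can be sketched generically. The snippet below is a plain self-normalized importance-sampling estimate; the SAMC-specific machinery for learning the working density is omitted and the trial density is assumed known:

```python
# Generic sketch of importance-reweighted Monte Carlo integration
# (the SAMC-specific density estimation is deliberately omitted).
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.exp(-0.5 * x ** 2)   # unnormalized target pi ~ N(0,1)
f = lambda x: x ** 2                       # integrand: E_pi[x^2] = 1

x = rng.normal(0.0, 2.0, 100_000)          # samples from a trial density q
q = np.exp(-0.5 * (x / 2.0) ** 2)          # q up to a constant (N(0,4) shape)
w = target(x) / q                          # importance weights pi/q
print(np.sum(w * f(x)) / np.sum(w))        # self-normalized estimate ~ 1.0
```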

  7. Thermal studies of a superconducting current limiter using Monte-Carlo method

    International Nuclear Information System (INIS)

    Leveque, J.; Rezzoug, A.

    1999-01-01

    Considering the increase of fault current levels in electrical networks, current limiters become very attractive. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without fault detection or an external trigger, they reduce the constraints supported by electrical installations upstream of the fault. To avoid the destruction of the superconducting coil, the temperature must not exceed a certain value. Therefore, the design of a superconducting coil requires the simultaneous resolution of an electrical equation and a thermal one. This paper deals with the resolution of this coupled problem by the Monte Carlo method. This method allows us to calculate the evolution of the resistance of the coil as well as the limiting current. Experimental results are compared with theoretical ones. (orig.)

  8. Watching Faults Grow in Sand

    Science.gov (United States)

    Cooke, M. L.

    2015-12-01

    Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model) and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandbox experiments where students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions they would change and what outcomes they would expect from these changes; that is, hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has been shown to be a very effective assessment method, as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment: the experiment is set up with spatially varying basal friction before the meeting, and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.

  9. A fault detection and diagnosis in a PWR steam generator

    International Nuclear Information System (INIS)

    Park, Seung Yub

    1991-01-01

    The purpose of this study is to develop a fault detection and diagnosis scheme that can monitor process faults and instrument faults of a steam generator. The suggested scheme consists of a Kalman filter and two bias estimators. The method of detecting process and instrument faults in a steam generator uses a mean test on the residual sequence of a Kalman filter, designed for the unfailed system, to make a fault decision. Once a fault is detected, two bias estimators are driven to estimate the fault and to discriminate between process faults and instrument faults. In the case of process faults, the fault diagnosis of outlet temperature, feed-water heater and main steam control valve is considered. For instrument faults, the fault diagnosis of the steam generator's three instruments is considered. Computer simulation tests show that on-line prompt fault detection and diagnosis can be performed very successfully. (Author)
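
    A minimal sketch of the residual mean test on a Kalman filter designed for the unfailed system follows (scalar toy model; the noise levels, bias size and detection threshold are illustrative assumptions, not the steam generator model):

```python
# Sketch of innovation-based fault detection with a Kalman filter designed
# for the unfailed system (scalar toy model; all numbers are assumptions).
import numpy as np

rng = np.random.default_rng(0)
a, c, q, r = 0.95, 1.0, 0.01, 0.04        # x_{k+1} = a x_k + w,  y_k = c x_k + v
N, k_fault, bias = 400, 200, 1.0          # sensor bias injected at step 200

x_true, x_hat, P, window = 0.0, 0.0, 1.0, []
for k in range(N):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = c * x_true + rng.normal(0, np.sqrt(r)) + (bias if k >= k_fault else 0.0)
    x_pred, P_pred = a * x_hat, a * a * P + q       # time update (unfailed model)
    S = c * c * P_pred + r                          # innovation variance
    e = y - c * x_pred                              # residual (innovation)
    K = P_pred * c / S
    x_hat, P = x_pred + K * e, (1 - K * c) * P_pred
    window.append(e / np.sqrt(S))                   # normalized residual ~ N(0,1)
    if len(window) > 30:
        window.pop(0)
        z = np.mean(window) * np.sqrt(len(window))  # mean test statistic
        if abs(z) > 3.5:
            print(f"fault declared at step {k}")    # bias estimators start here
            break
```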

  10. Research of fault activity in Japan

    International Nuclear Information System (INIS)

    Nohara, T.; Nakatsuka, N.; Takeda, S.

    2004-01-01

    Six hundred and eighty earthquakes causing significant damage have been recorded since the 7th century in Japan. It is important to recognize faults that are or are expected to be active in the future in order to help reduce earthquake damage, estimate earthquake damage insurance and site nuclear facilities. Such faults are called 'active faults' in Japan, the definition of which is a fault that has moved intermittently for at least several hundred thousand years and is expected to continue to do so in future. Scientific research on active faults has been ongoing since the 1930s. Many results indicated that major earthquakes and fault movements in shallow crustal regions in Japan occurred repeatedly at existing active fault zones in the past. After the 1995 Southern Hyogo Prefecture Earthquake, 98 active fault zones were selected for fundamental survey, with the purpose of efficiently conducting an active fault survey, in 'Plans for Fundamental Seismic Survey and Observation' by the Headquarters for Earthquake Research Promotion, which was attached to the Prime Minister's office of Japan. Forty-two administrative divisions for earthquake disaster prevention have investigated the distribution and history of fault activity of 80 active fault zones. Although earthquake prediction is difficult, the behaviour of major active faults in Japan is being recognised. The Japan Nuclear Cycle Development Institute (JNC) submitted a report titled 'H12: Project to Establish the Scientific and Technical Basis for HLW Disposal in Japan' to the Atomic Energy Commission (AEC) of Japan for official review. The Guidelines, which were defined by AEC, require the H12 Project to confirm the basic technical feasibility of safe HLW disposal in Japan. In this report the important issues relating to fault activity were described, which are to understand the characteristics of current fault movements and the spatial extent and magnitude of the effects caused by these movements, and to

  11. Fault tolerant operation of switched reluctance machine

    Science.gov (United States)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, the maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  12. Robust Mpc for Actuator–Fault Tolerance Using Set–Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, which is built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). In the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties with relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive by using invariant sets, while fault isolation (FI) is active by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of inputs.

  13. HBIM Challenge among the Paradigm of Complexity, Tools and Preservation: the Basilica di Collemaggio 8 Years after the Earthquake (L'Aquila)

    Science.gov (United States)

    Brumana, R.; Della Torre, S.; Oreni, D.; Previtali, M.; Cantini, L.; Barazzetti, L.; Franchi, A.; Banfi, F.

    2017-08-01

    In December 2012 ENIservizi (the Italian multi-national energy agency operating in many countries), after the earthquake that occurred in April 2009, decided to undertake the project 'Re-start from Collemaggio' with the aim of giving new hope to the L'Aquila community, funding around 14 million Euro to restore the Basilica di Collemaggio. The Superintendence Office carried out the restoration project with the scientific support of the Università degli Studi de L'Aquila and the Università La Sapienza di Roma, under the coordination of the Politecnico di Milano. ENIservizi, aware of the BIM potential in the complex building and infrastructure domain worldwide, required an advanced HBIM from the laser scanner and photogrammetric surveying to support the diagnostic analysis, the design project, the tender and the restoration itself, still ongoing today. Plans and vertical sections were delivered (2012) starting from the surveying campaigns (February and June 2013), together with the first HBIM advancement from the end of 2012 in support of the preliminary-definitive-executive steps of the restoration design project (2013-14-15). Five years later, this paper tries to make a synthesis of the different lessons learnt, in addition to the positive and critical aspects regarding HBIM feasibility, sustainability and usefulness for the challenging restoration work. In particular, the Collemaggio BIM experience anticipated the new Italian Public Procurement Legislation (D.Lgs 50/2016, Nuovo Codice degli Appalti pubblici) aligned with the EUPPD 24/2014: the EU Directive on Public Procurement asked all 28 EU countries to adopt building information modelling by February 2016 in order to support the whole LCM (Life Cycle Management), starting from the project and the intervention, through rewarding scores or mandatory regulations. Many analyses foresee savings of around 5% to 15% of the overall investment by adopting mature BIM (Level 3 to 5), particularly 4D remotely

  14. HBIM CHALLENGE AMONG THE PARADIGM OF COMPLEXITY, TOOLS AND PRESERVATION: THE BASILICA DI COLLEMAGGIO 8 YEARS AFTER THE EARTHQUAKE (L’AQUILA)

    Directory of Open Access Journals (Sweden)

    R. Brumana

    2017-08-01

    In December 2012 ENIservizi (the Italian multi-national energy agency operating in many countries), after the earthquake that occurred in April 2009, decided to undertake the project ‘Re-start from Collemaggio’ with the aim of giving new hope to the L’Aquila community, funding around 14 million Euro to restore the Basilica di Collemaggio. The Superintendence Office carried out the restoration project with the scientific support of the Università degli Studi de L’Aquila and the Università La Sapienza di Roma, under the coordination of the Politecnico di Milano. ENIservizi, aware of the BIM potential in the complex building and infrastructure domain worldwide, required an advanced HBIM from the laser scanner and photogrammetric surveying to support the diagnostic analysis, the design project, the tender and the restoration itself, still ongoing today. Plans and vertical sections were delivered (2012) starting from the surveying campaigns (February and June 2013), together with the first HBIM advancement from the end of 2012 in support of the preliminary-definitive-executive steps of the restoration design project (2013-14-15). Five years later, this paper tries to make a synthesis of the different lessons learnt, in addition to the positive and critical aspects regarding HBIM feasibility, sustainability and usefulness for the challenging restoration work. In particular, the Collemaggio BIM experience anticipated the new Italian Public Procurement Legislation (D.Lgs 50/2016, Nuovo Codice degli Appalti pubblici) aligned with the EUPPD 24/2014: the EU Directive on Public Procurement asked all 28 EU countries to adopt building information modelling by February 2016 in order to support the whole LCM (Life Cycle Management), starting from the project and the intervention, through rewarding scores or mandatory regulations. Many analyses foresee savings of around 5% to 15% of the overall investment by adopting mature BIM (Level 3 to 5).

  15. Effects of three-dimensional crustal structure and smoothing constraint on earthquake slip inversions: Case study of the Mw6.3 2009 L'Aquila earthquake

    KAUST Repository

    Gallovič, František; Imperatori, Walter; Mai, Paul Martin

    2015-01-01

    Earthquake slip inversions aiming to retrieve kinematic rupture characteristics typically assume 1-D velocity models and a flat Earth surface. However, the heterogeneous nature of the crust and the presence of rough topography lead to seismic scattering and other wave propagation phenomena, introducing complex 3-D effects on ground motions. Here we investigate how the use of imprecise Green's functions - achieved by including 3-D velocity perturbations and topography - affects slip-inversion results. We create sets of synthetic seismograms, including 3-D heterogeneous Earth structure and topography, and then invert these synthetics using Green's functions computed for a horizontally layered 1-D Earth model. We apply a linear inversion, regularized by smoothing and a positivity constraint, and examine in detail how smoothing effects perturb the solution. Among others, our tests and resolution analyses demonstrate how imprecise Green's functions introduce artificial slip rate multiples, especially at shallow depths, and that the timing of the peak slip rate is hardly affected by the chosen smoothing. The investigation is extended to recordings of the 2009 Mw6.3 L'Aquila earthquake, considering both strong motion and high-rate GPS stations. We interpret the inversion results taking into account the lessons learned from the synthetic tests. The retrieved slip model resembles previously published solutions using geodetic data, showing a large-slip asperity southeast of the hypocenter. In agreement with other studies, we find evidence for fast but subshear rupture propagation in the updip direction, followed by a delayed propagation along strike. We conjecture that rupture was partially inhibited by a deep localized velocity-strengthening patch that subsequently experienced afterslip.
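
    The regularized linear inversion with smoothing and positivity can be sketched compactly. In the toy example below the Green's functions, data and smoothing weight are synthetic stand-ins (not the study's setup); positivity is enforced with non-negative least squares on a system stacked with a first-difference smoothing operator:

```python
# Minimal sketch of a linear slip inversion with positivity and smoothing
# constraints (synthetic 1-D example; G, d and lambda are illustrative).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_sub, n_obs = 40, 120
G = rng.normal(size=(n_obs, n_sub))             # Green's functions (stand-in)
m_true = np.maximum(0, np.sin(np.linspace(0, np.pi, n_sub)))  # smooth slip
d = G @ m_true + 0.05 * rng.normal(size=n_obs)  # synthetic, noisy data

L = np.diff(np.eye(n_sub), axis=0)              # first-difference smoothing operator
lam = 2.0                                       # smoothing weight
A = np.vstack([G, lam * L])                     # stacked least-squares system
b = np.concatenate([d, np.zeros(L.shape[0])])
m_est, _ = nnls(A, b)                           # positivity-constrained solve
print("recovered slip peak:", round(m_est.max(), 2))  # ~1.0
```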

  16. Nuclear and Mitochondrial DNA Analyses of Golden Eagles (Aquila chrysaetos canadensis from Three Areas in Western North America; Initial Results and Conservation Implications.

    Directory of Open Access Journals (Sweden)

    Erica H Craig

    Full Text Available Understanding the genetics of a population is a critical component of developing conservation strategies. We used archived tissue samples from golden eagles (Aquila chrysaetos canadensis in three geographic regions of western North America to conduct a preliminary study of the genetics of the North American subspecies, and to provide data for United States Fish and Wildlife Service (USFWS decision-making for golden eagle management. We used a combination of mitochondrial DNA (mtDNA D-loop sequences and 16 nuclear DNA (nDNA microsatellite loci to investigate the extent of gene flow among our sampling areas in Idaho, California and Alaska and to determine if we could distinguish birds from the different geographic regions based on their genetic profiles. Our results indicate high genetic diversity, low genetic structure and high connectivity. Nuclear DNA Fst values between Idaho and California were low but significantly different from zero (0.026. Bayesian clustering methods indicated a single population, and we were unable to distinguish summer breeding residents from different regions. Results of the mtDNA AMOVA showed that most of the haplotype variation (97% was within the geographic populations while 3% variation was partitioned among them. One haplotype was common to all three areas. One region-specific haplotype was detected in California and one in Idaho, but additional sampling is required to determine if these haplotypes are unique to those geographic areas or a sampling artifact. We discuss potential sources of the high gene flow for this species including natal and breeding dispersal, floaters, and changes in migratory behavior as a result of environmental factors such as climate change and habitat alteration. Our preliminary findings can help inform the USFWS in development of golden eagle management strategies and provide a basis for additional research into the complex dynamics of the North American subspecies.

  17. Nuclear and Mitochondrial DNA Analyses of Golden Eagles (Aquila chrysaetos canadensis) from Three Areas in Western North America; Initial Results and Conservation Implications.

    Science.gov (United States)

    Craig, Erica H; Adams, Jennifer R; Waits, Lisette P; Fuller, Mark R; Whittington, Diana M

    2016-01-01

    Understanding the genetics of a population is a critical component of developing conservation strategies. We used archived tissue samples from golden eagles (Aquila chrysaetos canadensis) in three geographic regions of western North America to conduct a preliminary study of the genetics of the North American subspecies, and to provide data for United States Fish and Wildlife Service (USFWS) decision-making for golden eagle management. We used a combination of mitochondrial DNA (mtDNA) D-loop sequences and 16 nuclear DNA (nDNA) microsatellite loci to investigate the extent of gene flow among our sampling areas in Idaho, California and Alaska and to determine if we could distinguish birds from the different geographic regions based on their genetic profiles. Our results indicate high genetic diversity, low genetic structure and high connectivity. Nuclear DNA Fst values between Idaho and California were low but significantly different from zero (0.026). Bayesian clustering methods indicated a single population, and we were unable to distinguish summer breeding residents from different regions. Results of the mtDNA AMOVA showed that most of the haplotype variation (97%) was within the geographic populations while 3% variation was partitioned among them. One haplotype was common to all three areas. One region-specific haplotype was detected in California and one in Idaho, but additional sampling is required to determine if these haplotypes are unique to those geographic areas or a sampling artifact. We discuss potential sources of the high gene flow for this species including natal and breeding dispersal, floaters, and changes in migratory behavior as a result of environmental factors such as climate change and habitat alteration. Our preliminary findings can help inform the USFWS in development of golden eagle management strategies and provide a basis for additional research into the complex dynamics of the North American subspecies.

  18. [Neural activity related to emotional and empathic deficits in subjects with post-traumatic stress disorder who survived the L'Aquila (Central Italy) 2009 earthquake].

    Science.gov (United States)

    Mazza, Monica; Pino, Maria Chiara; Tempesta, Daniela; Catalucci, Alessia; Masciocchi, Carlo; Ferrara, Michele

    2016-01-01

    Post-Traumatic Stress Disorder (PTSD) is a chronic anxiety disorder. The continued efforts to control the distressing memories by traumatized individuals, together with the reduction of responsiveness to the outside world, are called Emotional Numbing (EN). The EN is one of the central symptoms in PTSD and it plays an integral role not only in the development and maintenance of post-traumatic symptomatology, but also in the disability of emotional regulation. This disorder shows an abnormal response of cortical and limbic regions which are normally involved in understanding emotions since the very earliest stages of the development of processing ability. Patients with PTSD exhibit exaggerated brain responses to emotionally negative stimuli. Identifying the neural correlates of emotion regulation in these subjects is important for elucidating the neural circuitry involved in emotional and empathic dysfunction. We showed that PTSD patients, all survivors of the L'Aquila 2009 earthquake, have a higher sensitivity to negative emotion and lower empathy levels. These emotional and empathic deficits are accompanied by neural brain functional correlates. Indeed PTSD subjects exhibit functional abnormalities in brain regions that are involved in stress regulation and emotional responses. The reduced activation of the frontal areas and a stronger activation of the limbic areas when responding to emotional stimuli could lead the subjects to enact coping strategies aimed at protecting themselves from the re-experience of pain related to traumatic events. This would result in a dysfunctional hyperactivation of subcortical areas, which may cause emotional distress and, consequently, impaired social relationships often reported by PTSD patients.

  19. SPECTRAL-TIMING ANALYSIS OF THE LOWER kHz QPO IN THE LOW-MASS X-RAY BINARY AQUILA X-1

    Energy Technology Data Exchange (ETDEWEB)

    Troyer, Jon S.; Cackett, Edward M., E-mail: jon.troyer@wayne.edu [Department of Physics and Astronomy, Wayne State University, 666 W. Hancock St, Detroit, MI 48201 (United States)

    2017-01-10

    Spectral-timing products of kilohertz quasi-periodic oscillations (kHz QPOs) in low-mass X-ray binary (LMXB) systems, including energy- and frequency-dependent lags, have been analyzed previously in 4U 1608-52, 4U 1636-53, and 4U 1728-34. Here, we study the spectral-timing properties of the lower kHz QPO of the neutron star LMXB Aquila X-1 for the first time. We compute broadband energy lags as well as energy-dependent lags and the covariance spectrum using data from the Rossi X-ray Timing Explorer. We find characteristics similar to those of previously studied systems, including soft lags of ∼30 μs between the 3.0–8.0 keV and 8.0–20.0 keV energy bands at the average QPO frequency. We also find lags that show a nearly monotonic trend with energy, with the highest-energy photons arriving first. The covariance spectrum of the lower kHz QPO is well fit by a thermal Comptonization model, though we find a seed photon temperature higher than that of the mean spectrum, which was also seen in Peille et al. and indicates the possibility of a composite boundary layer emitting region. Lastly, we see in one set of observations an Fe K component in the covariance spectrum at 2.4-σ confidence, which may raise questions about the role of reverberation in the production of lags.
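
    A cross-spectral lag estimate of this kind can be sketched on synthetic light curves (the QPO frequency, lag value and noise below are invented, not the Aquila X-1 measurements):

```python
# Sketch of a cross-spectral time-lag estimate between two energy bands
# (synthetic light curves; QPO frequency, lag size and noise are invented).
import numpy as np

rng = np.random.default_rng(0)
dt, n, f_qpo, lag = 1 / 8192, 65536, 800.0, 30e-6   # 800 Hz QPO, 30 us soft lag
t = np.arange(n) * dt
hard = np.sin(2 * np.pi * f_qpo * t) + rng.normal(0, 1, n)
soft = np.sin(2 * np.pi * f_qpo * (t - lag)) + rng.normal(0, 1, n)  # soft lags

freq = np.fft.rfftfreq(n, dt)
cross = np.conj(np.fft.rfft(hard)) * np.fft.rfft(soft)  # hard -> soft cross spectrum
band = (freq > 700) & (freq < 900)                      # average around the QPO
phase = np.angle(cross[band].sum())
print("lag [us]:", 1e6 * phase / (2 * np.pi * f_qpo))   # ~ -30: soft arrives later
```

    The negative sign of the recovered value indicates that the soft photons arrive later, i.e., a soft lag in the convention used here.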

  20. Nuclear and mitochondrial DNA analyses of golden eagles (Aquila chrysaetos canadensis) from three areas in western North America; initial results and conservation implications

    Science.gov (United States)

    Craig, Erica H; Adams, Jennifer R.; Waits, Lisette P.; Fuller, Mark R.; Whittington, Diana M.

    2016-01-01

    Understanding the genetics of a population is a critical component of developing conservation strategies. We used archived tissue samples from golden eagles (Aquila chrysaetos canadensis) in three geographic regions of western North America to conduct a preliminary study of the genetics of the North American subspecies, and to provide data for United States Fish and Wildlife Service (USFWS) decision-making for golden eagle management. We used a combination of mitochondrial DNA (mtDNA) D-loop sequences and 16 nuclear DNA (nDNA) microsatellite loci to investigate the extent of gene flow among our sampling areas in Idaho, California and Alaska and to determine if we could distinguish birds from the different geographic regions based on their genetic profiles. Our results indicate high genetic diversity, low genetic structure and high connectivity. Nuclear DNA Fst values between Idaho and California were low but significantly different from zero (0.026). Bayesian clustering methods indicated a single population, and we were unable to distinguish summer breeding residents from different regions. Results of the mtDNA AMOVA showed that most of the haplotype variation (97%) was within the geographic populations while 3% variation was partitioned among them. One haplotype was common to all three areas. One region-specific haplotype was detected in California and one in Idaho, but additional sampling is required to determine if these haplotypes are unique to those geographic areas or a sampling artifact. We discuss potential sources of the high gene flow for this species including natal and breeding dispersal, floaters, and changes in migratory behavior as a result of environmental factors such as climate change and habitat alteration. Our preliminary findings can help inform the USFWS in development of golden eagle management strategies and provide a basis for additional research into the complex dynamics of the North American subspecies.

  1. Effects of three-dimensional crustal structure and smoothing constraint on earthquake slip inversions: Case study of the Mw6.3 2009 L'Aquila earthquake

    KAUST Repository

    Gallovič, František

    2015-01-01

    Earthquake slip inversions aiming to retrieve kinematic rupture characteristics typically assume 1-D velocity models and a flat Earth surface. However, the heterogeneous nature of the crust and the presence of rough topography lead to seismic scattering and other wave propagation phenomena, introducing complex 3-D effects on ground motions. Here we investigate how the use of imprecise Green's functions - achieved by including 3-D velocity perturbations and topography - affects slip-inversion results. We create sets of synthetic seismograms, including 3-D heterogeneous Earth structure and topography, and then invert these synthetics using Green's functions computed for a horizontally layered 1-D Earth model. We apply a linear inversion, regularized by smoothing and a positivity constraint, and examine in detail how smoothing effects perturb the solution. Among others, our tests and resolution analyses demonstrate how imprecise Green's functions introduce artificial slip rate multiples, especially at shallow depths, and that the timing of the peak slip rate is hardly affected by the chosen smoothing. The investigation is extended to recordings of the 2009 Mw6.3 L'Aquila earthquake, considering both strong motion and high-rate GPS stations. We interpret the inversion results taking into account the lessons learned from the synthetic tests. The retrieved slip model resembles previously published solutions using geodetic data, showing a large-slip asperity southeast of the hypocenter. In agreement with other studies, we find evidence for fast but subshear rupture propagation in the updip direction, followed by a delayed propagation along strike. We conjecture that rupture was partially inhibited by a deep localized velocity-strengthening patch that subsequently experienced afterslip.

  2. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Moore, J.G.

    1974-01-01

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  3. Monte Carlo lattice program KIM

    International Nuclear Information System (INIS)

    Cupini, E.; De Matteis, A.; Simonini, R.

    1980-01-01

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed

  4. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  5. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  6. Fuzzy fault diagnosis system of MCFC

    Institute of Scientific and Technical Information of China (English)

    Wang Zhenlei; Qian Feng; Cao Guangyi

    2005-01-01

    A fault diagnosis system for a molten carbonate fuel cell (MCFC) stack is proposed in this paper. It is composed of a fuzzy neural network (FNN) and a fault diagnosis element. The FNN is able to deal with expert knowledge and experimental data efficiently. It also has the ability to approximate any smooth system. The FNN is used to identify the fault diagnosis model of the MCFC stack. The fuzzy fault decision element can diagnose the state of the MCFC generating system, normal or faulty, and can decide the type of fault based on the outputs of the FNN model and the MCFC system. Some simulation experiment results are demonstrated in this paper.

  7. Fault diagnosis based on controller modification

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2015-01-01

    Detection and isolation of parametric faults in closed-loop systems will be considered in this paper. A major problem is that a feedback controller will in general reduce the effects of variations in the system, including parametric faults, on the controlled output. Parametric faults can be detected and isolated using active methods, where an auxiliary input is applied. Using active methods for the diagnosis of parametric faults in closed-loop systems, the amplitude of the applied auxiliary input needs to be increased to be able to detect and isolate the faults in a reasonable time. Using the YJBK-parameterization (after Youla, Jabr, Bongiorno and Kucera) for the controller, it is possible to modify the feedback controller with a minor effect on the closed-loop performance in the fault-free case and at the same time optimize the detection and isolation in a faulty case. Controller modification in connection

  8. A setup for active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2006-01-01

    A setup for active fault diagnosis (AFD) of parametric faults in dynamic systems is formulated in this paper. It is shown that it is possible to use the same setup for open loop systems, closed loop systems based on a nominal feedback controller, as well as closed loop systems based on a reconfigured feedback controller. This will make the proposed AFD approach very useful in connection with fault tolerant control (FTC). The setup will make it possible to let the fault diagnosis part of the fault tolerant controller remain unchanged after a change in the feedback controller. The setup for AFD is based on the YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all stabilizing feedback controllers and the dual YJBK parameterization. It is shown that the AFD is based directly on the dual YJBK transfer function matrix. This matrix will be named the fault signature matrix when

  9. Fault displacement along the Naruto-South fault, the Median Tectonic Line active fault system in the eastern part of Shikoku, southwestern Japan

    OpenAIRE

    高田, 圭太; 中田, 高; 後藤, 秀昭; 岡田, 篤正; 原口, 強; 松木, 宏彰

    1998-01-01

    The Naruto-South fault is situated about 1000 m south of the Naruto fault, part of the Median Tectonic Line active fault system in the eastern part of Shikoku. We investigated the fault topography and subsurface geology of this fault by interpreting large-scale aerial photographs, collecting borehole data and conducting a Geo-Slicer survey. The results obtained are as follows: 1) The Naruto-South fault runs across the Yoshino River deltaic plain for at least 2.5 km with a fault scarplet; the Naruto-South fault is o...

  10. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
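
    The conditional, elastic-rebound-style probability at the heart of such calculations is easy to reproduce by Monte Carlo. The sketch below assumes a lognormal renewal model with illustrative parameters (not values from the paper):

```python
# Sketch of an elastic-rebound (renewal) conditional rupture probability
# estimated by Monte Carlo from a lognormal recurrence model (parameters
# are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
mean_ri, aperiodicity = 150.0, 0.5       # mean recurrence [yr], CoV
t_open, dT = 120.0, 30.0                 # years since last event, forecast window

sigma = np.sqrt(np.log(1 + aperiodicity ** 2))
mu = np.log(mean_ri) - 0.5 * sigma ** 2
ri = rng.lognormal(mu, sigma, 1_000_000)       # simulated recurrence intervals

survivors = ri > t_open                        # condition on no event so far
p = np.mean(ri[survivors] <= t_open + dT)      # P(event in next dT | survived)
print(f"30-yr conditional probability: {p:.3f}")
```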

  11. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc.) to a Monte Carlo input file (iii). A protocol

  12. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or 'tool of last resort' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  13. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will make it possible to tackle imaging and dosimetry issues simultaneously and, soon, case-specific Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)

  14. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  15. Fault Diagnosis in Deaerator Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    S Srinivasan

    2007-01-01

    In this paper a fuzzy logic based fault diagnosis system for a deaerator in a power plant unit is presented. The system parameters are obtained using the linearised state space deaerator model. The fuzzy inference system is created and rule bases are evaluated relating the parameters to the type and severity of the faults. These rules are fired for specific changes in system parameters and the faults are diagnosed.
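
    A toy version of the rule-firing step follows (the memberships and the single rule are invented for illustration; the paper derives its rules from the linearised deaerator model):

```python
# Toy sketch of fuzzy rule firing: triangular memberships on parameter
# deviations and min-inference to a fault severity (all values invented).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership peaking at b, zero outside [a, c]."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

level_dev, press_dev = 0.4, 0.7                 # normalized parameter deviations
mu_level = tri(level_dev, 0.0, 0.5, 1.0)        # "level deviation is medium"
mu_press = tri(press_dev, 0.2, 0.8, 1.4)        # "pressure deviation is high"

# Rule: IF level deviation is medium AND pressure deviation is high
#       THEN severity of a (hypothetical) feed-valve fault = min of memberships.
print("valve fault severity:", round(min(mu_level, mu_press), 2))
```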

  16. Qademah Fault Seismic Data Set - Northern Part

    KAUST Repository

    Hanafy, Sherif M.

    2015-01-01

    Objective: Is the Qademah fault that was detected in 2010 the main fault? We collected a long 2D profile, 526 m, where the fault detected in 2010 lies at around 300 m. Layout: We collected 264 CSGs, each with 264 receivers. The shot and receiver interval is 2 m. We also collected an extra 48 CSGs at offsets of 528 to 622 m with a shot interval of 2 m. The receivers are the same as in the main survey.

  17. Distributed bearing fault diagnosis based on vibration analysis

    Science.gov (United States)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
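
    The envelope analysis underlying the comparison of spectra can be sketched as follows (synthetic amplitude-modulated signal; the fault and resonance frequencies are assumptions):

```python
# Sketch of the envelope-spectrum computation used to expose bearing fault
# signatures (synthetic signal; frequencies are illustrative assumptions).
import numpy as np
from scipy.signal import hilbert

fs, f_fault, f_res = 20_000, 87.0, 3_000     # sampling, fault and resonance [Hz]
t = np.arange(0, 1.0, 1 / fs)
# Impacts at f_fault amplitude-modulating a structural resonance at f_res
x = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_res * t)
x += 0.3 * np.random.default_rng(0).normal(size=t.size)

envelope = np.abs(hilbert(x))                # demodulation via Hilbert transform
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("envelope-spectrum peak at", freqs[spectrum.argmax()], "Hz")  # ~ 87 Hz
```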

  18. Identifiability of Additive Actuator and Sensor Faults by State Augmentation

    Science.gov (United States)

    Joshi, Suresh; Gonzalez, Oscar R.; Upchurch, Jason M.

    2014-01-01

    A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.
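
    The state-augmentation idea for bias-type sensor faults can be sketched with a scalar plant (all model numbers are illustrative; the paper's setting uses banks of Kalman-Bucy filters rather than this single discrete-time filter):

```python
# Sketch of bias-fault estimation by state augmentation (scalar plant with
# one biased sensor; all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
a, c, q, r = 0.9, 1.0, 0.01, 0.04
A = np.array([[a, 0.0], [0.0, 1.0]])   # augmented state: [x, sensor bias]
C = np.array([[c, 1.0]])               # the bias adds directly to the reading
Q = np.diag([q, 1e-3])                 # small random walk keeps the bias adaptive

x_true, b = 0.0, 0.0
xa, P = np.zeros(2), np.eye(2)
for k in range(500):
    if k == 250:
        b = 0.7                        # bias fault appears
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = c * x_true + b + rng.normal(0, np.sqrt(r))
    xa, P = A @ xa, A @ P @ A.T + Q    # Kalman filter on the augmented model
    S = (C @ P @ C.T).item() + r
    K = (P @ C.T) / S
    xa = xa + (K * (y - (C @ xa).item())).ravel()
    P = (np.eye(2) - K @ C) @ P
print("estimated bias:", round(float(xa[1]), 2))   # close to the injected 0.7
```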

  19. A Design Method for Fault Reconfiguration and Fault-Tolerant Control of a Servo Motor

    Directory of Open Access Journals (Sweden)

    Jing He

    2013-01-01

    A design scheme that integrates fault reconfiguration and fault-tolerant position control is proposed for a nonlinear servo system with friction. Analysis of the nonlinear friction torque and faults in the system is used to guide the design of a sliding mode position controller. A sliding mode observer is designed to achieve fault reconfiguration based on the equivalence principle. Thus, active fault-tolerant position control of the system can be realized. A real-time simulation experiment is performed on a hardware-in-the-loop simulation platform. The results show that the system reconfigures well for both incipient and abrupt faults. Under the fault-tolerant control mechanism, the output signal for the system position can rapidly track given values without being influenced by faults.

  20. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and the weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.
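
    The weighted pseudo-inverse allocation step can be sketched directly (the thruster configuration matrix, demanded forces and FDI-derived weights below are invented for illustration):

```python
# Sketch of weighted pseudo-inverse control allocation: redistribute the
# demanded forces over healthy thrusters (matrices are illustrative).
import numpy as np

B = np.array([[1.0, 1.0, 0.0, 0.0],     # thruster configuration: surge
              [0.0, 0.0, 1.0, 1.0],     # sway
              [0.5, -0.5, 0.3, -0.3]])  # yaw moment
tau = np.array([1.0, 0.5, 0.1])         # demanded generalized forces

w = np.array([1.0, 0.2, 1.0, 1.0])      # thruster 2 degraded to 20% (from FDI)
W = np.diag(w)
# Weighted pseudo-inverse: u = W B^T (B W B^T)^(-1) tau
u = W @ B.T @ np.linalg.solve(B @ W @ B.T, tau)
print(np.round(u, 3), "residual:", np.round(B @ u - tau, 6))  # residual ~ 0
```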

  1. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    Science.gov (United States)

    Ding, Bo; Fang, Huajing

    2017-05-01

    This paper is concerned with fault prediction for nonlinear stochastic systems with incipient faults. Based on the particle filter and a reasonable assumption about the incipient faults, a modified fault estimation algorithm is proposed, and the system state is estimated simultaneously. According to the modified fault estimation, an intuitive fault detection strategy is introduced. Once an incipient fault is detected, its parameters are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by simulations of the three-tank system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
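
    A minimal bootstrap particle filter with joint state and fault-magnitude estimation, in the spirit of the approach described, is sketched below (1-D toy model; the incipient fault is modeled as a slow ramp and every tuning value is an assumption):

```python
# Minimal bootstrap particle filter for joint state/fault estimation
# (1-D toy model; all tuning values are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
Np, q, r = 2000, 0.05, 0.1
xs = rng.normal(0.0, 1.0, Np)              # state particles
th = np.zeros(Np)                          # fault-magnitude particles
x_true = 0.0
for k in range(300):
    fault = 0.01 * max(k - 150, 0)         # incipient fault grows from k = 150
    x_true = 0.5 * x_true + np.sin(0.1 * k) + fault + rng.normal(0, np.sqrt(q))
    y = x_true + rng.normal(0, np.sqrt(r))
    th = th + rng.normal(0.0, 0.02, Np)    # fault modeled as a slow random walk
    xs = 0.5 * xs + np.sin(0.1 * k) + th + rng.normal(0, np.sqrt(q), Np)
    w = np.exp(-0.5 * (y - xs) ** 2 / r)   # measurement likelihood
    w /= w.sum()
    idx = rng.choice(Np, Np, p=w)          # multinomial resampling
    xs, th = xs[idx], th[idx]
print("estimated fault magnitude:", round(th.mean(), 2))  # tracks the ~1.5 ramp
```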

  2. Integrating cyber attacks within fault trees

    International Nuclear Information System (INIS)

    Nai Fovino, Igor; Masera, Marcelo; De Cian, Alessio

    2009-01-01

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
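
    For independent events, the combined evaluation of random failures and attack leaves reduces to AND/OR gate algebra over probabilities. A toy sketch (event probabilities are invented; real attack trees would use richer attacker models):

```python
# Toy evaluation of a top event over a tree mixing random failures and
# attack leaves (probabilities are invented; independence is assumed).
def p_and(*ps):                  # all children must occur
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):                   # at least one child occurs
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

pump_fails   = 1e-3                       # random hardware failure
sensor_fails = 5e-4                       # random instrumentation failure
intrusion    = p_or(1e-2, 2e-3)           # attack leaves: phishing OR open port
spoofing     = p_and(intrusion, 0.5)      # spoofing succeeds given access

top = p_or(p_and(pump_fails, sensor_fails), spoofing)
print(f"top event probability: {top:.3e}")
```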

  3. Fault tolerant control design for hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hao; Jiang, Bin [Nanjing University of Aeronautics and Astronautics, Nanjing (China); Cocquempot, Vincent [Universite des Sciences et Technologies de Lille, Villeneuve d' Ascq (France)

    2010-07-01

    This book intends to provide the reader with a good understanding of how to achieve the fault tolerant control goal for hybrid systems. The book can be used as a reference for academic research on fault tolerant control and hybrid systems, or in Ph.D. study of control theory and engineering. The knowledge background for this monograph would be some undergraduate and graduate courses on fault diagnosis and fault tolerant control theory, linear system theory, nonlinear system theory, hybrid systems theory and discrete event system theory. (orig.)

  4. Soil radon levels across the Amer fault

    International Nuclear Information System (INIS)

    Font, Ll.; Baixeras, C.; Moreno, V.; Bach, J.

    2008-01-01

    Soil radon levels have been measured across the Amer fault, which is located near the volcanic region of La Garrotxa, Spain. Both passive (LR-115, time-integrating) and active (Clipperton II, time-resolved) detectors have been used in a survey in which 27 measurement points were selected in five lines perpendicular to the Amer fault in the village area of Amer. The averaged results show an influence of the distance to the fault on the mean soil radon values. The dynamic results show a very clear seasonal effect on the soil radon levels. The results obtained support the hypothesis that the fault is still active

  5. Integrating cyber attacks within fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Nai Fovino, Igor [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy)], E-mail: igor.nai@jrc.it; Masera, Marcelo [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy); De Cian, Alessio [Department of Electrical Engineering, University di Genova, Genoa (Italy)

    2009-09-15

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.

  6. Fault detection and fault-tolerant control for nonlinear systems

    CERN Document Server

    Li, Linlin

    2016-01-01

    Linlin Li addresses the analysis and design issues of observer-based FD and FTC for nonlinear systems. The author analyses the existence conditions for the nonlinear observer-based FD systems to gain a deeper insight into the construction of FD systems. Aided by the T-S fuzzy technique, she recommends different design schemes, among them the L_inf/L_2 type of FD systems. The derived FD and FTC approaches are verified by two benchmark processes. Contents: Overview of FD and FTC Technology; Configuration of Nonlinear Observer-Based FD Systems; Design of L2 Nonlinear Observer-Based FD Systems; Design of Weighted Fuzzy Observer-Based FD Systems; FTC Configurations for Nonlinear Systems; Application to Benchmark Processes. Target Groups: Researchers and students in the field of engineering with a focus on fault diagnosis and fault-tolerant control. The Author: Dr. Linlin Li completed her dissertation under the supervision of Prof. Steven X. Ding at the Faculty of Engineering, University of Duisburg-Essen, Germany...

  7. Study of fault diagnosis software design for complex system based on fault tree

    International Nuclear Information System (INIS)

    Yuan Run; Li Yazhou; Wang Jianye; Hu Liqin; Wang Jiaqun; Wu Yican

    2012-01-01

    Complex systems always have high reliability and safety requirements, and so does their diagnosis. As a great number of fault tree models are acquired during the design and operation phases, a fault diagnosis method combining fault tree analysis with knowledge-based technology has been proposed. The prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)

  8. The morphology of strike-slip faults - Examples from the San Andreas Fault, California

    Science.gov (United States)

    Bilham, Roger; King, Geoffrey

    1989-01-01

    The dilatational strains associated with vertical faults embedded in a horizontal plate are examined in the framework of fault kinematics and simple displacement boundary conditions. Using boundary element methods, a sequence of examples of dilatational strain fields associated with commonly occurring strike-slip fault zone features (bends, offsets, finite rupture lengths, and nonuniform slip distributions) is derived. The combinations of these strain fields are then used to examine the Parkfield region of the San Andreas fault system in central California.

  9. Qademah Fault 3D Survey

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    Objective: Collect 3D seismic data at the Qademah Fault location for: 1. 3D traveltime tomography; 2. 3D surface-wave migration; 3. 3D phase velocity; 4. Possible reflection processing. Acquisition Date: 26-28 September 2014. Acquisition Team: Sherif, Kai, Mrinal, Bowen, Ahmed. Acquisition Layout: We used 288 receivers arranged in 12 parallel lines; each line has 24 receivers. The inline offset is 5 m and the crossline offset is 10 m. One shot was fired at each receiver location. We used a 40 kg weight drop as the seismic source, with 8 to 15 stacks at each shot location.
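    For readers who want to reproduce the geometry, the sketch below generates the receiver coordinates implied by the description (12 parallel lines of 24 receivers, 5 m inline and 10 m crossline offsets, in a local grid with an arbitrary origin); it is an illustration, not part of the original record.

    ```python
    import numpy as np

    n_lines, n_per_line = 12, 24
    inline_dx, crossline_dy = 5.0, 10.0         # metres

    x = np.arange(n_per_line) * inline_dx       # receiver positions along a line
    y = np.arange(n_lines) * crossline_dy       # line positions
    rx, ry = np.meshgrid(x, y)
    receivers = np.column_stack([rx.ravel(), ry.ravel()])

    print(receivers.shape)                      # (288, 2): one shot per station
    ```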

  10. Fault-ignorant quantum search

    International Nuclear Information System (INIS)

    Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M

    2014-01-01

    We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)

  11. Fault Detection/Isolation Verification,

    Science.gov (United States)

    1982-08-01

    (Abstract illegible in the source scan; the recoverable fragments indicate that a fault detection/isolation algorithm was run on several networks, with different fault scenarios designed for each network, to test its performance.)

  12. Passive Fault-tolerant Control of Discrete-time Piecewise Affine Systems against Actuator Faults

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Izadi-Zamanabadi, Roozbeh; Bak, Thomas

    2012-01-01

    In this paper, we propose a new method for passive fault-tolerant control of discrete-time piecewise affine systems. Actuator faults are considered. A reliable piecewise linear quadratic regulator (LQR) state feedback is designed such that it can tolerate actuator faults. A sufficient condition f...... is illustrated on a numerical example and a two-degree-of-freedom helicopter....

  13. Rectifier Fault Diagnosis and Fault Tolerance of a Doubly Fed Brushless Starter Generator

    Directory of Open Access Journals (Sweden)

    Liwei Shi

    2015-01-01

    This paper presents a rectifier fault diagnosis method based on wavelet packet analysis to improve the reliability of the fault-tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and the fault-tolerant principle of the highly reliable DFBLSG are given, and the common faults of the rectifier are analyzed. The wavelet packet transform fault detection/identification algorithm is introduced in detail. Fault-tolerant performance and output-voltage experiments were carried out to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, an energy-eigenvalue tolerance was introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault-tolerant four-phase DFBLSG can detect the usual open-circuit fault and operate in fault-tolerant mode if a fault is present. The results indicate that the fault analysis techniques in this paper are accurate and effective.
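    The band-energy feature described above is easy to sketch with the PyWavelets package. The wavelet choice ('db4'), the synthetic test signal, the healthy reference, and the tolerance below are all illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    import pywt

    fs = 10_000.0
    t = np.arange(0, 0.2, 1 / fs)
    signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)

    # 5-layer wavelet packet decomposition -> 2**5 = 32 frequency bands
    wp = pywt.WaveletPacket(data=signal, wavelet='db4', maxlevel=5)
    bands = wp.get_level(5, order='freq')

    energies = np.array([np.sum(node.data ** 2) for node in bands])
    features = energies / energies.sum()        # normalized energy eigenvalues

    # Flag bands whose energy deviates from a healthy reference by more than
    # a tolerance (both are placeholders here).
    reference = np.full(len(features), 1 / len(features))
    tolerance = 0.05
    print(np.where(np.abs(features - reference) > tolerance)[0])
    ```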

  14. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    Science.gov (United States)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal mode principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme in each vehicle reduces the communication and computation load by only using the information of that vehicle. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
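    The COR controller above rests on solving a discrete-time algebraic Riccati equation. A minimal sketch with SciPy is shown below, using an assumed double-integrator vehicle model; the paper's actual system matrices and weights are not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])       # assumed vehicle dynamics
    B = np.array([[0.5 * dt ** 2], [dt]])
    Q = np.diag([1.0, 0.1])                     # state weighting
    R = np.array([[0.01]])                      # input weighting

    P = solve_discrete_are(A, B, Q, R)          # stabilizing DARE solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR-type feedback gain
    print(K)
    ```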

  15. Fault-tolerant reference generation for model predictive control with active diagnosis of elevator jamming faults

    NARCIS (Netherlands)

    Ferranti, L.; Wan, Y.; Keviczky, T.

    2018-01-01

    This paper focuses on the longitudinal control of an Airbus passenger aircraft in the presence of elevator jamming faults. In particular, in this paper, we address permanent and temporary actuator jamming faults using a novel reconfigurable fault-tolerant predictive control design. Due to their

  16. FSN-based fault modelling for fault detection and troubleshooting in CANDU stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, E., E-mail: elnara.nasimi@brucepower.com [Bruce Power LLP., Tiverton, Ontario(Canada); Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)

    2013-07-01

    An accurate fault modeling and troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities of the current and future generations of CANDU designs. This paper presents a fault modeling approach using the Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

  17. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  18. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    Science.gov (United States)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.

  19. Failure mode analysis using state variables derived from fault trees with application

    International Nuclear Information System (INIS)

    Bartholomew, R.J.

    1982-01-01

    Fault Tree Analysis (FTA) is used extensively to assess both the qualitative and quantitative reliability of engineered nuclear power systems employing many subsystems and components. FTA is very useful, but the method is limited by its inability to account for rate-of-change interdependencies (coupling) between otherwise statistically independent failure modes. The state variable approach (using FTA-derived failure modes as states) overcomes these difficulties and is applied to the determination of the lifetime distribution function for a heat-pipe/thermoelectric nuclear power subsystem. Analyses are made using both Monte Carlo and deterministic methods and compared with a Markov model of the same subsystem.
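    The coupling idea, where the occurrence of one failure mode changes the rate of another, can be illustrated with a toy Monte Carlo lifetime simulation. The rates and the coupling factor below are assumptions for illustration, not the heat-pipe subsystem data from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lam_a, lam_b, coupling = 1e-4, 5e-5, 4.0    # per-hour rates (assumed)

    def sample_lifetime():
        """System fails when mode B occurs; mode A accelerates B's rate."""
        t_a = rng.exponential(1 / lam_a)        # time of mode-A degradation
        t_b = rng.exponential(1 / lam_b)        # mode-B time at nominal rate
        if t_b <= t_a:
            return t_b                          # B occurred before any coupling
        # After A occurs, B's remaining time is redrawn at the accelerated
        # rate (valid by the memoryless property of the exponential).
        return t_a + rng.exponential(1 / (coupling * lam_b))

    lifetimes = np.array([sample_lifetime() for _ in range(100_000)])
    print(lifetimes.mean(), np.quantile(lifetimes, 0.1))
    ```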

  20. Fault Management Techniques in Human Spaceflight Operations

    Science.gov (United States)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in the current US human spaceflight programs, Space Shuttle and International Space Station, are described, while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes, along with products used to anticipate, mitigate and respond to failures, are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, the skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  1. Effects of Fault Displacement on Emplacement Drifts

    International Nuclear Information System (INIS)

    Duan, F.

    2000-01-01

    The purpose of this analysis is to evaluate potential effects of fault displacement on emplacement drifts, including drip shields and waste packages emplaced in emplacement drifts. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and the Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed here bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10^-5 adopted for the preclosure period of the repository, and also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing effects of fault displacement on emplacement drifts (CRWMS M&O 2000). The analysis begins with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements encountered in underground structures such as buried oil and gas pipelines is conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the analysis results, conclusions are made regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis discusses loads which can be induced by fault displacement on emplacement drifts, drip shields and/or waste packages during the postclosure period.

  2. Computer aided fault tree synthesis

    International Nuclear Information System (INIS)

    Poucet, A.

    1983-01-01

    Nuclear as well as non-nuclear organisations have shown a growing interest in the field of reliability analysis during the past few years. This calls for the development of powerful, state-of-the-art methods and computer codes for performing such analysis on complex systems. In this report an interactive, computer-aided approach is discussed, based on the well-known fault tree technique. The time-consuming and difficult task of manually constructing a system model (one or more fault trees) is replaced by an efficient interactive procedure in which the flexibility and the learning process inherent to the manual approach are combined with the accuracy in modelling and the speed of the fully automatic approach. The method presented is based upon the use of a library containing component models. The possibility of setting up a standard library of models of general use and the link with a data collection system are discussed. The method has been implemented in the CAFTS-SALP software package, which is described briefly in the report.

  3. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    Science.gov (United States)

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  4. Methods for recognition and segmentation of active fault

    International Nuclear Information System (INIS)

    Hyun, Chang Hun; Noh, Myung Hyun; Lee, Kieh Hwa; Chang, Tae Woo; Kyung, Jai Bok; Kim, Ki Young

    2000-03-01

    In order to identify and segment active faults, the literature of structural geology, paleoseismology, and geophysical exploration was investigated. The existing structural geological criteria for segmenting active faults were examined. These are mostly based on normal fault systems; thus, additional criteria are needed for application to different types of fault systems. The definition of the seismogenic fault, the characteristics of fault activity, the criteria and results of fault segmentation studies, the relationship between segmented fault length and maximum displacement, and the estimation of the seismic risk of segmented faults were examined in the paleoseismic studies. The history of earthquakes, such as the dynamic pattern of faults, the return period, and the magnitude of the maximum earthquake generated by fault activity, can be revealed by such studies. It is confirmed through various case studies that numerous geophysical exploration methods, including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys, have been efficiently applied to the recognition and segmentation of active faults.

  5. Density of oxidation-induced stacking faults in damaged silicon

    NARCIS (Netherlands)

    Kuper, F.G.; Hosson, J.Th.M. De; Verwey, J.F.

    1986-01-01

    A model for the relation between density and length of oxidation-induced stacking faults on damaged silicon surfaces is proposed, based on interactions of stacking faults with dislocations and neighboring stacking faults. The model agrees with experiments.

  6. A Fault-tolerant RISC Microprocessor for Spacecraft Applications

    Science.gov (United States)

    Timoc, Constantin; Benz, Harry

    1990-01-01

    Viewgraphs on a fault-tolerant RISC microprocessor for spacecraft applications are presented. Topics covered include: reduced instruction set computer; fault tolerant registers; fault tolerant ALU; and double rail CMOS logic.

  7. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be completely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, covering the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to have a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.), in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether and at what stage the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  8. Fault-Tolerant Approach for Modular Multilevel Converters under Submodule Faults

    DEFF Research Database (Denmark)

    Deng, Fujin; Tian, Yanjun; Zhu, Rongwu

    2016-01-01

    The modular multilevel converter (MMC) is attractive for medium- or high-power applications because of the advantages of its high modularity, availability, and high power quality. Fault-tolerant operation is one of the important issues for the MMC. This paper proposes a fault-tolerant approach...... for the MMC under submodule (SM) faults. The characteristic of the MMC with arms containing different numbers of healthy SMs under faults is analyzed. Based on this characteristic, the proposed approach can effectively keep the MMC operating as normal under SM faults. It can effectively improve the MMC...

  9. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  10. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  11. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  12. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  13. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  14. Monte Carlo Codes Invited Session

    International Nuclear Information System (INIS)

    Trama, J.C.; Malvagi, F.; Brown, F.

    2013-01-01

    This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay

  15. Advanced computers and Monte Carlo

    International Nuclear Information System (INIS)

    Jordan, T.L.

    1979-01-01

    High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables

  16. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and the uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the non-saline soil scenario, the layer thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to the field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
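    The adaptation step of such a sampler is compact enough to sketch. The following Haario-style adaptive Metropolis sampler targets a simple 2-D Gaussian stand-in for the EMI posterior; the electromagnetic forward model and field data from the study are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_post(theta):                        # assumed toy posterior
        mu = np.array([1.0, -2.0])
        var = np.array([1.0, 0.25])
        return -0.5 * np.sum((theta - mu) ** 2 / var)

    n, d = 20_000, 2
    chain = np.zeros((n, d))
    lp = log_post(chain[0])
    cov = 0.1 * np.eye(d)                       # initial proposal covariance
    scale = 2.38 ** 2 / d                       # standard adaptive scaling

    for i in range(1, n):
        prop = rng.multivariate_normal(chain[i - 1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp: # Metropolis accept/reject
            chain[i], lp = prop, lp_prop
        else:
            chain[i] = chain[i - 1]
        if i > 500 and i % 100 == 0:            # periodically adapt the proposal
            cov = scale * (np.cov(chain[:i].T) + 1e-6 * np.eye(d))

    print(chain[5_000:].mean(axis=0))           # posterior mean estimate
    ```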

  17. The Curiosity Mars Rover's Fault Protection Engine

    Science.gov (United States)

    Benowitz, Ed

    2014-01-01

    The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design, responsibilities, and present some lessons learned for future missions.
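    The monitor/response orchestration pattern can be sketched in a few lines. The monitor names, thresholds, and responses below are invented for illustration and do not reflect the actual Curiosity flight software.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Monitor:
        name: str
        check: Callable[[dict], bool]           # True when a fault is observed
        response: str                           # mapped response name
        priority: int                           # lower value = more urgent

    def run_engine(monitors, telemetry):
        """Evaluate all monitors; return the most urgent triggered response."""
        tripped = [m for m in monitors if m.check(telemetry)]
        if not tripped:
            return None
        return min(tripped, key=lambda m: m.priority).response

    monitors = [
        Monitor("battery_undervoltage", lambda t: t["bus_v"] < 24.0, "safe_mode", 0),
        Monitor("motor_overcurrent", lambda t: t["motor_a"] > 5.0, "halt_drive", 1),
    ]
    print(run_engine(monitors, {"bus_v": 27.1, "motor_a": 6.2}))  # halt_drive
    ```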

  18. A Game Theoretic Fault Detection Filter

    Science.gov (United States)

    Chung, Walter H.; Speyer, Jason L.

    1995-01-01

    The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well-known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower-dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.

  19. Modular representation and analysis of fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Olmos, J; Wolf, L [Massachusetts Inst. of Tech., Cambridge (USA). Dept. of Nuclear Engineering

    1978-08-01

    An analytical method to describe fault tree diagrams in terms of their modular composition is developed. Fault tree structures are characterized by recursively relating the top tree event to all its basic component inputs through a set of equations defining each of the modules of the fault tree. It is shown that such a modular description is an extremely valuable tool for making a quantitative analysis of fault trees. The modularization methodology has been implemented in the PL-MOD computer code, written in the PL/1 language, which is capable of modularizing fault trees containing replicated components and replicated modular gates. PL-MOD can in addition handle mutually exclusive inputs and explicit higher-order symmetric (k-out-of-n) gates. The step-by-step modularization of fault trees performed by PL-MOD is demonstrated, and it is shown how this procedure is only made possible through an extensive use of the list-processing tools available in PL/1. A number of nuclear reactor safety system fault trees were analyzed. PL-MOD performed the modularization and the evaluation of the modular occurrence probabilities and Vesely-Fussell importance measures for these systems very efficiently. In particular, its execution time for the modularization of a PWR High Pressure Injection System reduced fault tree was 25 times shorter than that necessary to generate its equivalent minimal cut-set description using MOCUS, a code considered fast by present standards.

  20. Fault tolerancy in cooperative adaptive cruise control

    NARCIS (Netherlands)

    Nunen, E. van; Ploeg, J.; Medina, A.M.; Nijmeijer, H.

    2013-01-01

    Future mobility requires sound solutions in the field of fault tolerance in real-time applications, amongst which Cooperative Adaptive Cruise Control (CACC). This control system cannot rely on the driver as a backup and is constantly active, and it is therefore more exposed to the occurrence of faults.

  1. Training for Skill in Fault Diagnosis

    Science.gov (United States)

    Turner, J. D.

    1974-01-01

    The Knitting, Lace and Net Industry Training Board has developed a training innovation called fault diagnosis training. The entire training process concentrates on teaching based on the experiences of troubleshooters or any other employees whose main tasks involve fault diagnosis and rectification. (Author/DS)

  2. Dynamic modeling of gearbox faults: A review

    Science.gov (United States)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, and consequently results in safer operation and higher cost savings. Recently, many studies have been done to develop gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
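    A recurring building block in this literature is the time-varying mesh stiffness. The sketch below shows the common rectangular-wave approximation (alternating single/double tooth contact), with a cracked tooth modeled as a localized stiffness drop once per revolution; all numbers are illustrative assumptions, not values from the review.

    ```python
    import numpy as np

    def mesh_stiffness(t, f_mesh=500.0, k_min=0.8e8, k_max=1.2e8,
                       fault_depth=0.3, f_rev=25.0):
        """Rectangular-wave mesh stiffness with a once-per-revolution fault."""
        phase = (t * f_mesh) % 1.0
        k = np.where(phase < 0.6, k_max, k_min)     # double vs single contact
        in_fault = ((t * f_rev) % 1.0) < 0.05       # cracked tooth engaged
        return k * np.where(in_fault, 1.0 - fault_depth, 1.0)

    t = np.linspace(0.0, 0.2, 20_000)
    k = mesh_stiffness(t)
    print(k.min(), k.max())
    ```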

  3. Detecting Fan Faults in refrigerated Cabinets

    DEFF Research Database (Denmark)

    Thybo, C.; Rasmussen, B.D.; Izadi-Zamanabadi, Roozbeh

    2002-01-01

    Fault detection in supermarket refrigeration systems is an important topic due to both economic and food safety reasons. If faults can be detected and diagnosed before the system drifts outside the specified operational envelope, service costs can be reduced and in extreme cases the costly discar...

  4. Fault Detection for a Diesel Engine Actuator

    DEFF Research Database (Denmark)

    Blanke, M.; Bøgh, S.A.; Jørgensen, R.B.

    1995-01-01

    An electro-mechanical position servo is introduced as a benchmark for model-based Fault Detection and Identification (FDI).

  5. Norm based design of fault detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Hans Henrik

    1999-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered in this paper from a norm based point of view. An analysis of norm based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem as well...

  6. Naive Fault Tree : formulation of the approach

    NARCIS (Netherlands)

    Rajabalinejad, M

    2017-01-01

    Naive Fault Tree (NFT) accepts a single value or a range of values for each basic event and returns values for the top event. This accommodates the need of commonly used Fault Trees (FT) for precise data, which makes them prone to data concerns and limits their area of application. This paper extends

  7. Intermittent resistive faults in digital cmos circuits

    NARCIS (Netherlands)

    Kerkhoff, Hans G.; Ebrahimi, Hassan

    2015-01-01

    A major threat in extremely dependable, high-end process-node integrated systems, e.g. in avionics, is the no-failures-found (NFF) phenomenon. One category of NFFs is the intermittent resistive fault, often originating from bad (e.g. via- or TSV-based) interconnections. This paper will show the impact of these faults.

  8. Engine gearbox fault diagnosis using empirical mode ...

    Indian Academy of Sciences (India)

    Kiran Vernekar

    (Abstract garbled in the source. Recoverable fragments: Department of Mechanical Engineering, National Institute of Technology Karnataka, Surathkal; data acquired with a data acquisition (DAQ) card and analysed using LabVIEW software; the proposed approach is an effective method for engine fault diagnosis.)

  9. Statistical fault detection in photovoltaic systems

    KAUST Repository

    Garoudja, Elyes; Harrou, Fouzi; Sun, Ying; Kara, Kamel; Chouder, Aissa; Silvestre, Santiago

    2017-01-01

    and efficiency. Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility and simplicity of a one-diode model

  10. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

    International Nuclear Information System (INIS)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C.

    2004-01-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis
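    The core spatial measurement, the distance from each hypocenter to the nearest fault-trace segment, whose distribution defines the process zone width, can be sketched as follows. The fault trace and hypocenters are hypothetical, and coordinates are assumed to be already projected to a planar grid.

    ```python
    import numpy as np

    def point_segment_distance(p, a, b):
        """Distance from point p to the segment a-b (2-D arrays)."""
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def distance_to_fault(p, trace):
        """Minimum distance from p to a polyline of fault-trace vertices."""
        return min(point_segment_distance(p, trace[i], trace[i + 1])
                   for i in range(len(trace) - 1))

    trace = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, 1.0]])   # km, local grid
    rng = np.random.default_rng(2)
    hypos = rng.uniform([0.0, -5.0], [20.0, 7.0], size=(1000, 2))

    d = np.array([distance_to_fault(p, trace) for p in hypos])
    counts, edges = np.histogram(d, bins=np.arange(0.0, 8.0, 0.5))
    print(counts)   # elevated counts near zero suggest a process zone
    ```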

  11. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.

  12. Active fault diagnosis in closed-loop uncertain systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2006-01-01

    Fault diagnosis of parametric faults in closed-loop uncertain systems by using an auxiliary input vector is considered in this paper, i.e. active fault diagnosis (AFD). The active fault diagnosis is based directly on the so-called fault signature matrix, related to the YJBK (Youla, Jabr, Bongiorno...... and Kucera) parameterization. Conditions are given for exact detection and isolation of parametric faults in closed-loop uncertain systems.

  13. Stability of fault submitted to fluid injections

    Science.gov (United States)

    Brantut, N.; Passelegue, F. X.; Mitchell, T. M.

    2017-12-01

    Elevated pore pressure can lead to slip reactivation on pre-existing fractures and faults when the Coulomb failure point is reached. From a static point of view, the reactivation of a fault subjected to a background stress (τ0) is a function of the peak strength of the fault, i.e. the quasi-static effective friction coefficient (µeff). However, this theory is valid only when the entire fault is affected by fluid pressure, which is not the case in nature or during human-induced seismicity. In this study, we present new results on the influence of the injection rate on the stability of faults. Experiments were conducted on a saw-cut sample of Westerly granite. The experimental fault was 8 cm long. Injections were conducted through a 2 mm diameter hole reaching the fault surface. Experiments were conducted at fluid-pressure injection rates spanning four orders of magnitude (from 1 MPa/minute to 1 GPa/minute), on a fault system subjected to 50 and 100 MPa confining pressure. Our results show that the peak fluid pressure leading to slip depends on the injection rate: the faster the injection rate, the larger the peak fluid pressure leading to instability. Wave-velocity surveys across the fault highlighted that decreasing the injection rate leads to an increase in the size of the fluid-pressure perturbation. Our results demonstrate that the stability of the fault is not only a function of the fluid pressure required to reach the failure criterion, but is mainly a function of the ratio between the length of the fault affected by fluid pressure and the total fault length. In addition, we show that the slip rate increases with the background effective stress and with the intensity of the fluid-pressure perturbation, i.e. with the excess shear stress acting on the part of the fault perturbed by the fluid injection. Our results suggest that crustal faults can be reactivated by local high fluid overpressures. These results could explain the "large"-magnitude human-induced earthquakes
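    The static criterion referred to above is simple enough to state numerically: slip initiates when the shear stress reaches the Coulomb strength µeff(σn − P), which fixes the critical fluid pressure when the whole fault is pressurized. The values below are illustrative, not the experimental data.

    ```python
    mu_eff = 0.6       # assumed effective friction coefficient
    sigma_n = 100.0    # normal stress on the fault, MPa
    tau_0 = 45.0       # background shear stress, MPa

    # Critical fluid pressure if the WHOLE fault were pressurized uniformly:
    p_crit = sigma_n - tau_0 / mu_eff
    print(f"critical fluid pressure: {p_crit:.1f} MPa")   # 25.0 MPa here

    # The experiments indicate the measured peak pressure exceeds this value
    # when only a patch of the fault is pressurized, increasingly so at
    # faster injection rates.
    ```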

  14. From tomographic images to fault heterogeneities

    Directory of Open Access Journals (Sweden)

    A. Amato

    1994-06-01

    Local Earthquake Tomography (LET) is a useful tool for imaging lateral heterogeneities in the upper crust. The pattern of P- and S-wave velocity anomalies, in relation to the seismicity distribution along active fault zones, can shed light on the existence of discrete seismogenic patches. Recent tomographic studies in well-monitored seismic areas have shown that the regions with large seismic moment release generally correspond to high-velocity zones (HVZ's). In this paper, we discuss the relationship between the seismogenic behavior of faults and the velocity structure of fault zones as inferred from seismic tomography. First, we review some recent tomographic studies of active strike-slip faults. We show examples from different segments of the San Andreas fault system (Parkfield, Loma Prieta), where detailed studies have been carried out in recent years. We also show two applications of LET to thrust faults (Coalinga, Friuli). Then, we focus on the Irpinia normal fault zone (South-Central Italy), where a Ms = 6.9 earthquake occurred in 1980 and many thousands of aftershock travel-time data are available. We find that earthquake hypocenters concentrate in HVZ's, whereas low-velocity zones (LVZ's) appear to be relatively aseismic. The main HVZ's along which the mainshock rupture has propagated may correspond to velocity-weakening fault regions, whereas the LVZ's are probably related to weak materials undergoing stable slip (velocity strengthening). A correlation exists between this HVZ and the area with larger coseismic slip along the fault, according to both surface evidence (a fault scarp as high as 1 m) and strong ground motion waveform modeling. Smaller-wavelength, low-velocity anomalies detected along the fault may be the expression of velocity-strengthening sections, where aseismic slip occurs. According to our results, the rupture at the nucleation depth (~ 10-12 km) is continuous for the whole fault length (~ 30 km), whereas at shallow depth

  15. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction; thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources under the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  16. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize the fault identification of the thruster. The fault accommodation unit is based on direct calculations of moment, and the result of the fault identification is used to find the solution of the control allocation problem. The approach resolves the continuous fault identification of the UV. Results from the experiment are provided to illustrate the performance of the proposed method in uncertain, continuous-fault situations.

  17. Fault Detection for Automotive Shock Absorber

    Science.gov (United States)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of a multiplicative nature. Many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
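    A toy version of the observer-based scheme makes the comparison concrete: a Luenberger observer tracks a linear stand-in plant, and the output residual flags a multiplicative (parametric) fault. The matrices, gains, and fault scenario are illustrative assumptions, not the validated vehicle model from the paper.

    ```python
    import numpy as np

    A = np.array([[0.9, 0.1], [0.0, 0.8]])     # assumed discrete-time plant
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5], [0.3]])               # observer gain (A - L C stable)
    u = np.array([0.1, 0.1])                   # constant known input

    rng = np.random.default_rng(3)
    x = np.zeros(2)
    xh = np.zeros(2)
    for k in range(200):
        fault = 0.7 if k >= 100 else 1.0       # multiplicative damping loss
        Af = A * np.array([[1.0, 1.0], [1.0, fault]])
        x = Af @ x + u + rng.normal(0.0, 0.005, 2)
        y = C @ x + rng.normal(0.0, 0.002)
        r = y - C @ xh                         # output residual
        xh = A @ xh + u + L @ r                # observer update (healthy model)
        if abs(r[0]) > 0.05:                   # simple fixed threshold
            print(f"fault flagged at k={k}, residual={r[0]:+.3f}")
            break
    ```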

  18. Solving fault diagnosis problems: linear synthesis techniques

    CERN Document Server

    Varga, Andreas

    2017-01-01

    This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

  19. Fault Reconnaissance Agent for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Elhadi M. Shakshuki

    2010-01-01

    One of the key prerequisites for a scalable, effective and efficient sensor network is the utilization of low-cost, low-overhead and highly resilient fault-inference techniques. To this end, we propose an intelligent agent system with a problem-solving capability to address the issue of fault inference in sensor network environments. The intelligent agent system is designed and implemented on the base-station side. The core of the agent system, the problem solver, implements a fault-detection inference engine which harnesses the Expectation Maximization (EM) algorithm to estimate the fault probabilities of sensor nodes. To validate the correctness and effectiveness of the intelligent agent system, a set of experiments was conducted in a wireless sensor testbed. The experimental results show that our intelligent agent system is able to precisely estimate the fault probability of sensor nodes.
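    The EM-based estimation step can be illustrated with a toy two-component mixture: each node reports x anomalous readings out of n, and EM separates healthy from faulty anomaly rates, yielding a per-node fault probability. The rates and data below are simulated assumptions, not the testbed data.

    ```python
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(4)
    n = 50                                      # readings per node
    truth = rng.random(30) < 0.2                # ~20% of nodes truly faulty
    x = np.where(truth, rng.binomial(n, 0.40, 30), rng.binomial(n, 0.05, 30))

    pi, p0, p1 = 0.5, 0.10, 0.30                # initial guesses
    for _ in range(100):
        # E-step: responsibility g_i = P(node i is faulty | data)
        lf = pi * binom.pmf(x, n, p1)
        lh = (1.0 - pi) * binom.pmf(x, n, p0)
        g = lf / (lf + lh)
        # M-step: re-estimate mixture weight and anomaly rates
        pi = g.mean()
        p1 = (g * x).sum() / (g.sum() * n)
        p0 = ((1.0 - g) * x).sum() / ((1.0 - g).sum() * n)

    print(np.round(g, 2))                       # per-node fault probabilities
    ```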

  1. Self-triggering superconducting fault current limiter

    Science.gov (United States)

    Yuan, Xing [Albany, NY; Tekletsadik, Kasegn [Rexford, NY

    2008-10-21

    A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state and also generate a uniform magnetic field in the trigger coil, while simultaneously limiting the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors and significantly reduces the burnout risk associated with the non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various n (rows) × m (columns) matrix configurations.

  2. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  3. Methodology for Designing Fault-Protection Software

    Science.gov (United States)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented in the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable opinion. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). A Local Response is distinguished from an FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level responses to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis and a "bottom-up" functional fault-modes-and-effects analysis. Via this process, the mitigation and recovery strategy(s) per Fault Containment Region scope (in width versus depth) the FP architecture.

  4. Stafford fault system: 120 million year fault movement history of northern Virginia

    Science.gov (United States)

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y.), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  5. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  6. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviour is very complicated and its analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral particle injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. It has further been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond the scope of conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
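
    To give a flavour of the technique, the sketch below follows a population of injected test particles that are slowed down and pitch-angle scattered by Coulomb collisions, using a standard Monte Carlo collision step (deterministic drag on the speed plus a random pitch-angle kick). The collision frequencies and beam parameters are arbitrary illustrative values, not taken from the report.

      import math
      import random

      NU_SLOW = 0.5      # slowing-down collision frequency [1/s] (illustrative)
      NU_DEFL = 2.0      # pitch-angle deflection frequency [1/s] (illustrative)
      DT = 1e-3          # time step [s]
      STEPS = 1000
      N_PARTICLES = 2000

      def collide(v, xi):
          """One Monte Carlo collision step: drag on the speed, random walk in pitch."""
          v *= 1.0 - NU_SLOW * DT                        # Coulomb slowing-down
          kick = math.sqrt(max(0.0, 1.0 - xi * xi) * NU_DEFL * DT)
          xi = xi * (1.0 - NU_DEFL * DT) + random.choice((-1.0, 1.0)) * kick
          return v, max(-1.0, min(1.0, xi))              # keep pitch in [-1, 1]

      random.seed(1)
      particles = [(1.0, 1.0)] * N_PARTICLES             # injected beam: v = 1, pitch = 1
      for _ in range(STEPS):
          particles = [collide(v, xi) for v, xi in particles]

      mean_v = sum(v for v, _ in particles) / N_PARTICLES
      mean_xi = sum(xi for _, xi in particles) / N_PARTICLES
      print(f"mean speed {mean_v:.3f} (analytic {math.exp(-NU_SLOW * DT * STEPS):.3f}), "
            f"mean pitch {mean_xi:.3f} (relaxing towards isotropy at 0)")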

  7. Frontiers of quantum Monte Carlo workshop: preface

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  8. To Monte Carlo despite the crashes / Aare Arula

    Index Scriptorium Estoniae

    Arula, Aare

    2007-01-01

    See also Tehnika dlja Vsehh no. 3, pp. 26-27. Karl Siitan and his crew, who set off from Tallinn for the Monte Carlo rally on 26 January 1937, were in for adventures that almost cost them their lives.

  9. Monte Carlo code development in Los Alamos

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.

    1974-01-01

    The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)

  10. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  11. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  12. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  13. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied.
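
    For readers unfamiliar with the underlying estimator, the sketch below implements plain (non-continuation) multilevel Monte Carlo for a toy SDE expectation using Euler-Maruyama, combining levels through the telescoping sum E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}]. The tolerance-continuation logic that distinguishes CMLMC is not reproduced, and the sample allocation and model parameters are illustrative.

      import math
      import random

      A, B, T, X0 = 0.05, 0.2, 1.0, 1.0   # illustrative geometric Brownian motion

      def euler_pair(level, rng):
          """One fine path on `level` (2**level steps) plus its coarsened companion."""
          n_fine = 2 ** level
          dt = T / n_fine
          x_fine = x_coarse = X0
          dw_coarse = 0.0
          for n in range(n_fine):
              dw = rng.gauss(0.0, math.sqrt(dt))
              x_fine += A * x_fine * dt + B * x_fine * dw
              dw_coarse += dw
              if level > 0 and n % 2 == 1:   # two fine steps = one coarse step
                  x_coarse += A * x_coarse * (2 * dt) + B * x_coarse * dw_coarse
                  dw_coarse = 0.0
          return x_fine, x_coarse if level > 0 else 0.0

      def mlmc(samples=(20000, 10000, 5000, 2500, 1200, 600)):
          rng = random.Random(7)
          estimate = 0.0
          for level, n in enumerate(samples):
              diff = 0.0
              for _ in range(n):
                  fine, coarse = euler_pair(level, rng)
                  diff += fine - coarse if level > 0 else fine
              estimate += diff / n            # telescoping sum over the levels
          return estimate

      print(f"MLMC estimate {mlmc():.4f}, exact E[X_T] = {math.exp(A * T):.4f}")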

  14. Film of the year - the animated film "Mont Blanc" / Verni Leivak

    Index Scriptorium Estoniae

    Leivak, Verni, 1966-

    2002-01-01

    The Estonian Film Journalists' Union awarded the title of best film of 2001 to Priit Tender's animated film "Mont Blanc" (Eesti Joonisfilm, 2001). The film critics' preferences among the films shown in cinemas and on television in 2001 are also covered.

  15. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  16. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  17. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  18. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
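
    As background, the sketch below shows the plain i.i.d. ABC rejection sampler that the multilevel approach is benchmarked against, on a toy Gaussian-mean problem; the prior, tolerance, and summary statistic are illustrative choices, not those of the paper.

      import random
      import statistics

      random.seed(3)
      observed = [random.gauss(1.5, 1.0) for _ in range(100)]   # data, true mean 1.5
      obs_mean = statistics.fmean(observed)

      def abc_rejection(n_accept=500, eps=0.1):
          """Plain i.i.d. ABC: keep theta whenever the simulated summary statistic
          (here the sample mean) matches the observed one to within tolerance eps."""
          accepted = []
          while len(accepted) < n_accept:
              theta = random.uniform(-5.0, 5.0)                 # prior draw
              sim_mean = statistics.fmean(random.gauss(theta, 1.0) for _ in range(100))
              if abs(sim_mean - obs_mean) < eps:
                  accepted.append(theta)
          return accepted

      posterior = abc_rejection()
      print(f"ABC posterior mean {statistics.fmean(posterior):.3f} "
            f"(observed sample mean {obs_mean:.3f})")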

  19. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The computation time needed by Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)
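
    A minimal flavour of Monte Carlo dose calculation: the sketch below pushes photons through a homogeneous slab, sampling exponential free paths and tallying deposited energy per depth bin. The single-energy beam, the crude absorb-or-scatter collision model, the energy cutoff, and all coefficients are illustrative assumptions, far from a clinical transport code.

      import math
      import random

      MU_TOTAL = 0.2        # total attenuation coefficient [1/cm] (illustrative)
      P_ABSORB = 0.3        # probability that a collision is an absorption
      SLAB_CM, N_BINS = 20.0, 20
      N_PHOTONS = 50_000
      BIN_CM = SLAB_CM / N_BINS

      random.seed(11)
      deposit = [0.0] * N_BINS                 # energy tallied per depth bin

      for _ in range(N_PHOTONS):
          x, mu_dir, energy = 0.0, 1.0, 1.0    # start at the surface, moving inward
          while 0.0 <= x < SLAB_CM and energy > 1e-3:   # cutoff kills slow histories
              x += mu_dir * (-math.log(random.random()) / MU_TOTAL)  # free path
              if not 0.0 <= x < SLAB_CM:
                  break                        # photon escaped the slab
              b = int(x / BIN_CM)
              if random.random() < P_ABSORB:
                  deposit[b] += energy         # absorbed: deposit everything locally
                  energy = 0.0
              else:
                  deposit[b] += 0.5 * energy   # crude scatter: dump half the energy,
                  energy *= 0.5                # then pick a new direction cosine
                  mu_dir = random.uniform(-1.0, 1.0)

      peak = max(range(N_BINS), key=deposit.__getitem__)
      print(f"depth-dose tally peaks in bin {peak} "
            f"({deposit[peak]:.0f} energy units per {BIN_CM:.0f} cm)")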

  20. Reconfigurable fault tolerant avionics system

    Science.gov (United States)

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on a modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano satellites. A major concern in satellite systems, and especially nano satellites, is to build robust systems with low-power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. Since Single Event Upsets (SEUs) do not have the same severity and intensity at all orbital locations, peaking at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected throughout its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided within the majority of the orbit through software fault tolerance. Checkpointing and rollback, together with control-flow assertions, are used for that level of protection. In the minority of the orbit where severe SEUs are expected, the system FPGA is reconfigured: the processor systems are triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. Reconfiguring the system according to the level of threat expected from SEU-induced faults helps reduce the average dynamic power consumption of the system to one-third of its maximum. This technique can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex-5 (XC5VLX50) FPGA on bulk silicon with 324 I/Os. Simulations of orbital SEU rates were carried out using the SPENVIS web-based software package.
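
    The TMR-with-feedback idea can be shown in miniature in software. In the hypothetical sketch below, three replicas of a computation are majority-voted each step; when an SEU-like bit flip corrupts one replica, the voted result is fed back to resynchronise it.

      from collections import Counter

      def tmr_vote(outputs):
          """Majority vote over three replica outputs -> (voted_value, faulty_index)."""
          voted, count = Counter(outputs).most_common(1)[0]
          if count == len(outputs):
              return voted, None
          faulty = next(i for i, out in enumerate(outputs) if out != voted)
          return voted, faulty

      # Triplicated toy computation in which replica 1 suffers a simulated SEU.
      state = [0, 0, 0]
      for step in range(5):
          state = [s + 1 for s in state]   # each replica computes independently
          if step == 2:
              state[1] ^= 0x4              # SEU flips a bit in replica 1
          voted, faulty = tmr_vote(state)
          if faulty is not None:
              print(f"step {step}: replica {faulty} disagreed, resynchronised by feedback")
              state[faulty] = voted        # the feedback path restores the bad replica
      print("final voted state:", tmr_vote(state)[0])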

  1. Drilling the North Anatolian Fault

    Directory of Open Access Journals (Sweden)

    Mustafa Aktar

    2008-07-01

    An international workshop entitled “GONAF: A deep Geophysical Observatory at the North Anatolian Fault” was held 23–27 April 2007 in Istanbul, Turkey. The aim of this workshop was to refine plans for a deep drilling project at the North Anatolian Fault Zone (NAFZ) in northwestern Turkey. The current drilling target is located in the Marmara Sea offshore the megacity of Istanbul, in the direct vicinity of the main branch of the North Anatolian Fault on the Prince Islands (Figs. 1 and 2). The NAFZ represents a 1600-km-long plate boundary that slips at an average rate of 20–30 mm·yr⁻¹ (McClusky et al., 2000). It has developed in the framework of the northward-moving Arabian plate and the Hellenic subduction zone, where the African lithosphere is subducting below the Aegean. Comparison of long-term slip rates with Holocene and GPS-derived slip rates indicates an increasing westward movement of the Anatolian plate with respect to stable Eurasia. During the twentieth century, the NAFZ ruptured over 900 km of its length. A series of large earthquakes starting in 1939 near Erzincan in Eastern Anatolia propagated westward towards the Istanbul-Marmara region in northwestern Turkey, which today represents a seismic gap along a ≥100-km-long segment below the Sea of Marmara. This segment has not ruptured since 1766 and, if locked, may have accumulated a slip deficit of 4–5 m. It is believed to be capable of generating two M≥7.4 earthquakes within the next decades (Hubert-Ferrari et al., 2000); however, it could even rupture in a single large event (Le Pichon et al., 1999).

  2. Fault failure with moderate earthquakes

    Science.gov (United States)

    Johnston, M. J. S.; Linde, A. T.; Gladwin, M. T.; Borcherdt, R. D.

    1987-12-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10⁻⁸), with borehole dilatometers (resolution 10⁻¹⁰) and a 3-component borehole strainmeter (resolution 10⁻⁹). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure.

  3. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    Science.gov (United States)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults. How these young faults evolved and how they affect strain partitioning and fault slip rates are important questions for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  4. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and I investigate its properties through numerical simulations. From the numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body and that the dip of the maximum eigenvector closely follows the dip of a normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor should be used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponds to conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
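
    The stated eigenvector property is easy to reproduce numerically. The NumPy sketch below builds the 2-D gravity gradient tensor of a hypothetical buried high-density line mass at a few stations along a profile; the maximum eigenvector points toward the body, and its plunge gives the dip-style estimate. The geometry and the lumped constant are invented for illustration, not taken from the paper.

      import numpy as np

      # Hypothetical high-density line mass 2 km below the profile (z positive up).
      SOURCE = np.array([0.0, -2.0])
      K = 1.0                        # lumps together 2*G*(excess line density)

      def gradient_tensor(station):
          """2-D gravity gradient tensor of a line mass, observed at `station`."""
          r = SOURCE - station       # vector from the station to the causative body
          r2 = r @ r
          return K * (2.0 * np.outer(r, r) / r2**2 - np.eye(2) / r2)

      for x in (-3.0, 0.0, 3.0):     # three stations along the profile (z = 0)
          station = np.array([x, 0.0])
          eigvals, eigvecs = np.linalg.eigh(gradient_tensor(station))
          v_max = eigvecs[:, np.argmax(eigvals)]
          if v_max @ (SOURCE - station) < 0:
              v_max = -v_max         # orient the eigenvector toward the body
          plunge = np.degrees(np.arctan2(-v_max[1], abs(v_max[0])))
          print(f"station x={x:+.1f} km: max eigenvector {np.round(v_max, 3)}, "
                f"plunge {plunge:.1f} deg")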

  5. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently proposed concept for fault detection in complex structures, is applied; it consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and an adaptive threshold are adopted for fault detection because of the limited prior knowledge of normal operational conditions and fault conditions. To isolate multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within a bacterial-based optimization method to construct effective virtual beams and thus improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. Extensive experimental results validate that the proposed method can localize both single and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  6. Data Fault Detection in Medical Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2015-03-01

    Medical body sensors can be implanted in or attached to the human body to monitor the physiological parameters of patients all the time. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnoses, so detecting sensor data faults has been widely researched in recent years. Most typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indices of patients do not change synchronously at the same time, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) a novel judgment criterion for the fault state according to the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M.
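
    The dynamic-local outlier factor (D-LOF) stage is specific to DFD-M, but its first step can be approximated with the standard LOF. The sketch below flags outlying sensed-data vectors in synthetic vital-sign readings using scikit-learn's LocalOutlierFactor; the data and parameters are invented for illustration.

      import numpy as np
      from sklearn.neighbors import LocalOutlierFactor

      rng = np.random.default_rng(0)
      # Synthetic sensed-data vectors (heart rate [bpm], SpO2 [%]): mostly normal.
      normal = rng.normal(loc=[75.0, 97.0], scale=[8.0, 1.0], size=(200, 2))
      faulty = np.array([[75.0, 60.0],    # stuck/failed oximeter reading
                         [190.0, 97.0]])  # spiking heart-rate sensor
      readings = np.vstack([normal, faulty])

      lof = LocalOutlierFactor(n_neighbors=20)
      labels = lof.fit_predict(readings)  # -1 marks outlying data vectors
      print("flagged reading indices:", np.where(labels == -1)[0])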

  7. Fault tree analysis for vital area identification

    International Nuclear Information System (INIS)

    Varnado, G.B.; Ortiz, N.R.

    1978-01-01

    This paper discusses the use of fault tree analysis to identify those areas of nuclear fuel cycle facilities which must be protected to prevent acts of sabotage that could lead to a significant release of radioactive material. By proper manipulation of the fault trees for a plant, an analyst can identify vital areas in a manner consistent with regulatory definitions. This paper discusses the general procedures used in the analysis of any nuclear facility. In addition, a structured, generic approach to the development of the fault trees for nuclear power reactors is presented, along with selected results of the application of the generic approach to several plants.
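
    A miniature of the manipulation described above: the sketch below expands a hypothetical sabotage fault tree into its cut sets and identifies as vital the plant areas that appear in every cut set, so that protecting those areas defeats all sabotage paths. The tree, events, and area labels are invented for illustration.

      from itertools import product

      # Toy sabotage fault tree: top event = significant release of material.
      # Basic events are tagged with the plant area ("A:", "B:", "C:") they occupy.
      TREE = ("OR",
              ("AND", "A:disable_pumps", "B:breach_tank"),
              ("AND", "A:disable_pumps", "C:cut_power"))

      def cut_sets(node):
          """Return the cut sets (sets of basic events) of a gate or basic event."""
          if isinstance(node, str):
              return [{node}]
          gate, *children = node
          child_sets = [cut_sets(child) for child in children]
          if gate == "OR":              # any child's cut set alone suffices
              return [s for sets in child_sets for s in sets]
          # AND gate: combine one cut set from every child
          return [set().union(*combo) for combo in product(*child_sets)]

      sets_ = cut_sets(TREE)
      areas_per_set = [{event.split(":")[0] for event in s} for s in sets_]
      vital = set.intersection(*areas_per_set)   # areas present in every cut set
      print("cut sets:", sets_)
      print("vital areas (protecting one blocks all paths):", vital)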

  8. Concatenated codes for fault tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.
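
    The hierarchical error reduction can be pictured with the textbook concatenation recursion, in which each encoding level squares the error rate normalised to a threshold; this is the standard threshold picture rather than the paper's specific derivation, and the threshold value below is illustrative.

      P_THRESHOLD = 1e-2   # illustrative threshold error rate, not from the paper

      def concatenated_error(p_physical, levels):
          """Textbook recursion: each concatenation level squares the normalised error,
          giving doubly exponential suppression while p_physical < P_THRESHOLD."""
          p = p_physical
          for _ in range(levels):
              p = P_THRESHOLD * (p / P_THRESHOLD) ** 2
          return p

      for k in range(4):
          print(f"concatenation level {k}: "
                f"effective error {concatenated_error(1e-3, k):.1e}")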

  9. Fault Tolerant Control of Wind Turbines

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Kinnaert, Michel

    2013-01-01

    This paper presents a test benchmark model for the evaluation of fault detection and accommodation schemes. This benchmark model deals with the wind turbine on a system level, and it includes sensor, actuator, and system faults, namely faults in the pitch system, the drive train, the generator, and the converter system. Since it is a system-level model, converter and pitch system models are simplified because these are controlled by internal controllers working at higher frequencies than the system model. The model represents a three-bladed pitch-controlled variable-speed wind turbine with a nominal power

  10. Fault Detection and Isolation for Spacecraft

    DEFF Research Database (Denmark)

    Jensen, Hans-Christian Becker; Wisniewski, Rafal

    2002-01-01

    This article realizes nonlinear Fault Detection and Isolation for actuators, given there is no measurement of the states in the actuators. The Fault Detection and Isolation of the actuators is instead based on angular velocity measurement of the spacecraft and knowledge about the dynamics of the satellite. The algorithms presented in this paper are based on a geometric approach to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study and the pros and cons of the algorithms are discussed.

  11. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm-based point of view. An analysis of norm-based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem and the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index also allows us to optimize the structure of the fault detection filter directly.

  12. Fault Tolerant Control: A Simultaneous Stabilization Result

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Blondel, V.D.

    2004-01-01

    This paper discusses the problem of designing fault tolerant compensators that stabilize a given system both in the nominal situation and in the situation where one of the sensors or one of the actuators has failed. It is shown that such compensators always exist, provided that the system is detectable from each output and that it is stabilizable. The proof of this result is constructive, and a worked example shows how to design a fault tolerant compensator for a simple, yet challenging system. A family of second order systems is described that requires fault tolerant compensators of arbitrarily

  13. Navigation System Fault Diagnosis for Underwater Vehicle

    DEFF Research Database (Denmark)

    Falkenberg, Thomas; Gregersen, Rene Tavs; Blanke, Mogens

    2014-01-01

    This paper demonstrates fault diagnosis on unmanned underwater vehicles (UUV) based on analysis of the structure of the nonlinear dynamics. Residuals are generated using different approaches in structural analysis, followed by statistical change detection. Hypothesis testing thresholds are made signal-based to cope with non-ideal properties seen in real data. Detection of both sensor and thruster failures is demonstrated. Isolation is performed using the residual signature of detected faults, and the change detection algorithm is used to assess the severity of faults by estimating their magnitude.
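
    As a stand-in for the statistical change detection used above, the sketch below runs a one-sided CUSUM test on a synthetic residual into which a thruster-bias fault is injected; the detection delay and the accumulated sum also give a crude handle on fault magnitude. All signal parameters are invented for illustration.

      import random

      random.seed(5)
      # Synthetic residual: zero-mean noise plus a thruster bias fault from t = 300.
      residual = [random.gauss(0.0, 1.0) + (1.5 if t >= 300 else 0.0)
                  for t in range(600)]

      def cusum(signal, drift=0.5, threshold=8.0):
          """One-sided CUSUM: alarm when the accumulated positive drift of the
          residual exceeds the threshold; restart after each alarm."""
          s, alarms = 0.0, []
          for t, r in enumerate(signal):
              s = max(0.0, s + r - drift)
              if s > threshold:
                  alarms.append(t)
                  s = 0.0
          return alarms

      print("alarm times:", cusum(residual)[:3], "(fault injected at t = 300)")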

  14. Mechanical Models of Fault-Related Folding

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.

    2003-01-09

    The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were to provide a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs that have graphical user interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

  15. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    Science.gov (United States)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long trench-parallel right-lateral strike-slip fault accommodating the lateral slip components of the Philippine Sea plate's oblique subduction beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ, and its geomorphic and geological characteristics change remarkably along strike. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ and a pop-up structure in the eastern part, giving the fault zone "scissoring" fault properties. Two main factors produce these scissoring fault properties along the MTLFZ: the regional stress condition and the preexisting fault. The direction of σ1 rotates anticlockwise from N170°E [Famin et al., 2014] in the eastern Shikoku to Kinki areas, to N100°E [Research Group for Crustal Stress in Western Japan, 1980] in central Shikoku, and to N85°E [Onishi et al., 2016] in western Shikoku. Following this rotation of principal stress directions, the western and eastern parts of the MTLFZ lie in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved through a long active history; its fault style has changed variously, from left-lateral to thrust, normal, and right-lateral. Where a preexisting fault is present, rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45°-dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle dipping fault (a thrust fault), though strike-slip faults generally have near-vertical planes [Avouac et al., 2014]. In this presentation, we first show deep subsurface

  16. Determining on-fault magnitude distributions for a connected, multi-fault system

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2017-12-01

    A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip-rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0–7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0–6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults

  17. Uncertainties related to the fault tree reliability data

    International Nuclear Information System (INIS)

    Apostol, Minodora; Nitoi, Mirela; Farcasiu, M.

    2003-01-01

    Uncertainty analyses related to fault trees evaluate the variability of the system result that arises from the uncertainties in the basic event probabilities. Given a logical model which describes a system, obtaining outcomes means evaluating it using estimates for each basic event of the model. If the model has basic events that incorporate uncertainties, then the results of the model should incorporate the uncertainties of the events. Estimating the uncertainty in the final result of the fault tree means first evaluating the uncertainties of the basic event probabilities and then combining these uncertainties to calculate the top event uncertainty. To calculate the propagated uncertainty, knowledge of the probability density function as well as the range of possible values of the basic event probabilities is required. The following data are defined using suitable probability density functions: the component failure rates, the human error probabilities, and the initiating event frequencies. It was assumed that the distribution of possible values of the basic event probabilities is given by the lognormal probability density function. To know the range of possible values of the basic event probabilities, the error factor (or uncertainty factor) is required. The aim of this paper is to estimate the error factor for the failure rates and for the human error probabilities from the reliability database used in the Cernavoda Probabilistic Safety Evaluation. The top event chosen as an example is FEED3, from the Pressure and Inventory Control System. The quantitative evaluation of this top event was made using the EDFT code, developed at the Institute for Nuclear Research Pitesti (INR). It was assumed that the error factors for the component failures are the same as for the failure rates. Uncertainty analysis was performed with the INCERT application, which uses the moment method and the Monte Carlo method. The reliability database used at INR Pitesti does not contain the error factors (ef
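
    The propagation described above can be sketched directly: draw each basic event probability from a lognormal distribution defined by its median and error factor (EF, taken here as the ratio of the 95th to the 50th percentile), push the draws through the top-event expression, and read off percentiles. The three-event tree below merely stands in for a top event such as FEED3, whose actual structure is not reproduced here; all numbers are illustrative.

      import math
      import random

      random.seed(2)

      def lognormal_sample(median, error_factor):
          """Lognormal draw parameterised by median and error factor EF = p95 / p50."""
          sigma = math.log(error_factor) / 1.645
          return median * math.exp(sigma * random.gauss(0.0, 1.0))

      # Hypothetical tree standing in for a top event: TOP = E1 AND (E2 OR E3);
      # (median probability, error factor) pairs are invented for illustration.
      BASIC = {"E1": (1e-3, 3.0), "E2": (5e-4, 5.0), "E3": (2e-3, 3.0)}

      def top_probability(p):
          return p["E1"] * (p["E2"] + p["E3"] - p["E2"] * p["E3"])

      samples = sorted(
          top_probability({ev: min(1.0, lognormal_sample(m, ef))
                           for ev, (m, ef) in BASIC.items()})
          for _ in range(100_000))
      print(f"top event: median {samples[len(samples) // 2]:.2e}, "
            f"95th percentile {samples[int(0.95 * len(samples))]:.2e}")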

  18. Data-driven design of fault diagnosis and fault-tolerant control systems

    CERN Document Server

    Ding, Steven X

    2014-01-01

    Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

  19. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    Science.gov (United States)

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Guaranteed Cost Fault-Tolerant Control for Networked Control Systems with Sensor Faults

    Directory of Open Access Journals (Sweden)

    Qixin Zhu

    2015-01-01

    Owing to the large scale and complicated structure of networked control systems, time-varying sensor faults can inevitably occur when the system works in a poor environment. A guaranteed cost fault-tolerant controller for networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control systems with sensor faults are modeled as a discrete-time system with uncertain parameters, and the model is related to the boundary values of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is verified to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.
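
    To give a taste of the LMI machinery invoked above, the sketch below checks a basic discrete-time Lyapunov stability LMI with CVXPY; the guaranteed cost fault-tolerant conditions of the paper are considerably more elaborate, and the closed-loop system matrix here is invented for illustration.

      import cvxpy as cp
      import numpy as np

      # Hypothetical discrete-time closed-loop matrix (illustrative, not from the paper).
      A = np.array([[0.8, 0.3],
                    [-0.2, 0.7]])
      n = A.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),                  # P positive definite
                     A.T @ P @ A - P << -eps * np.eye(n)]   # Lyapunov decrease LMI
      problem = cp.Problem(cp.Minimize(0), constraints)
      problem.solve(solver=cp.SCS)
      print("LMI feasible, hence asymptotically stable:", problem.status == cp.OPTIMAL)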