WorldWideScience

Sample records for parallel fault analysis

  1. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2012-01-01

According to regulatory building codes in the United States (for example, the 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (with first FN and then FP aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° to evaluate the FN/FP directions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
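The angle sweep described above can be sketched numerically: rotate the two as-recorded horizontal components through a trial angle and track the peak of the rotated component. This is an illustrative sketch only; the function names are made up, and a plain peak absolute value stands in for a real EDP computed by RHA:

```python
import math

def rotate_components(ax, ay, theta_deg):
    """Re-project paired horizontal acceleration histories onto axes
    rotated counter-clockwise by theta_deg degrees."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    a1 = [c * x + s * y for x, y in zip(ax, ay)]
    a2 = [-s * x + c * y for x, y in zip(ax, ay)]
    return a1, a2

def peak_over_angles(ax, ay, step=1.0):
    """Peak absolute value of the first rotated component over all
    nonredundant angles (0 to 180 degrees), as in the study's sweep."""
    best_angle, best_peak = 0.0, 0.0
    angle = 0.0
    while angle < 180.0:
        a1, _ = rotate_components(ax, ay, angle)
        peak = max(abs(v) for v in a1)
        if peak > best_peak:
            best_angle, best_peak = angle, peak
        angle += step
    return best_angle, best_peak
```

Comparing `peak_over_angles` against the peak at the fixed FN angle makes the paper's point visible even in this toy setting: the fixed direction need not coincide with the critical angle.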

  2. Pros and cons of rotating ground motion records to fault-normal/parallel directions for response history analysis of buildings

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2014-01-01

According to the regulatory building codes in the United States (e.g., 2010 California Building Code), at least two horizontal ground motion components are required for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here, for the first time, using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak values of engineering demand parameters (EDPs) were computed for rotation angles ranging from 0° through 180° to quantify the difference between peak values of EDPs over all rotation angles and those due to FN/FP direction rotated motions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.

  3. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  4. Fault isolation in parallel coupled wind turbine converters

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Thøgersen, Paul Bach; Stoustrup, Jakob

    2010-01-01

Parallel converters in wind turbines give a number of advantages, such as fault tolerance due to the redundant converters. However, it might be difficult to isolate gain faults in one of the converters if only a combined power measurement is available. In this paper a scheme using orthogonal power references to the converters is proposed. Simulations on a wind turbine with 5 parallel converters show a clear potential of this scheme for isolating a gain fault to the correct converter in which the fault occurs.
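The abstract gives no equations, but the isolation idea behind "orthogonal power references" can be sketched as a correlation test: if each converter tracks a reference carrying an orthogonal signature, a gain fault leaves a residual in the combined power measurement proportional to exactly one signature. All names and the residual formulation below are hypothetical:

```python
def isolate_faulty_converter(residual_power, signatures):
    """Correlate the residual of the combined power measurement
    (measured minus nominal) against each converter's orthogonal
    reference signature. A gain fault in converter k leaves a residual
    proportional to signature k, so the largest |correlation| wins."""
    scores = [abs(sum(p * s for p, s in zip(residual_power, sig)))
              for sig in signatures]
    return scores.index(max(scores))
```

With mutually orthogonal signatures, the correlations against the healthy converters' signatures vanish, which is what makes the fault assignable to a single converter.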

  5. Fault tree analysis

    International Nuclear Information System (INIS)

    1981-09-01

Suggestions are made concerning the method of fault tree analysis and the use of certain symbols in the examination of system failures. The purpose of the fault tree analysis is to find logical connections of component or subsystem failures leading to undesirable occurrences. The results of these examinations are part of the system assessment concerning operation and safety. The objectives of the analysis are: systematic identification of all possible failure combinations (causes) leading to a specific undesirable occurrence, and determination of reliability parameters such as the frequency of failure combinations, the frequency of the undesirable occurrence, or the non-availability of the system when required. The fault tree analysis provides a clear and reconstructable documentation of the examination. (orig./HP) [de
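The logical-connection idea can be made concrete with a toy evaluator: gates combine basic-event failure probabilities, assuming independent basic events (a standard textbook simplification, not taken from this report):

```python
# A gate is ("AND", children) or ("OR", children); a leaf is a basic
# event name whose failure probability comes from `probs`.
def failure_probability(node, probs):
    if isinstance(node, str):
        return probs[node]
    kind, children = node
    if kind == "AND":
        p = 1.0
        for c in children:
            p *= failure_probability(c, probs)
        return p
    if kind == "OR":  # assumes independent basic events
        q = 1.0
        for c in children:
            q *= 1.0 - failure_probability(c, probs)
        return 1.0 - q
    raise ValueError(kind)
```

For example, the top event OR(A, AND(B, C)) fails if A fails, or if B and C both fail: exactly the "failure combinations" the analysis enumerates.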

  6. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

    Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user-interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.

  7. Introduction to fault tree analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Lambert, H.E.

    1975-01-01

    An elementary, engineering oriented introduction to fault tree analysis is presented. The basic concepts, techniques and applications of fault tree analysis, FTA, are described. The two major steps of FTA are identified as (1) the construction of the fault tree and (2) its evaluation. The evaluation of the fault tree can be qualitative or quantitative depending upon the scope, extensiveness and use of the analysis. The advantages, limitations and usefulness of FTA are discussed

  8. The role of bed-parallel slip in the development of complex normal fault zones

    Science.gov (United States)

    Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros

    2017-04-01

    Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.

  9. Online Diagnosis for the Capacity Fade Fault of a Parallel-Connected Lithium Ion Battery Group

    Directory of Open Access Journals (Sweden)

    Hua Zhang

    2016-05-01

In a parallel-connected battery group (PCBG), capacity degradation is usually caused by the inconsistency between a faulty cell and other normal cells, and the inconsistency arises from two potential causes: an aging-inconsistency fault or a loose-contact fault. In this paper, a novel method is proposed to perform online and real-time capacity fault diagnosis for PCBGs. Firstly, based on the analysis of parameter variation characteristics of a PCBG with different fault causes, it is found that PCBG resistance can be taken as an indicator both for locating the faulty PCBG and for distinguishing the fault causes. On one hand, the faulty PCBG can be identified by comparing resistance among PCBGs; on the other hand, the two fault causes can be distinguished by comparing the variance of the PCBG resistances. Furthermore, for online applications, a novel recursive-least-squares algorithm with restricted memory and constraint (RLSRMC), in which the constraint is added to eliminate the “imaginary number” phenomena of parameters, is developed and used in PCBG resistance identification. Lastly, fault simulation and validation results demonstrate that the proposed methods have good accuracy and reliability.
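The abstract gives no equations for RLSRMC, so the following is only a generic scalar recursive-least-squares sketch of the resistance-identification step (voltage drop ≈ R × current). The restricted-memory scheme is not reproduced, and the non-negativity clamp is a stand-in for the paper's constraint:

```python
def rls_scalar(xs, ys, lam=0.99, p0=1000.0):
    """Scalar recursive least squares with forgetting factor lam,
    fitting y ~= theta * x (here: voltage drop ~= R * current)."""
    theta, P = 0.0, p0
    for x, y in zip(xs, ys):
        k = P * x / (lam + x * P * x)          # gain
        theta = theta + k * (y - theta * x)    # update estimate
        theta = max(theta, 0.0)  # stand-in constraint: keep R physical
        P = (P - k * x * P) / lam              # update covariance
    return theta
```

Tracking `theta` per group and comparing estimates (and their variance) across groups mirrors the two diagnostic comparisons described above.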

  10. Landforms along transverse faults parallel to axial zone of folded ...

    Indian Academy of Sciences (India)

Himalaya, along the Kali River valley, is defined by folded hanging wall … role of transverse fault tectonics in the formation of the curvature cannot be ruled out. … Piedmont surface is made up of gravelliferous … made to compute the wedge failure analysis (Hoek … (∼T2) is at the elevation of ∼272 m asl measured.

  11. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  12. Fault detection for hydraulic pump based on chaotic parallel RBF network

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2011-01-01

In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF network) is presented and applied to practical fault detection for the hydraulic pump, which is a critical component in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the CPRBF is a weighted sum of all RBF subnets. The network was first trained using a dataset from the normal, fault-free state, and a residual error generator was then designed to detect failures based on the trained CPRBF network, so that failure detection can be achieved by analysis of the residual error. Finally, two case studies are introduced to compare the proposed CPRBF network with traditional RBF networks in terms of prediction and detection accuracy.
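As a rough illustration of the residual-error idea (not the paper's CPRBF implementation; the chaotic phase-space embedding is omitted and all names are made up): an RBF subnet is a weighted sum of Gaussian basis functions, and detection compares each sample against a one-step prediction from a model trained on fault-free data:

```python
import math

def rbf_predict(x, centers, weights, sigma=1.0):
    """Weighted sum of Gaussian basis functions: one RBF subnet."""
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for c, w in zip(centers, weights))

def residual_detector(samples, predictor, threshold):
    """Flag indices where the one-step prediction residual exceeds a
    threshold learned from fault-free data."""
    faults = []
    for i in range(1, len(samples)):
        r = abs(samples[i] - predictor(samples[i - 1]))
        if r > threshold:
            faults.append(i)
    return faults
```

In the paper's scheme the predictor would be the full CPRBF network (several subnets with different embedding dimensions, combined by learned weights); here any one-step predictor stands in.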

  13. Differential Fault Analysis on CLEFIA

    Science.gov (United States)

    Chen, Hua; Wu, Wenling; Feng, Dengguo

CLEFIA is a new 128-bit block cipher proposed recently by Sony Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of 4 data lines. In this paper, the strength of CLEFIA against the differential fault attack is explored. Our attack adopts the byte-oriented model of random faults. By inducing a random one-byte fault in one round, four bytes of faults can be obtained simultaneously in the next round, which efficiently reduces the total number of fault inductions in the attack. After attacking the encryptions of the last several rounds, the original secret key can be recovered based on some analysis of the key schedule. The data complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.

  14. A Fault-Tolerant Parallel Structure of Single-Phase Full-Bridge Rectifiers for a Wound-Field Doubly Salient Generator

    DEFF Research Database (Denmark)

    Chen, Zhihui; Chen, Ran; Chen, Zhe

    2013-01-01

Fault-tolerant design is widely adopted for high-reliability applications. In this paper, a parallel structure of single-phase full-bridge rectifiers (FBRs) (PS-SPFBR) is proposed for a wound-field doubly salient generator. The analysis shows the potential fault-tolerance capability of the PS...

  15. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  16. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures have been influenced by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely-deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest to forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  17. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. 
In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  18. Fault Recoverability Analysis via Cross-Gramian

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza

    2016-01-01

Engineering systems are vulnerable to different kinds of faults. Faults may compromise safety, cause sub-optimal operation and decline in performance, if not preventing the whole system from functioning. Fault-tolerant control (FTC) methods ensure that the system performance maintains within… with feedback control. Fault recoverability provides important and useful information which could be used in analysis and design. However, computing fault recoverability is numerically expensive. In this paper, a new approach for computation of fault recoverability for bilinear systems is proposed… approach for computation of fault recoverability is proposed which reduces the computational burden significantly. The proposed results are used for an electro-hydraulic drive to reveal the redundant actuating capabilities in the system.

  19. FPGAs and parallel architectures for aerospace applications soft errors and fault-tolerant design

    CERN Document Server

    Rech, Paolo

    2016-01-01

    This book introduces the concepts of soft errors in FPGAs, as well as the motivation for using commercial, off-the-shelf (COTS) FPGAs in mission-critical and remote applications, such as aerospace.  The authors describe the effects of radiation in FPGAs, present a large set of soft-error mitigation techniques that can be applied in these circuits, as well as methods for qualifying these circuits under radiation.  Coverage includes radiation effects in FPGAs, fault-tolerant techniques for FPGAs, use of COTS FPGAs in aerospace applications, experimental data of FPGAs under radiation, FPGA embedded processors under radiation, and fault injection in FPGAs. Since dedicated parallel processing architectures such as GPUs have become more desirable in aerospace applications due to high computational power, GPU analysis under radiation is also discussed. ·         Discusses features and drawbacks of reconfigurability methods for FPGAs, focused on aerospace applications; ·         Explains how radia...

  20. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
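The control flow in the claim can be sketched as a retry loop; the outcome structure and names below are invented for illustration:

```python
def execute_interval(interval, run, detect_error, is_unrecoverable,
                     max_retries=8):
    """Run one rollback interval; restart it on a recoverable error.

    Mirrors the logic in the abstract: restart only when an error is
    detected AND no unrecoverable condition occurred in the interval.
    """
    for _attempt in range(max_retries):
        outcome = run(interval)
        if is_unrecoverable(outcome):
            return "escalate"      # fall back to global recovery
        if not detect_error(outcome):
            return "committed"     # interval completed cleanly
        # recoverable error: roll back to the interval start and retry
    return "escalate"
```

The point of restricting rollback to a local interval is that retries stay cheap (cache-resident state) and only unrecoverable conditions escalate to a global checkpoint restart.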

  1. Quaternary faulting in the Tatra Mountains, evidence from cave morphology and fault-slip analysis

    Directory of Open Access Journals (Sweden)

    Szczygieł Jacek

    2015-06-01

Tectonically deformed cave passages in the Tatra Mts (Central Western Carpathians) indicate some fault activity during the Quaternary. Displacements occur in the youngest passages of the caves, indicating (based on previous U-series dating of speleothems) an Eemian or younger age for those faults, and so one tectonic stage. On the basis of stress analysis and geomorphological observations, two different mechanisms are proposed as responsible for the development of these displacements. The first mechanism concerns faults that are located above the valley bottom and at a short distance from the surface, with fault planes oriented sub-parallel to the slopes. The radial, horizontal extension and vertical σ1, which is identical with gravity, indicate that these faults are the result of gravity sliding, probably caused by relaxation after incision of valleys, and not directly from tectonic activity. The second mechanism is tilting of the Tatra Mts. The faults operated under WNW-ESE oriented extension with σ1 plunging steeply toward the west. Such a stress field led to normal dip-slip or oblique-slip displacements. The faults are located under the valley bottom and/or opposite or oblique to the slopes. The process involved the pre-existing weakest planes in the rock complex: (i) in massive limestone, mostly faults and fractures; (ii) in thin-bedded limestone, mostly inter-bedding planes. Thin-bedded limestones dipping steeply to the south are of particular interest. Tilting toward the N caused the hanging walls to move under the massif and not toward the valley, proving that the cause of these movements was tectonic activity and not gravity.

  2. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
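The diagnostic rule in the abstract (suite fails on the parent test tree but passes on every child test tree) reduces to a one-line predicate; `run_suite` is a placeholder for the actual test suite:

```python
def has_defective_link(run_suite, parent_tree, child_trees):
    """True when the test suite fails on the parent test tree but
    passes on every child test tree, implicating a link between the
    parent node and one of its children (the abstract's rule)."""
    return (not run_suite(parent_tree)
            and all(run_suite(t) for t in child_trees))
```

The logic works by elimination: if every subtree below the parent tests clean in isolation, the only hardware not yet exercised alone is the set of links from the parent to its children.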

  3. Fault tree analysis for vital area identification

    International Nuclear Information System (INIS)

    Varnado, G.B.; Ortiz, N.R.

    1978-01-01

This paper discusses the use of fault tree analysis to identify those areas of nuclear fuel cycle facilities which must be protected to prevent acts of sabotage that could lead to significant release of radioactive material. By proper manipulation of the fault trees for a plant, an analyst can identify vital areas in a manner consistent with regulatory definitions. This paper discusses the general procedures used in the analysis of any nuclear facility. In addition, a structured, generic approach to the development of the fault trees for nuclear power reactors is presented along with selected results of the application of the generic approach to several plants

  4. Modular representation and analysis of fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Olmos, J; Wolf, L [Massachusetts Inst. of Tech., Cambridge (USA). Dept. of Nuclear Engineering

    1978-08-01

An analytical method to describe fault tree diagrams in terms of their modular compositions is developed. Fault tree structures are characterized by recursively relating the top tree event to all its basic component inputs through a set of equations defining each of the modules of the fault tree. It is shown that such a modular description is an extremely valuable tool for making a quantitative analysis of fault trees. The modularization methodology has been implemented in the PL-MOD computer code, written in the PL/1 language, which is capable of modularizing fault trees containing replicated components and replicated modular gates. PL-MOD can in addition handle mutually exclusive inputs and explicit higher-order symmetric (k-out-of-n) gates. The step-by-step modularization of fault trees performed by PL-MOD is demonstrated, and it is shown how this procedure is only made possible through an extensive use of the list-processing tools available in PL/1. A number of nuclear reactor safety system fault trees were analyzed. PL-MOD performed the modularization and evaluation of the modular occurrence probabilities and Vesely-Fussell importance measures for these systems very efficiently. In particular, its execution time for the modularization of a PWR High Pressure Injection System reduced fault tree was 25 times faster than that necessary to generate its equivalent minimal cut-set description using MOCUS, a code considered fast by present standards.

  5. Posbist fault tree analysis of coherent systems

    International Nuclear Information System (INIS)

    Huang, H.-Z.; Tong Xin; Zuo, Ming J.

    2004-01-01

    When the failure probability of a system is extremely small or necessary statistical data from the system is scarce, it is very difficult or impossible to evaluate its reliability and safety with conventional fault tree analysis (FTA) techniques. New techniques are needed to predict and diagnose such a system's failures and evaluate its reliability and safety. In this paper, we first provide a concise overview of FTA. Then, based on the posbist reliability theory, event failure behavior is characterized in the context of possibility measures and the structure function of the posbist fault tree of a coherent system is defined. In addition, we define the AND operator and the OR operator based on the minimal cut of a posbist fault tree. Finally, a model of posbist fault tree analysis (posbist FTA) of coherent systems is presented. The use of the model for quantitative analysis is demonstrated with a real-life safety system
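A common possibility-calculus reading of these operators (an assumption here, not the paper's exact definitions, which are built on the minimal cuts of a posbist fault tree) takes the minimum for AND and the maximum for OR:

```python
def posbist_and(possibilities):
    """AND gate under possibility measures: the output failure
    possibility is the minimum of the input possibilities."""
    return min(possibilities)

def posbist_or(possibilities):
    """OR gate under possibility measures: the maximum of the inputs."""
    return max(possibilities)
```

Unlike the probabilistic calculus, no independence assumption or product is involved, which is precisely what makes the approach usable when statistical data are scarce.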

  6. Linear discriminant analysis for welding fault detection

    International Nuclear Information System (INIS)

    Li, X.; Simpson, S.W.

    2010-01-01

    This work presents a new method for real time welding fault detection in industry based on Linear Discriminant Analysis (LDA). A set of parameters was calculated from one second blocks of electrical data recorded during welding and based on control data from reference welds under good conditions, as well as faulty welds. Optimised linear combinations of the parameters were determined with LDA and tested with independent data. Short arc welds in overlap joints were studied with various power sources, shielding gases, wire diameters, and process geometries. Out-of-position faults were investigated. Application of LDA fault detection to a broad range of welding procedures was investigated using a similarity measure based on Principal Component Analysis. The measure determines which reference data are most similar to a given industrial procedure and the appropriate LDA weights are then employed. Overall, results show that Linear Discriminant Analysis gives an effective and consistent performance in real-time welding fault detection.
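As a minimal stand-in for the LDA step (two classes, 2-D features; the welding parameters, one-second blocks, and the PCA similarity measure are omitted), Fisher's discriminant direction is w = Sw⁻¹(m₁ − m₀) with a midpoint threshold:

```python
def fisher_lda_2d(class0, class1):
    """Two-class Fisher discriminant for 2-D feature vectors."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    # within-class scatter, lightly regularized to keep it invertible
    sw = [[s0[i][j] + s1[i][j] + (1e-6 if i == j else 0.0)
           for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [(sw[1][1] * d[0] - sw[0][1] * d[1]) / det,
         (-sw[1][0] * d[0] + sw[0][0] * d[1]) / det]
    # decision threshold at the midpoint of the projected class means
    thr = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, thr

def classify(p, w, thr):
    """1 = faulty class, 0 = reference (good-weld) class."""
    return int(w[0] * p[0] + w[1] * p[1] > thr)
```

In the paper's setting, the feature vectors would be the per-block electrical parameters, and the "good" and "faulty" reference welds supply the two training classes.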

  7. Distributed bearing fault diagnosis based on vibration analysis

    Science.gov (United States)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
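Envelope analysis of the kind described can be sketched crudely: demodulate the vibration signal, then look for the dominant line in the envelope spectrum. Hilbert-transform demodulation is replaced here by rectification plus a moving average, an acknowledged simplification:

```python
import math

def envelope(signal, win=5):
    """Crude amplitude envelope: full-wave rectification followed by a
    moving average (a stand-in for Hilbert-transform demodulation)."""
    rect = [abs(v) for v in signal]
    half = win // 2
    out = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest non-DC bin of a naive DFT."""
    n = len(x)
    mean = sum(x) / n
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum((x[i] - mean) * math.cos(2 * math.pi * k * i / n)
                 for i in range(n))
        im = sum((x[i] - mean) * math.sin(2 * math.pi * k * i / n)
                 for i in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n
```

Comparing the dominant envelope line against the bearing's characteristic fault frequencies is the usual diagnostic step; distributed faults would show a broader, more complex envelope spectrum than the single sharp line of a localized fault.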

  8. Commercial application of fault tree analysis

    International Nuclear Information System (INIS)

    Crosetti, P.A.; Bruce, R.A.

    1970-01-01

    The potential for general application of Fault Tree Analysis to commercial products appears attractive, not only because of the technique's successful extension from aerospace safety technology to nuclear reactor reliability and availability technology, but also because combinatorial hazards are common to commercial operations and therefore lend themselves readily to evaluation by Fault Tree Analysis. It appears reasonable to conclude that the technique has application within the commercial industrial community where the occurrence of a specified consequence or final event would be of sufficient concern to management to justify such a rigorous analysis as an aid to decision making. (U.S.)

  9. Fault tree analysis: concepts and techniques

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1976-01-01

    Concepts and techniques of fault tree analysis have been developed over the past decade, and predictions from this type of analysis are now important considerations in the design of many systems, such as aircraft, ships and their electronic systems, missiles, and nuclear reactor systems. Routine, hardware-oriented fault tree construction can be automated; however, considerable effort is still needed to bring the methodology to production status. When this status is achieved, the entire analysis of hardware systems will be automated except for the system definition step. Automated analysis is not undesirable; on the contrary, once verified on adequately complex systems, it could well become routine. It could also provide an excellent start for a more in-depth fault tree analysis that includes environmental effects, common mode failures, and human errors. Automated analysis is extremely fast and frees the analyst from routine hardware-oriented fault tree construction, while eliminating logic errors and errors of oversight in this part of the analysis. It thus affords the analyst a powerful tool, allowing his prime efforts to be devoted to unearthing more subtle aspects of the modes of failure of the system.

  10. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  11. Fault tree analysis of a research reactor

    International Nuclear Information System (INIS)

    Hall, J.A.; O'Dacre, D.F.; Chenier, R.J.; Arbique, G.M.

    1986-08-01

    Fault tree analysis techniques have been used to assess the safety system of the ZED-2 research reactor at the Chalk River Nuclear Laboratories. This turned out to be a strong test of the techniques involved: the resulting fault tree was large and, because of inter-links in the system structure, the tree could not be modularized. In addition, comprehensive documentation was required. After a brief overview of the reactor and the analysis, this paper concentrates on the computer tools that made the job work. Two types of tools were needed: text editing and forms management capability for large volumes of component and system data, and the fault tree codes themselves. The solutions (and failures) are discussed, along with the tools we are already developing for the next analysis

  12. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application continues to expand. To find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link lengths is introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the lengths of the limbs of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
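
    The inverse-kinematics workspace check described above can be sketched for a simpler, hypothetical planar 3-limb mechanism (geometry, limb limits, and grid window below are illustrative assumptions, not the paper's 6-DOF robot):

```python
import numpy as np

# Hypothetical planar 3-limb parallel mechanism: base anchors a_i on a circle
# of radius Ra, platform joints b_i (platform frame, orientation held fixed)
# on a circle of radius Rb; each prismatic limb length is bounded.
Ra, Rb = 1.0, 0.3
Lmin, Lmax = 0.6, 1.2
ang = np.deg2rad([90.0, 210.0, 330.0])
a = Ra * np.stack([np.cos(ang), np.sin(ang)], axis=1)
b = Rb * np.stack([np.cos(ang), np.sin(ang)], axis=1)

def in_workspace(p):
    """Inverse kinematics check: platform centre p is reachable iff every
    limb length falls within [Lmin, Lmax]."""
    lengths = np.linalg.norm(p + b - a, axis=1)
    return bool(np.all((lengths >= Lmin) & (lengths <= Lmax)))

# Numerical boundary search by grid scan over a 3 x 3 window
xs = np.linspace(-1.5, 1.5, 201)
grid = np.array([[in_workspace(np.array([x, y])) for x in xs] for y in xs])
area = grid.mean() * 9.0   # workspace area estimate (fraction of window)
```

    Rerunning the scan with larger `Lmax` enlarges the workspace, while changing `Rb` shifts where the reachable region sits, which is consistent with the paper's conclusion.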

  13. CAFTS: computer aided fault tree analysis

    International Nuclear Information System (INIS)

    Poucet, A.

    1985-01-01

    The fault tree technique has become a standard tool for the analysis of the safety and reliability of complex systems. In spite of the costs, which may be high for a complete and detailed analysis of a complex plant, the fault tree technique is popular and its benefits are fully recognized. Codes for automated fault tree construction exist, but their applications have mostly been restricted to simple academic examples and rarely concern complex, real-world systems. In this paper an interactive approach to fault tree construction is presented. The aim is not to replace the analyst, but to offer him an intelligent tool which can assist him in modeling complex systems. Using the CAFTS method, the analyst interactively constructs a fault tree in two phases: (1) in the first phase he generates an overall failure logic structure of the system, the macrofault tree; here CAFTS features an expert-system approach to assist the analyst, making use of a knowledge base containing generic rules on the behavior of subsystems and components; (2) in the second phase the macrofault tree is further refined and transformed into a fully detailed and quantified fault tree, using a library of plant-specific component failure models

  14. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to fault tree analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally the main objective of system analysis is to assess the reliability of a system modeled by event tree analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR and NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability, including contributions due to mechanical failures of components, common cause failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising fault trees. Accordingly, the guidelines are intended to guide FTA to the level of capability category II of the ASME PRA standard.
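
    Once basic event probabilities are available, the AND/OR failure logic described above can be quantified directly; a minimal sketch for a hypothetical two-train system (the probabilities and the independence assumption between gate inputs are illustrative):

```python
from math import prod

def or_gate(*q):
    """Unavailability of an OR gate: fails if any input fails."""
    return 1.0 - prod(1.0 - x for x in q)

def and_gate(*q):
    """Unavailability of an AND gate: fails only if all inputs fail
    (inputs assumed independent)."""
    return prod(q)

# Hypothetical two-train system: a train is lost if its pump OR valve
# fails; the top event needs both trains lost, OR a common cause failure.
q_pump, q_valve, q_ccf = 3e-3, 1e-3, 1e-4
q_train = or_gate(q_pump, q_valve)
q_top = or_gate(and_gate(q_train, q_train), q_ccf)
```

    Note that the common cause failure term dominates the independent double failure here, which is one reason the guidelines require CCFs to appear explicitly in the tree.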

  15. Fault tree analysis for vital area identification

    International Nuclear Information System (INIS)

    Varnado, G.B.; Ortiz, N.R.

    1978-01-01

    The use of fault tree analysis techniques to systematically identify (1) the sabotage events which can lead to release of significant quantities of radioactive materials, (2) the areas of the nuclear power plant in which these sabotage events can be accomplished, and (3) the areas of the plant which must be protected to assure that release does not occur is discussed

  16. Guidelines for system modeling: fault tree analysis

    International Nuclear Information System (INIS)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to fault tree analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally the main objective of system analysis is to assess the reliability of a system modeled by event tree analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR and NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability, including contributions due to mechanical failures of components, common cause failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising fault trees. Accordingly, the guidelines are intended to guide FTA to the level of capability category II of the ASME PRA standard

  17. Fault tree analysis for urban flooding

    NARCIS (Netherlands)

    Ten Veldhuis, J.A.E.; Clemens, F.H.L.R.; Van Gelder, P.H.A.J.M.

    2008-01-01

    Traditional methods to evaluate flood risk mostly focus on storm events as the main cause of flooding. Fault tree analysis is a technique that is able to model all potential causes of flooding and to quantify both the overall probability of flooding and the contributions of all causes of flooding to

  18. Failure diagnosis and fault tree analysis

    International Nuclear Information System (INIS)

    Weber, G.

    1982-07-01

    In this report a methodology of failure diagnosis for complex systems is presented. Systems which can be represented by fault trees are considered. The methodology is based on switching algebra, failure diagnosis of digital circuits, and fault tree analysis; the relations between these disciplines, which rest on the Boolean algebra and Boolean functions used throughout, are shown. On this basis it is shown that techniques of failure diagnosis and fault tree analysis are useful for solving the following problems: (1) an efficient search for all failed components when the system has failed; (2) an efficient search for all states which are close to system failure while the system is still operating. The first technique improves availability, the second improves reliability and safety. For these problems, the relation to methods of failure diagnosis for combinational circuits is required. The techniques are demonstrated for a number of systems which can be represented by fault trees. (orig./RW) [de]
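
    The second search described above, for states close to system failure, can be sketched by brute-force enumeration over a small Boolean structure function (the tree TOP = (A AND B) OR (C AND D) is a hypothetical example; the report's techniques exploit the Boolean structure rather than enumerating all states):

```python
from itertools import product

COMPS = ['A', 'B', 'C', 'D']

def top(x):
    """Hypothetical structure function TOP = (A AND B) OR (C AND D);
    x maps component name -> True when that component has failed."""
    return (x['A'] and x['B']) or (x['C'] and x['D'])

def near_failure_states():
    """Operating states from which one further component failure fails
    the system, i.e. the states 'close to system failure'."""
    out = []
    for bits in product([False, True], repeat=len(COMPS)):
        x = dict(zip(COMPS, bits))
        if top(x):
            continue                 # system already failed, skip
        if any(top({**x, c: True}) for c in COMPS if not x[c]):
            out.append(x)
    return out

critical = near_failure_states()
```

    The all-healthy state is not in this list (two failures are needed from there), which illustrates why monitoring for these critical states improves reliability and safety.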

  19. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.

  20. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip.

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.
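
    The role of the effective stress principle here can be illustrated with a minimal Coulomb check on the two sides of the fault (the stress values, cohesion, and friction coefficient below are assumed for illustration; the paper's micromechanical model is far richer):

```python
def coulomb_slip(tau, sigma_n, p, cohesion=0.0, mu=0.6):
    """Coulomb failure check using the Terzaghi effective stress:
    slip begins when shear stress reaches frictional resistance."""
    sigma_eff = sigma_n - p          # effective normal stress [Pa]
    return tau >= cohesion + mu * sigma_eff

# Gouge layer with a pore pressure discontinuity across the fault
tau, sigma_n = 30e6, 60e6            # assumed shear / normal stress [Pa]
p_low, p_high = 5e6, 15e6            # pore pressures on the two sides [Pa]
slips = [coulomb_slip(tau, sigma_n, p) for p in (p_low, p_high)]
```

    Consistent with the grain-scale result, onset occurs on the side with the higher pore pressure; in this continuum caricature it is controlled by the maximum pressure across the layer.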

  1. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

    International Nuclear Information System (INIS)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C.

    2004-01-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis

  2. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
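
    The core spatial operation, relating earthquake frequency to distance from a fault trace, can be sketched as a point-to-segment distance binned into bands (the fault geometry and event distributions below are synthetic, not the NUMO data):

```python
import numpy as np

def dist_to_segment(p, a, b):
    """Shortest distance from point p to the fault-trace segment a-b (km)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(ap - t * ab))

# Synthetic fault trace and epicentres projected to a local km grid:
# events clustered near the fault plus a uniform background
fault_a, fault_b = np.array([0.0, 0.0]), np.array([40.0, 0.0])
rng = np.random.default_rng(2)
near = np.column_stack([rng.uniform(0, 40, 300), rng.normal(0, 2, 300)])
background = rng.uniform([-10, -30], [50, 30], size=(200, 2))
events = np.vstack([near, background])

d = np.array([dist_to_segment(p, fault_a, fault_b) for p in events])
bands = np.histogram(d, bins=[0, 2, 5, 10, 30])[0]   # event counts per distance band
```

    The near-fault band dominates, mirroring the study's observation that earthquake frequency is elevated close to active faults; the band where the counts fall to background levels would define the process zone width.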

  3. Computer-aided Fault Tree Analysis

    International Nuclear Information System (INIS)

    Willie, R.R.

    1978-08-01

    A computer-oriented methodology for deriving minimal cut and path set families associated with arbitrary fault trees is discussed first. Then the use of the Fault Tree Analysis Program (FTAP), an extensive FORTRAN computer package that implements the methodology is described. An input fault tree to FTAP may specify the system state as any logical function of subsystem or component state variables or complements of these variables. When fault tree logical relations involve complements of state variables, the analyst may instruct FTAP to produce a family of prime implicants, a generalization of the minimal cut set concept. FTAP can also identify certain subsystems associated with the tree as system modules and provide a collection of minimal cut set families that essentially expresses the state of the system as a function of these module state variables. Another FTAP feature allows a subfamily to be obtained when the family of minimal cut sets or prime implicants is too large to be found in its entirety; this subfamily consists only of sets that are interesting to the analyst in a special sense
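
    For coherent trees without complemented events, a minimal cut set family of the kind FTAP produces can be obtained by top-down (MOCUS-style) expansion; a compact sketch (the example tree is hypothetical, and the prime-implicant handling FTAP offers for complemented variables is not covered):

```python
def mocus(gates, top):
    """Top-down expansion of a coherent fault tree into minimal cut sets.
    gates: name -> ('AND'|'OR', [inputs]); names not in gates are basic events."""
    cuts = [frozenset([top])]
    changed = True
    while changed:
        changed = False
        new = []
        for cut in cuts:
            gate = next((g for g in cut if g in gates), None)
            if gate is None:
                new.append(cut)
                continue
            changed = True
            op, kids = gates[gate]
            rest = cut - {gate}
            if op == 'AND':               # AND: all inputs join the same cut
                new.append(rest | frozenset(kids))
            else:                         # OR: one new cut per input
                new.extend(rest | frozenset([k]) for k in kids)
        cuts = new
    # minimise: drop any cut that strictly contains another
    return {c for c in cuts if not any(o < c for o in cuts)}

# Hypothetical tree: TOP = G1 OR C;  G1 = A AND G2;  G2 = B OR C
tree = {'TOP': ('OR', ['G1', 'C']),
        'G1': ('AND', ['A', 'G2']),
        'G2': ('OR', ['B', 'C'])}
mcs = mocus(tree, 'TOP')
```

    The repeated event C yields the non-minimal cut {A, C}, which the final absorption step removes, leaving {C} and {A, B}.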

  4. Modular techniques for dynamic fault-tree analysis

    Science.gov (United States)

    Patterson-Hine, F. A.; Dugan, Joanne B.

    1992-01-01

    It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
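
    The Markov side of such a hierarchical model can be illustrated by solving a small absorbing chain for a duplex subsystem (the three-state model and the failure/repair rates are assumed for illustration, not taken from the FTPP analysis):

```python
import numpy as np
from scipy.linalg import expm

# Duplex subsystem: states 0 = both units up, 1 = one up, 2 = failed (absorbing)
lam, mu = 1e-3, 1e-1      # assumed per-hour failure and repair rates
Q = np.array([[-2 * lam,       2 * lam,  0.0],
              [      mu, -(mu + lam),    lam],
              [     0.0,          0.0,   0.0]])   # generator matrix

def unreliability(t, p0=(1.0, 0.0, 0.0)):
    """Probability that the subsystem has failed by mission time t."""
    p = np.array(p0) @ expm(Q * t)   # transient state distribution
    return float(p[2])
```

    In a modularized tree, such a module's result feeds the combinatoric part directly, so only this 3-state chain, not the whole system, needs a Markov solution; that is the size reduction the article exploits for longer mission times.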

  5. An impact analysis of the fault impedance on voltage sags

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alessandro Candido Lopes [CELG - Companhia Energetica de Goias, Goiania, GO (Brazil). Generation and Transmission. System' s Operation Center], E-mail: alessandro.clr@celg.com.br; Batista, Adalberto Jose [Federal University of Goias (UFG), Goiania, GO (Brazil)], E-mail: batista@eee.ufg.br; Leborgne, Roberto Chouhy [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil)], E-mail: rcl@ece.ufrgs.br; Emiliano, Pedro Henrique Mota, E-mail: ph@phph.com.br

    2009-07-01

    This paper presents an impact analysis of the fault impedance, in terms of its module and angle, on voltage sags caused by faults. Symmetrical and asymmetrical faults are simulated, at transmission and distribution lines, by using a frequency-domain fault simulation software called ANAFAS. Voltage sags are monitored at buses where sensitive end-users are connected. In order to overcome some intrinsic limitations of this software concerning its automatic execution for several cases, a computational tool was developed in Java programming language. This solution allows the automatic simulation of cases including the effect of the fault position, the fault type, and the proper fault impedance. The main conclusion is that the module and angle of the fault impedance can have a significant influence on voltage sag depending on the fault characteristics. (author)
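
    The influence of the fault impedance module and angle on sag magnitude can be sketched with the classical voltage-divider model at the point of common coupling (the impedance values below are assumed; ANAFAS performs full network fault simulation, not this simplification):

```python
import cmath

def sag_at_pcc(z_source, z_line, z_fault, e=1.0):
    """Voltage-divider sag model: retained complex voltage at the point of
    common coupling (PCC) for a fault at the end of z_line (values in pu)."""
    return e * (z_line + z_fault) / (z_source + z_line + z_fault)

z_s = complex(0.02, 0.20)    # assumed source impedance up to the PCC
z_l = complex(0.05, 0.15)    # line impedance from the PCC to the fault point
mags = {name: abs(sag_at_pcc(z_s, z_l, zf))
        for name, zf in [('bolted', 0j),
                         ('resistive', 0.05 + 0j),
                         ('complex', cmath.rect(0.05, cmath.pi / 4))]}
# abs() gives the sag magnitude; cmath.phase() would give the phase-angle jump
```

    Even in this caricature, a nonzero fault impedance raises the retained voltage relative to a bolted fault, and its angle shifts both the magnitude and the phase-angle jump, in line with the paper's conclusion.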

  6. Seismic fault analysis of Chicoutimi region

    International Nuclear Information System (INIS)

    Woussen, G.; Ngandee, S.

    1996-01-01

    On November 25, 1988, an earthquake measuring 6.5 on the Richter scale occurred at a depth of 29 km in Precambrian bedrock in the Saguenay region (Quebec). Given that the seismic event was located near a major zone of normal faults, it is important to determine whether the earthquake could be associated with this large structure or with faults related to it. This is discussed through a compilation and interpretation of structural discontinuities on key outcrops in the vicinity of the epicenter. The report is divided into four parts: the first gives a brief overview of the geology in order to provide a geologic context for the structural measurements; the second comprises an analysis of fractures in each of the three lithotectonic units defined in the first part; the third discusses the data; and the fourth provides a conclusion. 30 refs., 53 figs

  7. Open-circuit fault detection and tolerant operation for a parallel-connected SAB DC-DC converter

    DEFF Research Database (Denmark)

    Park, Kiwoo; Chen, Zhe

    2014-01-01

    This paper presents an open-circuit fault detection method and its tolerant control strategy for a Parallel-Connected Single Active Bridge (PCSAB) dc-dc converter. The structural and operational characteristics of the PCSAB converter lead to several advantages, especially for high power applications...

  8. Parallel-Sequential Texture Analysis

    NARCIS (Netherlands)

    van den Broek, Egon; Singh, Sameer; Singh, Maneesha; van Rikxoort, Eva M.; Apte, Chid; Perner, Petra

    2005-01-01

    Color induced texture analysis is explored, using two texture analysis techniques: the co-occurrence matrix and the color correlogram as well as color histograms. Several quantization schemes for six color spaces and the human-based 11 color quantization scheme have been applied. The VisTex texture

  9. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the finite element method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task, in which the data and algorithm structure of the codes plays an important role in exploiting the system's capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance, do not require parallelisation of their individual modules. Codes in the second category, such as conventional FEM codes, do require parallelisation of individual modules; here, parallelisation of the equation solution module poses the major difficulty. Different solution schemes, such as the domain decomposition method (DDM), the parallel active column solver and the substructuring method, are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  10. Extension parallel to the rift zone during segmented fault growth: application to the evolution of the NE Atlantic

    Directory of Open Access Journals (Sweden)

    A. Bubeck

    2017-11-01

    The mechanical interaction of propagating normal faults is known to influence the linkage geometry of first-order faults, and the development of second-order faults and fractures, which transfer displacement within relay zones. Here we use natural examples of growth faults from two active volcanic rift zones (Koa`e, island of Hawai`i, and Krafla, northern Iceland) to illustrate the importance of horizontal-plane extension (heave) gradients, and associated vertical-axis rotations, in evolving continental rift systems. Second-order extension and extensional-shear faults within the relay zones variably resolve components of regional extension, and components of extension and/or shortening parallel to the rift zone, to accommodate the inherently three-dimensional (3-D) strains associated with relay zone development and rotation. Such a configuration involves volume increase, which is accommodated at the surface by open fractures; in the subsurface this may be accommodated by veins or dikes oriented obliquely and normal to the rift axis. To consider the scalability of the effects of relay zone rotations, we compare the geometry and kinematics of fault and fracture sets in the Koa`e and Krafla rift zones with data from exhumed contemporaneous fault and dike systems developed within a > 5×10⁴ km² relay system that formed during development of the NE Atlantic margins. Based on the findings presented here we propose a new conceptual model for the evolution of segmented continental rift basins on the NE Atlantic margins.

  11. The Analysis of The Fault of Electrical Power Steering

    Directory of Open Access Journals (Sweden)

    Zhang Li Wen

    2016-01-01

    This paper analyses the common fault types of the primary Electrical Power Steering (EPS) system and classifies each fault, providing a basis for further troubleshooting and maintenance. A practical fault-tolerant working principle is also proposed, in order to make the EPS system more secure and durable.

  12. Fuzzy Uncertainty Evaluation for Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    The traditional probabilistic approach can calculate relatively accurate results, but it requires a long computation time because of the repetitive sampling of the Monte Carlo (MC) method. In addition, when data for statistical analysis are insufficient, or when some events are mainly caused by human error, the probabilistic approach may not be feasible, because the uncertainties of these events are difficult to express as probability distributions. In order to reduce the computation time, and to quantify the uncertainty of top events when basic events whose uncertainties cannot be expressed probabilistically exist, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following the large loss of coolant accident (LLOCA). The code is first implemented and tested on the fault tree of a radiation release accident, then applied to the fault tree of the core damage accident after the LLOCA in three cases, and the results are compared with those computed by probabilistic uncertainty propagation using the MC method. The results obtained by fuzzy uncertainty propagation can be calculated in a relatively short time and cover the results obtained by probabilistic uncertainty propagation.
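
    Fuzzy uncertainty propagation through a coherent fault tree can be sketched with alpha-cut interval arithmetic on triangular fuzzy probabilities (the tree and the numbers below are illustrative; monotonicity of the AND/OR gates lets the interval endpoints propagate directly):

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def and_iv(p, q):
    """AND gate on probability intervals (monotone, so endpoints multiply)."""
    return (p[0] * q[0], p[1] * q[1])

def or_iv(p, q):
    """OR gate on probability intervals: 1 - (1-p)(1-q), also monotone."""
    return (1 - (1 - p[0]) * (1 - q[0]), 1 - (1 - p[1]) * (1 - q[1]))

# Hypothetical basic events with triangular fuzzy probabilities (a, m, b)
A = (1e-3, 2e-3, 4e-3)
B = (5e-4, 1e-3, 2e-3)
C = (1e-4, 2e-4, 4e-4)

# Top event TOP = (A AND B) OR C, propagated level by level over alpha-cuts
top_cut = {alpha: or_iv(and_iv(alpha_cut(A, alpha), alpha_cut(B, alpha)),
                        alpha_cut(C, alpha))
           for alpha in (0.0, 0.5, 1.0)}
```

    The nested intervals over alpha levels reconstruct the fuzzy membership function of the top event in a handful of deterministic evaluations, instead of thousands of MC samples.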

  13. Automated fault tree analysis: the GRAFTER system

    International Nuclear Information System (INIS)

    Sancaktar, S.; Sharp, D.R.

    1985-01-01

    An inherent part of probabilistic risk assessment (PRA) is the construction and analysis of detailed fault trees. For this purpose, a fault tree computer graphics code named GRAFTER has been developed. The code system centers around the GRAFTER code. This code is used interactively to construct, store, update and print fault trees of small or large sizes. The SIMON code is used to provide data for the basic event probabilities. ENCODE is used to process the GRAFTER files to prepare input for the WAMCUT code. WAMCUT is used to quantify the top event probability and to identify the cutsets. This code system has been extensively used in various PRA projects. It has resulted in reduced manpower costs, increased QA capability, ease of documentation and it has simplified sensitivity analyses. Because of its automated nature, it is also suitable for LIVING PRA Studies which require updating and modifications during the lifetime of the plant. Brief descriptions and capabilities of the GRAFTER, SIMON and ENCODE codes are provided; an application of the GRAFTER system is outlined; and conclusions and comments on the code system are given

  14. Fault tree analysis of KNICS RPS software

    International Nuclear Information System (INIS)

    Park, Gee Yong; Kwon, Kee Choon; Koh, Kwang Yong; Jee, Eun Kyoung; Seong, Poong Hyun; Lee, Dae Hyung

    2008-01-01

    This paper describes the application of software Fault Tree Analysis (FTA) as one of the analysis techniques for a Software Safety Analysis (SSA) at the design phase, and its analysis results for the safety-critical software of a digital reactor protection system, called the KNICS RPS, being developed in the KNICS (Korea Nuclear Instrumentation and Control Systems) project. The software modules in the design description were represented by Function Blocks (FBs), and the software FTA was performed based on well-defined fault tree templates for the FBs. The SSA, which is part of the verification and validation (V and V) activities, was activated at each phase of the software lifecycle for the KNICS RPS. At the design phase, software HAZOP (Hazard and Operability) and software FTA were employed in the SSA in such a way that the software HAZOP was performed first and then the software FTA was applied. The software FTA was applied to some critical modules selected from the software HAZOP analysis.

  15. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    Science.gov (United States)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    The power transformer is an important piece of equipment in power plants and substations and a key hub linking power distribution and transmission in the power system. Its performance directly affects the reliability and stability of the power system. This paper first classifies power transformer faults into five categories by fault type and, along the time dimension, into three stages. It then uses routine dissolved gas analysis (DGA) and infrared diagnostic criteria to establish the running state of the transformer. Finally, according to the needs of power transformer fault diagnosis, a fault tree for the power transformer is constructed by stepwise refinement from the general to the specific.

  16. Parallel single-cell analysis microfluidic platform

    NARCIS (Netherlands)

    van den Brink, Floris Teunis Gerardus; Gool, Elmar; Frimat, Jean-Philippe; Bomer, Johan G.; van den Berg, Albert; le Gac, Severine

    2011-01-01

    We report a PDMS microfluidic platform for parallel single-cell analysis (PaSCAl) as a powerful tool to decipher the heterogeneity found in cell populations. Cells are trapped individually in dedicated pockets, and thereafter, a number of invasive or non-invasive analysis schemes are performed.

  17. Fault tree analysis for reactor systems

    International Nuclear Information System (INIS)

    Crosetti, P.A.

    1971-01-01

    Reliability analysis is playing an increasingly important role in the quantitative assessment of system performance for assuring nuclear safety, improving plant performance and plant life, and reducing plant operating costs. The complexity of today's nuclear plants warrants the use of techniques that provide a comprehensive evaluation of systems in their total context. In particular, fault tree analysis with probability evaluation can play a key role in assuring nuclear safety, improving plant performance and plant life, and reducing plant operating costs. The technique provides an all-inclusive, versatile mathematical tool for analyzing complex systems. Its application can include a complete plant as well as any of its systems and subsystems. Fault tree analysis provides an objective basis for analyzing system design, performing trade-off studies, analyzing common mode failures, demonstrating compliance with AEC requirements, and justifying system changes or additions. The logic of the approach makes it readily understandable and, therefore, it serves as an effective visibility tool for both engineering and management. (U.S.)

  18. Power system reliability analysis using fault trees

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2006-01-01

    The power system reliability analysis method is developed from the aspect of reliable delivery of electrical energy to customers. The method is based on fault tree analysis, which is widely applied in Probabilistic Safety Assessment (PSA), and is adapted for power system reliability analysis. It is developed in such a way that only the basic reliability parameters of the analysed power system are necessary as input for the calculation of the reliability indices of the system. The modeling and analysis were performed on an example power system consisting of eight substations. The results include the level of reliability of the current power system configuration, the combinations of component failures resulting in a failed power delivery to loads, and the importance factors for components and subsystems. (author)
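
The core of such a calculation, from basic reliability parameters to a load-point reliability index and importance factors, can be sketched with minimal cut sets. The component names, unavailabilities, and cut sets below are hypothetical, not taken from the paper's eight-substation example.

```python
# Minimal cut set (MCS) quantification for a single load point.
# Hypothetical system: the load is lost if both supply lines fail together,
# or if the transformer fails.

p = {"line1": 1e-2, "line2": 2e-2, "xfmr": 1e-3}   # component unavailabilities
mcs = [("line1", "line2"), ("xfmr",)]               # minimal cut sets

def cut_prob(cut):
    prob = 1.0
    for comp in cut:
        prob *= p[comp]
    return prob

# First-order (rare-event) approximation, then the exact value for these two
# cut sets by inclusion-exclusion (subtracting their joint occurrence).
approx = sum(cut_prob(c) for c in mcs)
exact = approx - cut_prob(("line1", "line2", "xfmr"))

# Fussell-Vesely importance: share of the top-event probability carried by the
# cut sets containing each component.
fv = {c: sum(cut_prob(m) for m in mcs if c in m) / approx for c in p}
print(approx, exact, fv)
```

Here the single-component transformer cut set dominates, which is the kind of insight the importance factors in the paper are meant to surface.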

  19. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis
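
A back-of-envelope check, not from the paper itself: Amdahl's law converts the reported 1.6x speedup on four (virtual) processors into the fraction of the runtime that was effectively parallelized, which quantifies how much the transfer/read/wait overheads mentioned above cost.

```python
# Amdahl's-law estimate of the effectively parallel fraction of a workload
# implied by a measured speedup on n processors.

def implied_parallel_fraction(speedup, n):
    # speedup = 1 / ((1 - f) + f / n)  =>  f = (1 - 1/speedup) * n / (n - 1)
    return (1.0 - 1.0 / speedup) * n / (n - 1.0)

f = implied_parallel_fraction(1.6, 4)
print(f"implied parallel fraction ~ {f:.2f}")  # ~0.50
```

A 1.6x speedup on four processors implies only about half the runtime was parallelized; shrinking the serial overheads is exactly what would raise this fraction.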

  20. Landforms along transverse faults parallel to axial zone of folded mountain front, north-eastern Kumaun Sub-Himalaya, India

    Science.gov (United States)

    Luirei, Khayingshing; Bhakuni, S. S.; Negi, Sanjay S.

    2017-02-01

    The shape of the frontal part of the Himalaya around the north-eastern corner of the Kumaun Sub-Himalaya, along the Kali River valley, is defined by folded hanging wall rocks of the Himalayan Frontal Thrust (HFT). Two parallel faults (the Kalaunia and Tanakpur faults) trace along the axial zone of the folded HFT. Between these faults, the hinge zone of this transverse fold is relatively straight; along these faults, the beds abruptly change their attitudes and their widths are tectonically attenuated across the two hinge lines of the fold. The area consists of various surfaces of coalescing fans and terraces. The fans are composed predominantly of sandstone clasts laid down by steep-gradient streams originating from the Siwalik range. The alluvial fans are characterised by compound and superimposed fans with high relief, generated by the tectonic activity associated with thrusting along the HFT. The truncated fan along the HFT has formed a 100 m-high escarpment running E-W for ~5 km. Quaternary terrace deposits suggest two phases of tectonic uplift in the basal part of the hanging wall block of the HFT dipping towards the north. The first phase is represented by tilting of the terrace sediments by ~30° towards the NW, while the second phase is evident from deformed structures in the terrace deposit comprising mainly reverse faults, fault propagation folds, convolute laminations, flower structures and back thrust faults. The second phase produced ~1.0 m of offset of the terrace stratification along a thrust fault. Tectonic escarpments are recognised across the splay thrust just south of the HFT trace. The south-facing hill slopes exhibit numerous landslides along active channels incising the hanging wall rocks of the HFT. The study area shows weak seismicity. The major Moradabad Fault crosses near the study area. This transverse fault may have suppressed the seismicity in the Tanakpur area, and the movement along the Moradabad and Kasganj

  1. Parallel processor for fast event analysis

    International Nuclear Information System (INIS)

    Hensley, D.C.

    1983-01-01

    Current maximum data rates from the Spin Spectrometer of approx. 5000 events/s (up to 1.3 MBytes/s) and minimum analysis requiring at least 3000 operations/event require a CPU cycle time near 70 ns. In order to achieve an effective cycle time of 70 ns, a parallel processing device is proposed where up to 4 independent processors will be implemented in parallel. The individual processors are designed around the Am2910 microsequencer, the Am29116 μP, and the Am29517 multiplier. Satellite histogramming in a mass memory system will be managed by a commercial 16-bit μP system.

  2. Common faults in turbines and applying neural networks in order to fault diagnostic by vibration analysis

    International Nuclear Information System (INIS)

    Masoudifar, M.; AghaAmini, M.

    2001-01-01

    Today, fault diagnosis of rotating machinery based on vibration analysis is an effective method for designing predictive maintenance programs. In this method, the vibration level of the turbines is monitored; if it exceeds the allowable limit, the vibration data are analyzed and growing faults are detected. However, because of the high complexity of the monitored system, interpretation of the measured data is difficult. Therefore, the design of fault-diagnostic expert systems using experts' technical experience and knowledge seems to be the best solution. In this paper, several common faults in turbines are first studied, and then how neural networks can be applied to interpret the vibration data for fault diagnosis is explained

  3. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

    A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function, which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method; the example also includes statistically dependent components. (orig.) [de
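
The intent of the "restrictions on variables" can be illustrated concretely: each multistate component is encoded by mutually exclusive state indicators, and only assignments with exactly one state per component are admissible. The two-component system, state probabilities, and top event below are a hypothetical sketch (assuming independent components, unlike the paper's dependent example).

```python
from itertools import product

# A 3-state pump and a 2-state valve; probabilities per state sum to 1.
states = {
    "pump":  {"ok": 0.90, "degraded": 0.08, "failed": 0.02},
    "valve": {"ok": 0.95, "failed": 0.05},
}

def top_event(assign):
    # Hypothetical top event: pump failed, or pump degraded while valve failed.
    return assign["pump"] == "failed" or (
        assign["pump"] == "degraded" and assign["valve"] == "failed"
    )

comps = list(states)
p_top = 0.0
# Enumerating state tuples (rather than raw Boolean indicator vectors)
# enforces the exactly-one-state restriction automatically.
for combo in product(*(states[c].items() for c in comps)):
    assign = {c: s for c, (s, _) in zip(comps, combo)}
    prob = 1.0
    for _, pr in combo:
        prob *= pr
    if top_event(assign):
        p_top += prob

print(f"P(top) = {p_top:.4f}")  # 0.02 + 0.08 * 0.05 ~ 0.024
```

In unrestricted Boolean algebra the indicators "pump ok" and "pump failed" could both be true at once; restricting enumeration to one state per component is what the special algebra formalizes.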

  4. Methods of fault tree analysis and their limits

    International Nuclear Information System (INIS)

    Weber, G.G.

    1984-12-01

    Some recent methodological developments of fault tree analysis are discussed and limits of fault tree analysis and a criterion for admissibility of structure functions are given. It is shown that there are interesting relations to switching theory and to stochastic processes. (orig./HP) [de

  5. Parallel interactive data analysis with PROOF

    International Nuclear Information System (INIS)

    Ballintijn, Maarten; Biskup, Marek; Brun, Rene; Canal, Philippe; Feichtinger, Derek; Ganis, Gerardo; Kickinger, Guenter; Peters, Andreas; Rademakers, Fons

    2006-01-01

    The Parallel ROOT Facility, PROOF, enables the analysis of much larger data sets on a shorter time scale. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to gigabytes today. Being part of the ROOT framework PROOF inherits the benefits of a performant object storage system and a wealth of statistical and visualization tools. This paper describes the data analysis model of ROOT and the latest developments on closer integration of PROOF into that model and the ROOT user environment, e.g. support for PROOF-based browsing of trees stored remotely, and the popular TTree::Draw() interface. We also outline the ongoing developments aimed to improve the flexibility and user-friendliness of the system

  6. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  7. Quaternary faulting in the Tatra Mountains, evidence from cave morphology and fault-slip analysis

    OpenAIRE

    Szczygieł Jacek

    2015-01-01

    Tectonically deformed cave passages in the Tatra Mts (Central Western Carpathians) indicate some fault activity during the Quaternary. Displacements occur in the youngest passages of the caves indicating (based on previous U-series dating of speleothems) an Eemian or younger age for those faults, and so one tectonic stage. On the basis of stress analysis and geomorphological observations, two different mechanisms are proposed as responsible for the development of these displacements. The firs...

  8. Extensions to the Parallel Real-Time Artificial Intelligence System (PRAIS) for fault-tolerant heterogeneous cycle-stealing reasoning

    Science.gov (United States)

    Goldstein, David

    1991-01-01

    Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.

  9. Analysis of Retransmission Policies for Parallel Data Transmission

    Directory of Open Access Journals (Sweden)

    I. A. Halepoto

    2018-06-01

    Stream control transmission protocol (SCTP) is a transport-layer protocol that is efficient, reliable, and connection-oriented, as compared with the transmission control protocol (TCP) and user datagram protocol (UDP). Additionally, SCTP has more innovative features such as multihoming, multistreaming, and unordered delivery. With multihoming, SCTP establishes multiple paths between a sender and receiver; however, it uses only the primary path for data transmission and the secondary path (or paths) for fault tolerance. The concurrent multipath transfer extension of SCTP (CMT-SCTP) allows a sender to transmit data in parallel over multiple paths, which increases the overall transmission throughput. Parallel data transmission is beneficial for higher data rates, and parallel connections also help in services such as video streaming, where transmission continues on alternate links if one connection suffers errors. With parallel transmission, however, out-of-order arrival of data packets at the receiver is common; the receiver must wait until the missing packets arrive, degrading performance while using CMT-SCTP. To reduce this delay, CMT-SCTP uses intelligent retransmission policies to immediately retransmit the missing packets: RTX-SSTHRESH, RTX-LOSSRATE, and RTX-CWND. The main objective of this paper is the performance analysis of these retransmission policies, evaluated through simulations in Network Simulator 2. Across various scenarios and parameters, RTX-LOSSRATE is observed to be a suitable policy.
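
The difference between the three retransmission policies is simply the per-path statistic used to pick the retransmission destination. The sketch below is a hypothetical illustration with made-up path statistics, not code from the paper or from an SCTP stack; a real implementation tracks these values per destination address.

```python
# Path selection under the three CMT-SCTP retransmission policies.
paths = [
    {"name": "A", "ssthresh": 32000, "loss_rate": 0.02, "cwnd": 12000},
    {"name": "B", "ssthresh": 64000, "loss_rate": 0.05, "cwnd": 20000},
]

def pick_retransmission_path(policy):
    if policy == "RTX-SSTHRESH":     # largest slow-start threshold
        return max(paths, key=lambda p: p["ssthresh"])["name"]
    if policy == "RTX-LOSSRATE":     # lowest observed loss rate
        return min(paths, key=lambda p: p["loss_rate"])["name"]
    if policy == "RTX-CWND":         # largest congestion window
        return max(paths, key=lambda p: p["cwnd"])["name"]
    raise ValueError(f"unknown policy: {policy}")

for policy in ("RTX-SSTHRESH", "RTX-LOSSRATE", "RTX-CWND"):
    print(policy, "->", pick_retransmission_path(policy))
```

Note how the policies can disagree: here RTX-LOSSRATE prefers the path with the smaller loss rate even though the other path has the larger window, which is the trade-off the simulations evaluate.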

  10. Fault condition stress analysis of NET 16 TF coil model

    International Nuclear Information System (INIS)

    Jong, C.T.J.

    1992-04-01

    As part of the design process of the NET/ITER toroidal field coils (TFCs), the mechanical behaviour of the magnetic system under fault conditions has to be analysed in some detail. Under fault conditions, either electrical or mechanical, the magnetic loading of the coils becomes extreme and further mechanical failure of parts of the overall structure might occur (e.g. failure of the coil, gravitational support, or intercoil structure). The mechanical behaviour of the magnetic system under fault conditions has been analysed with a finite element model of the complete TFC system. The analysed fault conditions consist of a thermal fault, electrical faults and mechanical faults; the mechanical faults have been applied simultaneously with an electrical fault. This report describes the work carried out to create the finite element model of 16 TFCs and contains an extensive presentation of the results obtained with this model for a normal operating condition analysis and 9 fault condition analyses. Chapters 2-5 contain a detailed description of the finite element model, boundary conditions and loading conditions of the analyses made; chapters 2-4 can be skipped if the reader is only interested in results. To understand the results presented, chapter 6 is recommended, which contains a detailed description of all analysed fault conditions. The dimensions and geometry of the model correspond to the status of the NET/ITER TFC design of May 1990. Compared with previous models of the complete magnetic system, the finite element model of 16 TFCs is 'detailed', and can be used for linear elastic analysis with faulted loads. (author). 8 refs.; 204 figs.; 134 tabs

  11. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    The stable operation and reliable breaking of large generator currents is a difficult problem in power systems. It can be solved by parallel interrupters and a proper timing sequence with phase-control technology, in which the breaker's control strategy is decided by the opening times of both the first-opening and second-opening phases. A precise model of the transfer current can provide the proper timing sequence for breaking the generator circuit breaker. By analysing transfer-current experiments and data, this paper obtains the real vacuum arc resistance and a precise corrected model of the large-transfer-current process. The transfer time calculated with the corrected model is very close to the actual transfer time, providing guidance for planning the proper timing sequence and breaking the vacuum generator circuit breaker with parallel interrupters.

  12. Efficient job handling in the GRID short deadline, interactivity, fault tolerance and parallelism

    CERN Document Server

    Moscicki, Jakub

    2006-01-01

    The major GRID infrastructures are designed mainly for batch-oriented computing with coarse-grained jobs and relatively high job turnaround time. However, many practical applications in the natural and physical sciences may be easily parallelized and run as a set of smaller tasks which require little or no synchronization and which may be scheduled in a more efficient way. The Distributed Analysis Environment Framework (DIANE) is a Master-Worker execution skeleton for applications, which complements the GRID middleware stack. Automatic failure recovery and task dispatching policies enable easy customization of the behaviour of the framework in a dynamic and non-reliable computing environment. We demonstrate the experience of using the framework with several diverse real-life applications, including Monte Carlo simulation, physics data analysis and biotechnology. The interfacing of existing sequential applications is made easy from the point of view of a non-expert user, also for legacy applications. We analyze th...

  13. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  14. The timing of fault motion in Death Valley from Illite Age Analysis of fault gouge

    Science.gov (United States)

    Lynch, E. A.; Haines, S. H.; Van der Pluijm, B.

    2014-12-01

    We constrained the timing of fluid circulation and associated fault motion in the Death Valley region of the US Basin and Range Province from Illite Age Analysis (IAA) of fault gouge at seven Low-Angle Normal Fault (LANF) exposures in the Black Mountains and Panamint Mountains, and in two nearby areas. 40Ar/39Ar ages of neoformed, illitic clay minerals in these fault zones range from 2.8 Ma to 18.6 Ma, preserving asynchronous fault motion across the region that corresponds to an evolving history of crustal block movements during Neogene extensional deformation. From north to south, along the western side of the Panamint Range, the Mosaic Canyon fault yields an authigenic illite age of 16.9±2.9 Ma, the Emigrant fault has ages of less than 10-12 Ma at Tucki Mountain and Wildrose Canyon, and an age of 3.6±0.17 Ma was obtained for the Panamint Front Range LANF at South Park Canyon. Across Death Valley, along the western side of the Black Mountains, Ar ages of clay minerals are 3.2±3.9 Ma, 12.2±0.13 Ma and 2.8±0.45 Ma for the Amargosa Detachment, the Gregory Peak Fault and the Mormon Point Turtleback detachment, respectively. Complementary analysis of the δH composition of neoformed clays shows a primarily meteoric source for the mineralizing fluids in these LANF zones. The ages fall into two geologic timespans, reflecting activity pulses in the Middle Miocene and in the Upper Pliocene. Activity on both of the range front LANFs does not appear to be localized on any single portion of these fault systems. Middle Miocene fault rock ages of neoformed clays were also obtained in the Ruby Mountains (10.5±1.2 Ma) to the north of the Death Valley region and to the south in the Whipple Mountains (14.3±0.19 Ma). The presence of similar, bracketed times of activity indicate that LANFs in the Death Valley region were tectonically linked, while isotopic signatures indicate that faulting pulses involved surface fluid penetration.

  15. Failure characteristics analysis and fault diagnosis for liquid rocket engines

    CERN Document Server

    Zhang, Wei

    2016-01-01

    This book concentrates on health monitoring technology for the Liquid Rocket Engine (LRE), including failure analysis, fault diagnosis and fault prediction. Since no similar work has been published, the failure pattern and mechanism analysis of the LRE at the system level will be of particular interest to readers. Furthermore, the application cases used to validate the efficacy of the fault diagnosis and prediction methods of the LRE differ from those in other works. Readers can learn the system-level modeling, analysis and testing methods of the LRE as well as the corresponding fault diagnosis and prediction methods. This book will benefit researchers and students who are pursuing aerospace technology, fault detection, diagnostics and corresponding applications.

  16. Fault-weighted quantification method of fault detection coverage through fault mode and effect analysis in digital I&C systems

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jaehyun; Lee, Seung Jun, E-mail: sjlee420@unist.ac.kr; Jung, Wondea

    2017-05-15

    Highlights: • We developed a fault-weighted quantification method of fault detection coverage. • The method was applied to a specific digital reactor protection system. • The unavailability of the module differed by a factor of about 20 from the traditional method. • Several experimental tests can be effectively prioritized using this method. - Abstract: One of the most outstanding features of a digital I&C system is the use of fault-tolerant techniques. Recognizing the importance of quantifying the fault detection coverage of fault-tolerant techniques, several studies have developed and employed fault injection methods for this purpose. In the fault injection method, each injected fault has a different importance, because the frequency with which each injected fault is realized differs. However, no previous studies have addressed the importance and weighting factor of each injected fault. In this work, a new method is proposed for allocating a weight to each injected fault using failure mode and effect analysis data. The fault-weighted quantification method was then applied to a specific digital reactor protection system to quantify its fault detection coverage. One major finding of the application was that a traditional method may estimate the unavailability of a specific module in a digital I&C system to be about 20 times smaller than the real value. Another finding was that the importance of the experimental cases can also be classified. This method is therefore expected not only to provide an accurate quantification procedure for fault detection coverage by weighting the injected faults, but also to contribute to effective fault injection experiments by sorting the importance of the failure categories.
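
The arithmetic behind fault-weighted coverage can be sketched briefly. The fault modes, FMEA-derived weights, and detection flags below are hypothetical, not the paper's data: each injected fault gets a weight proportional to its estimated realization frequency, and coverage is the weighted share of detected faults rather than a plain count.

```python
# Fault-weighted vs. unweighted fault detection coverage.
faults = [
    {"mode": "stuck-at-0",   "weight": 0.50, "detected": True},
    {"mode": "stuck-at-1",   "weight": 0.30, "detected": True},
    {"mode": "bit-flip",     "weight": 0.15, "detected": False},
    {"mode": "open-circuit", "weight": 0.05, "detected": True},
]

# Traditional coverage: fraction of injected faults detected, all equal weight.
unweighted = sum(f["detected"] for f in faults) / len(faults)

# Fault-weighted coverage: detected weight over total weight.
weighted = (sum(f["weight"] for f in faults if f["detected"])
            / sum(f["weight"] for f in faults))

print(f"unweighted = {unweighted:.2f}, weighted = {weighted:.2f}")
```

Here the two figures differ because the undetected mode carries a non-trivial weight; in the paper's application the same effect is what shifts the module unavailability estimate by a factor of about 20.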

  17. Supposed capable fault analysis as supporting data for Nuclear Power Plant in Bojonegara, Banten province

    International Nuclear Information System (INIS)

    Purnomo Raharjo; June Mellawati; Yarianto SBS

    2016-01-01

    A fault location, and the region within a 150 km radius of a fault line or fault zone, is a rejected area for a Nuclear Power Plant site. The objective of this study was to identify the existence of surface faults or supposed capable faults within 150 km of the site of interest. The methodology covers interpretation of fault structure, seismic reflection analysis on land and at sea, seismotectonic analysis, and determination of areas that are free from surface faults. The regional study area, with a radius of 150 km from the site of interest, includes the provinces of Banten, Jakarta, West Java, and South Sumatra (some part of Lampung). The results of Landsat image interpretation show a northeast-southwest fault structure pattern representing the Cimandiri fault, and northwest-southeast patterns representing the Citandui fault, Baribis fault, and Tangkuban Perahu fault. The northeast-southwest faults are estimated to be left-lateral faults, and the northwest-southeast trending faults are estimated to be right-lateral faults. Based on the seismic data on land, faults that rise through to the Cisubuh formation are classified as supposed capable faults. Seismic stratigraphy sequence analysis at sea was correlated with depositional age units in the Pleistocene, divided into Qt (boundary of the Tertiary and Early Pleistocene), Q1 (boundary of the Early and Middle Pleistocene) and Q2 (boundary of the Middle and Late Pleistocene); supposed capable faults pierce the Early to Late Pleistocene sequence. The results of the seismotectonic analysis show that there are capable faults which are estimated as supposed capable faults. (author)

  18. Utilization of fault tree analysis techniques in fire protection work

    International Nuclear Information System (INIS)

    Crass, E.R.

    1986-01-01

    This paper describes the development of a fault tree model for a typical pressurized water reactor (PWR), and the subsequent use of this model to perform a safe shutdown analysis and determine conformance with Section IIIG of 10 CFR 50, Appendix R. The paper describes the rationale for choosing this analytical tool, the development of the fault tree model, the analysis of the model using the PREP code, disposition of the results, and finally, application of the results to determine the need for plant modifications. It concludes with a review of the strengths and weaknesses of the use of Fault Tree Methodology for this application

  19. Electrical Steering of Vehicles - Fault-tolerant Analysis and Design

    DEFF Research Database (Denmark)

    Blanke, Mogens; Thomsen, Jesper Sandberg

    2006-01-01

    The topic of this paper is systems that must be designed such that no single fault can cause failure at the overall level. A methodology is presented for analysis and design of fault-tolerant architectures, where diagnosis and autonomous reconfiguration can replace high-cost triple-redundancy solutions and still meet strict requirements to functional safety. The paper applies graph-based analysis of functional system structure to find a novel fault-tolerant architecture for an electrical steering where a dedicated AC-motor design and cheap voltage measurements ensure the ability to detect all...

  20. Analysis of large fault trees based on functional decomposition

    International Nuclear Information System (INIS)

    Contini, Sergio; Matuzas, Vaidas

    2011-01-01

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.
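The first decomposition method targets minimal cut sets (MCS). The top-down expansion it builds on can be sketched in a few lines of Python; the tree, gate encoding, and event names below are hypothetical, and the sketch omits the memory-saving decomposition itself, illustrating only how cut sets are expanded and then minimized by absorption:

```python
from itertools import product

# Hypothetical fault tree: nested tuples ('AND', ...) / ('OR', ...) with
# basic-event names as leaves (illustrative only, not the paper's notation).
TOP = ('AND',
       ('OR', 'A', 'B'),
       ('OR', 'B', ('AND', 'C', 'D')))

def cut_sets(node):
    """Expand a gate into its (not necessarily minimal) cut sets."""
    if isinstance(node, str):                      # basic event
        return [frozenset([node])]
    gate, *children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == 'OR':                               # union of children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND gate: cross-product of the children's cut sets
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Drop non-minimal sets by absorption: X absorbs Y whenever X is a subset of Y."""
    minimal = []
    for s in sorted(set(sets), key=len):
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal

mcs = minimize(cut_sets(TOP))    # here: {B} and {A, C, D}
```

For this toy tree, {B} absorbs both {A, B} and {B, C, D}, leaving two minimal cut sets.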

  1. Analysis of large fault trees based on functional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Contini, Sergio, E-mail: sergio.contini@jrc.i [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy); Matuzas, Vaidas [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy)

    2011-03-15

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  2. Generalized fault tree analysis combined with state analysis

    International Nuclear Information System (INIS)

    Caldarola, L.

    1980-02-01

    An analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (two or more states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables', and its basic rules are the same as those of traditional Boolean algebra with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. It is also shown that Boolean algebra with restrictions on variables facilitates the task of formally combining fault tree analysis with state analysis. The computer program MUSTAFA 1, based on the above theory, has been developed. It can analyse fault trees of systems containing statistically independent as well as dependent components with two or more states. MUSTAFA 1 can handle coherent as well as non-coherent Boolean functions. (orig.)

  3. Fault Correspondence Analysis in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    WANG, C.

    2015-02-01

    Full Text Available Wide area measurement systems (WAMS) mainly serve the requirement for time synchronization in complex electric power systems. The analysis and control of a power system depend largely on measurements of its state variables, and WAMS provides the basis for dynamic monitoring of the power system through these measurements, which can also satisfy the demands of observability, controllability, real-time analysis and decision-making, and self-adaptivity requested by the smart grid. In this paper, based on the principles of fault correspondence analysis, by calculating row characteristics representing nodal electrical information and column characteristics representing acquisition-time information, we conduct intensive research on fault detection. The results indicate that the fault location is determined by the first dimensional variable, and the occurrence time of the fault is determined by the second dimensional variable. The research in this paper will contribute to the development of the future smart grid.
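The row/column decomposition described above follows standard correspondence analysis, which can be sketched with NumPy's SVD. The measurement matrix, bus and instant indices, and the injected disturbance below are all hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical WAMS magnitude matrix: rows = buses, columns = sampling instants
N = np.abs(rng.normal(1.0, 0.1, size=(6, 8)))
N[3, 5] += 5.0                       # bus 3 departs sharply at instant 5 (a "fault")

P = N / N.sum()                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

row_coord = U * s / np.sqrt(r)[:, None]     # principal row coordinates (buses)
col_coord = Vt.T * s / np.sqrt(c)[:, None]  # principal column coordinates (instants)
```

The first dimension of `row_coord` singles out the faulted bus, and the first dimension of `col_coord` singles out the fault instant, mirroring the paper's observation about the first dimensional variable.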

  4. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  5. Linear stability analysis of heated parallel channels

    International Nuclear Information System (INIS)

    Nourbakhsh, H.P.; Isbin, H.S.

    1982-01-01

    An analysis is presented of the thermal-hydraulic stability of flow in parallel channels, covering the range from inlet subcooling to exit superheat. The model is based on a one-dimensional drift-velocity formulation of the two-phase flow conservation equations. The system of equations is linearized by assuming small disturbances about the steady state. The dynamic response of the system to an inlet flow perturbation is derived, yielding the characteristic equation that predicts the onset of instabilities. A specific application is carried out for homogeneous and regionally uniformly heated systems. The particular case of equal characteristic frequencies of the two-phase and single-phase vapor regions is studied in detail. The D-partition method and the Mikhailov stability criterion are used for determining the marginal stability boundary. Stability predictions from the present analysis are compared with experimental data from the solar test facility. 8 references

  6. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

    Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel, with each subunit operating independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision-making unit (DMU) and create the production possibility set (PPS) generated by these DMUs, in which the frontier points are considered efficient DMUs. We then introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.

  7. Using Order Tracking Analysis Method to Detect the Angle Faults of Blades on Wind Turbine

    DEFF Research Database (Denmark)

    Li, Pengfei; Hu, Weihao; Liu, Juncheng

    2016-01-01

    Angle faults of wind turbine blades usually comprise set-angle faults and pitch-angle faults, which account for a high proportion of all wind turbine faults. Compared with traditional fault detection methods, order tracking analysis offers an effective way to detect such angle faults. By analyzing and reconstructing the fault signals, the fault characteristic frequencies are readily detected; the characteristic frequencies of angle faults depend on the shaft rotating frequency and appear distinctly at the 1P and 3P frequencies.
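A minimal, self-contained sketch of the idea (not the authors' implementation): once the vibration signal is resampled at a constant shaft-angle increment, the amplitude at integer orders such as 1P and 3P falls out of a single-frequency DFT per order. The synthetic signal below is hypothetical:

```python
import math

# Synthetic vibration signal resampled at a constant angle increment,
# with assumed 1P and 3P components (hypothetical data, 8 full revolutions).
REVS, SAMPLES_PER_REV = 8, 64
N = REVS * SAMPLES_PER_REV
theta = [2 * math.pi * n / SAMPLES_PER_REV for n in range(N)]       # shaft angle [rad]
signal = [0.5 * math.sin(t) + 1.2 * math.sin(3 * t) for t in theta]  # 1P + 3P content

def order_amplitude(x, angles, k):
    """Amplitude of the k-th shaft order via a single-frequency DFT."""
    re = sum(xi * math.cos(k * a) for xi, a in zip(x, angles))
    im = sum(xi * math.sin(k * a) for xi, a in zip(x, angles))
    return 2 * math.hypot(re, im) / len(x)

spectrum = {k: order_amplitude(signal, theta, k) for k in range(1, 6)}
```

Because the signal spans whole revolutions, the order spectrum recovers the 1P amplitude (0.5) and the 3P amplitude (1.2) exactly, while the other orders vanish.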

  8. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, a nonlinear method for fault location in complex power systems is proposed, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the faulty component. Compared to previous Principal Component Analysis (PCA)-based methods, the new version can cope with strong nonlinearity and provide precise identification of the fault location. Computer simulations demonstrate the improved performance of the proposed method in recognizing the faulty component and evaluating its propagation across the system.
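The T2 monitoring statistic described above can be sketched with a polynomial-kernel KPCA in NumPy. This is an illustrative reconstruction, not the authors' code: the training data, kernel parameters, and number of retained components are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))          # hypothetical normal-operation PMU data

def poly_kernel(A, B, degree=2, c=1.0):
    return (A @ B.T + c) ** degree

# Centre the kernel matrix in feature space
K = poly_kernel(X, X)
n = len(X)
One = np.ones((n, n)) / n
Kc = K - One @ K - K @ One + One @ K @ One

# Eigendecomposition; keep the leading kernel principal components
lam, V = np.linalg.eigh(Kc)
idx = np.argsort(lam)[::-1][:5]
lam, V = lam[idx], V[:, idx]
alpha = V / np.sqrt(lam)               # normalised expansion coefficients

def t2(x):
    """T2 statistic of a new sample x in the kernel PC subspace."""
    k = poly_kernel(x[None, :], X).ravel()
    kc = k - k.mean() - K.mean(axis=1) + K.mean()   # centre the test kernel vector
    t = kc @ alpha                                  # scores on the kernel PCs
    return float(np.sum(t**2 / (lam / n)))          # scale by per-component variance
```

A sample far from the training distribution (e.g. a faulted bus measurement) yields a much larger T2 than a typical sample, which is the detection criterion.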

  9. Hydraulic Fracture Induced Seismicity During A Multi-Stage Pad Completion in Western Canada: Evidence of Activation of Multiple, Parallel Faults

    Science.gov (United States)

    Maxwell, S.; Garrett, D.; Huang, J.; Usher, P.; Mamer, P.

    2017-12-01

    Following reports of injection induced seismicity in the Western Canadian Sedimentary Basin, regulators have imposed seismic monitoring and traffic light protocols for fracturing operations in specific areas. Here we describe a case study in one of these reservoirs, the Montney Shale in NE British Columbia, where induced seismicity was monitored with a local array during multi-stage hydraulic fracture stimulations on several wells from a single drilling pad. Seismicity primarily occurred during the injection time periods, and correlated with periods of high injection rates and wellhead pressures above fracturing pressures. Sequential hydraulic fracture stages were found to progressively activate several parallel, critically-stressed faults, as illuminated by multiple linear hypocenter patterns in the range between Mw 1 and 3. Moment tensor inversion of larger events indicated a double-couple mechanism consistent with the regional strike-slip stress state and the hypocenter lineations. The critically-stressed faults obliquely cross the well paths which were purposely drilled parallel to the minimum principal stress direction. Seismicity on specific faults started and stopped when fracture initiation points of individual injection stages were proximal to the intersection of the fault and well. The distance ranges when the seismicity occurs is consistent with expected hydraulic fracture dimensions, suggesting that the induced fault slip only occurs when a hydraulic fracture grows directly into the fault and the faults are temporarily exposed to significantly elevated fracture pressures during the injection. Some faults crossed multiple wells and the seismicity was found to restart during injection of proximal stages on adjacent wells, progressively expanding the seismogenic zone of the fault. Progressive fault slip is therefore inferred from the seismicity migrating further along the faults during successive injection stages. 
An accelerometer was also deployed close

  10. PV System Component Fault and Failure Compilation and Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    2018-02-01

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  11. A fault tree analysis strategy using binary decision diagrams

    International Nuclear Information System (INIS)

    Reay, Karen A.; Andrews, John D.

    2002-01-01

    The use of binary decision diagrams (BDDs) in fault tree analysis provides both an accurate and efficient means of analysing a system. There is a problem, however, with the conversion process of the fault tree to the BDD. The variable ordering scheme chosen for the construction of the BDD has a crucial effect on its resulting size, and previous research has failed to identify any scheme capable of producing BDDs for all fault trees. This paper proposes an analysis strategy aimed at increasing the likelihood of obtaining a BDD for any given fault tree, by ensuring the associated calculations are as efficient as possible. The method implements simplification techniques, which are applied to the fault tree to obtain a set of 'minimal' subtrees, equivalent to the original fault tree structure. BDDs are constructed for each, using ordering schemes most suited to their particular characteristics. Quantitative analysis is performed simultaneously on the set of BDDs to obtain the top-event probability, the system unconditional failure intensity and the criticality of the basic events.
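The quantitative step rests on Shannon expansion along the chosen variable ordering, which is exactly what a BDD encodes (with shared subgraphs keeping it compact). A minimal sketch with a hypothetical tree and probabilities, and without the node sharing of a real BDD:

```python
# Hypothetical coherent fault tree over the ordered basic events below;
# neither the tree nor the probabilities come from the paper.
ORDER = ['x1', 'x2', 'x3', 'x4']
P = {'x1': 0.01, 'x2': 0.02, 'x3': 0.05, 'x4': 0.05}

def top(vals):
    """Structure function of the top event: (x1 AND x2) OR (x3 AND x4)."""
    return (vals['x1'] and vals['x2']) or (vals['x3'] and vals['x4'])

def prob(i=0, fixed={}):
    """Shannon expansion along ORDER; a BDD encodes this same recursion,
    but merges identical subproblems so that it stays compact."""
    if i == len(ORDER):
        return 1.0 if top(fixed) else 0.0
    v, p = ORDER[i], P[ORDER[i]]
    return (p * prob(i + 1, {**fixed, v: True})
            + (1 - p) * prob(i + 1, {**fixed, v: False}))

top_probability = prob()   # exact top-event probability
```

For these numbers the expansion reproduces the inclusion-exclusion value 0.01·0.02 + 0.05·0.05 − (0.01·0.02)(0.05·0.05) = 0.0026995.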

  12. A compendium of computer codes in fault tree analysis

    International Nuclear Information System (INIS)

    Lydell, B.

    1981-03-01

    In the past ten years principles and methods for a unified system reliability and safety analysis have been developed. Fault tree techniques serve as a central feature of unified system analysis, and there exists a specific discipline within system reliability concerned with the theoretical aspects of fault tree evaluation. Ever since the fault tree concept was established, computer codes have been developed for qualitative and quantitative analyses. In particular the presentation of the kinetic tree theory and the PREP-KITT code package has influenced the present use of fault trees and the development of new computer codes. This report is a compilation of some of the better known fault tree codes in use in system reliability. Numerous codes are available and new codes are continuously being developed. The report is designed to address the specific characteristics of each code listed. A review of the theoretical aspects of fault tree evaluation is presented in an introductory chapter, the purpose of which is to give a framework for the validity of the different codes. (Auth.)

  13. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
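The linear stage of the serial structure can be sketched as follows: PCA splits the data into a PC subspace and a residual subspace (RS), and the residuals would then be fed to KPCA. The data and the number of retained PCs below are hypothetical, and the KPCA stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                # hypothetical process data
X = X - X.mean(axis=0)                       # mean-centre

# Linear stage: PCA via SVD splits the data into PC subspace and residual subspace
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                        # number of linear PCs retained (assumed)
P_k = Vt[:k].T                               # loading matrix
scores = X @ P_k                             # linear features (PC subspace)
residual = X - scores @ P_k.T                # residual subspace, input to KPCA in SPCA

# The decomposition is exact and the residuals are orthogonal to the retained PCs,
# so the nonlinear stage sees only what the linear stage could not explain.
```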

  14. Back analysis of fault-slip in burst prone environment

    Science.gov (United States)

    Sainoki, Atsushi; Mitri, Hani S.

    2016-11-01

    In deep underground mines, stress re-distribution induced by mining activities could cause fault-slip. Seismic waves arising from fault-slip occasionally induce rock ejection when hitting the boundary of mine openings, and as a result, severe damage could be inflicted. In general, it is difficult to estimate fault-slip-induced ground motion in the vicinity of mine openings because of the complexity of the dynamic response of faults and the presence of geological structures. In this paper, a case study is conducted for a Canadian underground mine, herein called "Mine-A", which is known for its seismic activities. Using a microseismic database collected from the mine, a back analysis of fault-slip is carried out with mine-wide 3-dimensional numerical modeling. A back analysis is conducted to estimate the physical and mechanical properties of the causative fracture or shear zones. One large seismic event has been selected for the back analysis to detect a fault-slip related seismic event. In the back analysis, the shear zone properties are estimated with respect to moment magnitude of the seismic event and peak particle velocity (PPV) recorded by a strong ground motion sensor. The estimated properties are then validated through comparison with peak ground acceleration recorded by accelerometers. Lastly, ground motion in active mining areas is estimated by conducting dynamic analysis with the estimated values. The present study implies that it would be possible to estimate the magnitude of seismic events that might occur in the near future by applying the estimated properties to the numerical model. Although the case study is conducted for a specific mine, the developed methodology can be equally applied to other mines suffering from fault-slip related seismic events.

  15. Fault tree technique: advances in probabilistic and logical analysis

    International Nuclear Information System (INIS)

    Clarotti, C.A.; Amendola, A.; Contini, S.; Squellati, G.

    1982-01-01

    Fault tree reliability analysis is used for assessing the risk associated with systems of increasing complexity (phased-mission systems, systems with multistate components, systems with non-monotonic structure functions). Much care must be taken to ensure that the fault tree technique is not used beyond its valid range of application. To this end, a critical review of the mathematical foundations of fault tree reliability analysis is carried out. Limitations are highlighted and potential solutions to open problems are suggested. Moreover, an overview is given of the most recent developments in the implementation of integrated software (the SALP-MP, SALP-NOT and SALP-CAFT codes) for the analysis of a wide class of systems.

  16. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault-rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined and described by an adjacency matrix. The fault structure relations are then arranged hierarchically using the interpretive structural model (ISM). Assuming that fault propagation obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combining these with the time-dependent component fault rates yields a comprehensive fault rate. Based on fault mode frequency and fault influence, the criticality of the components under fault-rate correlation is determined and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
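The influence-ranking step can be sketched with a plain power-iteration PageRank over a fault-propagation graph. The component names, edges, and damping factor below are hypothetical, not taken from the paper:

```python
# Hypothetical fault-propagation adjacency list between CNC subsystems:
# an edge u -> v means a fault in u can induce a fault in v (illustrative).
edges = {
    'spindle':      ['tool_changer', 'nc_unit'],
    'tool_changer': ['nc_unit'],
    'nc_unit':      ['servo'],
    'servo':        ['spindle'],
}
nodes = sorted(edges)
d = 0.85                                   # damping factor (conventional value)

rank = {n: 1 / len(nodes) for n in nodes}
for _ in range(100):                       # power iteration until convergence
    new = {n: (1 - d) / len(nodes) for n in nodes}
    for u, outs in edges.items():
        share = d * rank[u] / len(outs)    # influence distributed over out-edges
        for v in outs:
            new[v] += share
    rank = new
```

In this toy graph the `nc_unit` node, which receives fault influence from two components, ends up with the highest relative influence value.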

  17. [The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].

    Science.gov (United States)

    Liu, Hongbin

    2015-11-01

    In this paper, the traditional fault tree analysis method is presented and its application characteristics in medical equipment maintenance are described in detail. Significant changes are made when the traditional fault tree analysis method is introduced into medical equipment maintenance: the logic symbols, logic analysis and calculation, and the complicated procedures are abandoned, and only the intuitive and practical fault tree diagram is retained. The fault tree diagram itself also differs: the fault tree is no longer a logical tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.

  18. Research on fault diagnosis for RCP rotor based on wavelet analysis

    International Nuclear Information System (INIS)

    Chen Zhihui; Xia Hong; Wang Taotao

    2008-01-01

    Wavelet analysis offers noise reduction and multiscale resolution, and can be used to effectively extract the fault features of typical main pump failures. Simulink is used to simulate the typical faults: rotor misalignment, rotor crack, and initial bending. The wavelet method is then used to analyze the vibration signal. The results show that the fault features extracted by wavelet analysis can effectively identify the fault signals. Wavelet analysis is a practical method for the diagnosis of main coolant pump failures and has definite application value and significance. (authors)
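The multiscale feature extraction can be illustrated with a single level of the Haar wavelet transform, whose detail coefficients localize a sharp transient in an otherwise smooth vibration signal. This is a generic sketch, not the authors' Simulink model; the signal and injected spike are synthetic:

```python
import math

def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    return approx, detail

# Hypothetical vibration signal: slow sinusoidal trend plus a local transient
signal = [math.sin(2 * math.pi * n / 64) for n in range(128)]
signal[70] += 2.0                       # injected "fault" spike at sample 70

approx, detail = haar_step(signal)
# The largest detail coefficient pinpoints the transient (pair index 70 // 2 = 35)
peak = max(range(len(detail)), key=lambda i: abs(detail[i]))
```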

  19. Fault tree analysis. Implementation of the WAM-codes

    International Nuclear Information System (INIS)

    Bento, J.P.; Poern, K.

    1979-07-01

    The report describes work in progress at Studsvik on the implementation of the WAM code package for fault tree analysis. These codes, originally developed under EPRI contract by Science Applications Inc., allow, in contrast with other fault tree codes, all Boolean operations, thus permitting the modeling of 'NOT' conditions and dependent components. To make the implementation concrete, the auxiliary feedwater system of the Swedish BWR Oskarshamn 2 was chosen for the reliability analysis. For this system, both the mean unavailability and the probability density function of the top event (the undesired event) of the system fault tree were calculated, the latter using a Monte Carlo simulation technique. The present study is the first part of a work performed under contract with the Swedish Nuclear Power Inspectorate. (author)
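Monte Carlo estimation of a top-event probability can be sketched directly. The basic events, probabilities, and the non-coherent top-event expression below (including a 'NOT' condition, of the kind the WAM codes permit) are hypothetical:

```python
import random

random.seed(42)
P = {'pump_a': 0.05, 'pump_b': 0.05, 'power': 0.01}   # hypothetical basic events

def top_event(state):
    """Illustrative non-coherent top event: both pumps fail, or power fails
    while pump_a has NOT failed (a 'NOT' condition)."""
    return (state['pump_a'] and state['pump_b']) or \
           (state['power'] and not state['pump_a'])

TRIALS = 200_000
hits = 0
for _ in range(TRIALS):
    state = {e: random.random() < p for e, p in P.items()}  # sample event states
    hits += top_event(state)
estimate = hits / TRIALS

# Exact value for comparison: the two terms are disjoint in pump_a,
# so P(top) = 0.05*0.05 + 0.01*(1 - 0.05) = 0.012
exact = 0.05 * 0.05 + 0.01 * (1 - 0.05)
```

With 200,000 trials the sampling error is on the order of 2e-4, so the estimate lands close to the exact 0.012.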

  20. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone MATRA calculation requires acceptable computing time for thermal margin calculations, but considerably more time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computability of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, with minimal modification of the previous code structure. The improvement is confirmed by comparing the results of the single- and multiple-processor algorithms, and the speedup and efficiency are evaluated as the number of processors increases. The performance of the parallel algorithm was verified by comparing its results with those from MATRA on a single processor. The performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems.
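The speedup and efficiency evaluation mentioned above follows the usual definitions S(p) = T(1)/T(p) and E(p) = S(p)/p. The timing figures below are illustrative only, not measured MATRA results:

```python
# Hypothetical wall-clock times (seconds) for the same subchannel problem
# on 1..16 processors; the numbers are illustrative, not measured values.
times = {1: 1200.0, 2: 640.0, 4: 350.0, 8: 210.0, 16: 150.0}

t1 = times[1]
speedup = {p: t1 / t for p, t in times.items()}      # S(p) = T(1) / T(p)
efficiency = {p: speedup[p] / p for p in times}      # E(p) = S(p) / p
```

As is typical, efficiency declines as processors are added (here from 0.94 at p = 2 to 0.5 at p = 16), because communication overhead grows relative to the per-processor workload.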

  1. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    Science.gov (United States)

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, the staff generally lack professional knowledge, and little attention is paid to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  2. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT for Aquaculture

    Directory of Open Access Journals (Sweden)

    Yingyi Chen

    2017-01-01

    Full Text Available In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, the staff generally lack professional knowledge, and little attention is paid to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  3. Passive and partially active fault tolerance for massively parallel stream processing engines

    DEFF Research Database (Denmark)

    Su, Li; Zhou, Yongluan

    2018-01-01

    ... On the other hand, an active approach usually employs backup nodes to run replicated tasks; upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPEs). We also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE, and conducted extensive experiments using both real and synthetic datasets to verify its effectiveness.

  4. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  5. Fault Analysis of ITER Coil Power Supply System

    International Nuclear Information System (INIS)

    Song, In Ho; Jun, Tao; Benfatto, Ivone

    2009-01-01

    The ITER magnet coils are all designed using superconductors with high current-carrying capability. The Toroidal Field (TF) coils operate in a steady-state mode with a current of 68 kA and discharge the stored energy in case of quench using 9 interleaved Fast Discharge Units (FDUs). The Central Solenoid (CS) coils and Poloidal Field (PF) coils operate in a pulsed mode with currents of up to 45 kA and require fast variation of currents, inducing more than 10 kV on the coil terminals during normal operation, using Switching Network (SN) systems (CS, PF1 and PF6) and Booster and VS converters (PF2 to PF5), which are connected in series with the Main converters. The SN and FDU systems comprise high-current DC circuit breakers and resistors for generating high voltage (SN) and dissipating magnetic energy (FDUs). High transient voltages can arise from the switching operation of the SN and FDUs and from the characteristics of the resistors and the stray components of the DC distribution systems. Faults in the power supply control, such as shorts or grounding faults, can also produce higher voltages between terminals and between terminal and ground. Therefore, the design of the coil insulation, coil terminal regions, feeders, feedthroughs, pipe breaks and instrumentation must take account of these high voltages during normal and abnormal conditions. A voltage insulation level can be defined, and it is necessary to test the coils at higher voltages to be sure of reliable performance during the operational lifetime. This paper describes the fault analysis of the TF, CS and PF coil power supply systems, taking account of the stray parameters of the power supply and switching systems and inductively coupled superconducting coil models. Resistor grounding systems are included in the simulation model, and all fault conditions, such as converter hardware and software faults, switching system hardware and software faults, DC short circuits and single grounding faults, are simulated.
The occurrence of two successive faults

  6. Multi-Level Simulated Fault Injection for Data Dependent Reliability Analysis of RTL Circuit Descriptions

    Directory of Open Access Journals (Sweden)

    NIMARA, S.

    2016-02-01

This paper proposes a data-dependent reliability evaluation methodology for digital systems described at Register Transfer Level (RTL). It uses a hybrid hierarchical approach, combining the accuracy provided by Gate Level (GL) Simulated Fault Injection (SFI) with the low simulation overhead of RTL fault injection. The methodology comprises the following steps: correct simulation of the RTL system according to a set of input vectors, hierarchical decomposition of the system into basic RTL blocks, logic synthesis of the basic RTL blocks, data-dependent SFI on the GL netlists, and RTL SFI. The proposed methodology has been validated in terms of accuracy on a medium-sized circuit – the parallel comparator used in the Check Node Unit (CNU) of Low-Density Parity-Check (LDPC) decoders. The methodology has also been applied to the reliability analysis of a 128-bit Advanced Encryption Standard (AES) crypto-core, for which full GL simulation was prohibitive in terms of required computational resources.
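Gate-level simulated fault injection can be sketched by evaluating a netlist twice, once fault-free and once with an internal net pinned to a stuck-at value; the toy netlist and the single stuck-at fault model below are illustrative, not the paper's circuits.

```python
# Sketch of gate-level simulated fault injection (SFI) on a toy netlist.
# Signal names and the stuck-at fault model are illustrative assumptions.

def simulate(netlist, inputs, fault=None):
    """Evaluate a combinational netlist; 'fault' pins one net to 0 or 1."""
    values = dict(inputs)
    for net, (op, args) in netlist:
        if op == "AND":
            v = all(values[a] for a in args)
        elif op == "OR":
            v = any(values[a] for a in args)
        elif op == "NOT":
            v = not values[args[0]]
        values[net] = int(v)
        if fault and net == fault[0]:
            values[net] = fault[1]          # inject the stuck-at value
    return values

# y = (a AND b) OR (NOT c)
NETLIST = [("n1", ("AND", ["a", "b"])),
           ("n2", ("NOT", ["c"])),
           ("y",  ("OR",  ["n1", "n2"]))]

good = simulate(NETLIST, {"a": 1, "b": 1, "c": 1})
bad  = simulate(NETLIST, {"a": 1, "b": 1, "c": 1}, fault=("n1", 0))
print(good["y"], bad["y"])   # the stuck-at-0 fault on n1 flips the output: 1 0
```

Repeating this over many input vectors is what makes the injection data-dependent: the same fault may or may not propagate to the output depending on the stimulus.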

  7. An interactive parallel processor for data analysis

    International Nuclear Information System (INIS)

    Mong, J.; Logan, D.; Maples, C.; Rathbun, W.; Weaver, D.

    1984-01-01

A parallel array of eight minicomputers has been assembled in an attempt to deal with kiloparameter data events. By exporting computer system functions to a separate processor, the authors have been able to achieve computing amplification linearly proportional to the number of executing processors.

  8. Measurement and analysis of operating system fault tolerance

    Science.gov (United States)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
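The reward analysis described above can be illustrated in its simplest form with a two-state Markov model: reward 1 while the operating system is up and 0 while it recovers. The rates below are assumed for illustration, not the measured Tandem/VAX/MVS values.

```python
# Minimal reward-analysis sketch (assumed two-state Markov model, not the
# paper's multi-level error/recovery model): reward 1 when the OS is up.
lam = 0.001   # failure rate, per hour (assumed)
mu  = 0.5     # recovery rate, per hour (assumed)

availability = mu / (lam + mu)        # steady-state probability of "up"
loss_rate    = 1.0 - availability     # expected reward loss per hour
print(round(availability, 6), round(loss_rate, 6))   # 0.998004 0.001996
```

The paper's models generalize this to many error states and to correlated errors across machines, but the reward computation is the same: weight each state's occupancy probability by its service level.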

  9. GOTRES: an expert system for fault detection and analysis

    International Nuclear Information System (INIS)

    Chung, D.T.; Modarres, M.

    1989-01-01

This paper describes a deep-knowledge expert system shell for diagnosing faults in process operations. The expert program shell is called GOTRES (GOal TRee Expert System) and uses a goal tree-success tree deep-knowledge structure to model its knowledge-base. To demonstrate GOTRES, we have built an on-line fault diagnosis expert system for an experimental nuclear reactor facility using this shell. The expert system is capable of diagnosing fault conditions using the system goal tree as well as accumulated operating knowledge to predict plant causal and temporal behaviours. The GOTRES shell has also been used for root-cause detection and analysis in a nuclear plant. (author)

  10. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
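A minimal sketch of PCA-based monitoring with Hotelling's T² statistic on synthetic data; the two-component choice and the sensor-bias fault are illustrative assumptions, not the paper's case study.

```python
import numpy as np

# Sketch of PCA-based process monitoring with Hotelling's T^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # two correlated sensors

mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                  # retained principal components
P = Vt[:k].T                           # loadings (5 x k)
var = S[:k] ** 2 / (len(Z) - 1)        # variance captured per component

def t2(x):
    """Hotelling's T^2 of one sample in the retained subspace."""
    t = ((x - mu) / sd) @ P
    return float(np.sum(t ** 2 / var))

normal = X[0]                                    # a training sample
faulty = normal + np.array([8.0, 8.0, 0, 0, 0])  # large sensor bias fault
print(t2(normal), t2(faulty))                    # the fault inflates T^2
```

The sparse variant discussed in the abstract replaces the dense loadings `P` with loadings that have few nonzero entries, so an alarm can be traced back to a small set of named variables.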

  11. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

    Science.gov (United States)

    Wang, Jian; Yang, Zhenwei; Kang, Mei

    2018-01-01

This paper applies the fault tree analysis method to corrective maintenance of grid communication systems. A fault tree model of a typical system is established and, together with engineering experience, analyzed using fault tree analysis theory, covering structure functions, probability importance and related measures. The results show that fault tree analysis enables fast fault location and effective repair of the system. The analysis method also offers guidance for researching and upgrading the reliability of the system.
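Probability importance of the kind the abstract mentions can be sketched with Birnbaum importance, the sensitivity of the top-event probability to each basic event; the tree (TOP = A OR (B AND C)) and the event probabilities are illustrative.

```python
# Sketch of fault tree quantification with Birnbaum importance.
# The tree and probabilities are illustrative, not from the paper.
p = {"A": 0.01, "B": 0.05, "C": 0.02}

def top(q):
    # TOP = A OR (B AND C), assuming independent basic events
    return q["A"] + (1 - q["A"]) * q["B"] * q["C"]

def birnbaum(event):
    hi = top({**p, event: 1.0})
    lo = top({**p, event: 0.0})
    return hi - lo            # dP(TOP)/dp(event)

ranking = sorted(p, key=birnbaum, reverse=True)
print(ranking)                # ['A', 'C', 'B']
```

Note the ranking: the single-event cut set A dominates, and within the AND gate the *rarer* event C is more important than B, because its improvement is multiplied by B's larger probability. This is exactly the kind of result that guides corrective maintenance priorities.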

  12. Discrete Hadamard transformation algorithm's parallelism analysis and achievement

    Science.gov (United States)

    Hu, Hui

    2009-07-01

The Discrete Hadamard Transformation (DHT) is widely used in real-time signal processing, but its speed is limited on a single DSP. This article studies the parallelization of DHT and analyzes its parallel performance. Based on the multiprocessor programming structure of the TMS320C80 platform, two parallel DHT algorithms are implemented. Several experiments demonstrate the effectiveness of the proposed algorithms.
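The transform itself can be sketched with the standard in-place fast Walsh-Hadamard butterfly, which is what makes block-level parallelization natural: at every stage, independent index blocks can be processed on different cores. This generic sketch is not the TMS320C80 implementation.

```python
# In-place fast Walsh-Hadamard transform: O(n log n) butterflies.
# Each outer block (range step 2*h) is independent and could be
# dispatched to a separate processor at every stage.
def fwht(a):
    a = list(a)
    n = len(a)               # n must be a power of two
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):          # independent blocks
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # [4, 2, 0, -2, 0, 2, 0, 2]
```

Applying the transform twice recovers the input scaled by n, a convenient self-check for any parallel implementation.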

  13. Asymmetrical Fault Analysis at the Offshore Network of HVDC connected Wind Power Plants

    DEFF Research Database (Denmark)

    Goksu, Omer; Cutululis, Nicolaos Antonio; Sorensen, Poul

    2017-01-01

Short-circuit faults in HVDC connected Wind Power Plants (WPPs) have been studied mostly for dc link and onshore ac grid faults, while offshore ac faults, especially asymmetrical faults, have mostly been omitted in the literature. Requirements related to offshore asymmetrical faults have been kept as future development at national levels in the recent ENTSO-E HVDC network code. In this paper offshore ac faults are studied using the classical power system fault analysis methods. It is shown that suppression of negative sequence current flow is not applicable and negative sequence...
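The classical fault-analysis method the paper applies can be illustrated with the textbook single-line-to-ground calculation, where the positive-, negative- and zero-sequence networks are connected in series; the per-unit impedances below are assumed values, not data from the study.

```python
# Classical symmetrical-components sketch for a single line-to-ground
# (SLG) fault; illustrative per-unit impedances, not the paper's network.
Z1, Z2, Z0 = 0.25j, 0.25j, 0.1j   # pos/neg/zero sequence impedances (pu)
E = 1.0                           # prefault voltage (pu)

I1 = E / (Z1 + Z2 + Z0)   # for an SLG fault, I1 = I2 = I0
Ia = 3 * I1               # fault current in the faulted phase
print(abs(I1), abs(Ia))   # |Ia| = 3 |I1| ≈ 5.0 pu
```

The nonzero negative-sequence current `I2 = I1` is the quantity at issue in the abstract: a converter that suppresses negative-sequence current cannot reproduce this classical fault response.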

  14. Kinematic Analysis of Fault-Slip Data in the Central Range of Papua, Indonesia

    Directory of Open Access Journals (Sweden)

    Benyamin Sapiie

    2016-01-01

DOI: 10.17014/ijog.3.1.1-16. Most of the Cenozoic tectonic evolution in New Guinea is a result of obliquely convergent motion that led to an arc-continent collision between the Australian and Pacific Plates. The Gunung Bijih (Ertsberg) Mining District (GBMD) is located in the Central Range of Papua, in the western half of the island of New Guinea. This study presents the results of detailed structural mapping concentrated on analyzing fault-slip data along a 15-km traverse of the Heavy Equipment Access Trail (HEAT) and the Grasberg mine access road, providing new information concerning the deformation in the GBMD and the Cenozoic structural evolution of the Central Range. Structural analysis indicates that two distinct stages of deformation have occurred since ~12 Ma. The first stage generated a series of en-echelon NW-trending (π-fold axis = 300°) folds and a few reverse faults. The second stage resulted in significant left-lateral strike-slip faulting sub-parallel to the regional strike of upturned bedding. Kinematic analysis reveals that the areas between the major strike-slip faults form structural domains that are remarkably uniform in character. The change in deformation styles from contractional to strike-slip offset is explained as a result of a change in the relative plate motion between the Pacific and Australian Plates at ~4 Ma. From ~4-2 Ma, transform motion along an ~270° trend caused left-lateral strike-slip offset and reactivated portions of pre-existing reverse faults. This had a profound effect on magma emplacement and hydrothermal activity.

  15. SETS, Boolean Manipulation for Network Analysis and Fault Tree Analysis

    International Nuclear Information System (INIS)

    Worrell, R.B.

    1985-01-01

Description of problem or function - SETS is used for symbolic manipulation of set (or Boolean) equations, particularly the reduction of set equations by the application of set identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze non-coherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system. 4. Method of solution - The SETS program is used to read, interpret, and execute the statements of a SETS user program, which is an algorithm that specifies the particular manipulations to be performed and the order in which they are to occur. 5. Restrictions on the complexity of the problem - Any properly formed set equation involving the set operations of union, intersection, and complement is acceptable for processing by the SETS program. Restrictions on the size of a set equation that can be processed are not absolute but rather are related to the number of terms in the disjunctive normal form of the equation, the number of literals in the equation, etc. Nevertheless, set equations involving thousands and even hundreds of thousands of terms can be processed successfully.
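The core reduction SETS performs can be sketched as applying the absorption identity X ∨ (X ∧ Y) = X to a disjunctive normal form, leaving only the minimal cut sets; the cut sets below are illustrative.

```python
# Sketch of SETS-style Boolean reduction: given an expression already in
# disjunctive normal form (a list of cut sets), apply the absorption
# identity X + X·Y = X to keep only the minimal cut sets.
def minimize(dnf):
    """dnf: iterable of cut sets (collections of event names)."""
    terms = sorted(set(map(frozenset, dnf)), key=len)   # dedupe, short first
    minimal = []
    for t in terms:
        if not any(m <= t for m in minimal):   # t absorbed by a subset?
            minimal.append(t)
    return minimal

cuts = [{"A"}, {"A", "B"}, {"B", "C"}, {"C", "B"}, {"A", "C", "D"}]
print(sorted(sorted(c) for c in minimize(cuts)))   # [['A'], ['B', 'C']]
```

Sorting terms by length guarantees that any potential absorber is examined before the terms it can absorb, so one pass suffices.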

  16. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    Science.gov (United States)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphical User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  17. Fault trees for decision making in systems analysis

    International Nuclear Information System (INIS)

    Lambert, H.E.

    1975-01-01

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure. (80 figures, 20 tables)

  18. Effect Analysis of Faults in Digital I and C Systems of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

Lee, Seung Jun; Jung, Won Dea [KAERI, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung-Ang University, Seoul (Korea, Republic of)

    2014-08-15

    A reliability analysis of digital instrumentation and control (I and C) systems in nuclear power plants has been introduced as one of the important elements of a probabilistic safety assessment because of the unique characteristics of digital I and C systems. Digital I and C systems have various features distinguishable from those of analog I and C systems such as software and fault-tolerant techniques. In this work, the faults in a digital I and C system were analyzed and a model for representing the effects of the faults was developed. First, the effects of the faults in a system were analyzed using fault injection experiments. A software-implemented fault injection technique in which faults can be injected into the memory was used based on the assumption that all faults in a system are reflected in the faults in the memory. In the experiments, the effect of a fault on the system output was observed. In addition, the success or failure in detecting the fault by fault-tolerant functions included in the system was identified. Second, a fault tree model for representing that a fault is propagated to the system output was developed. With the model, it can be identified how a fault is propagated to the output or why a fault is not detected by fault-tolerant techniques. Based on the analysis results of the proposed method, it is possible to not only evaluate the system reliability but also identify weak points of fault-tolerant techniques by identifying undetected faults. The results can be reflected in the designs to improve the capability of fault-tolerant techniques.

  19. Effect analysis of faults in digital I and C systems of nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun

    2014-01-01

    A reliability analysis of digital instrumentation and control (I and C) systems in nuclear power plants has been introduced as one of the important elements of a probabilistic safety assessment because of the unique characteristics of digital I and C systems. Digital I and C systems have various features distinguishable from those of analog I and C systems such as software and fault-tolerant techniques. In this work, the faults in a digital I and C system were analyzed and a model for representing the effects of the faults was developed. First, the effects of the faults in a system were analyzed using fault injection experiments. A software-implemented fault injection technique in which faults can be injected into the memory was used based on the assumption that all faults in a system are reflected in the faults in the memory. In the experiments, the effect of a fault on the system output was observed. In addition, the success or failure in detecting the fault by fault-tolerant functions included in the system was identified. Second, a fault tree model for representing that a fault is propagated to the system output was developed. With the model, it can be identified how a fault is propagated to the output or why a fault is not detected by fault-tolerant techniques. Based on the analysis results of the proposed method, it is possible to not only evaluate the system reliability but also identify weak points of fault-tolerant techniques by identifying undetected faults. The results can be reflected in the designs to improve the capability of fault-tolerant techniques. (author)
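The memory fault-injection step described above can be sketched as a single-bit flip in a memory image, with a checksum standing in for the system's fault-tolerant detection functions; the checksum scheme is an assumption for illustration, not the plant system's actual mechanism.

```python
import random

# Sketch of software-implemented fault injection (SWIFI): flip one bit
# in a memory image, then check whether a checksum-style fault-tolerant
# function detects the corruption. The checksum is an assumed stand-in
# for the real system's fault-tolerant techniques.
memory = bytearray(b"reactor trip setpoint = 120")
checksum = sum(memory) % 256          # reference stored at "deployment"

random.seed(1)
addr = random.randrange(len(memory))  # random injection address
bit = random.randrange(8)             # random bit position
memory[addr] ^= 1 << bit              # the injected single-bit fault

detected = (sum(memory) % 256) != checksum
print(detected)   # True: a single-bit flip changes the byte sum mod 256
```

A campaign repeats this over many (address, bit) pairs and records, for each injection, both the effect on the system output and whether the detection mechanism caught it, which is exactly the data the fault tree model in the paper is built from.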

  20. Methods for Fault Diagnosability Analysis of a Class of Affine Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Xiafu Peng

    2015-01-01

    Full Text Available The fault diagnosability analysis for a given model, before developing a diagnosis algorithm, can be used to answer questions like “can the fault fi be detected by observed states?” and “can it separate fault fi from fault fj by observed states?” If not, we should redesign the sensor placement. This paper deals with the problem of the evaluation of detectability and separability for the diagnosability analysis of affine nonlinear system. First, we used differential geometry theory to analyze the nonlinear system and proposed new detectability criterion and separability criterion. Second, the related matrix between the faults and outputs of the system and the fault separable matrix are designed for quantitative fault diagnosability calculation and fault separability calculation, respectively. Finally, we illustrate our approach to exemplify how to analyze diagnosability by a certain nonlinear system example, and the experiment results indicate the effectiveness of the fault evaluation methods.

  1. Structural analysis of cataclastic rock of active fault damage zones: An example from Nojima and Arima-Takatsuki fault zones (SW Japan)

    Science.gov (United States)

    Satsukawa, T.; Lin, A.

    2016-12-01

Most large intraplate earthquakes, which occur as slip on mature active faults, cause serious damage in spite of their relatively small magnitudes compared to subduction-zone earthquakes. After the 1995 Kobe Mw 7.2 earthquake, a number of studies were carried out to understand the structure, physical properties and dynamic phenomena of active faults. However, the deformation mechanics and related earthquake-generating mechanisms in intraplate active fault zones are still poorly understood. Detailed, multi-scalar structural analysis of faults and fault rocks has to be the starting point for reconstructing the complex framework of brittle deformation. Here, we present two examples of active fault damage zones: the Nojima fault and the Arima-Takatsuki active fault zone in southwest Japan. We performed field investigations combined with meso- and micro-structural analyses of fault-related rocks, which provide important information for reconstructing the long-term seismic faulting behavior and tectonic environment. Our study shows that at both sites the damage zone extends over 10 m and comprises host rocks, foliated and non-foliated cataclasites, fault gouge and fault breccia. The slickenside striations on the Asano fault, a splay of the Nojima fault, indicate a dextral movement sense with some normal component, whereas those of the Arima-Takatsuki active fault show dextral strike-slip with a minor vertical component. Fault gouges consist of a brown-gray matrix of fine grains and are composed of several layers from a few millimeters to a few decimeters thick. This implies that slip was repeated over millions of years, as the high concentration and physical interconnectivity of fine-grained minerals in brittle fault rocks produce the fault's intrinsic weakness in the crust. Faults therefore rarely express only a single, discrete deformation episode but are the cumulative result of several superimposed slip events.

  2. System optimization by fault tree analysis

    International Nuclear Information System (INIS)

    Krieger, G.

    1985-01-01

Reliability evaluations are performed during the design phase as well as during the erection phase. Sensitivity analyses are performed to evaluate the balance of the system. A suitable representation allows costs and their related effects to be determined directly. This gives an advantage for decision making, whereas qualitative evaluations do not provide as much insight. (orig.) [de

  3. Investigation of faulted tunnel models by combined photoelasticity and finite element analysis

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Yuping

    1994-01-01

Models of square and circular tunnels with short faults cutting through their surfaces are investigated by photoelasticity. These models, when duplicated by finite element analysis, can predict the stress states of square or circular faulted tunnels adequately. Finite element analysis using gap elements may be used to investigate full-size faulted tunnel systems.

  4. HVAC fault tree analysis for WIPP integrated risk assessment

    International Nuclear Information System (INIS)

    Kirby, P.; Iacovino, J.

    1990-01-01

    In order to evaluate the public health risk from operation of the Waste Isolation Pilot Plant (WIPP) due to potential radioactive releases, a probabilistic risk assessment of waste handling operations was conducted. One major aspect of this risk assessment involved fault tree analysis of the plant heating, ventilation, and air conditioning (HVAC) systems, which comprise the final barrier between waste handling operations and the environment. 1 refs., 1 tab

  5. A nonlinear least-squares inverse analysis of strike-slip faulting with application to the San Andreas fault

    Science.gov (United States)

    Williams, Charles A.; Richardson, Randall M.

    1988-01-01

    A nonlinear weighted least-squares analysis was performed for a synthetic elastic layer over a viscoelastic half-space model of strike-slip faulting. Also, an inversion of strain rate data was attempted for the locked portions of the San Andreas fault in California. Based on an eigenvector analysis of synthetic data, it is found that the only parameter which can be resolved is the average shear modulus of the elastic layer and viscoelastic half-space. The other parameters were obtained by performing a suite of inversions for the fault. The inversions on data from the northern San Andreas resulted in predicted parameter ranges similar to those produced by inversions on data from the whole fault.
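In the steady-state limit, surface velocity across a locked strike-slip fault follows the classical screw-dislocation form v(x) = (s/π)·arctan(x/D), with slip rate s and locking depth D. A minimal inversion for these two parameters can be sketched as a grid search on synthetic data; the paper's elastic-layer-over-viscoelastic-half-space model is more complex, and the station positions and true values below are assumed.

```python
import math

# Sketch of inverting the classical interseismic strike-slip profile
# v(x) = (s / pi) * atan(x / D) for slip rate s and locking depth D.
# Values are synthetic assumptions, not San Andreas data.
S_TRUE, D_TRUE = 34.0, 15.0     # mm/yr, km (assumed)
xs = [-100, -50, -20, -5, 5, 20, 50, 100]          # station offsets (km)
vs = [S_TRUE / math.pi * math.atan(x / D_TRUE) for x in xs]

best = None
for D in [d / 2 for d in range(2, 81)]:            # 1.0 .. 40.0 km
    for s in [sv / 2 for sv in range(20, 101)]:    # 10.0 .. 50.0 mm/yr
        err = sum((s / math.pi * math.atan(x / D) - v) ** 2
                  for x, v in zip(xs, vs))
        if best is None or err < best[0]:
            best = (err, s, D)
print(best[1], best[2])    # recovers 34.0 15.0 on noise-free data
```

With noisy data the misfit surface develops the trade-offs the paper discusses: many (s, D) pairs fit almost equally well, which is why the eigenvector analysis finds only certain parameter combinations resolvable.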

  6. Fault Diagnosis for Electrical Distribution Systems using Structural Analysis

    DEFF Research Database (Denmark)

    Knüppel, Thyge; Blanke, Mogens; Østergaard, Jacob

    2014-01-01

Structural analysis (SA) is based on graph-theoretical results that offer to find analytic redundancies in large sets of equations only from the structure (topology) of the equations. A salient feature is automated generation of redundancy relations. The method is indeed feasible in electrical networks, where circuit theory and network topology together formulate the constraints that define a structure graph. When the network changes, the analytic redundancy relations (ARR) are likely to change; the algorithms used for diagnosis may need to change accordingly, and finding efficient methods for ARR generation is essential to employ fault-tolerant methods in the grid. This paper shows how three-phase networks are modelled and analysed using structural methods, and it extends earlier results by showing how physical faults can be identified such that adequate remedial actions can be taken. The paper illustrates a feasible modelling technique for structural...


  7. Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro

    2015-01-01

Highlights: • Fuzzy probability based fault tree analysis is proposed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study reliabilities of engineering systems. Those new approaches apply expert judgments to overcome the limitation of the conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments might come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis to overcome this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are then compared to the results analyzed by a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible to propagate and quantify epistemic uncertainties in fault tree analysis.
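The gate arithmetic the highlights describe can be sketched with triangular fuzzy probabilities (lower, modal, upper); the event values and the tree TOP = (A AND B) OR C are illustrative, and the component-wise rules below are the standard endpoint approximation, not necessarily the paper's exact formulation.

```python
# Sketch of fuzzy gate rules on triangular fuzzy probabilities
# (a, b, c) = (lower, modal, upper). Illustrative values only.
def f_and(p, q):
    """AND gate: fuzzy multiplication rule, component-wise product."""
    return tuple(p[i] * q[i] for i in range(3))

def f_or(p, q):
    """OR gate via the fuzzy complement rule: 1 - (1-p)(1-q)."""
    return tuple(1 - (1 - p[i]) * (1 - q[i]) for i in range(3))

A = (0.001, 0.002, 0.004)   # assumed basic-event fuzzy probabilities
B = (0.010, 0.020, 0.030)
C = (0.005, 0.010, 0.020)

top = f_or(f_and(A, B), C)  # TOP = (A AND B) OR C
print(tuple(round(v, 6) for v in top))
```

Both rules are monotonic in each argument, so applying them to the three endpoints directly yields the endpoints of the top-event fuzzy probability; the spread of `top` is the propagated epistemic uncertainty.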

  8. SIFT - Design and analysis of a fault-tolerant computer for aircraft control. [Software Implemented Fault Tolerant systems

    Science.gov (United States)

    Wensley, J. H.; Lamport, L.; Goldberg, J.; Green, M. W.; Levitt, K. N.; Melliar-Smith, P. M.; Shostak, R. E.; Weinstock, C. B.

    1978-01-01

    SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described.
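The vote-before-use step described above can be sketched as a 2-of-3 majority vote on replicated task results; the task values are illustrative.

```python
from collections import Counter

# Sketch of SIFT-style voting: three loosely synchronized processors
# each submit a result for the same task iteration; a 2-of-3 majority
# masks any single faulty unit.
def vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority: reconfiguration required")

print(vote([42, 42, 41]))   # the faulty third processor is outvoted: 42
```

When no majority exists, the exception models the point at which SIFT's software reconfiguration would take over, excluding the disagreeing unit from further task assignments.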

  9. Parallelization for X-ray crystal structural analysis program

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Minami, Masayuki; Yamamoto, Akiji

    1997-10-01

In this report we study vectorization and parallelization of an X-ray crystal structural analysis program. The target machine is the NEC SX-4, a distributed/shared-memory vector-parallel supercomputer. X-ray crystal structural analysis is surveyed, and a new multi-dimensional discrete Fourier transform method is proposed. The new method is designed to have a very long vector length, which yields 12.0 times higher performance than the original code. Beyond this vectorization, parallelization by micro-task functions on the SX-4 reaches a 13.7 times speedup in the multi-dimensional discrete Fourier transform part with 14 CPUs, and a 3.0 times speedup for the whole program. In total, a 35.9 times speedup over the original single-CPU scalar version is achieved with vectorization and parallelization on the SX-4. (author)

  10. Transient pattern analysis for fault detection and diagnosis of HVAC systems

    International Nuclear Information System (INIS)

    Cho, Sung-Hwan; Yang, Hoon-Cheol; Zaheer-uddin, M.; Ahn, Byung-Cheon

    2005-01-01

    Modern building HVAC systems are complex and consist of a large number of interconnected sub-systems and components. In the event of a fault, it becomes very difficult for the operator to locate and isolate the faulty component in such large systems using conventional fault detection methods. In this study, transient pattern analysis is explored as a tool for fault detection and diagnosis of an HVAC system. Several tests involving different fault replications were conducted in an environmental chamber test facility. The results show that the evolution of fault residuals forms clear and distinct patterns that can be used to isolate faults. It was found that the time needed to reach steady state for a typical building HVAC system is at least 50-60 min. This means incorrect diagnosis of faults can happen during online monitoring if the transient pattern responses are not considered in the fault detection and diagnosis analysis
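Pattern-based isolation of this kind can be sketched as matching the signs of measured-minus-expected residuals against a fault-signature table; the signatures, variable names and dead band below are illustrative assumptions, not the chamber-test patterns.

```python
# Sketch of residual-pattern fault isolation for an HVAC system.
# Signatures and the dead band are illustrative assumptions.
FAULT_TABLE = {
    (1, -1, 0):  "supply fan degradation",
    (0, 1, 1):   "cooling coil valve stuck open",
    (-1, 0, 1):  "temperature sensor bias",
}

def signature(residuals, dead_band=0.5):
    """Quantize each residual to -1, 0 or +1 using a dead band."""
    return tuple(0 if abs(r) < dead_band else (1 if r > 0 else -1)
                 for r in residuals)

# e.g. residuals for supply air temp, coil delta-T, valve position
residuals = (0.1, 0.9, 1.4)
print(FAULT_TABLE.get(signature(residuals), "unknown fault"))
```

The abstract's caution about transients maps directly onto this sketch: during the 50-60 minutes before steady state, residual signs can pass through patterns belonging to other faults, so the signature should only be trusted once the residuals have settled.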

  11. Event analysis using a massively parallel processor

    International Nuclear Information System (INIS)

    Bale, A.; Gerelle, E.; Messersmith, J.; Warren, R.; Hoek, J.

    1990-01-01

This paper describes a system for performing histogramming of n-tuple data at interactive rates using a commercial SIMD processor array connected to a workstation running the well-known Physics Analysis Workstation software (PAW). Results indicate that an order of magnitude performance improvement over current RISC technology is easily achievable.

  12. TH-EF-BRC-03: Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Thomadsen, B. [University of Wisconsin (United States)

    2016-06-15

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  13. TH-EF-BRC-03: Fault Tree Analysis

    International Nuclear Information System (INIS)

    Thomadsen, B.

    2016-01-01

    This hands-on workshop will focus on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis, process mapping, failure modes and effects analysis, and fault tree analysis, will each be introduced with a five-minute refresher presentation followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in two different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss their experience and any challenges encountered with each other and the faculty. Learning Objectives: To review the principles of process mapping, failure modes and effects analysis, and fault tree analysis. To gain familiarity with these three techniques in a small-group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  14. The integration methods of fuzzy fault mode and effect analysis and fault tree analysis for risk analysis of yogurt production

    Science.gov (United States)

    Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita

    2017-05-01

    Yogurt is a milk-based product with beneficial health effects. The yogurt production process is very susceptible to failure because it involves bacteria and fermentation, and for an industry such risks may cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analyzed. Risk analysis can identify risks in detail, prevent them, and determine their handling, so that the risks can be minimized. Therefore, this study analyzes the risks of the production process, with a case study in CV.XYZ. The methods used in this research are fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed six risks arising from equipment, raw material, and process variables. These include the critical risk of a lack of an aseptic process, more specifically damage to the yogurt starter through contamination by fungus or other bacteria, and a lack of equipment sanitation. The quantitative FTA showed that the highest probability, 3.902%, belongs to the lack of an aseptic process. The recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment; controlling the yogurt starter; and improving production planning and equipment sanitation using hot-water immersion.
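    The two techniques combined in the study can be sketched together in a few lines. For brevity this uses crisp FMEA scores rather than the paper's fuzzy memberships, and all scores and probabilities are invented for illustration (they are not the study's data):

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA Risk Priority Number; the fuzzy variant replaces
    these crisp 1-10 scores with fuzzy membership functions."""
    return severity * occurrence * detection

def or_gate(probabilities):
    """FTA OR gate: probability that at least one basic event occurs."""
    p = 1.0
    for q in probabilities:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical failure modes with invented S/O/D scores.
risks = {
    "non-aseptic process": rpn(8, 6, 5),
    "damaged starter culture": rpn(9, 3, 4),
    "poor equipment sanitation": rpn(6, 4, 4),
}
print(max(risks, key=risks.get))               # FMEA ranking step
print(round(or_gate([0.02, 0.01, 0.009]), 5))  # FTA top-event probability
```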

  15. A supercomputer for parallel data analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    The design of a powerful multiprocessor system is proposed. The main purpose of the project is to develop a low-cost computer system with a processing rate of a few tens of millions of operations per second. The system solves many data analysis problems from high-energy physics spectrometers. It includes about 70 powerful slave microprocessor boards based on the MOTOROLA-68020, linked through VME crates to a host VAX microcomputer. Each microprocessor board performs the same algorithm requiring a large amount of computing time. The host computer distributes data over the microprocessor boards, then collects and combines the results. The architecture of the system easily allows its use in real-time mode.

  16. Development of fault diagnostic technique using reactor noise analysis

    International Nuclear Information System (INIS)

    Park, Jin Ho; Kim, J. S.; Oh, I. S.; Ryu, J. S.; Joo, Y. S.; Choi, S.; Yoon, D. B.

    1999-04-01

    The ultimate goal of this project is to establish an analysis technique to diagnose the integrity of reactor internals using reactor noise. Reactor noise analysis techniques for PWR and CANDU NPPs (nuclear power plants) were established, by which the dynamic characteristics of reactor internals and SPND instrumentation could be identified, and noise databases for each plant (both Korean and foreign) were constructed and compared. The changes in the dynamic characteristics of the Ulchin 1 and 2 reactor internals were also simulated under presumed fault conditions. Additionally, a portable reactor noise analysis system was developed so that real-time noise analysis can be performed directly at the plant site. The reactor noise analysis techniques developed, together with the database obtained from the fault simulations, can be used to establish a knowledge-based expert system to diagnose abnormal NPP conditions, and the portable reactor noise analysis system may be used as a substitute for a plant IVMS (Internal Vibration Monitoring System). (author)

  17. Development of fault diagnostic technique using reactor noise analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Ho; Kim, J. S.; Oh, I. S.; Ryu, J. S.; Joo, Y. S.; Choi, S.; Yoon, D. B

    1999-04-01

    The ultimate goal of this project is to establish an analysis technique to diagnose the integrity of reactor internals using reactor noise. Reactor noise analysis techniques for PWR and CANDU NPPs (nuclear power plants) were established, by which the dynamic characteristics of reactor internals and SPND instrumentation could be identified, and noise databases for each plant (both Korean and foreign) were constructed and compared. The changes in the dynamic characteristics of the Ulchin 1 and 2 reactor internals were also simulated under presumed fault conditions. Additionally, a portable reactor noise analysis system was developed so that real-time noise analysis can be performed directly at the plant site. The reactor noise analysis techniques developed, together with the database obtained from the fault simulations, can be used to establish a knowledge-based expert system to diagnose abnormal NPP conditions, and the portable reactor noise analysis system may be used as a substitute for a plant IVMS (Internal Vibration Monitoring System). (author)

  18. Reliability Analysis of Operation for Cableways by FTA (Fault Tree Analysis Method

    Directory of Open Access Journals (Sweden)

    Sergej Težak

    2010-05-01

    This paper examines the reliability of the operation of cableway systems in Slovenia, which has a major impact on the quality of service in mountain tourism, mainly in wintertime. Different types of cableway installations in Slovenia were captured in a sample, and a fault tree analysis (FTA) was made on the basis of the obtained data. The paper presents the results of the analysis. With these results it is possible to determine the probability of faults for different types of cableways, which types of faults have the greatest impact on the termination of operation, which components of cableways fail most often, and what impact the age of a cableway has on the occurrence of faults. Finally, an attempt was made to determine whether the occurrence of faults on an individual cableway installation also affects traffic on that cableway due to reduced quality of service. KEYWORDS: cableways, aerial ropeways, chairlifts, ski-tows, quality, faults, fault tree analysis, reliability, service quality, winter tourism, mountain tourist centre

  19. Design and Transmission Analysis of an Asymmetrical Spherical Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Caro, Stéphane; Wang, Jiawei

    2015-01-01

    This paper presents an asymmetrical spherical parallel manipulator and its transmissibility analysis. This manipulator contains a center shaft to both generate a decoupled unlimited-torsion motion and support the mobile platform for high positioning accuracy. This work addresses the transmission analysis and optimal design of the proposed manipulator based on its kinematic analysis. The input and output transmission indices of the manipulator are defined for its optimum design based on the virtual coefficient between the transmission wrenches and twist screws. The sets of optimal parameters are identified and the distribution of the transmission index is visualized. Moreover, a comparative study of performance with the symmetrical spherical parallel manipulators is conducted, and the comparison shows the advantages of the proposed manipulator with respect to its spherical parallel...

  20. Condition-based fault tree analysis (CBFTA): A new method for improved fault tree analysis (FTA), reliability and safety calculations

    International Nuclear Information System (INIS)

    Shalev, Dan M.; Tiran, Joseph

    2007-01-01

    Condition-based maintenance methods have changed the reliability of systems in general and of individual systems in particular, yet this change is not reflected in system reliability analysis. System fault tree analysis (FTA) is performed during the design phase, using component failure rates derived from available sources such as handbooks. Condition-based fault tree analysis (CBFTA) starts with the known FTA. Condition monitoring (CM) methods applied to the system (e.g. vibration analysis, oil analysis, electric current analysis, bearing CM, electric motor CM, and so forth) are used to determine updated failure rate values of sensitive components. The CBFTA method applies these updated failure rates to the FTA and periodically recalculates the top event (TE) failure rate (λTE), thus determining the probability of system failure and the probability of successful system operation, i.e. the system's reliability. FTA is a tool for enhancing system reliability during the design stages, but it has disadvantages, chiefly that it does not relate to a specific system undergoing maintenance. CBFTA is a tool for updating the reliability values of a specific system and for calculating the residual life according to the system's monitored conditions. Using CBFTA, the original FTA is improved into a practical tool for use during the system's field-life phase, not just during the design phase. This paper describes the CBFTA method, and its advantages are demonstrated by an example.
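    The CBFTA update step can be sketched for the simple case of a series (OR-gate) system, where the top-event rate is approximately the sum of the component rates. The component names and rates below are hypothetical, not taken from the paper:

```python
def top_event_rate(component_rates):
    """For a series (OR-gate) fault tree, the top-event failure rate
    is approximately the sum of the component failure rates."""
    return sum(component_rates.values())

# Handbook rates (failures per 10^6 h) assumed at design time.
rates = {"pump": 12.0, "bearing": 5.0, "motor": 3.0}
design_rate = top_event_rate(rates)

# Condition monitoring (e.g. vibration analysis) indicates bearing
# degradation, so CBFTA substitutes an updated rate for that
# component and recomputes the top event.
rates["bearing"] = 40.0
updated_rate = top_event_rate(rates)
print(design_rate, updated_rate)  # 20.0 55.0
```

    The same substitute-and-recompute loop applies to any fault tree structure; only the top-event formula changes with the gate logic.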

  1. Case fault analysis for the mirror fusion test facility (MFTF) magnet system

    International Nuclear Information System (INIS)

    Baldi, R.W.; Poniktera, C.D.

    1979-03-01

    This report describes the stress analysis performed to determine the criticality of selected failures in the magnet case, jacket, and intercoil member. The selected faults were idealized by adding nodes coincident with existing nodes in the baseline finite element model and changing the fault boundary plate connectivities. No attempt was made to alter the analysis mesh size adjacent to any fault, as this degree of effort was beyond the intent and scope of this task. The results of this analysis indicated that two of the five faults analyzed would be catastrophic in nature: Fault No. 1, a weld joint failure at the minor-radius 3-to-5-inch plate intersection in the chamfer region at the centerline of symmetry; and Fault No. 5, failure of the 3-to-5-inch transition butt weld joint at the major-to-minor radius transition on the magnet case top plate.

  2. Parallel, Multigrid Finite Element Simulator for Fractured/Faulted and Other Complex Reservoirs based on Common Component Architecture (CCA)

    Energy Technology Data Exchange (ETDEWEB)

    Milind Deo; Chung-Kan Huang; Huabing Wang

    2008-08-31

    Black-oil, compositional and thermal simulators have been developed to address different physical processes in reservoir simulation. A number of different types of discretization methods have also been proposed to address issues related to representing the complex reservoir geometry. These methods are more significant for fractured reservoirs, where the geometry can be particularly challenging. In this project, a general modular framework for reservoir simulation was developed, wherein the physical models were efficiently decoupled from the discretization methods. This made it possible to couple any discretization method with different physical models. Oil characterization methods are becoming increasingly sophisticated, and it is possible to construct geologically constrained models of faulted/fractured reservoirs. Discrete Fracture Network (DFN) simulation provides the option of performing multiphase calculations on spatially explicit, geologically feasible fracture sets. Multiphase DFN simulations of, and sensitivity studies on, a wide variety of fracture networks created using fracture creation/simulation programs were undertaken in the first part of this project. This involved creating interfaces to seamlessly convert the fracture characterization information into simulator input, grid the complex geometry, perform the simulations, and analyze and visualize the results. Benchmarking and comparison with conventional simulators was also a component of this work. After demonstrating that multiphase simulations can be carried out on complex fracture networks, the quantitative effects of the heterogeneity of fracture properties were evaluated. Reservoirs are populated with fractures of several different scales and properties. A multiscale fracture modeling study was undertaken, and the effects of heterogeneity and storage on water displacement dynamics in fractured basements were investigated. In gravity-dominated systems, more oil could be recovered at a given pore

  3. A parallel solution for high resolution histological image analysis.

    Science.gov (United States)

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitalization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides are subject to is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested using the following parallel computing architectures: distributed memory with massively parallel processors and two networks, InfiniBand and Myrinet, composed of 17 and 1024 nodes respectively. The parallel framework proposed is a flexible, high-performance solution, and it shows that efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. Tools for functional analysis of faults and methods of fault-stable motion control

    International Nuclear Information System (INIS)

    Timofeev, A.V.

    2003-01-01

    This article devotes considerable attention to problems of functional diagnostics, in which control and fault diagnosis are performed simultaneously, in real time, during the operation of the controlled dynamical system.

  5. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
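    A minimal Python analogue of the article's MATLAB/R examples, assuming a toy demand-exceeds-capacity risk model invented for illustration. `ThreadPoolExecutor` keeps the sketch portable and self-contained; CPU-bound Python code would normally use `ProcessPoolExecutor` to sidestep the GIL:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_replications(args):
    """One worker's share of an embarrassingly parallel simulation:
    independent replications with its own seeded RNG stream."""
    seed, n = args
    rng = random.Random(seed)
    # Toy risk model: count replications where demand exceeds capacity.
    return sum(rng.gauss(100, 15) > 130 for _ in range(n))

# Split 40,000 replications across 4 workers with distinct seeds, so
# the parallel run is reproducible and the streams do not overlap.
work = [(seed, 10_000) for seed in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    failures = sum(pool.map(run_replications, work))
prob = failures / 40_000
print(prob)  # ≈ P(N(100, 15) > 130) ≈ 0.023
```

    Because the replications are independent, no communication between workers is needed; this is exactly the "embarrassingly parallel" structure the tutorial targets.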

  6. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium-grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
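    The startup-cost argument can be sketched with a toy per-V-cycle cost model. The constants, flop counts, and per-level message counts below are illustrative placeholders, not the paper's calibrated parameters:

```python
import math

def vcycle_time(n, procs, flop_rate, alpha, beta, msgs_per_level=8):
    """Rough per-V-cycle time: at each level the local grid shrinks
    by 4x (2D coarsening), but the per-message startup cost `alpha`
    does not, so communication dominates on coarse levels.
    alpha = message startup (s), beta = per-word transfer time (s/word)."""
    levels = int(math.log(n, 4))
    total = 0.0
    points = n / procs
    for _ in range(levels):
        words = max(int(points ** 0.5), 1)       # halo exchange per side
        comp = 10 * points / flop_rate           # ~10 flops per point
        comm = msgs_per_level * (alpha + beta * words)
        total += comp + comm
        points = max(points / 4, 1)
    return total

# 10^6-point 2D grid on 1024 processors; illustrative machine numbers.
t_long_startup = vcycle_time(10**6, 1024, 1e7, alpha=1e-4, beta=1e-7)
t_short_startup = vcycle_time(10**6, 1024, 1e7, alpha=1e-6, beta=1e-7)
print(t_long_startup > t_short_startup)  # True: startup cost dominates
```

    Lowering only the startup term `alpha` shrinks the total markedly, which mirrors the paper's conclusion that either long messages or cheaper message initiation is needed.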

  7. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been used in the nuclear safety field for decades, it has been criticized for its inability to model time-dependent behavior. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, a DFT can describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using a Monte Carlo simulation (MCS) approach to solve DFTs has gained rising attention, because it can model the authentic behavior of systems and avoids the limitations of analytical methods. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rules for logic gates. When calculating rare-event probabilities, standard MCS requires a large number of simulations. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov chain Monte Carlo (MCMC) technique, the SS method accelerates the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
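    The Monte Carlo treatment of a dynamic gate can be sketched for a cold-spare gate, which static FTA cannot express because the spare's clock only starts when the primary fails. This is plain MCS without the subset-simulation acceleration, and the rates and mission time are illustrative:

```python
import random

def sample_spare_gate(rng, lam_primary, lam_spare, mission_time):
    """One Monte Carlo trial of a cold-spare gate: the spare only
    starts aging after the primary fails."""
    t_primary = rng.expovariate(lam_primary)
    if t_primary >= mission_time:
        return False                      # primary survives the mission
    t_spare = rng.expovariate(lam_spare)  # spare clock starts now
    return t_primary + t_spare < mission_time

rng = random.Random(42)
trials = 100_000
failures = sum(sample_spare_gate(rng, 1e-3, 1e-3, 1000.0)
               for _ in range(trials))
print(failures / trials)  # ≈ 1 - e^(-1)*(1 + 1) ≈ 0.264 for λT = 1
```

    For equal rates the failure time is Erlang-2 distributed, so the estimate can be checked analytically; for rare events (small λT), subset simulation replaces this brute-force sampling.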

  8. Fault tree analysis on BWR core spray system

    International Nuclear Information System (INIS)

    Watanabe, Norio

    1982-06-01

    Fault trees describing the failure modes of the core spray system function in the Browns Ferry Nuclear Plant (BWR, 1065 MWe) were developed qualitatively and quantitatively. The unavailability of the core spray system was estimated to be 1.2 x 10^-3/demand. It was found that miscalibration of the four reactor pressure sensors, or failure to open of the two inboard valves (FCV 75-25 and 75-53), could reduce system reliability significantly. It was recommended that the pressure sensors be calibrated independently; introducing redundant inboard valves could also improve system reliability. The analysis method was thus shown to be useful for system analysis. Detailed test and maintenance manuals and information on the control logic circuits of each active component are necessary for further analysis. (author)
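    The quantitative step in such an analysis is typically a rare-event (minimal cut set) approximation, which can be sketched as follows. The component names and unavailabilities are invented for illustration and are not the Browns Ferry data:

```python
def cut_set_unavailability(cut_sets, q):
    """Rare-event approximation: system unavailability is roughly the
    sum, over minimal cut sets, of the product of the component
    unavailabilities in each cut set."""
    total = 0.0
    for cut in cut_sets:
        prod = 1.0
        for comp in cut:
            prod *= q[comp]
        total += prod
    return total

# Hypothetical numbers: a common-cause sensor miscalibration is a
# single-event cut set, so it dominates the redundant valve pair.
q = {"sensors_miscal": 1e-3, "valve_A": 1e-2, "valve_B": 1e-2}
cut_sets = [("sensors_miscal",), ("valve_A", "valve_B")]
print(cut_set_unavailability(cut_sets, q))  # 0.0011 per demand
```

    The single-event cut set contributes 1e-3 while the doubly redundant valves contribute only 1e-4, illustrating why the miscalibration pathway dominates the system result.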

  9. Enterprise architecture availability analysis using fault trees and stakeholder interviews

    Science.gov (United States)

    Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias

    2014-01-01

    The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability figures from log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
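    The core calculation, yearly downtime from a fault-tree availability estimate, can be sketched as follows. The three-tier chain and its availability figures are hypothetical, chosen only to show the arithmetic:

```python
def series_availability(availabilities):
    """All subsystems are needed (an OR gate over their failures),
    so system availability is the product of the parts."""
    a = 1.0
    for x in availabilities:
        a *= x
    return a

HOURS_PER_YEAR = 8766  # 365.25 days

# Hypothetical ArchiMate-style chain: application -> middleware -> database.
chain = [0.9995, 0.9990, 0.9992]
a_system = series_availability(chain)
downtime_h = (1.0 - a_system) * HOURS_PER_YEAR
print(round(downtime_h, 1))  # 20.1 hours per year
```

    Comparing such a figure against logged downtime is exactly the validation step the article reports (estimates within eight hours of actuals).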

  10. TU-AB-BRD-03: Fault Tree Analysis

    International Nuclear Information System (INIS)

    Dunscombe, P.

    2015-01-01

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process. Learn how to

  11. TU-AB-BRD-03: Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dunscombe, P. [University of Calgary (Canada)

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process. Learn how to

  12. Analysis and implementation of LLC-T series parallel resonant ...

    African Journals Online (AJOL)

    A prototype 300 W, 100 kHz converter is designed and built to experimentally demonstrate the dynamic and steady-state performance of the LLC-T series parallel resonant converter. A comparative study is performed between the experimental results and the simulation studies. The analysis shows that the output of the converter is ...

  13. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  14. Fault Analysis and Detection in Microgrids with High PV Penetration

    Energy Technology Data Exchange (ETDEWEB)

    El Khatib, Mohamed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hernandez Alvidrez, Javier [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ellis, Abraham [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed: the superimposed-quantities-based directional element and the negative-sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.
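    A negative-sequence directional element starts from the symmetrical-components transform, which can be sketched as follows. The phasor values are illustrative; the point is that balanced currents produce no negative-sequence component, while unbalanced (fault-like) currents do:

```python
import cmath

def negative_sequence(ia, ib, ic):
    """Symmetrical-components negative-sequence current:
    I2 = (Ia + a^2*Ib + a*Ic) / 3, with a = 1 at an angle of 120 deg."""
    a = cmath.exp(1j * 2 * cmath.pi / 3)
    return (ia + a * a * ib + a * ic) / 3

a = cmath.exp(1j * 2 * cmath.pi / 3)
# Balanced three-phase currents: the negative sequence vanishes.
balanced = negative_sequence(1 + 0j, a * a, a)
# Unbalanced currents (single-phase-fault-like): I2 appears.
unbalanced = negative_sequence(1 + 0j, 0j, 0j)
print(abs(balanced) < 1e-12, abs(unbalanced))  # True 0.3333333333333333
```

    Current-controlled inverters tend to inject little negative-sequence current during faults, which is why the report examines how such elements behave in inverter-dominated microgrids.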

  15. Problems with earth fault detecting relays assigned to parallel cables or overhead lines; Probleme bei der Erdschlussortung mit wattmetrischen Erdschlussrichtungsrelais bei parallelen Kabeln oder Leitungen

    Energy Technology Data Exchange (ETDEWEB)

    Birkner, P.; Foerg, R. [Lech-Elektrizitaetswerke AG, Augsburg (Germany)

    1998-06-29

    In practice, electrical conductors in the ground, such as the sheaths of power cables earthed at both ends, carry currents. These currents may originate, for example, from the AC traction system of a railway, or from the alternating current of a Petersen coil, which in the event of an earth fault seeks a minimum-resistance path from the transformer station to the fault location. Such currents create a series voltage in the cable conductor by inductive coupling, whose magnitude depends on the type and length of the cable. The series voltages of all three phases form a zero-sequence system. If two cable systems run parallel to one another, a circulating zero-sequence current can arise under certain circumstances. Additionally, during an earth fault elsewhere in the grid, a displacement voltage appears between the neutral point and earth. The combination of these two factors can cause the earth fault detecting relays assigned to the parallel cable system to malfunction. (orig.)

  16. Kinematic Analysis and Performance Evaluation of Novel PRS Parallel Mechanism

    Science.gov (United States)

    Balaji, K.; Khan, B. Shahul Hamid

    2018-02-01

In this paper, a novel 3-DoF (Degree of Freedom) PRS (Prismatic-Revolute-Spherical) parallel mechanism is designed and presented. The combination of straight and arc-type linkages for a 3-DoF parallel mechanism is introduced for the first time. The performance of the mechanisms is evaluated based on indices such as the Minimum Singular Value (MSV), Condition Number (CN), Local Conditioning Index (LCI), Kinematic Configuration Index (KCI) and Global Conditioning Index (GCI). The overall reachable workspace of all mechanisms is presented. The kinematic measure, dexterity measure and workspace analysis for all the mechanisms have been evaluated and compared.

  17. The use of outcrop data in fault prediction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Steen, Oeystein

    1997-12-31

This thesis begins by describing deformation structures formed by gravitational sliding in partially lithified sediments. By studying the spatial variation in frequency of these deformation structures, as well as their geometries and kinematics, the sequential development of an ancient slide is outlined. This study brings to light a complex deformation history associated with block gliding, involving folding, listric faulting, small-scale boudinage and clastic dyke injection. The collapse deformation documented in the basal part of a gliding sheet is described for the first time. Further, rift-related normal faults formed in a continental sequence of normal beds are described, with a focus on the scaling behaviour of faults in variably cemented sandstones. It is shown that the displacement population coefficients of faults are influenced by the local lithology; hence the scaling of faults is not uniform on all scales and varies in different parts of a rock volume. The scaling behaviour of small faults is linked to mechanical heterogeneities in the rock and to the deformation style. It is shown that small faults occur in an aureole around larger faults. Strain and scaling of the small faults were measured in different structural positions relative to the major faults. The local strain field is found to be variable and can be correlated with drag folding along the master faults. A modeling approach is presented for the prediction of small faults in a hydrocarbon reservoir. By modeling an outcrop bedding surface on a seismic workstation, outcrop data could be compared with seismic data. Further, well data were used to test the relationships inferred from the analogue outcrops. The study shows that seismic ductile strain can be correlated with the distribution of small faults. Moreover, horizontal structural well data are shown to calibrate the structural interpretation of faulted seismic horizons. 133 refs., 64 figs., 3 tabs.

  18. State-plane analysis of parallel resonant converter

    Science.gov (United States)

    Oruganti, R.; Lee, F. C.

    1985-01-01

    A method for analyzing the complex operation of a parallel resonant converter is developed, utilizing graphical state-plane techniques. The comprehensive mode analysis uncovers, for the first time, the presence of other complex modes besides the continuous conduction mode and the discontinuous conduction mode and determines their theoretical boundaries. Based on the insight gained from the analysis, a novel, high-frequency resonant buck converter is proposed. The voltage conversion ratio of the new converter is almost independent of load.
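As a back-of-the-envelope illustration of the state-plane idea (a sketch, not taken from the paper), the snippet below traces an ideal, lossless LC tank in the normalized state plane, plotting capacitor voltage against inductor current scaled by the characteristic impedance. For a lossless tank this trajectory is a circle, which is the geometric fact that graphical state-plane analysis of resonant converters builds on. All values are illustrative.

```python
import math

def lc_state_plane(v0, i0, L, C, steps=1000, t_end=None):
    """Trace the normalized state-plane trajectory (v_C, Z0*i_L) of an
    undriven ideal LC tank, where Z0 = sqrt(L/C) is the characteristic
    impedance.  For a lossless tank the trajectory is a circle."""
    w0 = 1.0 / math.sqrt(L * C)          # resonant angular frequency
    z0 = math.sqrt(L / C)                # characteristic impedance
    if t_end is None:
        t_end = 2 * math.pi / w0         # one resonant period
    pts = []
    for k in range(steps):
        t = t_end * k / steps
        # closed-form solution of the undriven LC tank
        v = v0 * math.cos(w0 * t) + z0 * i0 * math.sin(w0 * t)
        i = i0 * math.cos(w0 * t) - (v0 / z0) * math.sin(w0 * t)
        pts.append((v, z0 * i))
    return pts

pts = lc_state_plane(v0=1.0, i0=0.0, L=1e-6, C=1e-9)
radii = [math.hypot(v, zi) for v, zi in pts]
```

Lossless operation keeps the radius constant; in the converter analysis, switching events move the trajectory between circles of different centres and radii.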

  19. Fault diagnosis of rolling bearings based on multifractal detrended fluctuation analysis and Mahalanobis distance criterion

    Science.gov (United States)

    Lin, Jinshan; Chen, Qian

    2013-07-01

Vibration data of faulty rolling bearings are usually nonstationary and nonlinear, and contain fairly weak fault features. As a result, feature extraction of rolling bearing fault data is always an intractable problem and has attracted considerable attention for a long time. This paper introduces multifractal detrended fluctuation analysis (MF-DFA) to analyze bearing vibration data and proposes a novel method for fault diagnosis of rolling bearings based on MF-DFA and Mahalanobis distance criterion (MDC). MF-DFA, an extension of monofractal DFA, is a powerful tool for uncovering the nonlinear dynamical characteristics buried in nonstationary time series and can capture minor changes of complex system conditions. To begin with, by MF-DFA, multifractality of bearing fault data was quantified with the generalized Hurst exponent, the scaling exponent and the multifractal spectrum. Consequently, controlled by essentially different dynamical mechanisms, the multifractality of four heterogeneous bearing fault data is significantly different; by contrast, controlled by slightly different dynamical mechanisms, the multifractality of homogeneous bearing fault data with different fault diameters is significantly or slightly different depending on different types of bearing faults. Therefore, the multifractal spectrum, as a set of parameters describing multifractality of time series, can be employed to characterize different types and severity of bearing faults. Subsequently, five characteristic parameters sensitive to changes of bearing fault conditions were extracted from the multifractal spectrum and utilized to construct fault features of bearing fault data. Moreover, Hilbert transform based envelope analysis, empirical mode decomposition (EMD) and wavelet transform (WT) were utilized to study the same bearing fault data. Also, the kurtosis and the peak levels of the EMD or the WT component corresponding to the bearing tones in the frequency domain were carefully checked.
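To make the underlying machinery concrete, here is a minimal sketch of monofractal DFA, the method that MF-DFA generalizes (this is generic illustration code, not the authors' implementation): integrate the mean-removed series into a profile, linearly detrend it in windows of size s, and read the Hurst exponent off the slope of log F(s) versus log s.

```python
import math
import random

def dfa(series, scales):
    """Monofractal detrended fluctuation analysis.  Returns
    (log2(scale), log2(F(scale))) pairs; the least-squares slope of
    these points estimates the Hurst exponent."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:                     # integrated, mean-removed profile
        acc += x - mean
        profile.append(acc)
    points = []
    for n in scales:
        segs = len(profile) // n
        var_sum, count = 0.0, 0
        for k in range(segs):
            seg = profile[k * n:(k + 1) * n]
            # closed-form least-squares linear detrend of the segment
            m = len(seg)
            sx = sum(range(m)); sxx = sum(i * i for i in range(m))
            sy = sum(seg); sxy = sum(i * y for i, y in enumerate(seg))
            b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
            a = (sy - b * sx) / m
            var_sum += sum((y - (a + b * i)) ** 2 for i, y in enumerate(seg))
            count += m
        points.append((math.log2(n), math.log2(math.sqrt(var_sum / count))))
    return points

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
pts = dfa(noise, [8, 16, 32, 64, 128])
xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
n = len(pts)
slope = (n * sum(x * y for x, y in pts) - sum(xs) * sum(ys)) \
        / (n * sum(x * x for x in xs) - sum(xs) ** 2)
```

For uncorrelated noise the slope should sit near 0.5; MF-DFA repeats this computation with a family of q-th-order fluctuation functions to obtain the generalized Hurst exponents and the multifractal spectrum.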

  20. Timing analysis of safety properties using fault trees with time dependencies and timed state-charts

    International Nuclear Information System (INIS)

    Magott, Jan; Skrobanek, Pawel

    2012-01-01

Behavior in the time domain is often crucial for safety-critical systems. Standard fault trees cannot express time-dependent behavior. In this paper, timing analysis of safety properties using fault trees with time dependencies (FTTDs) and timed state-charts is presented. A new version of timed state-charts (TSCs) is also proposed. These state-charts can model the dynamics of technical systems, e.g. controllers, controlled objects, and people. In TSCs, activity and communication times are represented by time intervals. In the proposed approach, the structure of the FTTD is specified by a human analyst. Time properties of events and gates of the FTTD are expressed by time intervals and are calculated using TSCs. The minimal and maximal values of these time intervals can be calculated by finding paths with minimal and maximal time lengths in TSCs, which is an NP-hard problem. In order to reduce the practical complexity of computing the FTTD time parameters, several reductions of TSCs are defined in the paper: sequential, alternative, loop (iteration), and parallel. Some of the reductions are intuitive; for the others, theorems are required. The computational complexity of each reduction is at most linear in the size of the reduced TSC. The obtained results therefore lower the cost of calculating FTTD time parameters when system dynamics is expressed by TSCs. A case study of a railroad crossing, whose controller operates the semaphores, the gate, and the light and audio signals near the gate, is analyzed.

  1. Kinematic analysis of parallel manipulators by algebraic screw theory

    CERN Document Server

    Gallardo-Alvarado, Jaime

    2016-01-01

    This book reviews the fundamentals of screw theory concerned with velocity analysis of rigid-bodies, confirmed with detailed and explicit proofs. The author additionally investigates acceleration, jerk, and hyper-jerk analyses of rigid-bodies following the trend of the velocity analysis. With the material provided in this book, readers can extend the theory of screws into the kinematics of optional order of rigid-bodies. Illustrative examples and exercises to reinforce learning are provided. Of particular note, the kinematics of emblematic parallel manipulators, such as the Delta robot as well as the original Gough and Stewart platforms are revisited applying, in addition to the theory of screws, new methods devoted to simplify the corresponding forward-displacement analysis, a challenging task for most parallel manipulators. Stands as the only book devoted to the acceleration, jerk and hyper-jerk (snap) analyses of rigid-body by means of screw theory; Provides new strategies to simplify the forward kinematic...

  2. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign on to a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back at the client, and construct professional-quality visualizations of the results.

  3. Parallel algorithms for nuclear reactor analysis via domain decomposition method

    International Nuclear Information System (INIS)

    Kim, Yong Hee

    1995-02-01

In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and is solved on a parallel computer network which is composed of T-800 transputers. T-800 transputer is a message-passing type MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides convergence analysis and improvement of the convergence of the algorithm. The convergence of the parallel Schwarz algorithms with DN(or ND), DD, NN, and mixed pseudo-boundary conditions(a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in two-subdomain case and various underlying features are explored. The analysis shows that the convergence rate of the algorithm highly depends on the pseudo-boundary conditions and the theoretically best one is the mixed boundary conditions(MM conditions). Also it is shown that there may exist a significant discrepancy between continuous model analysis and discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation in pseudo-boundary conditions is introduced and the convergence analysis of the algorithm for two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and any relaxation(under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
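A minimal one-dimensional sketch of the Schwarz alternating procedure with Dirichlet (DD) pseudo-boundary conditions may help fix ideas. It solves a toy Laplace problem u'' = 0 on [0, 1] rather than the neutron diffusion equation of the thesis, and the grid sizes and overlap are illustrative only: each subdomain is solved with a tridiagonal (Thomas) solve, exchanging Dirichlet values on the overlap until the iteration converges.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp = [0.0] * n; dp = [0.0] * n
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_interval(left, right, npts):
    """Solve u'' = 0 on a uniform grid with Dirichlet values left/right."""
    n = npts - 2                        # interior unknowns, stencil (-1, 2, -1)
    a = [-1.0] * n; b = [2.0] * n; c = [-1.0] * n; d = [0.0] * n
    d[0] += left; d[-1] += right
    return [left] + thomas(a, b, c, d) + [right]

# Schwarz alternating procedure, two overlapping subdomains of [0, 1]
N = 41                       # global grid points
u = [0.0] * N; u[-1] = 1.0   # initial guess carrying the boundary data
m1, m2 = 15, 25              # overlap spans grid indices 15..25
for _ in range(50):
    u[:m2 + 1] = solve_interval(u[0], u[m2], m2 + 1)      # subdomain 0..m2
    u[m1:] = solve_interval(u[m1], u[-1], N - m1)         # subdomain m1..N-1
exact = [i / (N - 1) for i in range(N)]
err = max(abs(p - q) for p, q in zip(u, exact))
```

With this overlap the error contracts geometrically per sweep, consistent with the thesis' observation that the convergence rate is governed by the pseudo-boundary conditions.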

  4. Reliability analysis of the reactor protection system with fault diagnosis

    International Nuclear Information System (INIS)

    Lee, D.Y.; Han, J.B.; Lyou, J.

    2004-01-01

The main function of a reactor protection system (RPS) is to maintain the reactor core integrity and the reactor coolant system pressure boundary. The RPS uses a 2-out-of-m redundant architecture to assure reliable operation. The system reliability of the RPS is a very important factor in the probabilistic safety assessment (PSA) performed in the nuclear field. Evaluating the system failure rate of a k-out-of-m redundant system is not easy with deterministic methods. In this paper, a reliability analysis method using the binomial process is suggested to calculate the failure rate of an RPS with a fault diagnosis function. The suggested method is compared with the result of a Markov process to verify its validity, and is applied to several kinds of RPS architectures for a comparative evaluation of reliability. (orig.)
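The binomial view of a k-out-of-m architecture reduces to a tail sum over the binomial distribution. The sketch below (generic illustration, not the paper's full method, which also models the fault diagnosis function) computes the probability that at least k of m independent, identical channels have failed:

```python
from math import comb

def k_out_of_m_failure(p, k, m):
    """Probability that at least k of m independent channels, each with
    failure probability p, have failed -- the binomial tail used to
    screen k-out-of-m redundant architectures."""
    return sum(comb(m, j) * p**j * (1 - p) ** (m - j)
               for j in range(k, m + 1))

# e.g. a 2-out-of-4 voting logic loses its function once 3 channels fail
p_loss = k_out_of_m_failure(p=1e-3, k=3, m=4)
```

The channel failure probability and the voting threshold here are illustrative; a real PSA would derive them from component failure rates and test intervals.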

  5. Application of fault tree analysis to fuel cell diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Yousfi Steiner, N.; Mocoteguy, P. [European Institute for Energy Research (EIFER), Karlsruhe (Germany); Hissel, D. [FEMTO-ST/ENISYS/FC LAB, UMR CNRS 6174, University of Franche-Comte, Belfort (France); Candusso, D. [IFSTTAR/FC LAB, Institute of Science and Technology for Transport, Development and Networks, Belfort (France); Marra, D.; Pianese, C.; Sorrentino, M. [Department of Industrial Engineering, University of Salerno, Fisciano (Italy)

    2012-04-15

Reliability and lifetime are common issues for the development and commercialization of fuel cell technologies. As a consequence, their improvement is a major challenge, and the last decade has seen a growing interest in activities that aim at understanding degradation mechanisms and at developing diagnosis tools for fuel cell systems. Fault Tree Analysis (FTA) is a deductive tool that links an undesired state to a combination of lower-level events via a "top-down" approach, and is mainly used in safety and reliability engineering. The objective of this paper is to give an overview of the use and the contribution of FTA to both SOFC and PEFC diagnosis. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  6. Fault tree synthesis for software design analysis of PLC based safety-critical systems

    International Nuclear Information System (INIS)

    Koo, S. R.; Cho, C. H.; Seong, P. H.

    2006-01-01

As software verification and validation must be performed for the development of PLC-based safety-critical systems, software safety analysis is also considered throughout the entire software life cycle. In this paper, we propose a technique for software safety analysis in the design phase. Among the various software hazard analysis techniques, fault tree analysis is the most widely used for the safety analysis of nuclear power plant systems. Fault tree analysis also has the most intuitive notation and makes both qualitative and quantitative analyses possible. To analyze the design phase more effectively, we propose a technique of fault tree synthesis, along with a universal fault tree template for the architecture modules of nuclear software. Consequently, we can analyze the safety of software on the basis of fault tree synthesis. (authors)
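For readers unfamiliar with the quantitative side, here is the standard gate arithmetic that a static fault tree evaluation rests on (a generic sketch, not the authors' synthesis technique): assuming independent, non-repeated basic events, AND gates multiply probabilities and OR gates use the complement product.

```python
def top_event_probability(node, p_basic):
    """Evaluate a static fault tree of independent basic events.

    node is either a basic-event name or a tuple ('AND'|'OR', [children]).
    Valid only when no basic event appears twice in the tree."""
    if isinstance(node, str):
        return p_basic[node]
    gate, children = node
    probs = [top_event_probability(c, p_basic) for c in children]
    if gate == 'AND':
        out = 1.0
        for q in probs:
            out *= q                      # all inputs must occur
        return out
    if gate == 'OR':
        out = 1.0
        for q in probs:
            out *= (1.0 - q)              # complement of "none occurs"
        return 1.0 - out
    raise ValueError(gate)

# hypothetical tree: top fails if both sensors fail, or the CPU fails
tree = ('OR', [('AND', ['sensor_a', 'sensor_b']), 'cpu'])
p = top_event_probability(tree, {'sensor_a': 0.1, 'sensor_b': 0.1,
                                 'cpu': 0.01})
```

The event names and probabilities are invented for illustration; repeated events require minimal-cut-set methods instead of this direct recursion.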

  7. Quantitative analysis of a fault tree with priority AND gates

    International Nuclear Information System (INIS)

    Yuge, T.; Yanagi, S.

    2008-01-01

A method for calculating the exact top event probability of a fault tree with priority AND gates and repeated basic events is proposed for the case where the minimal cut sets are given. A priority AND gate is an AND gate whose input events must occur in a prescribed order for the output event to occur. It is known that the top event probability of such a dynamic fault tree can be obtained by converting the tree into an equivalent Markov model. However, this method is not practical for a complex system model, because the number of states which must be considered in the Markov analysis grows explosively with the number of basic events. To overcome this shortcoming of the Markov model, we propose an alternative method to obtain the top event probability. We assume that the basic events occur independently and are exponentially distributed, and that the component whose failure corresponds to the occurrence of a basic event is non-repairable. First, we obtain the probability of occurrence of the output event of a single priority AND gate by Markov analysis. Then, the top event probability is given by a cut set approach and the inclusion-exclusion formula. An efficient procedure is proposed to obtain the probabilities corresponding to the logical products in the inclusion-exclusion formula. In this procedure, a logical product composed of two or more priority AND gates having at least one common basic event among their inputs is transformed into a sum of disjoint events, each equivalent to a priority AND gate. Numerical examples show that our method works well for complex systems.
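The ordering requirement of a priority AND gate is easy to check by simulation, which makes a useful cross-check for exact methods like the one above. This Monte Carlo sketch (an illustration, not the authors' inclusion-exclusion procedure) estimates the probability that independent exponential basic events occur in the listed order over an unbounded mission time; for two inputs the analytic answer is λ₁/(λ₁+λ₂).

```python
import random

def priority_and_mc(rates, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that independent
    exponentially distributed basic events (failure rates `rates`)
    occur in the listed order, as a priority AND gate requires."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        times = [rng.expovariate(lam) for lam in rates]
        if all(times[i] < times[i + 1] for i in range(len(times) - 1)):
            hits += 1
    return hits / trials

# two inputs with rates 2.0 and 1.0: analytic order probability is 2/3
est = priority_and_mc([2.0, 1.0])
```

Adding a finite mission time or repeated events is where the simulation stays trivial while the closed-form bookkeeping, as the abstract notes, becomes the hard part.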

  8. Seismic Hazard Analysis on a Complex, Interconnected Fault Network

    Science.gov (United States)

    Page, M. T.; Field, E. H.; Milner, K. R.

    2017-12-01

In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example, the Kaikōura earthquake - as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al. 2017) - have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well-mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.

  9. Rotor current transient analysis of DFIG-based wind turbines during symmetrical voltage faults

    International Nuclear Information System (INIS)

    Ling, Yu; Cai, Xu; Wang, Ningbo

    2013-01-01

Highlights: • We theoretically analyze the rotor fault current of the DFIG based on space vectors. • The presented analysis is simple and easy to understand. • The analysis highlights the accuracy of the expression for the rotor fault currents. • The expression can be widely used to analyze different levels of symmetrical voltage faults. • Simulation results show the accuracy of the expression for the rotor currents. - Abstract: The impact of grid voltage faults on doubly fed induction generators (DFIGs), especially on rotor currents, has received much attention. In this paper, the rotor currents of DFIG-based wind turbines are therefore considered in a generalized way, which can be widely used to analyze cases under different levels of symmetrical voltage faults. A direct method based on space vectors is proposed to obtain an accurate expression for the rotor currents as a function of time during symmetrical voltage faults in the power system. The presented theoretical analysis is simple and easy to understand and especially highlights the accuracy of the expression. Finally, comparative simulations confirm this analysis and show that the expression for the rotor currents is sufficient to calculate the maximum fault current and the DC and AC components; it especially helps in understanding the causes of the problem and, as a result, contributes to the adoption of reasonable approaches to enhance the fault ride-through (FRT) capability of DFIG wind turbines during a voltage fault.

  10. Delineation of Urban Active Faults Using Multi-scale Gravity Analysis in Shenzhen, South China

    Science.gov (United States)

    Xu, C.; Liu, X.

    2015-12-01

Many cities in the world are in fact built on active faults. With rapid urban development, thousands of large facilities, such as ultrahigh buildings, supersized bridges and railways, are constructed near or on these faults, which may disturb the balance of the faults and induce urban earthquakes. It is therefore important to delineate the faults effectively for urban planning, construction and sustainable social development. Because of the dense building coverage in urban areas, ordinary approaches to identifying active faults, such as geological survey, artificial seismic exploration and electromagnetic exploration, are not convenient to carry out. Gravity, reflecting the mass distribution of the Earth's interior, provides a more efficient and convenient method for delineating urban faults. The present study proposes a novel gravity method, multi-scale gravity analysis, for identifying urban active faults and determining their stability. First, the gravity anomalies are decomposed by wavelet multi-scale analysis. Second, based on the decomposed gravity anomalies, the crust is layered and the multilayer horizontal tectonic stress is inverted. Last, the decomposed anomalies and the inverted horizontal tectonic stress are used to infer the distribution and stability of the main active faults. To validate the method, a case study on active faults in Shenzhen City is presented. The results show that the distributions of the decomposed gravity anomalies and the multilayer horizontal tectonic stress are significantly controlled by the strike of the main faults and can be used to infer the depths of the faults, which in Shenzhen may range from 4 km to 20 km. Each layer of the crust is nearly at equal pressure, since the horizontal tectonic stress has a small amplitude; this indicates that the main faults in Shenzhen are relatively stable and have no serious impact on the planning and construction of the city.

  11. Geological analysis of paleozoic large-scale faulting in the south-central Pyrenees

    OpenAIRE

    Speksnijder, A.

    1986-01-01

Detailed structural and sedimentological analysis reveals the existence of an east-west directed fundamental fault zone in the south-central Pyrenees, which has been intermittently active from (at least) the Devonian on. Emphasis is laid on the study of fault-bounded post-Variscan (Stephano-Permian) sedimentary basins, and the influence of Late Paleozoic faulting on the underlying Variscan basement. The present structure of the basement is rather complex as it results from multiple Variscan an...

  12. Electromagnetic Transient Response Analysis of DFIG under Cascading Grid Faults Considering Phase Angel Jumps

    DEFF Research Database (Denmark)

    Wang, Yun; Wu, Qiuwei

    2014-01-01

This paper analyzes the electromagnetic transient response characteristics of the DFIG under symmetrical and asymmetrical cascading grid fault conditions, taking the phase angle jump of the grid into account. By deriving the dynamic equations of the DFIG under multiple constraints on balanced and unbalanced...... conditions, phase angle jumps, the interval of the cascading fault and the electromagnetic transient characteristics, the principle of the DFIG response under cascading voltage faults can be extracted. The influence of the grid phase angle jump on the transient characteristics of the DFIG is analyzed, and the electromagnetic response......

  13. Locality-Driven Parallel Static Analysis for Power Delivery Networks

    KAUST Repository

    Zeng, Zhiyu

    2011-06-01

Large VLSI on-chip Power Delivery Networks (PDNs) are challenging to analyze due to the sheer network complexity. In this article, a novel parallel partitioning-based PDN analysis approach is presented. We use the boundary circuit responses of each partition to divide the full grid simulation problem into a set of independent subgrid simulation problems. Instead of solving exact boundary circuit responses, a more efficient scheme is proposed to provide near-exact approximation to the boundary circuit responses by exploiting the spatial locality of the flip-chip-type power grids. This scheme is also used in a block-based iterative error reduction process to achieve fast convergence. Detailed computational cost analysis and performance modeling are carried out to determine the optimal (or near-optimal) number of partitions for parallel implementation. Through the analysis of several large power grids, the proposed approach is shown to have excellent parallel efficiency, fast convergence, and favorable scalability. Our approach can solve a 16-million-node power grid in 18 seconds on an IBM p5-575 processing node with 16 Power5+ processors, which is 18.8X faster than a state-of-the-art direct solver. © 2011 ACM.

  14. Automated Bearing Fault Diagnosis Using 2D Analysis of Vibration Acceleration Signals under Variable Speed Conditions

    Directory of Open Access Journals (Sweden)

    Sheraz Ali Khan

    2016-01-01

Traditional fault diagnosis methods of bearings detect characteristic defect frequencies in the envelope power spectrum of the vibration signal. These defect frequencies depend upon the inherently nonstationary shaft speed. Time-frequency and subband signal analysis of vibration signals has been used to deal with random variations in speed, whereas design variations require retraining a new instance of the classifier for each operating speed. This paper presents an automated approach for fault diagnosis in bearings based upon the 2D analysis of vibration acceleration signals under variable speed conditions. Images created from the vibration signals exhibit unique textures for each fault, which show minimal variation with shaft speed. Microtexture analysis of these images is used to generate distinctive fault signatures for each fault type, which can be used to detect those faults at different speeds. A k-nearest neighbor classifier trained using fault signatures generated for one operating speed is used to detect faults at all the other operating speeds. The proposed approach is tested on the bearing fault dataset of Case Western Reserve University, and the results are compared with those of a spectrum imaging-based approach.
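The classification stage described above is a plain k-nearest-neighbour vote over fault-signature vectors. The sketch below illustrates that step with invented 2-D feature vectors and fault labels (the real signatures are high-dimensional microtexture features, and the class names here are hypothetical):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour vote.  train is a list of
    (feature_vector, label) pairs; query is a feature vector."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# toy fault signatures: hypothetical 2-D texture features per fault class
train = [
    ([0.10, 0.20], 'inner_race'), ([0.15, 0.25], 'inner_race'),
    ([0.80, 0.90], 'outer_race'), ([0.85, 0.80], 'outer_race'),
    ([0.50, 0.10], 'ball'),       ([0.55, 0.15], 'ball'),
]
label = knn_classify(train, [0.82, 0.85])
```

Because the textures vary little with shaft speed, a classifier trained at one speed can, per the abstract, be reused at the others without retraining.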

  15. BACFIRE, Minimal Cut Sets Common Cause Failure Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

1 - Description of problem or function: BACFIRE, designed to aid in common cause failure analysis, searches among the basic events of a minimal cut set of the system logic model for common potential causes of failure. A potential cause of failure is called a qualitative failure characteristic. The algorithm searches the qualitative failure characteristics (that are part of the program input) of the basic events contained in a set to find those characteristics common to all basic events. This search is repeated for all cut sets input to the program. Common cause failure analysis is thereby performed without inclusion of secondary failures in the system logic model. By using BACFIRE, a common cause failure analysis can be added to an existing system safety and reliability analysis. 2 - Method of solution: BACFIRE searches the qualitative failure characteristics of the basic events contained in the fault tree minimal cut sets to find those characteristics common to all basic events by either of two criteria. The first criterion is met if all the basic events in a minimal cut set are associated by a condition which alone may increase the probability of multiple component malfunction. The second criterion is met if all the basic events in a minimal cut set are susceptible to the same secondary failure cause and are located in the same domain for that cause of secondary failure. 3 - Restrictions on the complexity of the problem - Maxima of: 1001 secondary failure maps, 101 basic events, 10 cut sets.
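The core of the first search criterion is a set intersection over each minimal cut set. The sketch below illustrates it (event names and characteristics are invented; this is not BACFIRE's code): intersect the qualitative failure characteristics of all basic events in a cut set, and flag the cut set if the intersection is non-empty.

```python
def common_cause_candidates(cut_sets, characteristics):
    """For each minimal cut set, intersect the qualitative failure
    characteristics of its basic events; a non-empty intersection
    flags a potential common cause."""
    flagged = {}
    for cs in cut_sets:
        common = set.intersection(*(characteristics[e] for e in cs))
        if common:
            flagged[frozenset(cs)] = common
    return flagged

# hypothetical input: basic events and their failure characteristics
chars = {
    'pump_a': {'room_101', 'humidity'},
    'pump_b': {'room_101', 'vibration'},
    'valve':  {'room_202'},
}
hits = common_cause_candidates(
    [{'pump_a', 'pump_b'}, {'pump_a', 'valve'}], chars)
```

Here the cut set {pump_a, pump_b} is flagged because both events share the location characteristic, while {pump_a, valve} has no common characteristic and is passed over.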

  16. Collection and analysis of existing information on applicability of investigation methods for estimation of beginning age of faulting in present faulting pattern

    International Nuclear Information System (INIS)

    Doke, Ryosuke; Yasue, Ken-ichi; Tanikawa, Shin-ichi; Nakayasu, Akio; Niizato, Tadafumi; Tanaka, Takenobu; Aoki, Michinori; Sekiya, Ayako

    2011-12-01

In the field of R and D programs for the geological disposal of high-level radioactive waste, it is of great importance to develop a set of investigation and analysis techniques for the assessment of long-term geosphere stability over geological time, meaning that changes of the geological environment will not significantly impact the long-term safety of a geological disposal system. In the Japanese archipelago, crustal movements are so active that uplift and subsidence have been remarkable over the last several hundred thousand years. Therefore, it is necessary to assess long-term geosphere stability taking into account topographic change caused by crustal movements. One of the factors in topographic change is the movement of active faults, a geological process that releases strain accumulated by plate motion. The age at which faulting began in the present faulting pattern suggests the onset of neotectonic activity around the active fault, and also provides basic information for identifying the stage of geomorphic development of mountains. This age is therefore important information for estimating future topographic change in the mountain regions of Japan. In this study, existing information related to methods for estimating the beginning age of faulting in the present faulting pattern of active faults was collected and reviewed. The principle of each method, points of attention and technical know-how in its application, data uncertainty, and so on were extracted from the existing information. Based on this extracted information, task-flows indicating the working process for estimating the beginning age of faulting were illustrated for each method. Additionally, a distribution map of the beginning age of faulting in the present faulting pattern, with its accuracy, was compiled for active faults. (author)

  17. Empirical analysis of change metrics for software fault prediction

    NARCIS (Netherlands)

    Choudhary, Garvit Rajesh; Kumar, Sandeep; Kumar, Kuldeep; Mishra, Alok; Catal, Cagatay

    2018-01-01

    A quality assurance activity, known as software fault prediction, can reduce development costs and improve software quality. The objective of this study is to investigate change metrics in conjunction with code metrics to improve the performance of fault prediction models. Experimental studies are

  18. Stochastic Fault Analysis of Balanced Systems | Ekwue | Nigerian ...

    African Journals Online (AJOL)

    A sequence coordinates approach for fault calculations is extended to take into account the uncertainty of the network input data. The probability of a fault current on a bus exceeding its short circuit current is determined. These results would be of importance in determining the protective philosophy of any network.

  19. Flow meter fault isolation in building central chilling systems using wavelet analysis

    International Nuclear Information System (INIS)

    Chen Youming; Hao Xiaoli; Zhang Guoqiang; Wang Shengwei

    2006-01-01

This paper presents an approach to isolating flow meter faults in building central chilling systems. It mathematically explains the fault collinearity among the flow meters in central chilling systems and points out that the sensor validation index (SVI) used in principal component analysis (PCA) is incapable of isolating flow meter faults because of this collinearity. The wavelet transform is used to isolate the flow meter faults as a substitute for the SVI of PCA. The approach can identify various variations in measured signals, such as ramps, steps and discontinuities, owing to the good local time-frequency properties of the wavelet. Examples are given to demonstrate its fault isolation ability for flow meters.
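A first-level Haar detail coefficient illustrates why wavelets localize the step and discontinuity faults mentioned above: a pairwise scaled difference is zero on smooth stretches and spikes at an abrupt change. This is a generic sketch with an invented flow reading, not the paper's wavelet scheme:

```python
def haar_detail(signal):
    """First-level Haar wavelet detail coefficients: pairwise scaled
    differences.  A step in the signal produces a single
    large-magnitude coefficient at the step location."""
    return [(signal[2 * i] - signal[2 * i + 1]) / 2.0
            for i in range(len(signal) // 2)]

# hypothetical flow reading with an abrupt (step) fault at sample 41
reading = [10.0] * 41 + [12.5] * 39
detail = haar_detail(reading)
step_at = max(range(len(detail)), key=lambda i: abs(detail[i]))
```

Here every coefficient is zero except the one straddling the step, so the fault's time of onset is read directly off the coefficient index, which is exactly the local time-frequency property the SVI of PCA lacks.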

  20. Degradation Assessment and Fault Diagnosis for Roller Bearing Based on AR Model and Fuzzy Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2011-01-01

    Full Text Available This paper proposes a new approach combining an autoregressive (AR) model and fuzzy cluster analysis for bearing fault diagnosis and degradation assessment. The AR model is an effective approach to extracting fault features and is generally applied to stationary signals. However, the fault vibration signals of a roller bearing are non-stationary and non-Gaussian. To address this problem, the parameters of the AR model are estimated based on higher-order cumulants. The AR parameters are then taken as feature vectors, and fuzzy cluster analysis is applied to perform classification and pattern recognition. Experimental results show that the proposed method can identify various fault types and severities of roller bearings. This study is significant for non-stationary and non-Gaussian signal analysis, fault diagnosis and degradation assessment.
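The feature-extraction step can be sketched as follows: fit an AR model to a vibration record and use its coefficient vector as the diagnostic feature. For brevity this sketch uses ordinary least squares on Gaussian-driven synthetic signals rather than the paper's higher-order-cumulant estimator, and it compares feature vectors directly instead of running fuzzy c-means; the AR orders and coefficients are illustrative.

```python
import numpy as np

def ar_fit(x, order):
    """Least-squares AR fit: x[t] ~ a[0]*x[t-1] + ... + a[order-1]*x[t-order]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.column_stack([x[order - k : n - k] for k in range(1, order + 1)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)

def simulate_ar(a, n=4000):
    """Drive an AR process with unit-variance Gaussian noise."""
    x = np.zeros(n)
    for t in range(len(a), n):
        x[t] = sum(ak * x[t - k - 1] for k, ak in enumerate(a)) + rng.normal()
    return x

healthy = simulate_ar([0.6, -0.3])
faulty = simulate_ar([1.2, -0.5])   # shifted dynamics stand in for a damaged bearing

f_healthy = ar_fit(healthy, 2)      # feature vectors for clustering
f_faulty = ar_fit(faulty, 2)
```

The two feature vectors are well separated in coefficient space, which is what allows a fuzzy clustering step to assign new records to "healthy" or "faulty" clusters with graded memberships.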

  1. Plotting and analysis of fault trees in safety evaluation of nuclear power plants

    International Nuclear Information System (INIS)

    Wild, A.

    1979-12-01

    Fault tree analysis is a useful tool for determining the safety and reliability of nuclear power plants. The main strength of the fault tree method, its ability to detect cross-links between systems, can be exploited only if fault trees are constructed for complete nuclear generating stations. Such trees are large and have to be handled by computers. A system is described for handling fault trees using small computers such as the HP-1000 with a disc drive, graphics terminal and x-y plotter.

  2. Data-Parallel Mesh Connected Components Labeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
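The serial core of the approach above is the Union-find (disjoint-set) structure; the paper's contribution is applying it in stages across processors, but a single-process sketch shows the labeling step. The mesh below is a made-up six-cell example; in the distributed setting the per-processor label tables would additionally be merged across ranks.

```python
class UnionFind:
    """Union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return
        if self.rank[ri] < self.rank[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri
        if self.rank[ri] == self.rank[rj]:
            self.rank[ri] += 1

def label_components(num_cells, adjacency):
    """Assign each cell a compact label identifying its connected sub-mesh."""
    uf = UnionFind(num_cells)
    for a, b in adjacency:
        uf.union(a, b)
    roots, labels = {}, []
    for c in range(num_cells):
        r = uf.find(c)
        labels.append(roots.setdefault(r, len(roots)))
    return labels

# Six cells forming two disconnected sub-meshes: {0, 1, 2} and {3, 4, 5}.
labels = label_components(6, [(0, 1), (1, 2), (3, 4), (4, 5)])
```

Because connectivity is transitive, cells 0 and 2 end up with the same label even though they are never directly adjacent, which is exactly the property that makes the distributed merge stage necessary when a sub-mesh spans many processors.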

  3. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express equivalent logic for system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping an MFM to a fault tree was put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)
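The qualitative analysis the record refers to amounts to computing the minimal cut sets of the resulting fault tree: the smallest combinations of basic events that cause the top event. A minimal sketch, with a toy tree only loosely inspired by a safety-injection system (the event names are hypothetical, not taken from the paper):

```python
from itertools import product

def cut_sets(gate):
    """Cut sets of a fault tree given as nested tuples.

    A node is either a basic-event string, ("OR", child, ...) or ("AND", child, ...).
    """
    if isinstance(gate, str):
        return [frozenset([gate])]
    op, *children = gate
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":
        return [s for sets in child_sets for s in sets]
    # AND: every combination of one cut set per child, merged
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(gate):
    sets_ = cut_sets(gate)
    return {s for s in sets_ if not any(t < s for t in sets_)}

# Toy model: top event occurs if both redundant pumps fail, or power is lost.
tree = ("OR", ("AND", "pump_A_fails", "pump_B_fails"), "power_loss")
mcs = minimal_cut_sets(tree)
```

The single-event cut set (power loss) identifies the dominant vulnerability, which is the kind of qualitative conclusion an MFM-derived fault tree supports without any probability data.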

  4. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  5. Model Based Fault Diagnosis in a Centrifugal Pump Application using Structural Analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  6. Automatic mechanical fault assessment of small wind energy systems in microgrids using electric signature analysis

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Marhadi, Kun Saptohartyadi; Jensen, Bogi Bech

    2013-01-01

    of islanded operation. In this paper, the fault assessment is achieved efficiently and consistently via electric signature analysis (ESA). In ESA the fault related frequency components are manifested as sidebands of the existing current and voltage time harmonics. The energy content between the fundamental, 5...

  7. Geological analysis of paleozoic large-scale faulting in the south-central Pyrenees

    NARCIS (Netherlands)

    Speksnijder, A.

    1986-01-01

    Detailed structural and sedimentological analysis reveals the existence of an east-west directed fundamental fault zone in the south-central Pyrenees, which has been intermittently active from (at least) the Devonian on. Emphasis is laid on the study of fault-bounded post-Variscan

  8. Geological analysis of paleozoic large-scale faulting in the south-central Pyrenees

    NARCIS (Netherlands)

    Speksnijder, A.

    1986-01-01

    Detailed structural and sedimentological analysis reveals the existence of an east-west directed fundamental fault zone in the south-central Pyrenees, which has been intermittently active from (at least) the Devonian on. Emphasis is laid on the study of fault-bounded post-Variscan (StephanoPermian)

  9. Logical Specification and Analysis of Fault Tolerant Systems through Partial Model Checking

    NARCIS (Netherlands)

    Gnesi, S.; Etalle, Sandro; Mukhopadhyay, S.; Lenzini, Gabriele; Lenzini, G.; Martinelli, F.; Roychoudhury, A.

    2003-01-01

    This paper presents a framework for a logical characterisation of fault tolerance and its formal analysis based on partial model checking techniques. The framework requires a fault tolerant system to be modelled using a formal calculus, here the CCS process algebra. To this aim we propose a uniform

  10. Parallelization of the Physical-Space Statistical Analysis System (PSAS)

    Science.gov (United States)

    Larson, J. W.; Guo, J.; Lyster, P. M.

    1999-01-01

    Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. The problem of computational
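The innovation equation described above is solved with a preconditioned conjugate gradient method. A generic Jacobi-preconditioned CG sketch on a small random SPD system is shown below; this is not the PSAS factored-operator formulation, just the solver pattern it builds on, and the test matrix is invented.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradient for SPD A with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r               # apply preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD system standing in for the innovation equation's matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)          # shift guarantees positive definiteness
b = rng.standard_normal(50)
x = preconditioned_cg(A, b, 1.0 / np.diag(A))
```

In PSAS the matrix-vector product `A @ p` would be replaced by the successive application of sparse factored operators, which is what makes the iteration affordable on the large observation grid.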

  11. Large-coil-test-facility fault-tree analysis

    International Nuclear Information System (INIS)

    1982-01-01

    An operating-safety study is being conducted for the Large Coil Test Facility (LCTF). The purpose of this study is to provide the facility operators and users with added insight into potential problem areas that could affect the safety of personnel or the availability of equipment. This is a preliminary report on Phase I of that study. A central feature of the study is the incorporation of engineering judgements (by LCTF personnel) into an outside, overall view of the facility. The LCTF was analyzed in terms of 32 subsystems, each of which is subject to failure from any of 15 generic failure initiators. The study identified approximately 40 primary areas of concern, which were subjected to a computer analysis as an aid in understanding the complex subsystem interactions that can occur within the facility. The study did not analyze in detail the internal structure of the subsystems at the individual component level. A companion study using traditional fault tree techniques did analyze approximately 20% of the LCTF at the component level. A comparison between these two analysis techniques is included in Section 7.

  12. Public transport risk assessment through fault tree analysis

    Directory of Open Access Journals (Sweden)

    Z. Yaghoubpour

    2016-04-01

    Full Text Available This study focused on public transport risk assessment in District 1 of Tehran through fault tree analysis involving the three criteria of the Haddon matrix: human, vehicle and road. In fact, it examined the factors contributing to the occurrence of road accidents at several urban black spots within District 1. Relying on road safety checklists and a survey of experts, this study made an effort to help urban managers assess the risks in public transport and prevent road accidents. Finally, the risk identification and assessment of public transport in District 1 yielded several results answering the research questions. The hypothesis analysis suggested that urban managers are concerned with the safety issues involved in public transport. The key reactive measures are the investigation of accidents, identification of causes and correction of black spots. In addition to high costs, however, reactive measures give rise to multiple operational problems, such as traffic navigation and guaranteeing user safety during every operation. The case study highlighted the same fact. Macro-level management in the metropolis of Tehran is critical. Urban road casualties and losses can be curtailed by preventive measures such as continuous assessment of road safety.

  13. Fault Analysis for Protection Purposes in Maritime Applications

    DEFF Research Database (Denmark)

    Ciontea, Catalin-Iosif; Bak, Claus Leth; Blaabjerg, Frede

    2016-01-01

    in different locations of the network. The fault current is measured using the current transformers that are already present in the system as part of a time-inverse overcurrent protection. Simulation results show that the symmetrical components of the currents seen by these current transformers can be used...... to detect the electric fault. The method also provides improved fault detection over conventional overcurrent relays in some situations. All results are obtained using MATLAB/Simulink and are briefly discussed in this paper....

  14. Analysis of series resonant converter with series-parallel connection

    Science.gov (United States)

    Lin, Bor-Ren; Huang, Chien-Lan

    2011-02-01

    In this study, a parallel inductor-inductor-capacitor (LLC) resonant converter series-connected on the primary side and parallel-connected on the secondary side is presented for server power supply systems. Based on series resonant behaviour, the power metal-oxide-semiconductor field-effect transistors are turned on at zero voltage switching and the rectifier diodes are turned off at zero current switching. Thus, the switching losses on the power semiconductors are reduced. In the proposed converter, the primary windings of the two LLC converters are connected in series. Thus, the two converters have the same primary currents to ensure that they can supply the balance load current. On the output side, two LLC converters are connected in parallel to share the load current and to reduce the current stress on the secondary windings and the rectifier diodes. In this article, the principle of operation, steady-state analysis and design considerations of the proposed converter are provided and discussed. Experiments with a laboratory prototype with a 24 V/21 A output for server power supply were performed to verify the effectiveness of the proposed converter.
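The series resonant behaviour that enables the zero-voltage and zero-current switching above is set by the resonant tank: the series resonant frequency is f_r = 1 / (2*pi*sqrt(L_r*C_r)), and the converter is operated near it. A small sketch with assumed tank values (not the 24 V/21 A prototype's actual components):

```python
import math

def series_resonant_frequency(L, C):
    """Series resonant frequency f_r = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative (assumed) resonant-tank values:
Lr = 60e-6    # resonant inductance, H
Cr = 24e-9    # resonant capacitance, F
fr = series_resonant_frequency(Lr, Cr)
```

Operating the switching frequency slightly below f_r keeps the tank current lagging, which is what lets the MOSFETs turn on at zero voltage and the rectifier diodes turn off at zero current.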

  15. Block-Parallel Data Analysis with DIY2

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Peterka, Tom [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-30

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.

  16. Microseismic Analysis of Fracture of an Intact Rock Asperity Traversing a Sawcut Fault

    Science.gov (United States)

    Mclaskey, G.; Lockner, D. A.

    2017-12-01

    Microseismic events carry information related to stress state, fault geometry, and other subsurface properties, but their relationship to large and potentially damaging earthquakes is not well defined. We conducted laboratory rock mechanics experiments that highlight the interaction between a sawcut fault and an asperity composed of an intact rock "pin". The sample is a 76 mm diameter cylinder of Westerly granite with a 21 mm diameter cylinder (the pin) of intact Westerly granite that crosses the sawcut fault. Upon loading to 80 MPa in a triaxial machine, we first observed a slip event that ruptured the sawcut fault, slipped about 35 mm, but was halted by the rock pin. With continued loading, the rock pin failed in a swarm of thousands of M -7 seismic events similar to the localized microcracking that occurs during the final fracture nucleation phase in an intact rock sample. Once the pin was fractured to a critical point, it permitted complete rupture events on the sawcut fault (stick-slip instabilities). No seismicity was detected on the sawcut fault plane until the pin was sheared. Subsequent slip events were preceded by tens of foreshocks, all located on the fault plane. We also identified an aseismic zone on the fault plane surrounding the fractured rock pin. A post-mortem analysis of the sample showed a thick gouge layer where the pin intersected the fault, suggesting that this gouge propped open the fault and prevented microseismic events in its vicinity. This experiment is an excellent case study in microseismicity since the events separate neatly into three categories: slip on the sawcut fault, fracture of the intact rock pin, and off-fault seismicity associated with pin-related rock joints. The distinct locations, timing, and focal mechanisms of the different categories of microseismic events allow us to study how their occurrence is related to the mechanics of the deforming rock.

  17. Seismic fragility analysis of a CANDU containment structure for near-fault ground motions

    International Nuclear Information System (INIS)

    Choi, In Kil; Choun, Young Sun; Seo, Jeong Moon; Ahn, Seong Moon

    2005-01-01

    The R. G. 1.60 spectrum used for the seismic design of Korean nuclear power plants provides a generally conservative design basis due to its broadband nature. A survey of some Quaternary fault segments near Korean nuclear power plants is ongoing, and it is likely that these faults will be identified as active. If the faults are confirmed to be active, it will be necessary to reevaluate the seismic safety of the nuclear power plants located near them. The probability-based scenario earthquakes were identified as near-field earthquakes. In general, near-fault ground motion records exhibit a distinctive long-period, pulse-like time history with very high peak velocities. These features are induced by the slip of the earthquake fault. Near-fault ground motions, which have caused much of the damage in recent major earthquakes, can be characterized by a pulse-like motion that exposes the structure to a high input energy at the beginning of the motion. It is necessary to estimate the effects of near-fault ground motions on the nuclear power plant structures and components located near the faults. In this study, the seismic fragility analysis of a CANDU containment structure was performed based on the results of nonlinear dynamic time-history analyses.
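Seismic fragility analyses of this kind conventionally summarize the time-history results as a lognormal fragility curve: the conditional failure probability at ground-motion level a is Phi(ln(a / A_m) / beta), with median capacity A_m and composite logarithmic standard deviation beta. A minimal sketch with illustrative (assumed) parameters, not the CANDU containment's actual fragility values:

```python
import math

def fragility(pga, median_capacity, beta):
    """Lognormal seismic fragility: P(failure | ground motion level = pga)."""
    if pga <= 0:
        return 0.0
    z = math.log(pga / median_capacity) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Illustrative (assumed) parameters for a containment-like structure:
Am, beta = 0.9, 0.4        # median capacity 0.9 g, composite log-std 0.4
p_median = fragility(0.9, Am, beta)   # 0.5 at the median by construction
```

By construction the curve passes through 0.5 at the median capacity; the spread beta captures both randomness and modeling uncertainty, which near-fault pulse-like motions tend to widen.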

  18. Structural Load Analysis of a Wind Turbine under Pitch Actuator and Controller Faults

    International Nuclear Information System (INIS)

    Etemaddar, Mahmoud; Gao, Zhen; Moan, Torgeir

    2014-01-01

    In this paper, we investigate the characteristics of a wind turbine under blade pitch angle and shaft speed sensor faults as well as pitch actuator faults. A land-based NREL 5MW variable-speed pitch-regulated wind turbine is considered as a reference. The conventional collective blade pitch angle controller strategy with independent pitch actuator control is used for load reduction. The wind turbine class is IEC-BII. The main purpose is to investigate the severity of the fault effects on structural loads and responses and consequently identify the high-risk components according to the type and amplitude of the fault, using a servo-aero-elastic simulation code, HAWC2. Both transient and steady-state effects of faults are studied. Such information is useful for wind turbine fault detection and identification as well as system reliability analysis. Results show the effects of faults on wind turbine power output and responses. Pitch sensor faults mainly affect the vibration of the shaft main bearing, while generator power and aerodynamic thrust are not changed significantly, due to the independent pitch actuator control of the three blades. Shaft speed sensor faults can seriously affect the generator power and aerodynamic thrust. Pitch actuator faults can result in full pitching of the blade, and consequently the rotor stops due to negative aerodynamic torque.

  19. Incipient Fault Detection and Isolation of Field Devices in Nuclear Power Systems Using Principal Component Analysis

    International Nuclear Information System (INIS)

    Kaistha, Nitin; Upadhyaya, Belle R.

    2001-01-01

    An integrated method for the detection and isolation of incipient faults in common field devices, such as sensors and actuators, using plant operational data is presented. The approach is based on the premise that data for normal operation lie on a surface and abnormal situations lead to deviations from the surface in a particular way. Statistically significant deviations from the surface result in the detection of faults, and the characteristic directions of deviations are used for isolation of one or more faults from the set of typical faults. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and fit a hyperplane to the data. The fault direction for each of the scenarios is obtained using the singular value decomposition on the state and control function prediction errors, and fault isolation is then accomplished from projections on the fault directions. This approach is demonstrated for a simulated pressurized water reactor steam generator system and for a laboratory process control system under single device fault conditions. Enhanced fault isolation capability is also illustrated by incorporating realistic nonlinear terms in the PCA data matrix
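The detection half of the scheme above can be sketched concretely: fit a PCA hyperplane to normal-operation data, then flag samples whose squared prediction error (the residual off the hyperplane, often called the Q statistic) is statistically significant. The two-sensor system below is a toy stand-in for the steam generator data; the fault-direction isolation step via singular value decomposition is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal operation: two correlated "sensors" driven by one latent plant variable.
latent = rng.standard_normal(500)
normal = np.column_stack([latent, 2.0 * latent]) + 0.05 * rng.standard_normal((500, 2))

# Fit a one-component PCA model of normal behaviour.
mean = normal.mean(axis=0)
X = normal - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:1].T                       # retained principal direction (loadings)

def spe(sample):
    """Squared prediction error (Q statistic): residual off the PCA hyperplane."""
    x = sample - mean
    return float(x @ x - (x @ P) @ (P.T @ x))

# Detection threshold from the empirical distribution of normal-data SPE.
threshold = np.quantile([spe(s) for s in normal], 0.99)

healthy_sample = np.array([1.0, 2.0])   # respects the learned correlation
faulty_sample = np.array([1.0, 0.0])    # sensor 2 stuck: breaks the correlation
```

A stuck or drifting sensor violates the correlation structure learned from normal data, so its SPE jumps above the threshold even though each individual reading is within range; projecting the residual onto precomputed fault directions would then isolate which device failed.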

  20. Using Earthquake Analysis to Expand the Oklahoma Fault Database

    Science.gov (United States)

    Chang, J. C.; Evans, S. C.; Walter, J. I.

    2017-12-01

    The Oklahoma Geological Survey (OGS) is compiling a comprehensive Oklahoma Fault Database (OFD), which includes faults mapped in OGS publications, university thesis maps, and industry-contributed shapefiles. The OFD includes nearly 20,000 fault segments, but the work is far from complete. The OGS plans on incorporating other sources of data into the OFD, such as new faults from earthquake sequence analyses, geologic field mapping, active-source seismic surveys, and potential fields modeling. A comparison of Oklahoma seismicity and the OFD reveals that earthquakes in the state appear to nucleate on mostly unmapped or unknown faults. Here, we present faults derived from earthquake sequence analyses. From 2015 to present, there has been a five-fold increase in real-time seismic stations in Oklahoma, which has greatly expanded and densified the state's seismic network. The current seismic network not only improves our threshold for locating weaker earthquakes, but also allows us to better constrain focal plane solutions (FPS) from first motion analyses. Using nodal planes from the FPS, HypoDD relocation, and historic seismic data, we can elucidate these previously unmapped seismogenic faults. As the OFD is a primary resource for various scientific investigations, the inclusion of seismogenic faults improves further derivative studies, particularly with respect to seismic hazards. Our primary focus is on four areas of interest, which have had M5+ earthquakes in recent Oklahoma history: Pawnee (M5.8), Prague (M5.7), Fairview (M5.1), and Cushing (M5.0). Subsequent areas of interest will include seismically active data-rich areas, such as the central and north-central parts of the state.

  1. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can effectively reduce the running time of the code MCNP. With the MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster with the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with respect to these factors and gives measures to improve MCNP parallel computing performance. (authors)
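One reason problem configuration limits the achievable speedup is captured by Amdahl's law: any fraction of the run that stays serial (e.g. source setup, tally collection) caps the benefit of adding processors. A small sketch, with the 5% serial fraction being an assumed figure for illustration, not a measured MCNP value:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: ideal speedup when a fraction of the run time stays serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# A transport run that is 5% serial (assumed) on 16 processors:
s16 = amdahl_speedup(0.05, 16)    # well below the ideal 16x
```

With a 5% serial fraction the speedup on 16 processors is about 9.1x and can never exceed 20x however many processors are added, which is why reducing the serial portion of a run matters as much as adding nodes.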

  2. Effective damping for SSR analysis of parallel turbine-generators

    International Nuclear Information System (INIS)

    Agrawal, B.L.; Farmer, R.G.

    1988-01-01

    Damping is a dominant parameter in studies to determine SSR problem severity and countermeasure requirements. To reach valid conclusions for multi-unit plants, it is essential that the net effective damping of unequally loaded units be known. For the Palo Verde Nuclear Generating Station, extensive testing and analysis have been performed to verify and develop an accurate means of determining the effective damping of unequally loaded units in parallel. This has led to a unique and simple algorithm which correlates well with two other analytic techniques

  3. A dataflow analysis tool for parallel processing of algorithms

    Science.gov (United States)

    Jones, Robert L., III

    1993-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.
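One of the performance bounds such graph analysis determines is the critical path: the longest weighted path through the dataflow DAG, which no schedule can beat regardless of processor count. A minimal sketch with an invented signal-processing graph (task names and times are illustrative, not from the paper):

```python
from collections import defaultdict

def critical_path_length(tasks, edges):
    """Longest weighted path through a dataflow DAG.

    tasks maps node name -> execution time; edges are (producer, consumer)
    pairs.  The result is a lower bound on the schedule length for any
    number of identical processors.
    """
    succ, preds, indeg = defaultdict(list), defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        preds[v].append(u)
        indeg[v] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    finish = {}
    while ready:                      # Kahn topological traversal
        u = ready.pop()
        finish[u] = tasks[u] + max((finish[p] for p in preds[u]), default=0.0)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

tasks = {"read": 1, "fft": 4, "filter": 2, "mag": 3, "ifft": 4, "write": 1}
edges = [("read", "fft"), ("fft", "filter"), ("fft", "mag"),
         ("filter", "ifft"), ("ifft", "write"), ("mag", "write")]
cp = critical_path_length(tasks, edges)
```

Here the critical path is read-fft-filter-ifft-write at 12 time units; since the total work is 15 units, two processors can do no better than max(12, 15/2) = 12, so adding a third processor buys nothing for this graph.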

  4. Evaluation of Apache Hadoop for parallel data analysis with ROOT

    International Nuclear Information System (INIS)

    Lehrack, S; Duckeck, G; Ebke, J

    2014-01-01

    The Apache Hadoop software is a Java based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives in distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as execution back-end. The focus of the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data quickly and reliably with MapReduce. In the evaluation of the HDFS, read/write data rates from the local Hadoop cluster were measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses were used and event rates were compared to PROOF.

  5. Evaluation of Apache Hadoop for parallel data analysis with ROOT

    Science.gov (United States)

    Lehrack, S.; Duckeck, G.; Ebke, J.

    2014-06-01

    The Apache Hadoop software is a Java based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives in distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as execution back-end. The focus of the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data quickly and reliably with MapReduce. In the evaluation of the HDFS, read/write data rates from the local Hadoop cluster were measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses were used and event rates were compared to PROOF.

  6. Correlation analysis of respiratory signals by using parallel coordinate plots.

    Science.gov (United States)

    Saatci, Esra

    2018-01-01

    The understanding of the bonds and the relationships between the respiratory signals, i.e. the airflow, the mouth pressure, the relative temperature and the relative humidity during breathing, may improve the measurement methods of respiratory mechanics and sensor designs, or open up several possible applications in the analysis of respiratory disorders. Therefore, the main objective of this study was to propose a new combination of methods to determine the relationship between respiratory signals as multidimensional data. In order to reveal the coupling between the processes, two very different methods were used: the well-known statistical correlation analysis (i.e. Pearson's correlation and cross-correlation coefficient) and parallel coordinate plots (PCPs). Curve bundling with the number of intersections for the correlation analysis, a Least Mean Square Time Delay Estimator (LMS-TDE) for point-delay detection, and visual metrics for the recognition of visual structures were proposed and utilized in the PCPs. The number of intersections increased when the correlation coefficient changed from high positive to high negative correlation between the respiratory signals, especially if the whole breath was processed. LMS-TDE coefficients plotted in the PCPs matched the point-delay findings of the correlation analysis well. Visual inspection of the PCPs using visual metrics showed ranges, dispersions, entropy comparisons and linear and sinusoidal-like relationships between the respiratory signals. It is demonstrated that basic correlation analysis together with parallel coordinate plots perceptually motivates the visual metrics in the display and thus can be considered an aid to user analysis by providing meaningful views of the data. Copyright © 2017 Elsevier B.V. All rights reserved.
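The two statistical tools named above can be sketched on synthetic data: Pearson's coefficient measures the strength of the coupling, while the cross-correlation peak recovers the point delay between two signals. The signals below are invented stand-ins for airflow and mouth pressure, and the cross-correlation peak replaces the paper's LMS-TDE as a simpler delay estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Airflow-like reference and a delayed, scaled "mouth pressure" signal.
t = np.arange(2000)
airflow = np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(t.size)
true_delay = 7                                   # samples
pressure = 0.8 * np.roll(airflow, true_delay)    # periodic signal, so roll ~ delay

pearson = np.corrcoef(airflow, pressure)[0, 1]

def estimate_delay(x, y, max_lag=50):
    """Lag maximising the cross-correlation sum of x[t] * y[t + lag]."""
    n = len(x)
    best_lag, best_val = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            val = float(np.dot(x[:n - lag], y[lag:]))
        else:
            val = float(np.dot(x[-lag:], y[:n + lag]))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

estimated = estimate_delay(airflow, pressure)
```

The high Pearson coefficient quantifies the coupling that appears as near-parallel line bundles (few intersections) in a PCP, and the recovered lag is the point delay that the LMS-TDE coefficients visualize.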

  7. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute for easy use of typical parallelized mathematical codes in application problems on distributed parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. It is shown that the performance results for the linear equation routines exhibit good parallelization efficiency on vector as well as scalar parallel computers. A comparison of the efficiency results with those of the PETSc (Portable, Extensible Toolkit for Scientific Computation) library is reported. (author)

  8. Parallel factor analysis PARAFAC of process affected water

    Energy Technology Data Exchange (ETDEWEB)

    Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids. Research has indicated that oil sands NA do not fit classical definitions of NA. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence technology, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, PARAFAC was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed to obtain fluorescence excitation-emission matrices (EEMs). The EEMs were then arranged into a large matrix, in decreasing order of process-affected water content, for PARAFAC. The data were decomposed into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA are fundamentally different. Further research is needed to determine what each of the 5 components represents. tabs., figs.
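    The multi-way decomposition itself can be sketched in a few lines of NumPy as a minimal alternating-least-squares (CP/PARAFAC) fit; this is a generic illustration on a synthetic rank-3 array, not the authors' implementation or their EEM data:

```python
import numpy as np

# Minimal PARAFAC (CP) decomposition by alternating least squares.
# Dimensions below stand in for "sample x excitation x emission".

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def parafac(T, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):                       # exact LS update per factor
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

def reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(42)
T = reconstruct(*(rng.standard_normal((s, 3)) for s in (6, 8, 10)))
A, B, C = parafac(T, rank=3)
err = np.linalg.norm(T - reconstruct(A, B, C)) / np.linalg.norm(T)
```

    In practice a dedicated library would add normalization, non-negativity constraints and convergence checks, which matter for fluorescence data.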

  9. Line-to-Line Fault Analysis and Location in a VSC-Based Low-Voltage DC Distribution Network

    Directory of Open Access Journals (Sweden)

    Shi-Min Xue

    2018-03-01

    Full Text Available A DC cable short-circuit fault is the most severe fault type that occurs in DC distribution networks, with a negative impact on transmission equipment and the stability of system operation. When a short-circuit fault occurs in a DC distribution network based on a voltage source converter (VSC), in-depth analysis and characterization of the fault is of great significance for establishing relay protection, devising fault current limiters and realizing fault location. However, research on short-circuit faults in VSC-based low-voltage DC (LVDC) systems, which differ greatly from high-voltage DC (HVDC) systems, is currently limited. The existing research in this area is not conclusive, and further study is required where findings from HVDC systems do not fit the simulated results or lack thorough theoretical analysis. In this paper, faults are divided into transient-state and steady-state faults, and detailed formulas are provided. A more thorough and practical theoretical analysis with fewer errors can be used to develop protection schemes and short-circuit fault location based on the transient- and steady-state analytic formulas. Compared to classical methods, the fault analyses in this paper provide more accurate computed fault currents, so the fault location method can rapidly evaluate the distance between the fault and the converter. An analysis of error growth and an improved handshaking method that coordinates with the proposed location method are also presented.
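    The transient stage of such a fault is classically modeled as the DC-link capacitor discharging through the cable resistance and inductance, i.e. a series RLC circuit. The sketch below evaluates the textbook underdamped discharge current; the circuit values are hypothetical, and the paper's own formulas may refine this model:

```python
import math

# Capacitor-discharge stage of a line-to-line DC fault as a series RLC
# circuit. All parameter values are illustrative, not from the paper.

R, L, C = 0.1, 1.0e-3, 1.0e-2      # ohm, henry, farad (cable + DC link)
V0, I0 = 1500.0, 100.0             # initial capacitor voltage, line current

alpha = R / (2 * L)                # damping factor
w0 = 1 / math.sqrt(L * C)          # undamped natural frequency
wd = math.sqrt(w0**2 - alpha**2)   # damped frequency (underdamped case)

def i_fault(t):
    """Analytic underdamped discharge current i(t), i(0) = I0."""
    k2 = ((V0 - R * I0) / L + alpha * I0) / wd
    return math.exp(-alpha * t) * (I0 * math.cos(wd * t) + k2 * math.sin(wd * t))

peak = max(i_fault(n * 1e-5) for n in range(2000))   # scan the first 20 ms
```

    The peak of this discharge current, reached within milliseconds, is what relay protection and fault current limiters must be designed against.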

  10. Analysis and optimization of fault-tolerant embedded systems with hardened processors

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Polian, Ilia; Pop, Paul

    2009-01-01

    In this paper we propose an approach to the design optimization of fault-tolerant hard real-time embedded systems, which combines hardware and software fault tolerance techniques. We trade off between selective hardening in hardware and process reexecution in software to provide the required levels of fault tolerance against transient faults with the lowest possible system costs. We propose a system failure probability (SFP) analysis that connects the hardening level with the maximum number of reexecutions in software. We present design optimization heuristics to select the fault-tolerant architecture and decide process mapping such that the system cost is minimized, deadlines are satisfied, and the reliability requirements are fulfilled.

  11. PL-MOD: a computer code for modular fault tree analysis and evaluation

    International Nuclear Information System (INIS)

    Olmos, J.; Wolf, L.

    1978-01-01

    The computer code PL-MOD has been developed to implement the modular methodology for fault tree analysis. In the modular approach, fault tree structures are characterized by recursively relating the top tree event to all basic event inputs through a set of equations, each defining an independent modular event for the tree. The advantages of tree modularization are that it is a more compact representation than the minimal cut-set description and that its recursive form is well suited to fault tree quantification. In its present version, PL-MOD modularizes fault trees and evaluates top and intermediate event failure probabilities, as well as basic component and modular event importance measures, in a very efficient way. For example, its execution time for the modularization and quantification of a reduced fault tree for a PWR High Pressure Injection System was 25 times shorter than that needed to generate its equivalent minimal cut-set description using the computer code MOCUS.
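    The recursive evaluation idea behind the modular approach can be illustrated in miniature: with independent events, each gate's probability follows from its children, so the top event is computed in one bottom-up pass. The tree and probabilities below are invented; PL-MOD itself handles much more (modularization, importance measures):

```python
# Toy recursive fault tree quantification over independent events.
# Basic events are floats (failure probabilities); gates are
# ("AND"|"OR", [children]).

def prob(node):
    """Evaluate the failure probability of a fault tree bottom-up."""
    if isinstance(node, float):          # basic event
        return node
    gate, children = node
    ps = [prob(c) for c in children]
    if gate == "AND":                    # all children must fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    if gate == "OR":                     # at least one child fails
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

# TOP = (pump fails AND valve fails) OR control fails  (hypothetical)
tree = ("OR", [("AND", [0.01, 0.02]), 0.001])
top = prob(tree)    # 1 - (1 - 0.0002) * (1 - 0.001) = 0.0011998
```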

  12. Fault Detection of Reciprocating Compressors using a Model from Principal Component Analysis of Vibrations

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A D

    2012-01-01

    Traditional vibration monitoring techniques have found it difficult to determine a set of effective diagnostic features due to the high complexity of the vibration signals, which originate from many different impact sources and a wide range of practical operating conditions. In this paper Principal Component Analysis (PCA) is used for selecting vibration features and detecting different faults in a reciprocating compressor. Vibration datasets were collected from the compressor under the baseline condition and five common faults: valve leakage, intercooler leakage, suction valve leakage, loose drive belt combined with intercooler leakage, and loose drive belt combined with suction valve leakage. A model using five PCs was developed from the baseline data sets, and the presence of faults can be detected by comparing the T² and Q statistics of the features of fault vibration signals with corresponding thresholds developed from the baseline data. However, the Q-statistic procedure produces better detection, as it can separate the five faults completely.
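    The detection scheme described above can be sketched as follows: fit a PCA model on baseline features, derive T² and Q thresholds from the baseline statistics, and flag samples exceeding them. The data here are synthetic stand-ins for the compressor features, and the thresholds use a simple empirical percentile rather than the usual chi-square/F approximations:

```python
import numpy as np

# PCA-based T^2/Q fault detection on synthetic "baseline" data.

rng = np.random.default_rng(0)
n, d, k = 500, 10, 5                       # samples, features, retained PCs
latent = rng.standard_normal((n, k))
W = rng.standard_normal((k, d))
X = latent @ W + 0.01 * rng.standard_normal((n, d))   # baseline data

mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                               # loadings (d x k)
lam = (s[:k] ** 2) / (n - 1)               # retained PC variances

def t2_q(x):
    """Hotelling T^2 (within-model) and Q (residual) statistics."""
    xc = x - mu
    t = xc @ P                             # scores
    t2 = np.sum(t**2 / lam)
    resid = xc - t @ P.T
    return t2, resid @ resid

base = np.array([t2_q(x) for x in X])
t2_lim, q_lim = np.percentile(base, 99, axis=0)   # empirical 99% thresholds

fault = X[0] + 5.0 * rng.standard_normal(d)       # a perturbed "faulty" sample
t2_f, q_f = t2_q(fault)                            # Q blows past its threshold
```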

  13. Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control

    Science.gov (United States)

    Eshak, Peter B.

    Research efforts have increased in recent years toward the development of intelligent fault tolerant control laws, which are capable of helping the pilot safely maintain aircraft control under post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks (NNs) to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online parameter identification (PID) to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws; this is known as the hybrid adaptive controller. This last design outperformed the two earlier designs in terms of reduced neural network effort and better tracking quality. The performance of the online parameter identification plays an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, the estimation is not perfect, and the online process has some inherent issues: the estimates are primarily affected by delays and biases. To ensure that reliable estimates are passed to the controller, the estimator consumes some time to converge; moreover, it will often converge to a biased value. This thesis conducts a sensitivity analysis of the estimation issues, delay and bias, and their effect on tracking quality. In addition, the performance of the hybrid controller is compared to that of the direct adaptive controller. To serve this purpose, a simulation environment in MATLAB/SIMULINK has been created. The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to

  14. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  15. Task Feasibility Analysis and Dynamic Voltage Scaling in Fault-Tolerant Real-Time Embedded Systems

    National Research Council Canada - National Science Library

    Zhang, Ying; Chakrabarty, Krishnendu

    2004-01-01

    .... DVS is then carried out on the basis of the feasibility analysis. We incorporate practical issues such as faults during checkpointing and state restoration, rollback recovery time, memory access time and energy, and DVS overhead...

  16. Fault stress analysis for the Yucca Mountain site characterization project

    International Nuclear Information System (INIS)

    Bauer, S.J.; Hardy, M.P.; Goodrich, R.; Lin, M.

    1992-01-01

    An understanding of the state of stress on faults is important for pre- and post-closure performance considerations for the potential high-level radioactive waste repository at Yucca Mountain. This paper presents the results of three-dimensional numerical analyses that provide estimates of the state of stress through time (10,000 years) along three major faults in the vicinity of the potential repository due to thermal stresses resulting from waste emplacement. It was found that the safety factor for slip close to the potential repository increases with time after waste emplacement. Possible fault slip is predicted above and below the potential repository for certain loading conditions and times. In general, thermal loading reduces the potential for slip in the vicinity of the potential repository.

  17. Fault stress analysis for the Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Bauer, S.J.; Hardy, M.P.; Goodrich, R.; Lin, M.

    1991-01-01

    An understanding of the state of stress on faults is important for pre- and postclosure performance considerations for the potential high-level radioactive waste repository at Yucca Mountain. This paper presents the results of three-dimensional numerical analyses that provide estimates of the state of stress through time (10,000 years) along three major faults in the vicinity of the potential repository due to thermal stresses resulting from waste emplacement. It was found that the safety factor for slip close to the potential repository increases with time after waste emplacement. Possible fault slip is predicted above and below the potential repository for certain loading conditions and times. In general, thermal loading reduces the potential for slip in the vicinity of the potential repository.

  18. Improvement of testing and maintenance based on fault tree analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2000-01-01

    Testing and maintenance of safety equipment is an important issue, which contributes significantly to the safe and efficient operation of a nuclear power plant. In this paper a method is presented which extends the classical fault tree with time. Its mathematical model is represented by a set of equations that include time requirements defined in the house event matrix. The house event matrix is a representation of house events switched on and off at discrete points in time; the house events switch parts of the fault tree on and off in accordance with the status of the plant configuration. The time-dependent top event probability is calculated by fault tree evaluation. The arrangement of component outages is determined by minimizing mean system unavailability. The results show that application of the method may improve the scheduling of testing and maintenance activities for safety equipment. (author)
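    The unavailability-minimization step can be illustrated with the standard textbook model for a periodically tested standby component, where mean unavailability trades the failure-rate term against the test-downtime term. The numbers and the simplified formula are illustrative, not from the paper:

```python
import math

# Choosing a surveillance test interval T by minimizing mean unavailability.
# Simplified standby-component model (hypothetical parameters):
#   q(T) = lam * T / 2 + tau / T,  minimized at T* = sqrt(2 * tau / lam).

lam = 1.0e-4            # standby failure rate, per hour
tau = 2.0               # downtime per test, hours

def q_mean(T):
    return lam * T / 2 + tau / T

T_star = math.sqrt(2 * tau / lam)            # analytic optimum: 200 h
best_T = min(range(10, 1000), key=q_mean)    # numeric scan agrees
```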

  19. Fault structure analysis by means of large deformation simulator; Daihenkei simulator ni yoru danso kozo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Y.; Shi, B. [Geological Survey of Japan, Tsukuba (Japan); Matsushima, J. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering

    1997-05-27

    Large deformation of the crust is generated by relatively large displacement of the media on both sides of a fault. In the conventional finite element method, faults are dealt with by special elements called joint elements, but joint elements, being microscopically thin, become numerically unstable when a large shear displacement is imposed. Therefore, by introducing the master-slave (MO) method used for contact analysis in the metal processing field, a large-deformation simulator was developed for analyzing diastrophism including large displacement along the fault. Example analyses are shown in which the upper basement and lower basement are displaced relative to each other across the fault. The bottom surface and right end boundary of the lower basement are fixed boundaries. The left end boundary of the lower basement is fixed, and a horizontal speed of 3×10⁻⁷ m/s was applied to the left end boundary of the upper basement. In accordance with the horizontal movement of the upper basement, the boundary surface deformed considerably. Stress is almost at right angles to the boundary surface. The MO method has so far been applied to a single simple fault, but should be extended to many faults in the future. 13 refs., 2 figs.

  20. Reliability analysis of the solar array based on Fault Tree Analysis

    International Nuclear Information System (INIS)

    Wu Jianing; Yan Shaoze

    2011-01-01

    The solar array is an important device used in spacecraft, influencing the quality of in-orbit operation and even the success of the launch. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links of the solar array. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for redesigning the solar array and for reliability growth planning.

  1. Reliability analysis of the solar array based on Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wu Jianing; Yan Shaoze, E-mail: yansz@mail.tsinghua.edu.cn [State Key Laboratory of Tribology, Department of Precision Instruments and Mechanology, Tsinghua University,Beijing 100084 (China)

    2011-07-19

    The solar array is an important device used in spacecraft, influencing the quality of in-orbit operation and even the success of the launch. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links of the solar array. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for redesigning the solar array and for reliability growth planning.
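    Ranking components the way the hinge is singled out above can be sketched with a Birnbaum importance measure, I_B(j) = P(top | p_j = 1) − P(top | p_j = 0). The tree structure and probabilities below are invented for illustration, not the paper's DFH-3 model:

```python
# Birnbaum importance ranking on a toy fault tree.

def prob_top(p):
    """Top event: hinge fails OR (spring fails AND seal fails)."""
    hinge, spring, seal = p
    and_gate = spring * seal
    return 1 - (1 - hinge) * (1 - and_gate)

base = [0.001, 0.01, 0.02]     # hypothetical failure probabilities

def birnbaum(j):
    """I_B(j) = P(top | p_j = 1) - P(top | p_j = 0)."""
    hi = base.copy(); hi[j] = 1.0
    lo = base.copy(); lo[j] = 0.0
    return prob_top(hi) - prob_top(lo)

ranking = sorted(range(3), key=birnbaum, reverse=True)   # hinge ranks first
```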

  2. NuFTA: A CASE Tool for Automatic Software Fault Tree Analysis

    International Nuclear Information System (INIS)

    Yun, Sang Hyun; Lee, Dong Ah; Yoo, Jun Beom

    2010-01-01

    Software fault tree analysis (SFTA) is widely used for analyzing software requiring high reliability. In SFTA, experts predict failures of the system through HAZOP (Hazard and Operability study) or FMEA (Failure Mode and Effects Analysis) and draw software fault trees for the failures. The quality and cost of the software fault trees therefore depend on the knowledge and experience of the experts. This paper proposes a CASE tool, NuFTA, to assist experts in safety analysis. NuFTA automatically generates software fault trees from NuSCR formal requirements specifications. NuSCR is a formal specification language used for specifying the software requirements of the KNICS RPS (Reactor Protection System) in Korea. We used previously proposed SFTA templates to generate the fault trees automatically. NuFTA also generates logical formulae summarizing the causes of failures, and we plan to make use of these formulae through formal verification techniques.

  3. Probabilistic Seismic Hazard Analysis of Victoria, British Columbia, Canada: Considering an Active Leech River Fault

    Science.gov (United States)

    Kukovica, J.; Molnar, S.; Ghofrani, H.

    2017-12-01

    The Leech River fault is situated on Vancouver Island near the city of Victoria, British Columbia, Canada. The 60 km transpressional reverse fault zone runs east to west along the southern tip of Vancouver Island, dividing the lithologic units of the Jurassic-Cretaceous Leech River Complex schists to the north from the Eocene Metchosin Formation basalts to the south. This fault system poses a considerable hazard due to its proximity to Victoria and to 3 major hydroelectric dams. The Canadian seismic hazard model for the 2015 National Building Code of Canada (NBCC) considered the fault system to be inactive. However, recent paleoseismic evidence suggests at least 2 surface-rupturing events exceeding a moment magnitude (M) of 6.5 within the last 15,000 years (Morell et al. 2017). We perform a Probabilistic Seismic Hazard Analysis (PSHA) for the city of Victoria with the Leech River fault considered as an active source. A PSHA for Victoria that replicates the 2015 NBCC estimates is first carried out to calibrate our PSHA procedure, using the same seismic source zones, magnitude recurrence parameters, and Ground Motion Prediction Equations (GMPEs). We replicate the uniform hazard spectrum for a probability of exceedance of 2% in 50 years for a 500 km radial area around Victoria. An active Leech River fault zone is then added, with its known length and dip. We determine magnitude recurrence parameters based on a Gutenberg-Richter relationship for the Leech River fault from various catalogues of the recorded seismicity (M 2-3) within the fault's vicinity and from the proposed paleoseismic events. We seek to understand whether inclusion of an active Leech River fault source significantly increases the probabilistic seismic hazard for Victoria. Morell et al. 2017. Quaternary rupture of a crustal fault beneath Victoria, British Columbia, Canada. GSA Today, 27, doi: 10.1130/GSATG291A.1
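    The magnitude-recurrence step can be sketched as follows: a Gutenberg-Richter law log10 N(≥M) = a − bM gives the annual rate of events at or above a target magnitude, and a Poisson model converts that rate to an exceedance probability over a design life. The a and b values here are hypothetical placeholders, not the parameters being derived for the Leech River fault:

```python
import math

# Gutenberg-Richter recurrence + Poisson exceedance probability.

a, b = 3.0, 1.0          # hypothetical G-R parameters for the source
M, life = 6.5, 50.0      # target magnitude, exposure time in years

rate = 10 ** (a - b * M)                 # events/year with magnitude >= M
p_exceed = 1 - math.exp(-rate * life)    # probability of >=1 event in 50 yr
```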

  4. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...

  5. Fuzzy set theoretic approach to fault tree analysis | Tyagi ...

    African Journals Online (AJOL)

    This approach can be widely used to improve the reliability and to reduce the operating cost of a system. The proposed techniques are discussed and illustrated by taking an example of a nuclear power plant. Keywords: Fault tree, Triangular and Trapezoidal fuzzy number, Fuzzy importance, Ranking of fuzzy numbers ...

  6. Quantitative security and safety analysis with attack-fault trees

    NARCIS (Netherlands)

    Kumar, Rajesh; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

    Cyber physical systems, like power plants, medical devices and data centers have to meet high standards, both in terms of safety (i.e. absence of unintentional failures) and security (i.e. no disruptions due to malicious attacks). This paper presents attack fault trees (AFTs), a formalism that

  7. Microprocessor event analysis in parallel with Camac data acquisition

    International Nuclear Information System (INIS)

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a Camac System (GEC-ELLIOTT System Crate) and shares the Camac access with a Nord-10S computer. Interfaces have been designed and tested for execution of Camac cycles, communication with the Nord-10S computer and DMA-transfer from Camac to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the result of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  8. Microprocessor event analysis in parallel with CAMAC data acquisition

    CERN Document Server

    Cords, D; Riege, H

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  9. A comparison between fault tree analysis and reliability graph with general gates

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun; Jung, Woo Sik

    2004-01-01

    Currently, level-1 probabilistic safety assessment (PSA) is performed on the basis of event tree analysis and fault tree analysis. Kim and Seong developed a new method for system reliability analysis named the reliability graph with general gates (RGGG). The RGGG is an extension of the conventional reliability graph, and it utilizes the transformation of system structures into equivalent Bayesian networks for quantitative calculation. The RGGG is considered intuitive and easy to use while being as powerful as fault tree analysis. As an example, Kim and Seong showed that the Bayesian network model for the digital plant protection system (DPPS), transformed from the RGGG model for the DPPS, can be shown on 1 page, while the fault tree model for the DPPS consists of 64 pages of fault trees. Kim and Seong also argued that the Bayesian network model for the DPPS is more intuitive because a one-to-one matching between each node in the Bayesian network model and an actual component of the DPPS is possible. In this paper, we give a comparison between fault tree analysis and the RGGG method for two example systems: the recirculation system in Korean standard nuclear power plants (KSNP) and the fault tree model developed by Rauzy

  10. Fault Tree Analysis with Temporal Gates and Model Checking Technique for Qualitative System Safety Analysis

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2010-01-01

    Although fault tree analysis (FTA) has been one of the most widely used safety analysis techniques in the nuclear industry, it suffers from several drawbacks: it uses only static gates and hence cannot capture dynamic behaviors of complex systems precisely; it lacks rigorous semantics; and the reasoning process of checking whether basic events really cause top events is done manually, which is labor-intensive and time-consuming for complex systems. Although several attempts have been made to overcome these problems, they still cannot model absolute (actual) time, because they adopt a relative notion of time and can capture only sequential behaviors of the system. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach builds a formal model of the system together with the fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and we use model checking to automate the reasoning process of FTA

  11. Fault diagnosis of main coolant pump in the nuclear power station based on the principal component analysis

    International Nuclear Information System (INIS)

    Feng Junting; Xu Mi; Wang Guizeng

    2003-01-01

    A fault diagnosis method based on principal component analysis is studied. A library of fault characteristic directions for fifteen parameter abnormalities is built in a simulation of the main coolant pump of a nuclear power station. The measured data are analyzed, and the results show that the method is feasible for a fault diagnosis system for the main coolant pump in a nuclear power station

  12. Spectral negentropy based sidebands and demodulation analysis for planet bearing fault diagnosis

    Science.gov (United States)

    Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.

    2017-12-01

    Planet bearing vibration signals are highly complex due to intricate kinematics (involving both revolution and spinning) and strong multiple modulations (including not only the fault-induced amplitude modulation and frequency modulation, but also additional amplitude modulations due to load zone passing, the time-varying vibration transfer path, and the time-varying angle between the gear pair mesh lines of action and the fault impact force vector), leading to difficulty in fault feature extraction. Rolling element bearing fault diagnosis essentially relies on detection of fault-induced repetitive impulses carried by resonance vibration, but these are usually contaminated by noise and are therefore hard to detect. This further adds complexity to planet bearing diagnostics. Spectral negentropy is able to reveal the frequency distribution of repetitive transients, thus providing an approach to identify the optimal frequency band of a filter for separating repetitive impulses. In this paper, we find the informative frequency band (including the center frequency and bandwidth) of bearing fault induced repetitive impulses using the spectral negentropy based infogram. In the Fourier spectrum, we identify planet bearing faults according to sideband characteristics around the center frequency. For demodulation analysis, we filter out the sensitive component based on the informative frequency band revealed by the infogram. In the amplitude demodulated spectrum (squared envelope spectrum) of the sensitive component, we diagnose planet bearing faults by matching the present peaks with the theoretical fault characteristic frequencies. We further decompose the sensitive component into mono-component intrinsic mode functions (IMFs) to estimate their instantaneous frequencies, and select a sensitive IMF with an instantaneous frequency fluctuating around the center frequency for frequency demodulation analysis. 
In the frequency demodulated spectrum (Fourier spectrum of instantaneous frequency) of
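    The amplitude-demodulation step (squared envelope spectrum) can be sketched with an FFT-based analytic signal. The 37 Hz "fault frequency" and 1 kHz "resonance carrier" below are invented test values, and the infogram-based band selection that precedes this step in the paper is omitted:

```python
import numpy as np

# Squared envelope spectrum of an amplitude-modulated test signal.

def analytic(x):
    """FFT-based analytic signal (equivalent to a Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 8192                         # Hz; one second of signal
t = np.arange(fs) / fs
f_fault, f_res = 37.0, 1000.0     # modulation (fault) and carrier frequencies
x = (1 + 0.5 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_res * t)

env2 = np.abs(analytic(x)) ** 2                  # squared envelope
spec = np.abs(np.fft.rfft(env2 - env2.mean()))   # its spectrum (DC removed)
freqs = np.fft.rfftfreq(fs, 1 / fs)
peak_hz = freqs[np.argmax(spec)]                 # peak at the fault frequency
```

    On real data the signal would first be band-pass filtered around the carrier (the infogram's informative band) before demodulation.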

  13. Parallel Wavefront Analysis for a 4D Interferometer

    Science.gov (United States)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
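    The capture/analyze split described above can be sketched generically with a worker pool: frames are captured first, then farmed out for processing and collated in order. This is not 4D Technology's API; the frame data and per-frame analysis function are stand-ins:

```python
import concurrent.futures as cf
import numpy as np

# Generic capture-then-parallel-analyze pattern.

def analyze(frame):
    """Placeholder per-frame processing (phase analysis would go here)."""
    return float(frame.mean())

rng = np.random.default_rng(0)
frames = [rng.standard_normal((64, 64)) for _ in range(100)]   # "captured" data

with cf.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(analyze, frames))   # map preserves input order

combined = float(np.mean(results))              # collate the processed frames
```

    For CPU-bound numeric work, `ProcessPoolExecutor` is the drop-in alternative; because `map` preserves ordering, collation stays trivial.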

  14. Diagnosis and Early Warning of Wind Turbine Faults Based on Cluster Analysis Theory and Modified ANFIS

    Directory of Open Access Journals (Sweden)

    Quan Zhou

    2017-07-01

    Full Text Available The construction of large-scale wind farms has resulted in a dramatic increase in wind turbine (WT) faults, and the failure modes are becoming increasingly complex. This study proposes a new model for early warning and diagnosis of WT faults to address the problem that the traditional threshold method used in Supervisory Control And Data Acquisition (SCADA) systems cannot provide timely warnings. First, by analyzing the characteristic quantities for fault early warning and diagnosis with cluster analysis, abnormal data that still lie within the normal threshold range can be identified in advance, taking the effects of wind speed into account. Based on domain knowledge, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is then modified to establish the fault early warning and diagnosis model. This approach improves the accuracy of the model when training data are absent or sparse. A case analysis shows that the early warning and diagnosis model of this study performs better than the traditional threshold method.

  15. Reactive Transport Analysis of Fault 'Self-sealing' Associated with CO2 Storage

    Science.gov (United States)

    Patil, V.; McPherson, B. J. O. L.; Priewisch, A.; Franz, R. J.

    2014-12-01

    We present an extensive hydrologic and reactive transport analysis of the Little Grand Wash fault zone (LGWF), a natural analog of fault-associated leakage from an engineered CO2 repository. Injecting anthropogenic CO2 into the subsurface has been suggested for climate change mitigation. However, leakage of CO2 from its target storage formation into unintended areas is considered a major risk involved in CO2 sequestration. In the event of leakage, permeability in leakage pathways like faults may be sealed (reduced) due to precipitation or enhanced (increased) due to dissolution reactions induced by CO2-enriched water, thus influencing the migration and fate of the CO2. We hypothesize that faults which act as leakage pathways can seal over time in the presence of CO2-enriched waters. An example of such fault 'self-sealing' is found in the LGWF near Green River, Utah in the Paradox basin, where the fault outcrop shows surface and sub-surface fractures filled with calcium carbonate (CaCO3). The LGWF cuts through multiple reservoirs and seal layers, piercing a reservoir of naturally occurring CO2 and allowing it to leak into overlying aquifers. As the CO2-charged water from shallower aquifers migrates towards the atmosphere, a decrease in pCO2 leads to supersaturation of the water with respect to CaCO3, which precipitates in the fractures of the fault damage zone. In order to test the nature, extent and time-frame of the fault sealing, we developed reactive flow simulations of the LGWF. Model parameters were chosen based on hydrologic measurements from the literature. Model geochemistry was constrained by water analysis of the adjacent Crystal Geyser and observations from a scientific drilling test conducted at the site. Precipitation of calcite in the top portion of the fault model led to a decrease in the porosity of the damage zone, while clay precipitation led to a decrease in the porosity of the fault core. We found that the results were sensitive to the fault architecture

  16. A Parallel Software Pipeline for DMET Microarray Genotyping Data Analysis

    Directory of Open Access Journals (Sweden)

    Giuseppe Agapito

    2018-06-01

    Full Text Available Personalized medicine is an aspect of P4 medicine (predictive, preventive, personalized and participatory) based precisely on the customization of all medical characteristics of each subject. In personalized medicine, the development of medical treatments and drugs is tailored to the individual characteristics and needs of each subject, according to the study of diseases at different scales, from genotype to phenotype. To make the goal of personalized medicine concrete, it is necessary to employ high-throughput methodologies such as Next Generation Sequencing (NGS), Genome-Wide Association Studies (GWAS), Mass Spectrometry or Microarrays, which are able to investigate a single disease from a broad perspective. A side effect of high-throughput methodologies is the massive amount of data produced for each single experiment, which poses several challenges (e.g., high execution time and required memory) to bioinformatics software. Thus a main requirement of modern bioinformatics software is the use of good software engineering methods and efficient programming techniques able to face those challenges, including the use of parallel programming and efficient, compact data structures. This paper presents the design and the experimentation of a comprehensive software pipeline, named microPipe, for the preprocessing, annotation and analysis of microarray-based Single Nucleotide Polymorphism (SNP) genotyping data. A use case in pharmacogenomics is presented. The main advantages of using microPipe are: the reduction of errors that may happen when trying to make data compatible among different tools; the possibility to analyze huge datasets in parallel; and the easy annotation and integration of data. microPipe is available under a Creative Commons license and is freely downloadable for academic and not-for-profit institutions.
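
A preprocessing-annotation-analysis pipeline of the kind described can be sketched as a sequence of parallel maps over the dataset. This is not microPipe's actual API; the stage functions and thresholds below are invented placeholders, and a thread pool stands in for whatever parallel backend the real pipeline uses:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(probe):
    """Hypothetical stand-in: normalise a raw two-channel probe intensity pair."""
    a, b = probe
    s = a + b
    return (a / s, b / s)

def call_genotype(norm):
    """Hypothetical stand-in: call a genotype from normalised intensities."""
    a, b = norm
    if a > 0.7:
        return "AA"
    if b > 0.7:
        return "BB"
    return "AB"

def annotate(call):
    """Hypothetical stand-in for the annotation stage."""
    return {"call": call, "actionable": call != "AB"}

raw = [(90, 10), (10, 90), (55, 45), (80, 20)]

# Each stage is mapped in parallel over all probes; stages run in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    normed = list(pool.map(preprocess, raw))
    calls = list(pool.map(call_genotype, normed))
    annotated = list(pool.map(annotate, calls))

print([r["call"] for r in annotated])  # ['AA', 'BB', 'AB', 'AA']
```

Because each probe is independent, the maps scale with the number of workers, which is exactly the property that makes huge SNP datasets tractable.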

  17. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    Directory of Open Access Journals (Sweden)

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the soundness and accuracy of the proposed model.

  18. Systems analysis approach to probabilistic modeling of fault trees

    International Nuclear Information System (INIS)

    Bartholomew, R.J.; Qualls, C.R.

    1985-01-01

    A method of probabilistic modeling of fault tree logic combined with stochastic process theory (Markov modeling) has been developed. Systems are then quantitatively analyzed probabilistically in terms of their failure mechanisms including common cause/common mode effects and time dependent failure and/or repair rate effects that include synergistic and propagational mechanisms. The modeling procedure results in a state vector set of first order, linear, inhomogeneous, differential equations describing the time dependent probabilities of failure described by the fault tree. The solutions of this Failure Mode State Variable (FMSV) model are cumulative probability distribution functions of the system. A method of appropriate synthesis of subsystems to form larger systems is developed and applied to practical nuclear power safety systems
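
The FMSV formulation above reduces to a linear system dP/dt = A·P over the failure states. A minimal sketch for a two-component redundant system with failure rate λ and repair rate μ (assumed illustrative values), integrated with forward Euler, looks like this:

```python
# States: 0 = both components up, 1 = one failed, 2 = both failed (system failure).
lam, mu = 1e-3, 1e-2   # assumed failure and repair rates (per hour)

# Generator matrix A for dP/dt = A @ P; each column sums to zero,
# so total probability is conserved by the integration.
A = [[-2 * lam,          mu, 0.0],
     [ 2 * lam, -(lam + mu), 0.0],
     [     0.0,         lam, 0.0]]  # state 2 is absorbing

P = [1.0, 0.0, 0.0]
dt, t_end = 0.1, 1000.0
for _ in range(int(t_end / dt)):
    P = [P[i] + dt * sum(A[i][j] * P[j] for j in range(3)) for i in range(3)]

# P[2] is the cumulative probability of system failure at t_end,
# i.e. one point on the cumulative distribution function the FMSV model yields.
print(round(P[2], 4))
```

Sweeping `t_end` traces out the cumulative probability distribution function of system failure that the text describes; common-cause couplings would add off-diagonal terms to A.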

  19. FADES: A tool for automated fault analysis of complex systems

    International Nuclear Information System (INIS)

    Wood, C.

    1990-01-01

    FADES is an Expert System for performing fault analyses on complex connected systems. By using a graphical editor to draw components and link them together, the FADES system allows the analyst to describe a given system. The knowledge base created is used to qualitatively simulate the system behaviour. By inducing all possible component failures in the system and determining their effects, a set of facts is built up. These facts are then used to create fault trees or FMEA tables. The facts may also be used for explanation purposes and to generate diagnostic rules allowing system instrumentation to be optimised. The prototype system has been built and tested and is presently undergoing testing by users. All comments from these trials will be used to tailor the system to the requirements of the user so that the end product performs the exact task required

  20. Machinery fault diagnosis using joint global and local/nonlocal discriminant analysis with selective ensemble learning

    Science.gov (United States)

    Yu, Jianbo

    2016-11-01

    The vibration signals of a faulty machine are generally non-stationary and nonlinear under complicated working conditions. Thus, it is a big challenge to extract and select effective features from vibration signals for machinery fault diagnosis. This paper proposes a new manifold learning algorithm, joint global and local/nonlocal discriminant analysis (GLNDA), which aims to extract effective intrinsic geometrical information from the given vibration data. Comparisons with other regular methods, principal component analysis (PCA), local preserving projection (LPP), linear discriminant analysis (LDA) and local LDA (LLDA), illustrate the superiority of GLNDA in machinery fault diagnosis. Based on the information extracted by GLNDA, a GLNDA-based Fisher discriminant rule (FDR) is put forward and applied to machinery fault diagnosis without an additional recognizer construction procedure. By importing Bagging into GLNDA score-based feature selection and FDR, a novel manifold ensemble method (selective GLNDA ensemble, SE-GLNDA) is investigated for machinery fault diagnosis. The motivation for developing an ensemble of manifold learning components is that it can achieve higher accuracy and applicability than a single component in machinery fault diagnosis. The effectiveness of the SE-GLNDA-based fault diagnosis method has been verified by experimental results from bearing full life testers.

  1. Fault diagnosis in rotating machinery by vibration analysis

    International Nuclear Information System (INIS)

    Behzad, M.; Asayesh, M.

    2002-01-01

    Dynamic behavior of an unbalanced bent shaft has been investigated in this research. The finite element method is used for unbalance response calculation of a bent shaft. The results show the effect of the bend on the unbalance response. The angle between the bend vector and the unbalance force, the position and type of supports, the shaft diameter and the disk position can affect the outcome. The results of this research can significantly help in fault diagnosis in rotating machinery

  2. Vibration Feature Extraction and Analysis for Fault Diagnosis of Rotating Machinery-A Literature Survey

    Directory of Open Access Journals (Sweden)

    Saleem Riaz

    2017-02-01

    Full Text Available Safety, reliability, efficiency and performance of rotating machinery are the main concerns in all industrial applications. Rotating machines are widely used in various industrial applications. Condition monitoring and fault diagnosis of rotating machinery are very important and often complex and labor-intensive. Feature extraction techniques play a vital role in reliable, effective and efficient diagnosis of rotating machinery. Therefore, developing effective bearing fault diagnostic methods using different fault features at different steps becomes more attractive. Bearings are widely used in medical applications, food processing industries, semiconductor industries, paper making industries and aircraft components. This paper reviews the variety of vibration feature extraction techniques applied to rotating machinery in the recent literature. Generally, the literature is classified into two main groups: frequency domain and time-frequency analysis. However, the signal processing methods used for fault detection and diagnosis of rotating machines present their own limitations: in practice, the faulty components of a vibration signal are often buried in background noise and other mechanical vibration signals. This paper also reviews how advanced signal processing methods, such as empirical mode decomposition and interference cancellation algorithms, have been investigated and developed. Condition-based maintenance of rotating machines, which prevents failures, increases availability and reduces maintenance cost, is becoming necessary too. A key problem in developing signal-processing-based algorithms for fault detection and diagnostics of rotating machines is fault feature extraction or quantification. Currently, vibration-signal-based techniques are the most widely used for fault detection and diagnosis of rotating machinery. Furthermore, researchers are widely interested in making automatic

  3. Analysis and Design of High-Order Parallel Resonant Converters

    Science.gov (United States)

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high order dc-to-dc resonant converters. Converters comprised of high order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converter (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady state response of third and fourth order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRC are presented, from which various converter design parameters are obtained. Various design curves for component value selection and device ratings are given. This analysis of high order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second order PRC. A complete design procedure, along with design examples for 2nd, 3rd and 4th order converters, is presented. Practical power supply units, normally used for computer applications, were built and tested using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.

  4. Improvements in longwall downtime analysis and fault identification

    Energy Technology Data Exchange (ETDEWEB)

    Daniel Bongers [CRCMining (Australia)

    2006-12-15

    In this project we have developed a computer program for recording detailed information relating to face equipment downtime in longwall mining operations. This software is intended to replace the current manual recording of delay information, which has been proven to be inaccurate. The software developed is intended to be operated from the maingate computer. Users are provided with a simple user interface requesting the nature of each delay in production, which is time-stamped in alignment with the SCADA system, removing the need for operators to estimate the start time and duration of each delay. Each instance of non-production is recorded to a database, which may be accessed by surface computers, removing the need to transcribe the deputy's report into the delay database. An additional suggestive element has been developed, based on sophisticated fault detection technology, which reduces the data input required by operators and provides a basis for the implementation of real-time fault detection. Both the basic recording software and the suggestive element offer improvements in efficiency and accuracy to longwall operations. More accurate data allow improved maintenance planning and improved measures of operational KPIs. The suggestive element offers the potential for rapid fault diagnosis, and potentially delay forecasting, which may be used to reduce lost time associated with machine downtime.

  5. A fast BDD algorithm for large coherent fault trees analysis

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo

    2004-01-01

    Although binary decision diagram (BDD) algorithms have been applied to solve large fault trees until quite recently, such trees are not solved efficiently in a short time, since the size of a BDD structure increases exponentially with the number of variables. Furthermore, the truncation of If-Then-Else (ITE) connectives by a probability or size limit, and the subsuming to delete subsets, could not be directly applied to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm) by which the truncation and subsuming can be performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND or OR operation between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of the truncation and subsuming in every step of the calculation, large fault trees for coherent systems (coherent fault trees) are efficiently solved in a short time using less memory. Furthermore, the coherent BDD algorithm is much less sensitive to variable ordering, from the standpoint of the size of the BDD structure, than the conventional BDD algorithm

  6. A morphogram with the optimal selection of parameters used in morphological analysis for enhancing the ability in bearing fault diagnosis

    International Nuclear Information System (INIS)

    Wang, Dong; Tse, Peter W; Tse, Yiu L

    2012-01-01

    Morphological analysis is a signal processing method that extracts the local morphological features of a signal by intersecting it with a structuring element (SE). When a bearing suffers from a localized fault, an impulse-type cyclic signal is generated. The amplitude and the cyclic time interval of impacts could reflect the health status of the inspected bearing and the cause of defects, respectively. In this paper, an enhanced morphological analysis called ‘morphogram’ is presented for extracting the cyclic impacts caused by a certain bearing fault. Based on the theory of morphology, the morphogram is realized by simple mathematical operators, including Minkowski addition and subtraction. The morphogram is able to detect all possible fault intervals. The most likely fault-interval-based construction index (CI) is maximized to establish the optimal range of the flat SE for the extraction of bearing fault cyclic features, so that the type and cause of bearing faults can be easily determined in the time domain. The morphogram has been validated by simulated bearing fault signals, real bearing fault signals collected from a laboratory rotary machine and an industrial bearing fault signal. The results show that the morphogram is able to detect all possible bearing fault intervals. Based on the most likely bearing fault interval shown on the morphogram, the CI is effective in determining the optimal parameters of the flat SE for the extraction of bearing fault cyclic features for bearing fault diagnosis. (paper)
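
The Minkowski operators the record names are, for a flat SE, just moving minima (erosion) and maxima (dilation). The sketch below shows how an opening-based top-hat transform pulls cyclic impulses out of a baseline; it is a simplified one-dimensional illustration of the building blocks, not the morphogram algorithm itself, and the signal is synthetic:

```python
def erode(x, n):
    """Minkowski subtraction with a flat structuring element of length n."""
    h = n // 2
    return [min(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

def dilate(x, n):
    """Minkowski addition with a flat structuring element of length n."""
    h = n // 2
    return [max(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

def top_hat(x, n):
    """Signal minus its opening: keeps narrow impulses, removes the floor."""
    opening = dilate(erode(x, n), n)
    return [a - b for a, b in zip(x, opening)]

# Baseline of 1.0 with an impulse every 10 samples (simulated fault impacts).
signal = [1.0 + (4.0 if i % 10 == 0 else 0.0) for i in range(50)]
peaks = [i for i, v in enumerate(top_hat(signal, 5)) if v > 2.0]
print(peaks)  # impulse positions: [0, 10, 20, 30, 40]
```

The spacing between detected peaks (10 samples here) is the cyclic fault interval that, in the paper, is matched against bearing characteristic frequencies.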

  7. A parallel implementation of 3D Zernike moment analysis

    Science.gov (United States)

    Berjón, Daniel; Arnaldo, Sergio; Morán, Francisco

    2011-01-01

    Zernike polynomials are a well known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant against translations, rotations or scale changes. The concepts behind them can be extended to higher dimensional spaces, making them also fit to describe volumetric data. They have been used less than their properties might suggest due to their high computational cost. We present a parallel implementation of 3D Zernike moments analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to deal with the high volume of input data so that it does not become a bottleneck for the system.

  8. FTA, Fault Tree Analysis for Minimal Cut Sets, Graphics for CALCOMP

    International Nuclear Information System (INIS)

    Van Slyke, W.J.; Griffing, D.E.; Diven, J.

    1978-01-01

    1 - Description of problem or function: The FTA (Fault Tree Analysis) system was designed to predict probabilities of the modes of failure for complex systems and to graphically present the structure of systems. There are three programs in the system. Program ALLCUTS performs the calculations. Program KILMER constructs a CalComp plot file of the system fault tree. Program BRANCH builds a cross-reference list of the system fault tree. 2 - Method of solution: ALLCUTS employs a top-down set expansion algorithm to find fault tree cut-sets and then optionally calculates their probability using a currently accepted cut-set quantification method. The methodology is adapted from that in WASH-1400 (draft), August 1974. 3 - Restrictions on the complexity of the problem: Maxima of: 175 basic events, 425 rate events. ALLCUTS may be expanded to solve larger problems depending on available core memory
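
The top-down expansion that ALLCUTS performs can be sketched in a few lines: gates are expanded recursively, OR gates union their inputs' cut sets, AND gates cross-combine them, and subsuming removes any cut set that contains a smaller one. This is an illustrative toy (tiny tree, no probability or size cutoffs), not the ALLCUTS program:

```python
def minimal_cut_sets(tree, top):
    """Top-down expansion of a fault tree into minimal cut sets.
    `tree` maps a gate name to ("AND"|"OR", [inputs]); any name not in
    `tree` is treated as a basic event."""
    def expand(node):
        if node not in tree:
            return [frozenset([node])]
        op, inputs = tree[node]
        child = [expand(i) for i in inputs]
        if op == "OR":
            sets = [s for group in child for s in group]
        else:  # AND: cross-combine the cut sets of all inputs
            sets = [frozenset()]
            for group in child:
                sets = [s | t for s in sets for t in group]
        # Subsume: drop any cut set that strictly contains another one.
        return [s for s in sets if not any(o < s for o in sets)]
    return set(expand(top))

tree = {
    "TOP": ("OR", ["G1", "C"]),
    "G1":  ("AND", ["A", "G2"]),
    "G2":  ("OR", ["B", "C"]),
}
print(sorted(sorted(s) for s in minimal_cut_sets(tree, "TOP")))
# [['A', 'B'], ['C']] -- {A, C} is subsumed by the single-event cut set {C}
```

A probability cutoff of the kind ALLCUTS offers would simply discard, during expansion, any partial cut set whose product of event probabilities falls below the limit.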

  9. Torsional vibration analysis in turbo-generator shaft due to mal-synchronization fault

    Science.gov (United States)

    Bangunde, Abhishek; Kumar, Tarun; Kumar, Rajeev; Jain, S. C.

    2018-03-01

    A rotor of turbo-generator shafting is often subjected to torsional vibrations during its lifespan. The causes of these vibrations include three-phase faults, two-phase faults, line-to-ground faults, faulty mal-synchronization, etc. Sometimes these vibrations can cause complete failure of the turbo-generator shafting system. To calculate the moment variation on the shafting system during these faults, vibration analysis is done using the finite element method to obtain the mass and stiffness matrices. The electrical disturbance caused during mal-synchronization is applied to the generator section, and the corresponding second order equations are solved by using the “Duhamel Integral”. From the moment variation plots at four sections, the critically loaded sections are identified.
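
The Duhamel integral mentioned above convolves the forcing history with the impulse response of each mode. A minimal sketch for a single damped degree of freedom under a step load, evaluated numerically with trapezoid weights, looks like this (the mass, stiffness, damping and force values are invented for illustration):

```python
import math

def duhamel(force, dt, m, k, zeta):
    """Response of a damped SDOF oscillator via the Duhamel integral:
    x(t) = (1/(m*wd)) * integral of F(tau)*exp(-zeta*wn*(t-tau))*sin(wd*(t-tau)) dtau
    """
    wn = math.sqrt(k / m)
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    x = []
    for n in range(len(force)):
        t = n * dt
        s = 0.0
        for j in range(n + 1):
            tau = j * dt
            w = 0.5 if j in (0, n) else 1.0  # trapezoid weights
            s += w * force[j] * math.exp(-zeta * wn * (t - tau)) * math.sin(wd * (t - tau))
        x.append(s * dt / (m * wd))
    return x

m, k, zeta = 1.0, 100.0, 0.05   # assumed modal properties
dt = 0.02
force = [50.0] * 1000           # step load held for 20 s
x = duhamel(force, dt, m, k, zeta)
# After the transient decays, the response settles near the static value F/k = 0.5.
print(round(x[-1], 2))
```

In the multi-section shafting model, the same integral is applied mode by mode after the finite element mass and stiffness matrices are decoupled.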

  10. Progress in Root Cause and Fault Propagation Analysis of Large-Scale Industrial Processes

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2012-01-01

    Full Text Available In large-scale industrial processes, a fault can easily propagate between process units due to the interconnections of material and information flows. Thus the problem of fault detection and isolation for these processes is more concerned with the root cause and fault propagation before applying quantitative methods in local models. Process topology and causality, as the key features of the process description, need to be captured from process knowledge and process data. The modelling methods from these two aspects are overviewed in this paper. From process knowledge, structural equation modelling, various causal graphs, rule-based models, and ontological models are summarized. From process data, cross-correlation analysis, Granger causality and its extensions, frequency domain methods, information-theoretical methods, and Bayesian nets are introduced. Based on these models, inference methods are discussed to find root causes and fault propagation paths under abnormal situations. Some directions for future work are proposed at the end.
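
The data-driven side of the survey (cross-correlation analysis and Granger-style causality) rests on a simple idea: if unit x drives unit y, the past of x correlates with the present of y more strongly than the other way around. A crude lagged-correlation sketch of that idea, with invented data, is below; full Granger causality would fit autoregressive models rather than a single lag:

```python
def pearson(u, v):
    """Plain Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def lagged_direction(x, y, lag=1):
    """Compare corr(x[t-lag], y[t]) with corr(y[t-lag], x[t]); the larger
    magnitude suggests the direction of fault propagation."""
    xy = pearson(x[:-lag], y[lag:])
    yx = pearson(y[:-lag], x[lag:])
    return ("x->y", xy) if abs(xy) > abs(yx) else ("y->x", yx)

# A disturbance in unit x shows up in downstream unit y one step later.
x = [0, 1, 0, 2, 5, 1, 0, 3, 1, 0, 4, 1, 0, 2, 1, 0]
y = [0] + x[:-1]  # y[t] = x[t-1]
direction, corr = lagged_direction(x, y)
print(direction)  # x->y
```

Chaining such pairwise directions over all measured units gives a first-cut propagation path, which the graph-based methods in the survey then refine using process topology.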

  11. Optimum IMFs Selection Based Envelope Analysis of Bearing Fault Diagnosis in Plunger Pump

    Directory of Open Access Journals (Sweden)

    Wenliao Du

    2016-01-01

    Full Text Available As the plunger pump always works in a complicated environment and the hydraulic cycle has an intrinsic fluid-structure interaction character, the fault information is submerged in noise and disturbance impact signals. For the fault diagnosis of the bearings in a plunger pump, an envelope analysis based on optimum intrinsic mode function (IMF) selection was proposed. Firstly, the Wigner-Ville distribution was calculated for the acquired vibration signals, and the resonance frequency brought on by the fault was obtained. Secondly, empirical mode decomposition (EMD) was employed on the vibration signal, and the optimum IMFs and the filter bandwidth were selected according to the Wigner-Ville distribution. Finally, envelope analysis was applied to the selected IMFs filtered by the band-pass filter, and the fault type was recognized by comparison with the bearing characteristic frequencies. For two fault modes, an inner race fault and a compound fault in the inner race and roller of a rolling element bearing in a plunger pump, the experiments show that promising results are achieved.
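
The final envelope step rests on the analytic signal: zero the negative frequencies of the spectrum, double the positive ones, invert, and take the magnitude. A self-contained sketch using a naive O(N²) DFT (adequate for short frames; a production version would use an FFT, and the input would be a band-passed IMF rather than a pure tone) is:

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the DFT (length assumed even): zero the negative
    frequencies, double the positive ones, then invert."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    H = [0.0] * n
    H[0] = X[0]
    for k in range(1, n // 2):
        H[k] = 2 * X[k]
    H[n // 2] = X[n // 2]
    return [sum(H[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def envelope(x):
    """Hilbert envelope: magnitude of the analytic signal."""
    return [abs(v) for v in analytic_signal(x)]

n = 64
carrier = [math.cos(2 * math.pi * 8 * t / n) for t in range(n)]
env = envelope(carrier)
print(all(abs(e - 1.0) < 1e-6 for e in env))  # flat unit envelope for a pure tone
```

Applied to a fault signal, the spectrum of `env` (rather than of the raw signal) is what gets compared against the bearing characteristic frequencies.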

  12. Fault tree and failure mode and effects analysis of a digital safety function

    International Nuclear Information System (INIS)

    Maskuniitty, M.; Pulkkinen, U.

    1995-01-01

    The principles of fault tree analysis and failure mode and effects analysis (FMEA) for the analysis of digital safety functions of nuclear power plants are discussed. Based on experiences from a case study, a proposal for a full scale analysis is presented. The feasibility and applicability of the above-mentioned reliability engineering methods are discussed. (author). 13 refs, 1 fig., 2 tabs

  13. Identification of active fault using analysis of derivatives with vertical second based on gravity anomaly data (Case study: Seulimeum fault in Sumatera fault system)

    Science.gov (United States)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

    Gravity is a non-destructive geophysical technique that has numerous applications in engineering and environmental fields, such as locating a fault zone. The purpose of this study is to identify the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using a gravity technique, to correlate the result with the geologic map, and to understand the trend pattern of the fault system. An estimation of the subsurface geological structure of the Seulimeum fault has been done using gravity field anomaly data. The gravity anomaly data used in this study are from Topex and are processed up to the free air correction. The next step in data processing is applying the Bouguer correction and terrain correction to obtain the complete Bouguer anomaly, which is topography-dependent. Subsurface modeling is done using the Gav2DC for Windows software. The result showed a low residual gravity value in the northern half compared to the southern part of the study area, which indicated a pattern of a fault zone. The gravity residual was successfully correlated with the geologic map, which shows the existence of the Seulimeum fault in this study area. The study of earthquake records can be used for differentiating active and non-active fault elements; this gives an indication that the delineated fault elements are active.
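
The correction chain described above (free air, Bouguer slab, terrain) can be sketched numerically. The gradient constants are the standard ones (0.3086 mGal/m free-air gradient; 2πGρ ≈ 0.04193ρ mGal/m for the slab, ρ in g/cm³); the station values in the example are invented, not data from this study:

```python
def complete_bouguer_anomaly(g_obs, g_normal, h, density=2.67, terrain=0.0):
    """Complete Bouguer anomaly in mGal.
    g_obs, g_normal : observed and normal (latitude-corrected) gravity, mGal
    h       : station elevation above the datum, m
    density : Bouguer slab density, g/cm^3 (2.67 is the usual crustal value)
    terrain : terrain correction, mGal (always added back)
    """
    fac = 0.3086 * h                 # free-air correction
    bc = 0.04193 * density * h       # Bouguer slab correction (2*pi*G*rho*h)
    return g_obs - g_normal + fac - bc + terrain

# Illustrative station: 150 m elevation, invented gravity readings.
cba = complete_bouguer_anomaly(978123.4, 978158.0, 150.0)
print(round(cba, 2))  # about -5.1 mGal for these illustrative numbers
```

Subtracting a regional trend from the complete Bouguer anomaly then yields the residual anomaly whose low along the northern half of the profile marks the fault zone.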

  14. High resolution t-LiDAR scanning of an active bedrock fault scarp for palaeostress analysis

    Science.gov (United States)

    Reicherter, Klaus; Wiatr, Thomas; Papanikolaou, Ioannis; Fernández-Steeger, Tomas

    2013-04-01

    Palaeostress analysis of an active bedrock normal fault scarp based on kinematic indicators is carried out applying terrestrial laser scanning (t-LiDAR or TLS). For this purpose three key elements are necessary for a defined region on the fault plane: (i) the orientation of the fault plane, (ii) the orientation of the slickenside lineation or other kinematic indicators and (iii) the sense of motion of the hanging wall. We present a workflow to obtain palaeostress data from point cloud data using terrestrial laser scanning. The entire case study was performed on a continuous limestone bedrock normal fault scarp on the island of Crete, Greece, at four different locations along the WNW-ESE striking Spili fault. At each location we collected data with a mobile terrestrial light detection and ranging system and validated the calculated three-dimensional palaeostress results by comparison with the conventional compass-based palaeostress method at three of the locations. Numerous kinematic indicators for normal faulting were discovered on the fault plane surface using t-LiDAR data and traditional methods, like Riedel shears, extensional break-outs, polished corrugations and many more. However, the kinematic indicators are more or less unidirectional and almost pure dip-slip. No oblique reactivations have been observed. However, towards the tips of the fault, the inclination of the striation tends to point towards the centre of the fault. When comparing all reconstructed palaeostress data obtained from t-LiDAR to those obtained through manual compass measurements, the degree of fault plane orientation divergence is around ±005/03 for dip direction and dip. The degree of slickenside lineation variation is around ±003/03 for dip direction and dip. Therefore, the percentage threshold error of the individual vector angle at the different investigation sites is lower than 3% for the dip direction and dip for planes, and lower than 6% for strike.
The maximum mean variation of the complete
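
The TLS-versus-compass comparison above depends on extracting a plane orientation (dip direction/dip) from a patch of the scanned point cloud. A minimal least-squares sketch, with a synthetic point cloud standing in for real scan data (the published workflow is more elaborate), is:

```python
import math

def fit_plane_orientation(points):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) points,
    returned as geological dip direction / dip in degrees (y = north)."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # Solve the 3x3 normal equations by Cramer's rule.
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(M)
    def solve(i):
        mm = [row[:] for row in M]
        for j in range(3):
            mm[j][i] = r[j]
        return det3(mm) / d
    a, b = solve(0), solve(1)
    dip = math.degrees(math.atan(math.hypot(a, b)))
    # Dip direction: azimuth (clockwise from north, +y) of steepest descent.
    dip_dir = math.degrees(math.atan2(-a, -b)) % 360.0
    return dip_dir, dip

# Synthetic scan patch of a plane dipping 30 degrees toward the east (090).
pts = [(x, y, -math.tan(math.radians(30.0)) * x)
       for x in range(5) for y in range(5)]
dd, dip = fit_plane_orientation(pts)
print(round(dd), round(dip))  # 90 30
```

Repeating the fit over small windows of the cloud yields the per-site dip direction/dip values that the text compares against compass readings in the same ±dd/dip notation.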

  15. Earthquake-induced crustal deformation and consequences for fault displacement hazard analysis of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Gürpinar, Aybars, E-mail: aybarsgurpinar2007@yahoo.com [Nuclear & Risk Consultancy, Anisgasse 4, 1221 Vienna (Austria); Serva, Leonello, E-mail: lserva@alice.it [Independent Consultant, Via dei Dauni 1, 00185 Rome (Italy); Livio, Franz, E-mail: franz.livio@uninsubria.it [Dipartimento di Scienza ed Alta Tecnologia, Università degli Studi dell’Insubria, Via Velleggio, 11, 22100 Como (Italy); Rizzo, Paul C., E-mail: paul.rizzo@rizzoasoc.com [RIZZO Associates, 500 Penn Center Blvd., Suite 100, Pittsburgh, PA 15235 (United States)

    2017-01-15

    Highlights: • A three-step procedure to incorporate coseismic deformation into PFDHA. • Increased scrutiny for faults in the area permanently deformed by future strong earthquakes. • These faults share with the primary structure the same time window for fault capability. • VGM variation may occur due to tectonism that has caused co-seismic deformation. - Abstract: Readily available interferometric data (InSAR) of the coseismic deformation field caused by recent seismic events clearly show that major earthquakes produce crustal deformation over wide areas, possibly resulting in significant stress loading/unloading of the crust. Such stress must be considered in the evaluation of seismic hazards of nuclear power plants (NPP) and, in particular, for the potential of surface slip (i.e., probabilistic fault displacement hazard analysis - PFDHA) on both primary and distributed faults. In this study, based on the assumption that slip on pre-existing structures can represent the elastic response of compliant fault zones to the permanent co-seismic stress changes induced by other major seismogenic structures, we propose a three-step procedure to address fault displacement issues and consider possible influence of surface faulting/deformation on vibratory ground motion (VGM). This approach includes: (a) data on the presence and characteristics of capable faults, (b) data on recognized and/or modeled co-seismic deformation fields and, where possible, (c) static stress transfer between source and receiving faults of unknown capability. The initial step involves the recognition of the major seismogenic structures nearest to the site and their characterization in terms of maximum expected earthquake and the time frame to be considered for determining their “capability” (as defined in the International Atomic Energy Agency - IAEA Specific Safety Guide SSG-9). Then a GIS-based buffer approach is applied to identify all the faults near the NPP, possibly influenced by

  16. Earthquake-induced crustal deformation and consequences for fault displacement hazard analysis of nuclear power plants

    International Nuclear Information System (INIS)

    Gürpinar, Aybars; Serva, Leonello; Livio, Franz; Rizzo, Paul C.

    2017-01-01

    Highlights: • A three-step procedure to incorporate coseismic deformation into PFDHA. • Increased scrutiny for faults in the area permanently deformed by future strong earthquakes. • These faults share with the primary structure the same time window for fault capability. • VGM variation may occur due to tectonism that has caused co-seismic deformation. - Abstract: Readily available interferometric data (InSAR) of the coseismic deformation field caused by recent seismic events clearly show that major earthquakes produce crustal deformation over wide areas, possibly resulting in significant stress loading/unloading of the crust. Such stress must be considered in the evaluation of seismic hazards of nuclear power plants (NPP) and, in particular, for the potential of surface slip (i.e., probabilistic fault displacement hazard analysis - PFDHA) on both primary and distributed faults. In this study, based on the assumption that slip on pre-existing structures can represent the elastic response of compliant fault zones to the permanent co-seismic stress changes induced by other major seismogenic structures, we propose a three-step procedure to address fault displacement issues and consider possible influence of surface faulting/deformation on vibratory ground motion (VGM). This approach includes: (a) data on the presence and characteristics of capable faults, (b) data on recognized and/or modeled co-seismic deformation fields and, where possible, (c) static stress transfer between source and receiving faults of unknown capability. The initial step involves the recognition of the major seismogenic structures nearest to the site and their characterization in terms of maximum expected earthquake and the time frame to be considered for determining their “capability” (as defined in the International Atomic Energy Agency - IAEA Specific Safety Guide SSG-9). Then a GIS-based buffer approach is applied to identify all the faults near the NPP, possibly influenced by

  17. Users' manual for fault tree analysis code: CUT-TD

    International Nuclear Information System (INIS)

    Watanabe, Norio; Kiyota, Mikio.

    1992-06-01

    The CUT-TD code has been developed to find minimal cut sets for a given fault tree and to calculate the occurrence probability of its top event. This code uses an improved top-down algorithm which enhances the efficiency of deriving minimal cut sets. The processing techniques incorporated into CUT-TD are as follows: (1) Consecutive OR gates or consecutive AND gates can be coalesced into a single gate. As a result, this processing directly produces cut sets for the redefined single gate without each gate being developed. (2) Independent subtrees are automatically identified and their respective cut sets are found separately to enhance processing efficiency. (3) The minimal cut sets for the top event of a fault tree can be obtained by combining the respective minimal cut sets of several gates of the fault tree. (4) The user can reduce the computing time for finding minimal cut sets and control the size and significance of cut sets by inputting a minimum probability cut-off and/or a maximum order cut-off. (5) The user can select events that need not be further developed in the process of obtaining minimal cut sets. This option can reduce the number of minimal cut sets, save computing time and assist the user in reviewing the result. (6) Computing time is monitored by the CUT-TD code so that it can prevent the running job from ending abnormally due to excessive CPU time, producing an intermediate result instead. The CUT-TD code has the ability to restart the calculation from this intermediate result. This report provides a users' manual for the CUT-TD code. (author)
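
    The top-down expansion the abstract describes can be sketched as follows. This is a generic MOCUS-style illustration, not the CUT-TD implementation, and the tiny two-gate fault tree at the bottom is hypothetical.

```python
# Generic top-down (MOCUS-style) derivation of minimal cut sets;
# illustrative only, not the CUT-TD code's algorithm.

def minimal_cut_sets(gates, top):
    """Expand a fault tree into its minimal cut sets.

    gates maps a gate name to ("AND" | "OR", [inputs]); any name not in
    gates is treated as a basic event.
    """
    cut_sets = [frozenset([top])]
    expanded = True
    while expanded:
        expanded = False
        next_sets = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in gates), None)
            if gate is None:              # only basic events left
                next_sets.append(cs)
                continue
            expanded = True
            op, inputs = gates[gate]
            rest = cs - {gate}
            if op == "AND":               # all inputs join the same set
                next_sets.append(rest | set(inputs))
            else:                         # OR: one new cut set per input
                next_sets.extend(rest | {i} for i in inputs)
        cut_sets = next_sets
    # keep only the minimal sets (drop proper supersets)
    minimal = {cs for cs in cut_sets
               if not any(other < cs for other in cut_sets)}
    return sorted(minimal, key=sorted)

gates = {"TOP": ("OR", ["G1", "A"]),      # TOP = (B AND C) OR A
         "G1": ("AND", ["B", "C"])}
print(minimal_cut_sets(gates, "TOP"))
```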

  18. Application of Fault Tree Analysis for Estimating Temperature Alarm Circuit Reliability

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.; El-Shanshoury, G.I.

    2011-01-01

    Fault Tree Analysis (FTA) is one of the most widely used methods in system reliability analysis. It is a graphical technique that provides a systematic description of the combinations of possible occurrences in a system which can result in an undesirable outcome. The presented paper deals with the application of the FTA method to analyzing a temperature alarm circuit. The critical failure of this circuit is failing to alarm when the temperature exceeds a certain limit. To judge whether the circuit is safe, a detailed analysis of the faults causing circuit failure is performed by constructing a fault tree diagram (qualitative analysis). Quantitative circuit reliability parameters such as the Failure Rate (FR) and Mean Time Between Failures (MTBF) are also calculated using the Relex 2009 computer program. Benefits of FTA include assessing system reliability or safety during operation, improving understanding of the system, and identifying root causes of equipment failures.
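
    The relationship between the two quantitative parameters is simple for a series (non-redundant) circuit: component failure rates add, and MTBF is the reciprocal of the total. The component names and rates below are hypothetical placeholders, not the paper's Relex 2009 figures.

```python
# Illustrative only: hypothetical component failure rates for a series
# circuit; FR values add and MTBF = 1 / FR.

failure_rates_per_hour = {
    "temperature_sensor": 2.0e-6,
    "comparator_stage":   1.5e-6,
    "alarm_driver":       0.5e-6,
}

total_fr = sum(failure_rates_per_hour.values())  # combined FR, per hour
mtbf_hours = 1.0 / total_fr                      # MTBF = 1 / FR
print(f"FR = {total_fr:.1e}/h, MTBF = {mtbf_hours:.0f} h")
```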

  19. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., of cancer and its treatment. A standard histopathology slide can easily be scanned at a high resolution of, say, 200,000×200,000 pixels. These high-resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework is an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation of weakly-supervised learning for image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain a further improvement in clustering performance.

  20. Fault trees based on past accidents. Factorial analysis of events

    International Nuclear Information System (INIS)

    Vaillant, M.

    1977-01-01

    The fault tree method is already useful in the qualitative step before any reliability calculation. The construction of the tree becomes even simpler when we just want to describe how the events happened. Unlike scenarios, which introduce several possibilities by means of the conjunction OR, here there is only the conjunction AND, which is not written at all. This method is presented by INRS (1) for the study of industrial injuries; it may also be applied to material damage. (orig.) [de

  1. Techniques for Fault Detection and Visualization of Telemetry Dependence Relationships for Root Cause Fault Analysis in Complex Systems

    Science.gov (United States)

    Guy, Nathaniel

    This thesis explores new ways of looking at telemetry data from a time-correlative perspective, in order to see patterns within the data that may suggest root causes of system faults. It was thought initially that visualizing an animated Pearson Correlation Coefficient (PCC) matrix for telemetry channels would be sufficient to give new understanding; however, testing showed that the high dimensionality of this approach, and the difficulty of examining change over time, impeded understanding. Different correlative techniques, combined with the time curve visualization proposed by Bach et al. (2015), were adapted to visualize both raw telemetry and telemetry data correlations. Review revealed that these new techniques give insights into the data and an intuitive grasp of data families, which shows the effectiveness of this approach for enhancing system understanding and assisting with root cause analysis for complex aerospace systems.
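
    The windowed-PCC idea can be sketched with NumPy's `corrcoef`. The three telemetry channels below are synthetic stand-ins (two that track each other and one unrelated), not flight data.

```python
import numpy as np

# Synthetic "telemetry": channels 0 and 1 share a trend, channel 2 is noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
telemetry = np.stack([
    np.sin(t) + 0.1 * rng.standard_normal(t.size),   # channel 0
    np.sin(t) + 0.1 * rng.standard_normal(t.size),   # channel 1, tracks 0
    rng.standard_normal(t.size),                     # channel 2, unrelated
])

# One PCC matrix per window: the "frames" of the animated matrix.
window = 100
for start in range(0, telemetry.shape[1] - window + 1, window):
    pcc = np.corrcoef(telemetry[:, start:start + window])
    print(f"window at {start:3d}: corr(ch0, ch1) = {pcc[0, 1]:+.2f}, "
          f"corr(ch0, ch2) = {pcc[0, 2]:+.2f}")
```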

  2. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    Science.gov (United States)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  3. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    Science.gov (United States)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response is obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method is proposed which combines the LOD with the analytical-FE method. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD on gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  4. Similarity ratio analysis for early stage fault detection with optical emission spectrometer in plasma etching process.

    Directory of Open Access Journals (Sweden)

    Jie Yang

    Full Text Available A Similarity Ratio Analysis (SRA method is proposed for early-stage Fault Detection (FD in plasma etching processes using real-time Optical Emission Spectrometer (OES data as input. The SRA method can help to realise a highly precise control system by detecting abnormal etch-rate faults in real-time during an etching process. The method processes spectrum scans at successive time points and uses a windowing mechanism over the time series to alleviate problems with timing uncertainties due to process shift from one process run to another. A SRA library is first built to capture features of a healthy etching process. By comparing with the SRA library, a Similarity Ratio (SR statistic is then calculated for each spectrum scan as the monitored process progresses. A fault detection mechanism, named 3-Warning-1-Alarm (3W1A, takes the SR values as inputs and triggers a system alarm when certain conditions are satisfied. This design reduces the chance of false alarm, and provides a reliable fault reporting service. The SRA method is demonstrated on a real semiconductor manufacturing dataset. The effectiveness of SRA-based fault detection is evaluated using a time-series SR test and also using a post-process SR test. The time-series SR provides an early-stage fault detection service, so less energy and materials will be wasted by faulty processing. The post-process SR provides a fault detection service with higher reliability than the time-series SR, but with fault testing conducted only after each process run completes.
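
    The scan-vs-library comparison with a warning counter can be sketched as follows. The paper's exact SR statistic and 3W1A rules are not reproduced here: cosine similarity stands in for the SR, the threshold and spectra are invented, and the counter is a crude stand-in for the 3W1A logic.

```python
import numpy as np

# Hedged sketch: compare each OES "scan" against a healthy reference and
# raise an alarm after repeated low-similarity warnings. All values are
# synthetic; cosine similarity is a stand-in for the paper's SR statistic.

rng = np.random.default_rng(1)
library_scan = rng.random(128)            # "healthy" reference spectrum

def similarity_ratio(scan, reference):
    return float(np.dot(scan, reference) /
                 (np.linalg.norm(scan) * np.linalg.norm(reference)))

warnings = 0
alarm_step = None
for step in range(20):
    scan = library_scan + 0.02 * rng.standard_normal(128)  # normal drift
    if step >= 15:
        scan = scan + 0.5 * rng.random(128)                # injected fault
    sr = similarity_ratio(scan, library_scan)
    if sr < 0.995:                         # assumed warning threshold
        warnings += 1
    if warnings >= 3:                      # crude stand-in for 3W1A
        alarm_step = step
        break

print(warnings, alarm_step)
```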

  5. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.

  6. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives, computed from the observed gravity anomaly using filters of successive window lengths, to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data, with and without random errors, and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
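
    The minimum-variance selection logic can be sketched in the abstract: depths estimated from several window lengths agree only at the correct dip. The `estimate_depth` function below is a toy stand-in, not the paper's least-squares inversion, and the model values are hypothetical.

```python
import numpy as np

# Toy stand-in for the minimum-variance criterion: pick the trial dip
# whose window-by-window depth estimates are most self-consistent.
true_dip, true_depth = 40.0, 2.0          # hypothetical model (deg, km)
rng = np.random.default_rng(2)

def estimate_depth(dip_deg, window):
    # stand-in estimator: depth estimates drift with window length only
    # when the trial dip is wrong, mimicking the paper's observation
    bias = 0.05 * window * abs(dip_deg - true_dip)
    return true_depth + bias + 0.01 * rng.standard_normal()

windows = [1, 2, 3, 4, 5]
variances = {}
for dip in range(20, 61, 10):
    depths = [estimate_depth(dip, w) for w in windows]
    variances[dip] = float(np.var(depths))

best_dip = min(variances, key=variances.get)
print(best_dip)                # the dip whose depths are self-consistent
```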

  7. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Full Text Available Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means of designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms specifically for parallel computation have to be designed. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power to design parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
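
    The tri-diagonal example lends itself to a concrete sketch: cyclic (odd-even) reduction is the classical parallel alternative to the sequential elimination sweep mentioned above. The code below shows that standard algorithm for illustration; it is not the paper's ACER algorithm.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by odd-even cyclic reduction.

    a, b, c hold the sub-, main- and super-diagonals (a[0] = c[-1] = 0),
    d the right-hand side; the size must be n = 2**k - 1. Rows within
    each level are updated independently of one another, which is what
    makes the method parallelizable, unlike the sequential Thomas sweep.
    """
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    k = int(np.log2(n + 1))
    assert 2 ** k - 1 == n, "this sketch needs n = 2**k - 1"

    # forward phase: halve the number of coupled equations per level
    for level in range(k - 1):
        h, s = 2 ** level, 2 ** (level + 1)
        for i in range(s - 1, n, s):          # independent across i
            al = -a[i] / b[i - h]
            be = -c[i] / b[i + h]
            b[i] += al * c[i - h] + be * a[i + h]
            d[i] += al * d[i - h] + be * d[i + h]
            a[i] = al * a[i - h]
            c[i] = be * c[i + h]

    x = np.zeros(n)
    x[n // 2] = d[n // 2] / b[n // 2]         # single remaining equation

    # backward phase: fill in the eliminated unknowns level by level
    for level in range(k - 2, -1, -1):
        h, s = 2 ** level, 2 ** (level + 1)
        for i in range(h - 1, n, s):          # also independent across i
            left = x[i - h] if i - h >= 0 else 0.0
            right = x[i + h] if i + h < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# demo on a random diagonally dominant system of size 7
rng = np.random.default_rng(3)
n = 7
a = np.concatenate(([0.0], rng.random(n - 1)))
c = np.concatenate((rng.random(n - 1), [0.0]))
b = 4.0 + rng.random(n)
d = rng.random(n)
x = cyclic_reduction(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))                  # True
```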

  8. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
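
    The PA-PCA variant the study compares can be sketched in a few lines: retain components whose sample correlation-matrix eigenvalues exceed the mean eigenvalues of random data of the same size. The data set below is synthetic, with two planted factors; the loadings and sample size are invented for illustration.

```python
import numpy as np

# Minimal parallel-analysis (PA-PCA, mean-eigenvalue criterion) sketch;
# for the 95th percentile criterion, replace np.mean with
# np.percentile(..., 95, axis=0).

rng = np.random.default_rng(4)
n, p = 300, 6
loadings = np.array([[0.9, 0.8, 0.7, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.9, 0.8, 0.7]])
factors = rng.standard_normal((n, 2))
data = factors @ loadings + 0.5 * rng.standard_normal((n, p))

def sorted_eigs(x):
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

sample_eigs = sorted_eigs(data)
random_eigs = np.mean(
    [sorted_eigs(rng.standard_normal((n, p))) for _ in range(200)], axis=0)

# retain components whose eigenvalues beat those of pure noise
n_factors = int(np.sum(sample_eigs > random_eigs))
print(n_factors)
```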

  9. ROCK FRACTURES NEAR FAULTS: SPECIFIC FEATURES OF STRUCTURAL‐PARAGENETIC ANALYSIS

    Directory of Open Access Journals (Sweden)

    Yu. P. Burzunova

    2017-01-01

    Full Text Available The new approach to structural-paragenetic analysis of near-fault fractures [Seminsky, 2014, 2015] and specific features of its application are discussed. This approach was tested in studies of fracturing in West Pribaikalie and Central Mongolia. We give some recommendations concerning the collection, selection and initial processing of data on fractures and faults. The analysis technique is briefly described, and its distinctive details are specified. Under the new approach, we compare systems of natural fractures with the standard joint sets. By analysing mass measurements of the orientations of joint sets in a fault zone, it becomes possible to reveal the characteristics of this fault zone, such as its structure, morphogenetic type, etc. The comparative analysis is based on the identification of the main fracture paragenesis near the faults. This paragenesis is represented by a triplet of mutually perpendicular joint sets. The technique uses a qualitative approach to establish the rank hierarchy of fractures and stress fields on the basis of genetic subordination. We collect and analyse data on tectonic fractures identified from a number of indicators, the main of which are the geometric structure of the fracture system (systematic or chaotic), and the shear type of fractures. The new technique can be applied to analyse other genetic types of fractures (primary, hypergenic), provided that tectonic stresses were significantly involved in fracturing, as evidenced by the corresponding indicators. Methods for conducting geological and structural observations are uniform for all sites and points, and increasing the number of observation points provides for a more effective use of the new technique. In our paper, we give specific parameters for constructing circle fracture diagrams. All the maximums in the diagram are involved in the analysis for comparison with the standard patterns. Errors caused by random coincidence are minimized

  10. A Rigorous, Compositional, and Extensible Framework for Dynamic Fault Tree Analysis

    NARCIS (Netherlands)

    Boudali, H.; Sandhu, R.; Crouzen, Pepijn; Stoelinga, Mariëlle Ida Antoinette

    Fault trees (FT) are among the most prominent formalisms for reliability analysis of technical systems. Dynamic FTs (DFT) extend FTs with support for expressing dynamic dependencies among components. The standard analysis vehicle for DFTs is state-based, and treats the model as a CTMC, a continuous-time Markov chain.

  11. Measurement and analysis on dynamic behaviour of parallel-plate assembly in nuclear reactors

    International Nuclear Information System (INIS)

    Chen Junjie; Guo Changqing; Zou Changchuan

    1997-01-01

    The measurement and analysis of the dynamic behaviour of a parallel-plate assembly in nuclear reactors have been explored. An electromagnetic method is presented, a new way of measuring and analysing dynamic behaviour, with the parallel-plate assembly treated as a structure of multiple parallel beams joined to a single beam. Theoretical analysis and computed results for the dry-modal natural frequencies show good agreement with experimental measurements.

  12. Remote sensing analysis for fault-zones detection in the Central Andean Plateau (Catamarca, Argentina)

    Science.gov (United States)

    Traforti, Anna; Massironi, Matteo; Zampieri, Dario; Carli, Cristian

    2015-04-01

    Remote sensing techniques have been extensively used to detect the structural framework of investigated areas, which includes lineaments, fault zones and fracture patterns. The identification of these features is fundamental in exploration geology, as it allows the definition of suitable sites for the exploitation of different resources (e.g. ore minerals, hydrocarbons, geothermal energy and groundwater). Remote sensing techniques typically adopted in fault identification have been applied to assess the geological and structural framework of the Laguna Blanca area (26°35'S-66°49'W). This area represents a sector of the south-central Andes located in the Argentine region of Catamarca, along the south-eastern margin of the Puna plateau. The study area is characterized by a Precambrian low-grade metamorphic basement intruded by Ordovician granitoids. These rocks are unconformably covered by a volcano-sedimentary sequence of Miocene age, followed by volcanic and volcaniclastic rocks of Upper Miocene to Plio-Pleistocene age. All these units are cut by two systems of major faults, locally characterized by 15-20 m wide damage zones. The detection of the main tectonic lineaments in the study area was first carried out by classical procedures: image sharpening of Landsat 7 ETM+ images, directional filters applied to ASTER images, analysis of medium-resolution Digital Elevation Models (SRTM and ASTER GDEM) and hillshade interpretation. In addition, a new approach to fault zone identification, based on multispectral satellite image classification, has been tested in the Laguna Blanca area and in other sectors of the south-central Andes. In this perspective, several prominent fault zones affecting basement and granitoid rocks have been sampled. The collected fault gouge samples have been analyzed with a Field-Pro spectrophotometer mounted on a goniometer. We acquired bidirectional reflectance spectra, from 0.35μm to 2.5μm with 1nm spectral sampling, of the sampled fault rocks

  13. Kinematics analysis and simulation of a new underactuated parallel robot

    Directory of Open Access Journals (Sweden)

    Wenxu YAN

    2017-04-01

    Full Text Available In a traditional robot, the number of degrees of freedom is equal to the number of driving motors, which causes defects such as low efficiency. To overcome that problem, a new underactuated parallel robot based on the traditional parallel robot is presented. The structural characteristics and working principles of the underactuated parallel robot are analyzed. The forward and inverse solutions are derived by way of space analytic geometry and vector algebra. The kinematics model is established, and MATLAB is applied to verify the accuracy of the forward and inverse solutions and to identify the optimal workspace. The simulation results show that the robot can switch between three and four degrees of freedom when the number of driving motors is three, improving the efficiency of robot grasping, with the characteristics of a large workspace, high-speed operation, high positioning accuracy, low manufacturing cost and so on, and it will have a wide range of industrial applications.

  14. Fault tree analysis of loss of cooling to a HALW storage tank

    International Nuclear Information System (INIS)

    Nomura, Yasushi

    1992-01-01

    The results of scenario identification, fault tree construction and analysis for a loss-of-cooling accident in a High Activity Liquid Waste (HALW) storage tank of a typical model reprocessing facility are presented, together with considerations of improving system reliability by changing the model design. Model plant data, basic failure frequency data and a fault tree analysis code named FTL were introduced from NUKEM GmbH, Germany. They were thoroughly reviewed and reevaluated at JAERI, and improved for application to Japanese facilities. A general systematic method for constructing fault trees is used to avoid missing scenarios; thus all of the 10 conceivable accident scenarios for 'HALW storage tank without cooling, HALW boiling' are identified, and the total failure frequency is calculated to lie in the 90% confidence interval of (1.1-5.8) × 10⁻⁶/yr for the German model plant. (author)

  15. Topic Correlation Analysis for Bearing Fault Diagnosis Under Variable Operating Conditions

    Science.gov (United States)

    Chen, Chao; Shen, Fei; Yan, Ruqiang

    2017-05-01

    This paper presents a Topic Correlation Analysis (TCA) based approach for bearing fault diagnosis. In TCA, a Joint Mixture Model (JMM), a model which adapts Probabilistic Latent Semantic Analysis (PLSA), is constructed first. Then, JMM models the shared and domain-specific topics using a “fault vocabulary”. After that, the correlations between the two kinds of topics are computed and used to build a mapping matrix. Furthermore, a new shared space spanned by the shared and mapped domain-specific topics is set up, in which the distribution gap between different domains is reduced. Finally, a classifier is trained with the mapped features, which follow a different distribution, and the trained classifier is tested on target bearing data. Experimental results justify the superiority of the proposed approach over the state-of-the-art baselines; it can diagnose bearing faults efficiently and effectively under variable operating conditions.

  16. Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant

    Science.gov (United States)

    Ferguson, Thomas A.; Lu, Lixuan

    2017-09-01

    The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study applies Fault Tree (FT) analysis to a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight into the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.

  17. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, for 'development of damage evaluation method of structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research) we examine computational method related to super parallel computation system which is coupled with material strength theory based on microscopic fracture mechanics for latent cracks and continuum structural model to develop new structural reliability evaluation methods for ceramic structures. This technical report is the review results regarding probabilistic structural mechanics theory, basic terms of formula and program methods of parallel computation which are related to principal terms in basic design of computational mechanics program. (author)

  19. Fault Tolerant Control System Design Using Automated Methods from Risk Analysis

    DEFF Research Database (Denmark)

    Blanke, M.

    Fault tolerant controls have the ability to be resilient to simple faults in control loop components.

  20. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
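
    As a worked instance of the quantitative evaluation techniques the handbook covers, the snippet below computes a top-event probability from minimal cut sets. The tree, cut sets and basic-event probabilities are invented for illustration, and basic events are assumed independent.

```python
import math

# hypothetical basic events and minimal cut sets for TOP = A OR (B AND C)
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
min_cut_sets = [{"A"}, {"B", "C"}]

# probability of each minimal cut set: product over its basic events
cut_probs = [math.prod(p[e] for e in cs) for cs in min_cut_sets]

# rare-event approximation (a slight over-estimate) and the value with
# the cross term removed; here the two cut sets share no events, so the
# second expression is exact
rare_event = sum(cut_probs)
top_prob = 1.0 - math.prod(1.0 - q for q in cut_probs)
print(f"rare-event bound: {rare_event:.4e}, top event: {top_prob:.4e}")
```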

  1. Resonance analysis in parallel voltage-controlled Distributed Generation inverters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Blaabjerg, Frede; Chen, Zhe

    2013-01-01

    Thanks to the fast responses of the inner voltage and current control loops, the dynamic behaviors of parallel voltage-controlled Distributed Generation (DG) inverters not only rely on the stability of load sharing among them, but are also subject to the interactions between the voltage control loops

  2. Analytical and experimental analysis of a parallel leaf spring guidance

    NARCIS (Netherlands)

    Meijaard, Jacob Philippus; Brouwer, Dannis Michel; Jonker, Jan B.; Denier, J.; Finn, M.

    2008-01-01

    A parallel leaf spring guidance is defined as a benchmark problem for flexible multibody formalisms and codes. The mechanism is loaded by forces and an additional moment or misalignment. Buckling loads, changes in compliance and frequencies, and large-amplitude vibrations are calculated. A

  3. Method of reliability allocation based on fault tree analysis and fuzzy math in nuclear power plants

    International Nuclear Information System (INIS)

    Chen Zhaobing; Deng Jian; Cao Xuewu

    2005-01-01

    Reliability allocation is a difficult multi-objective optimization problem. It can not only be applied to determine the reliability characteristics of reactor systems, subsystems and main components, but can also be performed to improve the design, operation and maintenance of nuclear plants. In this paper, fuzzy math, known as one of the powerful tools for fuzzy optimization, and fault tree analysis, deemed one of the effective methods of reliability analysis, are applied to the reliability allocation model to address, respectively, the fuzzy character of some factors and the choice of subsystems. Thus we develop a failure rate allocation model on the basis of fault tree analysis and fuzzy math. For the reliability constraint factors, we choose the six important ones according to the practical needs of conducting the reliability allocation. Subsystems are selected by top-level fault tree analysis to avoid allocating reliability to all equipment and components, including unnecessary parts. During the allocation process, some factors can be calculated or measured quantitatively, while others can only be assessed qualitatively by the expert rating method. We therefore adopt fuzzy decision and dualistic contrast to realize the reliability allocation with the help of fault tree analysis. Finally, the example of the emergency diesel generator's reliability allocation is used to illustrate the reliability allocation model and show that it is simple and applicable. (authors)
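
    A crisp, factor-weighted apportionment in the spirit of the paper can be sketched as follows. The subsystems, the six factor ratings (1-10 scale) and the system target are all hypothetical, and the paper's fuzzy membership and dualistic-contrast steps are reduced here to simple weighted scores.

```python
# Hedged sketch of factor-weighted failure-rate allocation; all numbers
# are invented placeholders, not the paper's emergency diesel generator
# case study.

ratings = {  # six constraint-factor ratings per subsystem (hypothetical)
    "diesel_generator": [8, 9, 4, 6, 7, 5],
    "control_circuit":  [5, 7, 6, 4, 5, 6],
    "fuel_system":      [3, 6, 7, 3, 4, 4],
}
system_failure_rate = 1.0e-4  # per hour, hypothetical system target

# higher score = more complex / less mature, so that subsystem may be
# allotted a larger share of the permissible system failure rate
scores = {name: sum(r) for name, r in ratings.items()}
total = sum(scores.values())
allocation = {name: system_failure_rate * s / total
              for name, s in scores.items()}

for name, lam in allocation.items():
    print(f"{name}: {lam:.2e}/h")
```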

  4. Radiometric dating of brittle fault rocks; illite polytype age analysis and application to the Spanish Pyrenees.

    Science.gov (United States)

    van der Pluijm, B. A.; Haines, S. H.

    2008-12-01

    A variety of approaches have been available to indirectly date the timing of deformation and motion on faults, but few approaches for direct, radiometric dating of shallow crustal fault rocks were available until recently. The growing recognition of clay neomineralization at low temperatures in many fault rocks, particularly the 1Md illite polytype, allows the successful application of Ar dating to these K-bearing phases. In this presentation we will discuss our recent illite age analysis approach (sampling, treatments, analytical methods), and present new results from fault dating along the Spanish Pyrenean orogenic front as an example. X-ray quantification of polytype ratios in three or more size fractions is used to define a mixing line between (1Md illite) authigenic and (2M illite) detrital end-member phases that constrain the fault age and host rock provenance/cooling age for each fault. The common problem of recoil in clays is addressed by encapsulating samples before irradiation. Nine fault gouge ages in the south-central and south-eastern Pyrenees support several contractional pulses in the Pyrenean orogen: 1) Late Cretaceous thrusting (Boixols), 2) Latest Paleocene-Early Eocene deformation (Nogueres Zone and Freser antiformal stack), 3) Middle-Late Eocene deformation (Ripoll syncline, Vallfogona, Gavernie, Abocador and L'Escala thrusts), and 4) Middle Oligocene thrusting in the central portion of the Axial Zone (Llavorsi-Senet). The late Paleocene-Early Eocene and Middle-Late Eocene events may or may not be one single phase, due to slightly overlapping error estimates. The outboard thrusts give Hercynian ages for the detrital component of the fault rock, while the inboard thrusts, which juxtapose metamorphic units, give Cretaceous ages for the non-authigenic component, reflecting the cooling age of the adjacent wallrocks. 
Based on our latest work, the illite polytype dating method complements previously developed illite-smectite dating (van der Pluijm et

  5. Design and analysis of linear fault-tolerant permanent-magnet vernier machines.

    Science.gov (United States)

    Xu, Liang; Ji, Jinghua; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both the PMs and the windings of the proposed machine are on the short mover, while the long stator is made of iron only. Hence, the proposed machine is well suited to long-stroke applications. The key feature of this machine is that a magnetizer splits the two movers, which have modular and complementary structures. Hence, the proposed machine offers an improved symmetrical and sinusoidal back-electromotive-force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, unlike existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density, because neither fault-tolerant teeth nor flux barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis.

  6. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    Science.gov (United States)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

The protection system for a transformer plays a significant role in avoiding severe damage to equipment when a disturbance occurs and in ensuring overall system reliability. One methodology widely used in protection schemes and algorithms is the discrete wavelet transform. However, the behaviour of the wavelet coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore studies and analyzes the characteristics of the wavelet coefficients, in both the high- and low-frequency components of the discrete wavelet transform, when a fault occurs in a transformer. The effect of internal and external faults on the wavelet coefficients of both the faulted and the healthy phases is taken into consideration. The fault signals were obtained from a laboratory-scale experimental setup, modelled after an actual system, in which a transmission line is connected to a transformer. The results show a clear difference between the wavelet-coefficient characteristics in the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform.
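As a rough illustration of this kind of analysis (not the authors' experimental setup), the sketch below compares the energy of the highest-frequency detail coefficients of a clean 50 Hz waveform against the same waveform carrying a short high-frequency transient, using PyWavelets. The sampling rate, wavelet choice, and synthetic fault burst are all assumptions.

```python
import numpy as np
import pywt

fs = 10_000                         # sampling rate in Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
normal = np.sin(2 * np.pi * 50 * t)             # healthy 50 Hz waveform
fault = normal.copy()
burst = 5 * np.exp(-np.linspace(0, 5, 40)) * np.sin(2 * np.pi * 3000 * t[:40])
fault[1000:1040] += burst                       # short high-frequency fault transient

def detail_energy(x, wavelet="db4", level=3):
    """Energy of the highest-frequency detail band (cD1) of a DWT decomposition."""
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA3, cD3, cD2, cD1]
    return float(np.sum(coeffs[-1] ** 2))
```

The high-frequency detail energy of the faulted signal dwarfs that of the healthy one, which is the kind of separation a detection algorithm can threshold on.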

  7. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    International Nuclear Information System (INIS)

    Jiang Li; Shi Tielin; Xuan Jianping

    2012-01-01

Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. It is therefore a significant challenge to extract optimal features that improve classification while simultaneously reducing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold-learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to excavate nonlinear features directly from the original high-dimensional vibration signals, RKMFA constructs two graphs describing intra-class compactness and inter-class separability, combining traditional manifold learning with the Fisher criterion. The optimal low-dimensional features thus obtained are finally fed into the simplest K-nearest-neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms conventional approaches.
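The final pipeline stage (reduce dimension, then classify with KNN) can be sketched as below. RKMFA itself is not reproduced here; PCA stands in as a placeholder reducer, and the synthetic "feature" data and class labels are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy "vibration features": 3 bearing conditions, 40 samples each, 20-D
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(40, 20)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 40)   # 0 = healthy, 1 = inner-race fault, 2 = outer-race fault

# Stand-in for RKMFA: any reducer producing low-dimensional features
Z = PCA(n_components=3).fit_transform(X)

# the "simplest KNN classifier" mentioned in the abstract
clf = KNeighborsClassifier(n_neighbors=1).fit(Z, y)
acc = clf.score(Z, y)
```

In practice a held-out test split would be used instead of scoring on the training data.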

  8. Dynamics Modeling and Analysis of Local Fault of Rolling Element Bearing

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2015-01-01

Full Text Available This paper presents a nonlinear vibration model of rolling element bearings with 5 degrees of freedom, based on Hertz contact theory and the relevant kinematics and dynamics of bearings. The slipping of the balls, the oil-film stiffness, and the nonlinear time-varying stiffness of the bearing are taken into consideration in the model proposed here. A single-point local fault is introduced into the 5-degree-of-freedom nonlinear model through the loss of contact deformation as a ball rolls into and out of the fault location. The functions of spall depth corresponding to defects of different shapes are discussed separately. The ODE solver in Matlab is then used to solve the nonlinear vibration model numerically and simulate the vibration response of rolling element bearings with a local fault. The simulated signals show behavior and patterns similar to those observed in the processed experimental signals, in both the time and frequency domains, which validates the proposed nonlinear vibration model as a generator of typical local-fault signals of rolling element bearings for research on effective fault-diagnostic algorithms.
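A heavily simplified, single-degree-of-freedom version of such a model can be integrated with a standard ODE solver (SciPy here, rather than Matlab's ode solvers). All parameter values, the square-pulse spall model, and the frequencies below are illustrative assumptions, not the paper's 5-DOF formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k0 = 1.0, 40.0, 1.0e6        # mass, damping, base stiffness (illustrative values)
f_vc, eps = 120.0, 0.1             # varying-compliance frequency (Hz) and modulation depth
f_fault, depth = 87.0, 2e-6        # ball-pass (fault) frequency (Hz) and spall depth (m)

def rhs(t, y):
    x, v = y
    k = k0 * (1.0 + eps * np.cos(2 * np.pi * f_vc * t))      # time-varying stiffness
    # extra clearance while a ball traverses the spall (square pulse, 5% duty cycle)
    d = depth if (t * f_fault) % 1.0 < 0.05 else 0.0
    return [v, (-c * v - k * (x - d)) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
x = sol.y[0]                        # displacement history with periodic fault impulses
```

The periodic clearance pulses at the ball-pass frequency are what produce the characteristic fault signature in the simulated vibration response.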

  9. Structural system reliability calculation using a probabilistic fault tree analysis method

    Science.gov (United States)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
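The core calculation can be illustrated with plain Monte Carlo sampling on a small hypothetical fault tree; the paper's adaptive importance sampling and per-event approximation functions are beyond this sketch.

```python
import random

# Hypothetical fault tree: TOP fails if (A AND B) OR C; probabilities invented
p = {"A": 0.02, "B": 0.05, "C": 0.001}

def top_event(state):
    return (state["A"] and state["B"]) or state["C"]

def mc_probability(n=200_000, seed=1):
    """Plain Monte Carlo estimate of the top-event probability."""
    rng = random.Random(seed)
    hits = sum(
        top_event({e: rng.random() < pe for e, pe in p.items()})
        for _ in range(n)
    )
    return hits / n

exact = p["A"] * p["B"] + p["C"] - p["A"] * p["B"] * p["C"]   # inclusion-exclusion
est = mc_probability()
```

Importance sampling becomes worthwhile precisely when the top-event probability is so small that plain sampling like this would need prohibitively many trials.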

  10. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.
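One common formulation of the SDP transform, which may differ in detail from the authors' variant, maps each sample to a polar point whose radius comes from the current amplitude and whose angular deviation comes from a lagged sample, mirrored onto several symmetry axes. The lag, angular gain, and mirror count below are assumed parameters.

```python
import numpy as np

def sdp_points(x, lag=1, gain=np.deg2rad(36), n_mirrors=6):
    """Map a 1-D signal to symmetrized-dot-pattern points (radius, angle)."""
    x = np.asarray(x, dtype=float)
    xn = (x - x.min()) / (x.max() - x.min())        # normalize amplitudes to [0, 1]
    r = xn[:-lag]                                   # radius from the current sample
    dev = xn[lag:] * gain                           # angular deviation from the lagged sample
    pts = []
    for k in range(n_mirrors):
        axis = 2 * np.pi * k / n_mirrors
        pts.append(np.column_stack([r, axis + dev]))   # one arm of the pattern
        pts.append(np.column_stack([r, axis - dev]))   # its mirror image
    return np.vstack(pts)

t = np.linspace(0, 1, 500)
pts = sdp_points(np.sin(2 * np.pi * 10 * t))        # each running state yields its own pattern
```

Rendering these points in polar coordinates gives the snowflake-like image that the template-matching step then compares across running states.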

  11. Analysis on inflowing of the injecting Water in faulted formation

    Directory of Open Access Journals (Sweden)

    Ji Youjun

    2015-06-01

Full Text Available In low-permeability reservoirs, faults and fractures have a significant impact on the effectiveness of water injection: they can lower the efficiency of oil displacement and of the injected water, so that the intended improvement of the recovery factor by water injection is not achieved. In order to reveal the mechanism of channeling of injected water, the research proceeds as follows. First, based on seepage mechanics, fluid mechanics, rock-mass mechanics, and multifield coupling theory, a mathematical model of water-flooding development for low-permeability reservoirs that considers fluid–solid coupling is established and solved numerically; by creating an interface between the seepage simulation procedure and the stress computation program, a feasible method is set up to simulate reservoir development while accounting for deformation of the reservoir stratum. Second, cores are selected to test the stress sensitivity of the reservoir rock, and a permeability–stress relation is proposed to connect the field parameters of the coupling model. Finally, taking the S11 block of the Daqing Oilfield as an example, the seepage field and the deformation of the reservoir stratum are analyzed, the mechanism of leakage of injected water in this block is explained, and advice is given for adjusting the injection–production scheme in future development stages.

  12. Data-driven fault detection for industrial processes canonical correlation analysis and projection based methods

    CERN Document Server

    Chen, Zhiwen

    2017-01-01

Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With ever-increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in the past decades, its application in large-scale industrial processes is limited because it is difficult to build accurate models. Furthermore, motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed, designed from the perspectives of process input and output data, low engineering effort, and wide application scope. A new index is also developed for the performance evaluation of FD methods. Contents: A New Index for Performance Evaluation of FD Methods; CCA-based FD Method for the Monitoring of Stationary Processes; Projection-based FD Method for the Monitoring of Dynamic Processes; Benchmark Study and Real-Time Implementat...

  13. System Analysis by Mapping a Fault-tree into a Bayesian-network

    Science.gov (United States)

    Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.

    2018-05-01

In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree, and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. An equivalent BN of an aircraft landing gear's FT is then developed and analysed. The results show that richer and more accurate information has been obtained through the BN method than through the FT, which demonstrates that the BN is a superior technique for both reliability assessment and fault diagnosis.
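The FT-to-BN mapping can be illustrated on a toy tree: each gate becomes a BN node with a deterministic conditional probability table, and inference by full enumeration over the basic events recovers the top-event probability. The gate structure and probabilities below are hypothetical, not the landing-gear tree from the paper.

```python
from itertools import product

# Basic-event priors (hypothetical values)
prior = {"B1": 0.01, "B2": 0.02, "B3": 0.005}

def top(b):
    """Deterministic CPTs mirroring the FT gates: G1 = B1 OR B2, TOP = G1 AND B3."""
    return (b["B1"] or b["B2"]) and b["B3"]

# Exact BN inference by enumerating all basic-event states
p_top = 0.0
for states in product([False, True], repeat=len(prior)):
    b = dict(zip(prior, states))
    weight = 1.0
    for name in prior:
        weight *= prior[name] if b[name] else 1.0 - prior[name]
    if top(b):
        p_top += weight
```

The BN's advantage over the plain FT shows up when the deterministic CPTs are relaxed to probabilistic ones, or when evidence is entered and propagated backwards for diagnosis; neither changes the enumeration skeleton above.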

  14. Detection of Early Faults in Rotating Machinery Based on Wavelet Analysis

    Directory of Open Access Journals (Sweden)

    Meng Hee Lim

    2013-01-01

Full Text Available This paper explores the application of wavelet analysis to the detection of early changes in rotor dynamics caused by common machinery faults, namely rotor unbalance and minor blade-rubbing conditions. In this paper, the time-synchronised wavelet analysis method was formulated and its effectiveness in detecting machinery faults at an early stage was evaluated through signal simulation and an experimental study. The proposed method provides a more standardised approach to visualising the current state of the rotor dynamics of a rotating machine by taking into account the effects of time shift, wavelet edge distortion, and system noise suppression. The experimental results showed that this method is able to reveal subtle changes in the vibration signal characteristics, in both the frequency-content distribution and the amplitude distortion, caused by minor rotor unbalance and blade-rubbing conditions. Moreover, this method also appeared to be an effective tool for diagnosing and discriminating between different types of machinery faults based on the unique pattern of the wavelet contours. This study shows that the proposed wavelet analysis method is promising for revealing machinery faults at an earlier stage than vibration spectrum analysis.

  15. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
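The routing idea can be sketched with a breadth-first search that refuses defective links and falls back to the second network when the first offers no route; the 5-node topologies and the failed link below are hypothetical, not an actual torus network.

```python
from collections import deque

# Two independent data communications networks over the same 5 compute nodes (hypothetical)
net1 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
net2 = {0: [2, 3], 1: [3, 4], 2: [0, 4], 3: [0, 1], 4: [1, 2]}

def route(adj, src, dst, bad_links=frozenset()):
    """Breadth-first route that refuses to traverse any link marked defective."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                       # rebuild the path back to the source
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent and frozenset((u, v)) not in bad_links:
                parent[v] = u
                queue.append(v)
    return None

# Link 0-1 in network 1 is defective: detour within net1, falling back to net2 if needed
bad = frozenset({frozenset((0, 1))})
path = route(net1, 0, 1, bad) or route(net2, 0, 1)
```

Here the fault administrator only needs the identity of the defective link; the data itself then flows over whichever network still connects the endpoints.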

  16. Capacity Analysis for Parallel Runway through Agent-Based Simulation

    Directory of Open Access Journals (Sweden)

    Yang Peng

    2013-01-01

Full Text Available The parallel runway is the mainstream configuration at Chinese hub airports, the runway is often the bottleneck of an airport, and the evaluation of its capacity is of great importance to airport management. This study outlines the model, multiagent architecture, implementation approach, and software prototype of a simulation system for evaluating runway capacity. Agent Unified Modeling Language (AUML) is applied to illustrate the inbound and departure procedures of aircraft and to design the agent-based model. The model is evaluated experimentally, and its quality is studied in comparison with models created with SIMMOD and Arena. The results appear highly efficient, so the method can be applied to parallel-runway capacity evaluation, and the model offers favorable flexibility and extensibility.

  17. Kinematic Analysis and Optimization of a New Compliant Parallel Micromanipulator

    Directory of Open Access Journals (Sweden)

    Qingsong Xu

    2008-11-01

Full Text Available In this paper, a new three-translational-degrees-of-freedom (DOF) compliant parallel micromanipulator (CPM) is proposed, which has the excellent accuracy of parallel mechanisms with flexure hinges. The system is established by a proper selection of hardware and analyzed via the derived pseudo-rigid-body model. In view of the physical constraints imposed by both the piezoelectric actuators and the flexure hinges, the CPM's reachable workspace is determined analytically, in which a maximum cylinder defined as the usable workspace can be inscribed. Moreover, the optimal design of the CPM, considering the usable workspace size and a global dexterity index simultaneously, is carried out using a direct search method, a genetic algorithm (GA), and particle swarm optimization (PSO), respectively. The simulation results show that PSO is the best method for this optimization, and the results are valuable in the design of a new micromanipulator.

  18. Characteristic Analysis and Fault-Tolerant Control of Circulating Current for Modular Multilevel Converters under Sub-Module Faults

    Directory of Open Access Journals (Sweden)

    Wen Wu

    2017-11-01

Full Text Available A modular multilevel converter (MMC) is considered a promising topology for medium- and high-power applications. However, the significantly increased number of sub-modules (SMs) in each arm also increases the risk of failures. Focusing on fault-tolerant operation of the MMC under SM faults, the operating characteristics of an MMC with different numbers of faulty SMs in the arms are analyzed and summarized in this paper. Based on these characteristics, a novel circulating-current-suppressing (CCS) fault-tolerant control strategy is proposed, comprising a basic control unit (BCU) and a virtual-resistance compensation control unit (VRCCU). It has three main features: (i) it can simultaneously suppress the multiple different-frequency components of the circulating current under different SM fault types; (ii) it helps rapidly limit the transient fault current that arises at the moment a faulty SM is bypassed; and (iii) it needs no extra communication system to acquire the number of faulty SMs. Moreover, by analyzing the stability of the proposed controller using the root-locus criterion, the principle for selecting the value of the virtual resistance is revealed. Finally, the effectiveness of the control strategy is confirmed by simulation and experimental studies under different fault conditions.

  19. A parallel implementation of 3D Zernike moment analysis

    OpenAIRE

    Berjón Díez, Daniel; Arnaldo Duart, Sergio; Morán Burgos, Francisco

    2011-01-01

Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant under translation, rotation, or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also fit to describe volumetric data. They have been used less than their properties might suggest, due to their high computational cost. We present a parallel implementation of 3...

  20. Analysis on fault current limiting and recovery characteristics of a flux-lock type SFCL with an isolated transformer

    International Nuclear Information System (INIS)

    Ko, Seckcheol; Lim, Sung-Hun; Han, Tae-Hee

    2013-01-01

Highlights: ► A countermeasure to reduce the power burden of the HTSC element in the flux-lock type SFCL was studied. ► The power burden of the HTSC element could be decreased by using the isolated transformer. ► The SFCL designed with additive-polarity winding was confirmed to place less power burden on the HTSC element. -- Abstract: The flux-lock type superconducting fault current limiter (SFCL) can quickly limit the fault current shortly after a short circuit occurs and recover its superconducting state after the fault is removed. However, the superconducting element comprising the flux-lock type SFCL can be destroyed when a high fault current passes through the SFCL. Therefore, a countermeasure to control the fault current and protect the superconducting element is required. In this paper, a flux-lock type SFCL with an isolated transformer, consisting of two parallel-connected coils on an iron core and an isolated transformer connected in series with one of the two coils, is proposed, and a short-circuit experimental device was constructed to analyze the fault-current-limiting and recovery characteristics of this SFCL. The short-circuit tests confirmed that the flux-lock type SFCL with the isolated transformer performs more effective fault current limiting and recovery than the flux-lock type SFCL without the isolated transformer, from the viewpoint of quench occurrence and the recovery time of the SFCL

  1. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    Science.gov (United States)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit, where the failures considered are Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability is determined using Fault Tree Analysis (FTA), and the impact of the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component were determined, yielding a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced, by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, a failure probability of 4.22 × 10⁻⁶ was obtained.

  2. Failure mode effect analysis and fault tree analysis as a combined methodology in risk management

    Science.gov (United States)

    Wessiani, N. A.; Yoshio, F.

    2018-04-01

Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most studies choose only one of these two methods for their risk-management methodology, whereas combining the two reduces the drawbacks each method has when implemented separately. This paper aims to combine FMEA and FTA into one risk-assessment methodology. A case study in a metal company illustrates how this methodology can be implemented; in the case study, the combined methodology assesses the internal risks that occur in the production process. Those internal risks should then be mitigated based on their risk levels.
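The FMEA half of such a methodology typically ranks failure modes by the Risk Priority Number, RPN = Severity × Occurrence × Detection, and forwards the highest-ranked modes to deeper analysis (e.g. FTA). A sketch with invented ratings and an assumed mitigation threshold of 100:

```python
# Hypothetical failure modes for a metal-production process (ratings invented)
modes = [
    {"mode": "furnace overheat", "S": 9, "O": 3, "D": 4},
    {"mode": "conveyor jam",     "S": 4, "O": 6, "D": 2},
    {"mode": "coolant leak",     "S": 7, "O": 2, "D": 5},
]
for m in modes:
    m["RPN"] = m["S"] * m["O"] * m["D"]          # Risk Priority Number

ranked = sorted(modes, key=lambda m: m["RPN"], reverse=True)
to_mitigate = [m["mode"] for m in ranked if m["RPN"] >= 100]   # assumed action threshold
```

In a combined methodology, the modes above the threshold would then be expanded into fault trees to trace their root causes.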

  3. Post Fire Safe Shutdown Analysis Using a Fault Tree Logic Model

    International Nuclear Information System (INIS)

    Yim, Hyun Tae; Park, Jun Hyun

    2005-01-01

    Every nuclear power plant should have its own fire hazard analysis including the fire safe shutdown analysis. A safe shutdown (SSD) analysis is performed to demonstrate the capability of the plant to safely shut down for a fire in any given area. The basic assumption is that there will be fire damage to all cables and equipment located within a common fire area. When evaluating the SSD capabilities of the plant, based on a review of the systems, equipment and cables within each fire area, it should be determined which shutdown paths are either unaffected or least impacted by a postulated fire within the fire area. Instead of seeking a success path for safe shutdown given all cables and equipment damaged by a fire, there can be an alternative approach to determine the SSD capability: fault tree analysis. This paper introduces the methodology for fire SSD analysis using a fault tree logic model

  4. What does fault tolerant Deep Learning need from MPI?

    Energy Technology Data Exchange (ETDEWEB)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.; Daily, Jeffrey A.

    2017-09-25

Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithm for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: What is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); a need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration for several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe for using ULFM-based implementation. Our evaluation using the ImageNet dataset and AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI based ULFM.

  5. Evaluating parallel relational databases for medical data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; Wilson, Andrew T.

    2012-03-01

    Hospitals have always generated and consumed large amounts of data concerning patients, treatment and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This raises naturally the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

  6. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  7. Analysis and Design of Embedded Controlled Parallel Resonant Converter

    Directory of Open Access Journals (Sweden)

    P. CHANDRASEKHAR

    2009-07-01

    Full Text Available Microcontroller based constant frequency controlled full bridge LC parallel resonant converter is presented in this paper for electrolyser application. An electrolyser is a part of renewable energy system which generates hydrogen from water electrolysis. The DC power required by the electrolyser system is supplied by the DC-DC converter. Owing to operation under constant frequency, the filter designs are simplified and utilization of magnetic components is improved. This converter has advantages like high power density, low EMI and reduced switching stresses. DC-DC converter system is simulated using MATLAB, Simulink. Detailed simulation results are presented. The simulation results are compared with the experimental results.

  8. Machine Fault Detection Based on Filter Bank Similarity Features Using Acoustic and Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Mauricio Holguín-Londoño

    2016-01-01

    Full Text Available Vibration and acoustic analysis actively support the nondestructive and noninvasive fault diagnostics of rotating machines at early stages. Nonetheless, the acoustic signal is less used because of its vulnerability to external interferences, hindering an efficient and robust analysis for condition monitoring (CM. This paper presents a novel methodology to characterize different failure signatures from rotating machines using either acoustic or vibration signals. Firstly, the signal is decomposed into several narrow-band spectral components applying different filter bank methods such as empirical mode decomposition, wavelet packet transform, and Fourier-based filtering. Secondly, a feature set is built using a proposed similarity measure termed cumulative spectral density index and used to estimate the mutual statistical dependence between each bandwidth-limited component and the raw signal. Finally, a classification scheme is carried out to distinguish the different types of faults. The methodology is tested in two laboratory experiments, including turbine blade degradation and rolling element bearing faults. The robustness of our approach is validated contaminating the signal with several levels of additive white Gaussian noise, obtaining high-performance outcomes that make the usage of vibration, acoustic, and vibroacoustic measurements in different applications comparable. As a result, the proposed fault detection based on filter bank similarity features is a promising methodology to implement in CM of rotating machinery, even using measurements with low signal-to-noise ratio.
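The filter-bank step can be approximated as follows. The similarity measure here is a simple band-energy ratio standing in for the paper's cumulative spectral density index, and the two-tone test signal, filter order, and band edges are all invented for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1_000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)   # two-tone test signal

def band_energy_ratio(x, lo, hi, fs):
    """Share of total signal energy carried by one narrow band of the filter bank."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    xb = sosfiltfilt(sos, x)
    return float(np.sum(xb ** 2) / np.sum(x ** 2))

bands = [(10, 60), (60, 120), (120, 240)]
features = [band_energy_ratio(x, lo, hi, fs) for lo, hi in bands]
```

The resulting per-band feature vector is what a classifier would then use to distinguish failure signatures; empirical mode decomposition or wavelet packets would replace the Butterworth bank without changing the downstream steps.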

  9. Seismic Margin Assessment for Research Reactor using Fragility based Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwag, Shinyoung; Oh, Jinho; Lee, Jong-Min; Ryu, Jeong-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

The research reactor is often subjected to external hazards during its design lifetime. In particular, a seismic event can be a significant threat, causing failure of the structural system of the research reactor, and such a failure can extend to direct damage of the reactor core. For this purpose, a fault tree for structural-system failure leading to core damage under an earthquake accident is developed. The failure probabilities of the basic events are evaluated as fragility curves with log-normal distributions. Finally, the plant-level seismic margin is investigated through fault tree analysis combined with the fragility data, and the critical path is identified. This plant-level probabilistic seismic margin assessment, using fragility-based fault tree analysis, quantifies the safety of the research reactor against a seismic hazard.
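The fragility curves referred to are commonly lognormal, P(failure | PGA = a) = Φ(ln(a/Am)/β) with median capacity Am and log-standard deviation β. A sketch combining hypothetical component fragilities under a weakest-link (series) assumption, which is one simple way a fault tree can aggregate them:

```python
from math import log, sqrt, erf

def fragility(a, Am, beta):
    """Lognormal fragility curve: P(failure | PGA = a) with median capacity Am."""
    z = log(a / Am) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal CDF

# Hypothetical component fragilities: (median capacity in g, log-standard deviation)
components = {"pool structure": (1.2, 0.40), "piping": (0.9, 0.35), "support frame": (1.5, 0.45)}

a = 0.5   # review-level peak ground acceleration, in g (assumed)
# weakest-link (series) assumption: core damage if any one component fails
p_survive = 1.0
for Am, beta in components.values():
    p_survive *= 1.0 - fragility(a, Am, beta)
p_core_damage = 1.0 - p_survive
```

Repeating the combination over a range of accelerations yields the plant-level fragility curve from which a seismic margin can be read off.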

  10. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

Full Text Available Because process equipment in the process industry is interlinked, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They not only help find the root cause when a plant-wide disturbance occurs, but also reveal the evolution of an abnormal event propagating through the plant. This paper addresses the information-flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and achieves satisfying results in predicting the frequently occurring “nitrogen-block” fault.
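TDMI can be estimated from a 2-D histogram. In the sketch below (synthetic data, assumed bin count), the delay that maximizes I(x(t); y(t+d)) recovers an injected 7-sample propagation lag between two variables.

```python
import numpy as np

def tdmi(x, y, delay, bins=16):
    """Time-delayed mutual information I(x(t); y(t+delay)) from a 2-D histogram (nats)."""
    a, b = x[:-delay], y[delay:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 7) + 0.1 * rng.normal(size=5000)   # y lags x by 7 samples plus noise

# the delay that maximizes TDMI estimates the propagation time between the two variables
best = max(range(1, 15), key=lambda d: tdmi(x, y, d))
```

Applied pairwise across process variables, such lag estimates supply both the edge directions and the delay labels of a time-delayed signed digraph.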

  11. STEM - software test and evaluation methods: fault detection using static analysis techniques

    International Nuclear Information System (INIS)

    Bishop, P.G.; Esp, D.G.

    1988-08-01

    STEM is a software reliability project with the objective of evaluating a number of fault detection and fault estimation methods which can be applied to high-integrity software. This report gives some interim results of applying both manual and computer-based static analysis techniques, in particular SPADE, to an early CERL version of the PODS software containing known faults. The main results of this study are as follows. The scope for thorough verification is determined by the quality of the design documentation; documentation defects become especially apparent when verification is attempted. For well-defined software, the thoroughness of SPADE-assisted verification for detecting a large class of faults was successfully demonstrated. For imprecisely defined software (not recommended for high-integrity systems), the use of tools such as SPADE is difficult and inappropriate. Analysis and verification tools are helpful through their reliability and thoroughness; however, they are designed to assist, not replace, a human in validating software. Manual inspection can still reveal errors (such as errors in specification and errors of transcription of system constants) which current tools cannot detect. There is a need for tools to automatically detect typographical errors in system constants, for example by reporting outliers to patterns. To obtain the maximum benefit from advanced tools, they should be applied during software development (when verification problems can be detected and corrected) rather than retrospectively. (author)

  12. Fault tree analysis: A survey of the state-of-the-art in modeling, analysis and tools

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Stoelinga, Mariëlle Ida Antoinette

    2015-01-01

    Fault tree analysis (FTA) is a very prominent method to analyze the risks related to safety and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software tools.
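    For readers new to FTA, the basic quantification that the surveyed techniques build on can be sketched in a few lines: given the minimal cut sets of a coherent fault tree and independent basic-event probabilities, the top-event probability follows by inclusion-exclusion. The tiny tree below is invented for illustration and is not taken from the survey.

    ```python
    from itertools import combinations

    def top_event_probability(cut_sets, p):
        """Exact top-event probability of a coherent fault tree via
        inclusion-exclusion over its minimal cut sets.
        cut_sets: list of sets of basic-event names.
        p: dict mapping event name -> failure probability.
        Assumes statistically independent basic events."""
        total = 0.0
        for k in range(1, len(cut_sets) + 1):
            for combo in combinations(cut_sets, k):
                events = set().union(*combo)   # union of the chosen cut sets
                prob = 1.0
                for e in events:
                    prob *= p[e]
                total += (-1) ** (k + 1) * prob
        return total

    # TOP = (A AND B) OR (A AND C)
    p = {"A": 0.1, "B": 0.2, "C": 0.3}
    cuts = [{"A", "B"}, {"A", "C"}]
    prob = top_event_probability(cuts, p)   # 0.02 + 0.03 - 0.006 = 0.044
    ```

    Inclusion-exclusion is exponential in the number of cut sets, which is one reason the tools covered by the survey rely on compact representations such as BDDs for realistic trees.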

  13. Fault Tree Analysis: A survey of the state-of-the-art in modeling, analysis and tools

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Stoelinga, Mariëlle Ida Antoinette

    2014-01-01

    Fault tree analysis (FTA) is a very prominent method to analyze the risks related to safety and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software tools.

  14. α-Cut method based importance measure for criticality analysis in fuzzy probability – Based fault tree analysis

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Widodo, Surip; Tjahjono, Hendro

    2017-01-01

    Highlights: •FPFTA deals with epistemic uncertainty using fuzzy probability. •Criticality analysis is important for reliability improvement. •An α-cut method based importance measure is proposed for criticality analysis in FPFTA. •The α-cut method based importance measure utilises α-cut multiplication, α-cut subtraction, and the area defuzzification technique. •Benchmarking confirms that the proposed method is feasible for criticality analysis in FPFTA. -- Abstract: Fuzzy probability-based fault tree analysis (FPFTA) has recently been developed and proposed to deal with the limitations of conventional fault tree analysis. In FPFTA, the reliabilities of basic events, intermediate events and the top event are characterized by fuzzy probabilities. Furthermore, the quantification of the FPFTA is based on the fuzzy multiplication rule and the fuzzy complementation rule to propagate uncertainties from the basic events to the top event. Since the objective of fault tree analysis is to improve the reliability of the system being evaluated, it is necessary to find the weakest path in the system. For this purpose, criticality analysis can be implemented. Various importance measures, which are based on conventional probabilities, have been developed and proposed for criticality analysis in fault tree analysis. However, none of those importance measures can be applied for criticality analysis in FPFTA, which is based on fuzzy probability. To be fully applied in nuclear power plant probabilistic safety assessment, FPFTA needs to have its corresponding importance measure. The objective of this study is to develop an α-cut method based importance measure to evaluate and rank the importance of basic events for criticality analysis in FPFTA. To demonstrate the applicability of the proposed measure, a case study is performed and its results are benchmarked against the results generated by four well-known importance measures in conventional fault tree analysis. The results confirm that the proposed measure is feasible for criticality analysis in FPFTA.
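    The α-cut propagation that FPFTA-style methods build on can be sketched as interval arithmetic on triangular fuzzy probabilities. The gate formulas below are the standard fuzzy multiplication and complementation rules; the triangular numbers themselves are invented for the example, and the paper's importance measure and area defuzzification step are not reproduced.

    ```python
    def alpha_cut(tfn, alpha):
        """α-cut interval [L(α), U(α)] of a triangular fuzzy number (a, b, c)."""
        a, b, c = tfn
        return (a + alpha * (b - a), c - alpha * (c - b))

    def and_gate(cut1, cut2):
        # Fuzzy multiplication on α-cut intervals (endpoints are monotone).
        return (cut1[0] * cut2[0], cut1[1] * cut2[1])

    def or_gate(cut1, cut2):
        # Fuzzy complementation rule: 1 - (1 - p1)(1 - p2), per endpoint.
        return (1 - (1 - cut1[0]) * (1 - cut2[0]),
                1 - (1 - cut1[1]) * (1 - cut2[1]))

    # TOP = e1 OR e2, with triangular fuzzy basic-event probabilities
    e1, e2 = (0.01, 0.02, 0.03), (0.02, 0.04, 0.06)
    support = or_gate(alpha_cut(e1, 0.0), alpha_cut(e2, 0.0))  # widest interval
    core = or_gate(alpha_cut(e1, 1.0), alpha_cut(e2, 1.0))     # crisp value
    ```

    Sweeping α from 0 to 1 reconstructs the full fuzzy top-event probability as a nest of intervals; an importance measure can then compare such intervals with and without each basic event.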

  15. Parallel imports of hospital pharmaceuticals: An empirical analysis of price effects from parallel imports and the design of procurement procedures in the Danish hospital sector

    OpenAIRE

    Hostenkamp, Gisela; Kronborg, Christian; Arendt, Jacob Nielsen

    2012-01-01

    We analyse pharmaceutical imports in the Danish hospital sector. In this market medicines are publicly tendered using first-price sealed-bid procurement auctions. We analyse whether parallel imports have an effect on pharmaceutical prices and whether the way tenders were organised matters for the competitive effect of parallel imports on prices. Our theoretical analysis shows that the design of the procurement rules affects both market structure and pharmaceutical prices. Parallel imports may...

  16. Structural Analysis Approach to Fault Diagnosis with Application to Fixed-wing Aircraft Motion

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh

    2002-01-01

    The paper presents a structural analysis based method for fault diagnosis purposes. The method uses the structural model of the system and utilizes the matching idea to extract the system's inherent redundant information. The structural model is represented by a bipartite directed graph. FDI possibilities are examined by further analysis of the obtained information. The method is illustrated by applying it to the LTI model of motion of a fixed-wing aircraft.

  17. Structural Analysis Approach to Fault Diagnosis with Application to Fixed-wing Aircraft Motion

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh

    2001-01-01

    The paper presents a structural analysis based method for fault diagnosis purposes. The method uses the structural model of the system and utilizes the matching idea to extract the system's inherent redundant information. The structural model is represented by a bipartite directed graph. FDI possibilities are examined by further analysis of the obtained information. The method is illustrated by applying it to the LTI model of motion of a fixed-wing aircraft.

  18. A comparative critical analysis of modern task-parallel runtimes.

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Kyle Bruce; Stark, Dylan; Murphy, Richard C.

    2012-12-01

    The rise in node-level parallelism has increased interest in task-based parallel runtimes for a wide array of application areas. Applications have a wide variety of task spawning patterns which frequently change during the course of application execution, based on the algorithm or solver kernel in use. Task scheduling and load balance regimes, however, are often highly optimized for specific patterns. This paper uses four basic task spawning patterns to quantify the impact of specific scheduling policy decisions on execution time. We compare the behavior of six publicly available tasking runtimes: Intel Cilk, Intel Threading Building Blocks (TBB), Intel OpenMP, GCC OpenMP, Qthreads, and High Performance ParalleX (HPX). With the exception of Qthreads, the runtimes prove to have schedulers that are highly sensitive to application structure. No runtime is able to provide the best performance in all cases, and those that do provide the best performance in some cases, unfortunately, provide extremely poor performance when the application structure does not match the scheduler's assumptions.

  19. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Science.gov (United States)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.

  20. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

    Digitalization of instruments and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. It does, however, make the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several drawbacks. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computational tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and we use model checking to automate the reasoning process of FTA.

  1. The integration of expert-defined importance factors to enrich Bayesian Fault Tree Analysis

    International Nuclear Information System (INIS)

    Darwish, Molham; Almouahed, Shaban; Lamotte, Florent de

    2017-01-01

    This paper proposes an analysis of a hybrid Bayesian-Importance model for system designers to improve the quality of services related to Active Assisted Living Systems. The proposed model is based on two factors: a failure probability measure of the different service components and an expert-defined degree of importance that each component holds for the success of the corresponding service. The proposed approach advocates the integration of expert-defined importance factors to enrich the Bayesian Fault Tree Analysis (FTA) approach. The evaluation of the proposed approach is conducted using the fault tree analysis formalism, where the undesired state of a system is analyzed using Boolean logic mechanisms to combine a series of lower-level events.

  2. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    Institute of Scientific and Technical Information of China (English)

    TONG Lili; CAO Xuewu

    2008-01-01

    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article. And the fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application to the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.

  3. A Signal Based Triangular Structuring Element for Mathematical Morphological Analysis and Its Application in Rolling Element Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhaowen Chen

    2014-01-01

    Full Text Available Mathematical morphology (MM) is an efficient nonlinear signal processing tool. It can be adopted to extract fault information from a bearing signal according to a structuring element (SE). Since the bearing signal features differ for every unique cause of failure, the SE should be well tailored to extract the fault feature from a particular signal. In the following, a signal-based triangular SE according to the statistics of the magnitude of a vibration signal is proposed, together with an associated methodology, which processes the bearing signal by MM analysis based on the proposed SE to get the morphology spectrum of the signal. A correlation analysis on the morphology spectrum is then employed to obtain the final classification of bearing faults. The classification performance of the proposed method is evaluated on a set of bearing vibration signals with inner race, ball, and outer race faults, respectively. Results show that all faults can be detected clearly and correctly. Compared with a commonly used flat SE, the correlation analysis on the morphology spectrum with the proposed SE gives better performance at fault diagnosis of bearings, especially in identifying the location of an outer race fault and the level of fault severity.
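    The grey-scale morphology at the core of this record can be sketched on a 1-D signal as follows. The SE length, the scaling of its height from the signal statistics, and the toy signal are all assumptions for illustration, not the authors' parameters, and the morphology spectrum and correlation steps are omitted.

    ```python
    import numpy as np

    def erode(signal, se):
        """Grey-scale erosion: sliding minimum of (signal - SE)."""
        n, m = len(signal), len(se)
        padded = np.pad(signal, m // 2, mode="edge")
        return np.array([np.min(padded[i:i + m] - se) for i in range(n)])

    def dilate(signal, se):
        """Grey-scale dilation: sliding maximum of (signal + SE)."""
        n, m = len(signal), len(se)
        padded = np.pad(signal, m // 2, mode="edge")
        return np.array([np.max(padded[i:i + m] + se) for i in range(n)])

    def opening(signal, se):
        """Morphological opening: erosion followed by dilation."""
        return dilate(erode(signal, se), se)

    def triangular_se(length, height):
        """Symmetric triangular SE rising linearly to `height` at the centre."""
        half = np.linspace(0.0, height, length // 2 + 1)
        return np.concatenate([half, half[-2::-1]])

    t = np.linspace(0, 1, 400)
    sig = np.sin(2 * np.pi * 5 * t)
    sig[::50] += 2.0                        # impulsive fault-like spikes
    se = triangular_se(9, 0.1 * np.std(sig))  # height tied to signal statistics
    smoothed = opening(sig, se)             # opening suppresses positive spikes
    ```

    Opening removes narrow positive impulses such as bearing fault spikes, so the residual `sig - smoothed` concentrates the impulsive fault content.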

  4. Enhanced DET-Based Fault Signature Analysis for Reliable Diagnosis of Single and Multiple-Combined Bearing Defects

    Directory of Open Access Journals (Sweden)

    In-Kyu Jeong

    2015-01-01

    Full Text Available To identify cylindrical roller bearing failures early, this paper proposes a comprehensive bearing fault diagnosis method, which consists of spectral kurtosis analysis for finding the most informative subband signal that best represents abnormal symptoms of the bearing failures, fault signature calculation using this subband signal, enhanced distance evaluation technique (EDET) based fault signature analysis that outputs the most discriminative fault features for accurate diagnosis, and identification of various single and multiple-combined cylindrical roller bearing defects using the simplified fuzzy adaptive resonance map (SFAM). The proposed comprehensive bearing fault diagnosis methodology is effective for accurate bearing fault diagnosis, yielding an average classification accuracy of 90.35%. In this paper, the proposed EDET specifically addresses shortcomings in the conventional distance evaluation technique (DET) by accurately estimating the sensitivity of each fault signature for each class. To verify the efficacy of the EDET-based fault signature analysis for accurate diagnosis, a diagnostic performance comparison is carried out between the proposed EDET and the conventional DET in terms of average classification accuracy. In fact, the proposed EDET achieves up to a 106.85% performance improvement over the conventional DET in average classification accuracy.
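    The conventional DET that this record's EDET improves upon is essentially a between-class versus within-class spread ratio computed per feature. The sketch below shows that baseline idea on synthetic features; the EDET refinement itself is not reproduced, and every name and number is illustrative.

    ```python
    import numpy as np

    def det_sensitivity(features, labels):
        """DET-style per-feature sensitivity: spread of the class means
        divided by the average within-class spread (larger means the
        feature discriminates the fault classes better)."""
        classes = np.unique(labels)
        means = np.array([features[labels == c].mean(axis=0) for c in classes])
        within = np.mean([features[labels == c].std(axis=0) for c in classes],
                         axis=0)
        between = means.std(axis=0)
        return between / (within + 1e-12)

    rng = np.random.default_rng(3)
    # Feature 0 separates the two synthetic 'fault classes'; feature 1 is noise
    f0 = np.concatenate([rng.normal(0, 1, 100), rng.normal(6, 1, 100)])
    f1 = rng.normal(0, 1, 200)
    X = np.column_stack([f0, f1])
    y = np.array([0] * 100 + [1] * 100)
    s = det_sensitivity(X, y)
    ```

    Ranking features by such sensitivities, and keeping only the top ones, is what feeds the subsequent SFAM classifier in pipelines of this kind.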

  5. Using recurrence plot analysis for software execution interpretation and fault detection

    Science.gov (United States)

    Mosdorf, M.

    2015-09-01

    This paper shows a method targeted at software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are subject to further processing with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. This method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
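    A recurrence plot over an execution trace can be built in a few lines. The sketch below uses integer "opcode IDs" as a stand-in for real assembly traces and reports the recurrence rate, one of the simplest scalar coefficients that could then be fed into PCA; the threshold and trace contents are invented.

    ```python
    import numpy as np

    def recurrence_plot(trace, eps):
        """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps."""
        x = np.asarray(trace, dtype=float)
        return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

    def recurrence_rate(R):
        """Fraction of recurrent points in the plot."""
        return R.sum() / R.size

    # Toy 'traces': a periodic loop of opcodes vs. a non-repeating run
    periodic = np.tile([1, 4, 2, 7], 50)   # 200 samples, period 4
    drifting = np.arange(200)              # no value ever repeats
    R1 = recurrence_plot(periodic, eps=0.5)
    R2 = recurrence_plot(drifting, eps=0.5)
    ```

    The periodic trace yields a dense, regularly textured plot (recurrence rate 0.25 here), while the non-repeating trace recurs only on the diagonal; coefficients of this kind are what separate, say, Bubble Sort traces from FIR traces.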

  6. AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment

    Science.gov (United States)

    2014-10-01

    This report describes AADL fault modeling and analysis within an ARP4761 safety assessment, including the mapping of AADL fault models to the OpenFTA file format and to a generic XML format, AADL-to-FTA mapping rules, and open issues. The ARP4761 methods covered include the Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs).

  7. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    Science.gov (United States)

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  8. A study on the fault diagnostic techniques for reactor internal structures using neutron noise analysis

    International Nuclear Information System (INIS)

    Kim, Tae Ryong; Jeong, Seong Ho; Park, Jin Ho; Park, Jin Suk

    1994-08-01

    Unfavorable phenomena, such as flow-induced vibration and the aging process in reactor internals, cause degradation of structural integrity and may loosen some mechanical binding components, which might impact other equipment and components or cause flow blockage. Since these malfunctions and potential failures change the reactor noise signal, it is necessary to analyze the reactor noise signal for early fault diagnosis from the point of view of safety and plant economics. The objectives of this study are to establish fault diagnostic techniques for reactor internal structures such as the thermal shield (TS), and to develop a data acquisition and signal processing software system. In the first year of this study, an analysis technique for reactor internal vibration using the reactor noise was proposed. With the proposed technique and the reactor noise signals (ex-core neutron and acceleration), the dynamic characteristics of the Ulchin-1 reactor internals were obtained and compared with those of Tricastin-1, which is the prototype of Ulchin-1. In the second year, a PC-based expert system for reactor internals fault diagnosis was developed, which included data acquisition, signal processing, and feature extraction functions, and represented diagnostic knowledge by IF-THEN rules. To determine the effect of the faults, the reactor internals of Ulchin-1 were modeled using FEM and simulated with an artificial defect given in the hold-down spring. The trend in the dynamic characteristics of the reactor internals was also observed during one fuel cycle to determine the effect of boron concentration. 100 figs, 7 tabs, 18 refs. (Author)

  9. Stealthy Hardware Trojan Based Algebraic Fault Analysis of HIGHT Block Cipher

    Directory of Open Access Journals (Sweden)

    Hao Chen

    2017-01-01

    Full Text Available HIGHT is a lightweight block cipher which has been adopted as a standard block cipher. In this paper, we present a bit-level algebraic fault analysis (AFA) of HIGHT, where the faults are injected by a stealthy HT. The fault model in our attack assumes that the adversary is able to insert an HT that flips a specific bit of a certain intermediate word of the cipher once the HT is activated. The HT is realized with merely 4 registers and has an extremely low activation rate of about 0.000025. We show that the optimal location for inserting the designed HT can be efficiently determined by AFA in advance. Finally, a method is proposed to represent the cipher and the injected faults as a merged set of algebraic equations, and the master key can be recovered by solving the merged equation system with a SAT solver. Our attack, which fully recovers the secret master key of the cipher in 12572.26 seconds, requires three activations of the designed HT. To the best of our knowledge, this is the first Trojan attack on HIGHT.

  10. Machinery Fault Diagnosis Using Two-Channel Analysis Method Based on Fictitious System Frequency Response Function

    Directory of Open Access Journals (Sweden)

    Kihong Shin

    2015-01-01

    Full Text Available Most existing techniques for machinery health monitoring that utilize measured vibration signals usually require measurement points to be as close as possible to the expected fault components of interest. This is particularly important for implementing condition-based maintenance since the incipient fault signal power may be too small to be detected if a sensor is located further away from the fault source. However, a measurement sensor is often not attached to the ideal point due to geometric or environmental restrictions. In such a case, many of the conventional diagnostic techniques may not be successfully applicable. In this paper, a two-channel analysis method is proposed to overcome such difficulty. It uses two vibration signals simultaneously measured at arbitrary points in a machine. The proposed method is described theoretically by introducing a fictitious system frequency response function. It is then verified experimentally for bearing fault detection. The results show that the suggested method may be a good alternative when ideal points for measurement sensors are not readily available.
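    The spectral building block behind a two-channel method like this one is a frequency response function estimated between two simultaneously measured signals, e.g. the standard H1 estimator. The paper's fictitious-system construction is more involved, so treat the sketch below only as that underlying block; the segment count and signals are invented.

    ```python
    import numpy as np

    def h1_frf(x, y, nseg=8):
        """H1 frequency-response estimate between two simultaneously
        measured signals: H1(f) = Sxy(f) / Sxx(f), with the cross- and
        auto-spectra averaged over non-overlapping segments."""
        L = len(x) // nseg
        Sxx = np.zeros(L // 2 + 1)
        Sxy = np.zeros(L // 2 + 1, dtype=complex)
        for k in range(nseg):
            X = np.fft.rfft(x[k * L:(k + 1) * L])
            Y = np.fft.rfft(y[k * L:(k + 1) * L])
            Sxx += (np.conj(X) * X).real
            Sxy += np.conj(X) * Y
        return Sxy / Sxx

    rng = np.random.default_rng(2)
    x = rng.normal(size=4096)   # vibration at measurement point 1
    y = 3.0 * x                 # trivial 'system': a pure gain of 3
    H = h1_frf(x, y)
    ```

    For the trivial gain system the magnitude of H is 3 at every frequency; with real machine data, changes in such a two-channel FRF between arbitrary measurement points are what carry the fault information.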

  11. Temperature-dependent stability of stacking faults in Al, Cu and Ni: first-principles analysis.

    Science.gov (United States)

    Bhogra, Meha; Ramamurty, U; Waghmare, Umesh V

    2014-09-24

    We present comparative analysis of microscopic mechanisms relevant to plastic deformation of the face-centered cubic (FCC) metals Al, Cu, and Ni, through determination of the temperature-dependent free energies of intrinsic and unstable stacking faults along [1 1̄ 0] and [1 2̄ 1] on the (1 1 1) plane using first-principles density-functional-theory-based calculations. We show that vibrational contribution results in significant decrease in the free energy of barriers and intrinsic stacking faults (ISFs) of Al, Cu, and Ni with temperature, confirming an important role of thermal fluctuations in the stability of stacking faults (SFs) and deformation at elevated temperatures. In contrast to Al and Ni, the vibrational spectrum of the unstable stacking fault (USF[1 2̄ 1]) in Cu reveals structural instabilities, indicating that the energy barrier (γusf) along the (1 1 1)[1 2̄ 1] slip system in Cu, determined by typical first-principles calculations, is an overestimate, and its commonly used interpretation as the energy release rate needed for dislocation nucleation, as proposed by Rice (1992 J. Mech. Phys. Solids 40 239), should be taken with caution.

  12. Investigation of the Qadimah Fault in Western Saudi Arabia using Satellite Radar Interferometry and Geomorphology Analysis Techniques

    KAUST Repository

    Smith, Robert

    2012-07-01

    The Qadimah Fault has been mapped as a normal fault running through the middle of a planned $50 billion city. For this reason, there is an urgent need to evaluate the seismic hazard that the fault poses to the new development. Although several geophysical studies have supported the existence of a fault, the driving mechanism remains unclear. While a fault controlled by gravity gliding of the overburden on a mobile salt layer is unlikely to be of concern to the city, one caused by the continued extension of a normal rotational fault due to Red Sea rifting could result in a major earthquake. A number of geomorphology and geodetic techniques were used to better understand the fault. An analysis of topographic data revealed a sharp discontinuity in slope aspect and hanging wall tilting which strongly supports the existence of a normal fault. A GPS survey of an emergent reef platform which revealed a tilted coral surface also indicates that deformation has occurred in the region. An interferometric synthetic aperture radar investigation has also been performed to establish whether active deformation is occurring on the fault. Ground movements that could be consistent with inter-seismic strain accumulation have been observed, although the analysis is restricted by the limited data available. However, a simple fault model suggests that the deformation is unlikely due to continued crustal stretching. This, in addition to the lack of footwall uplift in the topography data, suggests that the fault is more likely controlled by a shallow salt layer. However, more work will need to be done in the future to confirm these findings.

  13. Improved Tensor-Based Singular Spectrum Analysis Based on Single Channel Blind Source Separation Algorithm and Its Application to Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2017-04-01

    Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case that the observed signals are under-determined, a novel approach for single channel blind source separation (SCBSS) based on improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract the multi-fault features from the measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly including the unsatisfactory convergence of TSSA in many cases and the difficulty of accurately estimating the number of source signals. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrix using a first-order optimization approach instead of the original least squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition - singular value decomposition - Bayesian information criterion) method, instead of the SVD in the conventional TSSA, is introduced. To validate the proposed method, we applied it to the analysis of a numerical simulation signal and multi-fault rolling bearing signals.

  14. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

    Digital tomosynthesis is a novel technology that has been developed for various clinical applications. The parallel imaging configuration is utilised in a few tomosynthesis imaging areas such as digital chest tomosynthesis. Recently, the parallel imaging configuration has begun to appear for breast tomosynthesis as well. In this paper, we present an investigation into the computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method to compare and optimise imaging configurations.
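    Shift-and-add, the simpler of the two reconstruction algorithms named in the title, can be sketched directly: each projection is shifted so that structures in the plane of interest align, and the stack is then averaged. The geometry below (integer pixel shifts, a single point feature, one detector row) is a toy assumption, essentially an impulse response experiment in miniature.

    ```python
    import numpy as np

    def shift_and_add(projections, shifts):
        """Shift-and-add reconstruction of one plane: shift each
        projection to align that plane, then average the stack.
        Integer pixel shifts are used for simplicity."""
        recon = np.zeros_like(projections[0], dtype=float)
        for proj, s in zip(projections, shifts):
            recon += np.roll(proj, s, axis=1)
        return recon / len(projections)

    # Toy parallel geometry: a point feature displaced 2 px per view
    n_views, width = 5, 64
    projs = []
    for k in range(n_views):
        p = np.zeros((1, width))
        p[0, 30 + 2 * k] = 1.0
        projs.append(p)
    shifts = [-2 * k for k in range(n_views)]   # undo the per-view shift
    plane = shift_and_add(projs, shifts)
    ```

    The point feature reconstructs to full amplitude at column 30, while features lying in other planes would align at different shifts and be blurred by the averaging; this focus/blur behaviour is exactly what the impulse response analysis characterises.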

  15. Research on vibration signal analysis and extraction method of gear local fault

    Science.gov (United States)

    Yang, X. F.; Wang, D.; Ma, J. F.; Shao, W.

    2018-02-01

    Gears are the main connecting and power transmission parts in mechanical equipment. If a fault occurs, it directly affects the running state of the whole machine and may even endanger personal safety. It is therefore of important theoretical significance and practical value to study the extraction of the gear fault signal and the fault diagnosis of gears. In this paper, taking the local gear fault as the research object, a vibration model of the gear fault mechanism is set up, the vibration mechanism of the local gear fault is derived, and the similarities and differences between the vibration signals of fault-free gears and gears with local faults are analyzed. In the MATLAB environment, a wavelet transform algorithm is used to denoise the fault signal, and the Hilbert transform is used to demodulate the fault vibration signal. The results show that the method can denoise a strongly noisy mechanical vibration signal and extract the local fault feature information from the fault vibration signal.
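    The demodulation step described in this record (Hilbert envelope followed by a spectrum of the envelope) can be sketched in numpy. The mesh and fault-modulation frequencies are invented for the example, the wavelet denoising stage is omitted, and the FFT-based analytic signal below is a minimal stand-in for a library Hilbert transform.

    ```python
    import numpy as np

    def analytic_signal(x):
        """Analytic signal via the FFT, for even-length real signals:
        zero the negative frequencies, double the positive ones."""
        N = len(x)
        X = np.fft.fft(x)
        h = np.zeros(N)
        h[0] = h[N // 2] = 1.0
        h[1:N // 2] = 2.0
        return np.fft.ifft(X * h)

    fs = 12000
    t = np.arange(fs) / fs                 # one second of data
    f_mesh, f_fault = 500.0, 35.0          # hypothetical mesh / fault rates
    # A local gear fault amplitude-modulates the mesh tone
    sig = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) \
        * np.sin(2 * np.pi * f_mesh * t)

    envelope = np.abs(analytic_signal(sig))    # Hilbert envelope demodulation
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
    fault_line = freqs[np.argmax(spec)]        # dominant modulation frequency
    ```

    The dominant line in the envelope spectrum lands at the 35 Hz modulation rate, which is how a local gear fault reveals itself after demodulation even though the raw spectrum is dominated by the mesh tone and its sidebands.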

  16. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  17. The use of fault tree analysis to minimize research reactor downtime

    International Nuclear Information System (INIS)

    Dodd, B.; Wang, C.H.; Anderson, T.V.

    1984-01-01

    For many reasons it is often highly desirable to maintain a research reactor in a continuously operable state and, in the event of any failure, to minimize the length of the reactor downtime. In order to keep the length of future downtimes to less than ten days for the sixteen-year-old OSU TRIGA reactor, a fault tree analysis was performed for all of the systems required to keep the reactor operational. As a result of this analysis, it was possible to determine the critical parts and key components. By examining the availability and delivery times for each of these items, it was then possible to make reasoned decisions about the advance purchase of spare parts. This paper outlines the above process, along with examples of the fault trees developed, and a recent history of the efficacy of this technique. (author)

  18. Selection the Optimum Suppliers Compound Using a Mixed Model of MADM and Fault Tree Analysis

    Directory of Open Access Journals (Sweden)

    Meysam Azimian

    2017-03-01

    Full Text Available In this paper, an integrated approach combining multi-attribute decision making (MADM) and fault tree analysis (FTA) is provided for determining the most reliable combination of suppliers for a strategic product at IUT University. First, the risks of the suppliers are estimated by defining indices for evaluating them, determining their relative status indices, and using the satisfying and SAW methods. Then, the intrinsic risks of the equipment used in the product are quantified and the final integrated risk for the equipment is determined. Finally, across all the different scenarios, the best combination of equipment suppliers is selected by defining the top events and performing fault tree analysis. The contribution of this paper is an integrated MADM/FTA method for determining the most reliable suppliers so as to minimize the final risk of providing a product.

  19. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    Full Text Available In traditional principal component analysis (PCA), because the influence of the different variables' dimensions in the system is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA can solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. First, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Second, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it uses the established relative-principal-components model for fault diagnosis. Simulation experiments based on the Tennessee Eastman process and the Wine dataset demonstrate the feasibility and effectiveness of the new method.
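
    A toy sketch of entropy-weighted ("relative") PCA on synthetic data follows. The histogram-based entropy estimate and the normalization of weights are a plausible reading of the abstract, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset: 4 variables with very different scales ("dimensions")
X = rng.normal(size=(200, 4)) * np.array([5.0, 1.0, 0.5, 0.1])

# 1) Standardize each variable (removes the unit/dimension effect)
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# 2) Information entropy per variable from a histogram estimate
def entropy(col, bins=16):
    counts, _ = np.histogram(col, bins=bins)
    p = counts[counts > 0] / len(col)
    return -np.sum(p * np.log(p))

H = np.array([entropy(Z[:, j]) for j in range(Z.shape[1])])
w = H / H.sum()        # entropy-derived weight for each variable
Zw = Z * w             # weighted ("relative") transformation

# 3) Ordinary PCA on the weighted data
C = np.cov(Zw, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
```

    Fault detection would then monitor statistics (e.g., T² and SPE) of new samples projected onto the leading weighted components.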

  20. A Statistical Parameter Analysis and SVM Based Fault Diagnosis Strategy for Dynamically Tuned Gyroscopes

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Gyro fault diagnosis plays a critical role in the reliability and precision of inertial navigation systems. A new fault diagnosis strategy based on statistical parameter analysis (SPA) and a support vector machine (SVM) classification model is proposed for dynamically tuned gyroscopes (DTGs). SPA, a time-domain analysis approach, is introduced to compute a set of statistical parameters of the vibration signal as the state features of the DTG, with which the SVM model, a learning machine based on statistical learning theory (SLT), is constructed and trained to identify the working state of the DTG. The experimental results verify that the proposed diagnostic strategy can simply and effectively extract the state features of the DTG; it outperforms the radial-basis-function (RBF) neural network based diagnostic method and can more reliably and accurately diagnose the working state of the DTG.
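
    The SPA feature-extraction step can be sketched with standard time-domain statistics on a synthetic signal (the SVM classification stage, and the paper's exact feature set, are omitted; feature names here are the usual textbook ones, not necessarily the authors'):

```python
import numpy as np

def spa_features(x):
    """A few time-domain statistical parameters commonly used as state features."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    mean, std = x.mean(), x.std()
    skew = np.mean((x - mean) ** 3) / std ** 3
    kurt = np.mean((x - mean) ** 4) / std ** 4
    crest = np.max(np.abs(x)) / rms
    return {"rms": rms, "skewness": skew, "kurtosis": kurt, "crest": crest}

t = np.linspace(0, 1, 1000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)          # smooth rotor vibration
faulty = healthy.copy()
faulty[::100] += 5.0                           # periodic impulses from a defect
```

    Impulsive faults raise kurtosis and crest factor sharply, which is what makes such features separable by a downstream classifier.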

  1. ADaCGH: A parallelized web-based application and R package for the analysis of aCGH data.

    Directory of Open Access Journals (Sweden)

    Ramón Díaz-Uriarte

    Full Text Available BACKGROUND: Copy number alterations (CNAs) in genomic DNA have been associated with complex human diseases, including cancer. One of the most common techniques to detect CNAs is array-based comparative genomic hybridization (aCGH). The availability of aCGH platforms and the need for identification of CNAs has resulted in a wealth of methodological studies. METHODOLOGY/PRINCIPAL FINDINGS: ADaCGH is an R package and a web-based application for the analysis of aCGH data. It implements eight methods for the detection of CNAs, gains and losses of genomic DNA, including all of the best performing ones from two recent reviews (CBS, GLAD, CGHseg, HMM). For improved speed, we use parallel computing (via MPI). Additional information (GO terms, PubMed citations, KEGG and Reactome pathways) is available for individual genes, and for sets of genes with altered copy numbers. CONCLUSIONS/SIGNIFICANCE: ADaCGH represents a qualitative increase in the standards of these types of applications: (a) all of the best performing algorithms are included, not just one or two; (b) we do not limit ourselves to providing a thin layer of CGI on top of existing BioConductor packages, but instead carefully use parallelization, examining different schemes, and are able to achieve significant decreases in user waiting time (factors up to 45x); (c) we have added functionality not currently available in some methods, to adapt to recent recommendations (e.g., merging of segmentation results in wavelet-based and CGHseg algorithms); (d) we incorporate redundancy, fault-tolerance and checkpointing, which are unique among web-based, parallelized applications; (e) all of the code is available under open source licenses, allowing others to build upon, copy, and adapt our code for other software projects.

  2. Extending Differential Fault Analysis to Dynamic S-Box Advanced Encryption Standard Implementations

    Science.gov (United States)

    2014-09-18

    number. As a result decryption is a different function which relies on a different key to efficiently undo the work of encryption . RSA is the most...EXTENDING DIFFERENTIAL FAULT ANALYSIS TO DYNAMIC S-BOX ADVANCED ENCRYPTION STANDARD IMPLEMENTATIONS THESIS Bradley M. Flamm, Civilian AFIT-ENG-T-14-S...ADVANCED ENCRYPTION STANDARD IMPLEMENTATIONS THESIS Presented to the Faculty Department of Electrical and Computer Engineering Graduate School of

  3. A Sparsity-based Framework for Resolution Enhancement in Optical Fault Analysis of Integrated Circuits

    Science.gov (United States)

    2015-01-01

    discussions and collaboration. I also want to thank other co-workers for discussions and their contributions, Dr. Helen Fawcett, Dr. Euan Ramsay , Dr...optical fault analysis techniques Gordon E. Moore predicted the rapid decrease in IC dimensions (Moore, 1998) and this decrease continues as predicted...Serrels, K. A., Ramsay , E., Warburton, R. J., and Reid, D. T. (2008). Nanoscale optical microscopy in the vectorial focusing regime. Nature Photonics, 2(5

  4. Transient Analysis of Grid-Connected Wind Turbines with DFIG After an External Short-Circuit Fault

    DEFF Research Database (Denmark)

    Sun, Tao; Chen, Zhe; Blaabjerg, Frede

    2004-01-01

    The fast development of wind power generation brings new requirements for wind turbine integration into the network. After the clearance of an external short-circuit fault, the grid-connected wind turbine should restore its normal operation with minimized power losses. This paper concentrates on transient analysis of variable speed wind turbines with doubly fed induction generators (DFIG) after an external short-circuit fault. A simulation model of a MW-level variable speed wind turbine with DFIG developed in PSCAD/EMTDC is presented, and the control and protection schemes are described in detail. After the clearance of an external short-circuit fault the control schemes manage to restore the wind turbine's normal operation, and their performances are demonstrated by simulation results both during the fault and after its clearance.

  5. Risk evaluation method for faults by engineering approach. (2) Application concept of margin analysis utilizing accident sequences

    International Nuclear Information System (INIS)

    Kamiya, Masanobu; Kanaida, Syuuji; Kamiya, Kouichi; Sato, Kunihiko; Kuroiwa, Katsuya

    2016-01-01

    The influence of fault displacement on the facility should be evaluated not only from the activity of the fault but also from risk information obtained by considering scenarios that include the frequency and degree of the hazard, which is an appropriate approach for nuclear safety. An applicable concept of margin analysis utilizing accident sequences for evaluating the influence of fault displacement is proposed. Using this analysis, we can evaluate the safety functions and the margin to core damage, verify the effectiveness of portable equipment, and decide whether to take additional measures to reduce the risk on the basis of the obtained risk information. (author)

  6. RELOSS, Reliability of Safety System by Fault Tree Analysis

    International Nuclear Information System (INIS)

    Allan, R.N.; Rondiris, I.L.; Adraktas, A.

    1981-01-01

    1 - Description of problem or function: Program RELOSS is used in the reliability/safety assessment of any complex system with predetermined operational logic in qualitative and (if required) quantitative terms. The program calculates the possible system outcomes following an abnormal operating condition and the probability of occurrence, if required. Furthermore, the program deduces the minimal cut or tie sets of the system outcomes and identifies the potential common mode failures. 4. Method of solution: The reliability analysis performed by the program is based on the event tree methodology. Using this methodology, the program develops the event tree of a system or a module of that system and relates each path of this tree to its qualitative and/or quantitative impact on specified system or module outcomes. If the system being analysed is subdivided into modules, the program assesses each module in turn as described previously and then combines the module information to obtain results for the overall system. Having developed the event tree of a module or a system, the program identifies which paths lead or do not lead to various outcomes, depending on whether the cut or the tie sets of the outcomes are required, and deduces the corresponding sets. Furthermore, the program identifies, for a specific system outcome, the potential common mode failures and the cut or tie sets containing potential dependent failures of some components. 5. Restrictions on the complexity of the problem: The present dimensions of the program are as follows. They can however be easily modified: Maximum number of modules (equivalent components): 25; Maximum number of components in a module: 15; Maximum number of levels of parentheses in a logical statement: 10; Maximum number of system outcomes: 3; Maximum number of module outcomes: 2; Maximum number of points in time for which quantitative analysis is required: 5; Maximum order of any cut or tie set: 10; Maximum order of a cut or tie of any
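
    The minimal-cut-set deduction that tools like RELOSS perform can be illustrated with a toy MOCUS-style expansion (illustrative code on a two-gate tree, unrelated to the program's own implementation):

```python
def cut_sets(node, tree):
    """Minimal cut sets of a fault tree.
    tree maps a gate name to ('AND'|'OR', [children]); anything not in
    tree is treated as a basic event."""
    if node not in tree:
        return [frozenset([node])]
    op, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if op == 'OR':                      # any child's cut set fails the gate
        sets = [s for cs in child_sets for s in cs]
    else:                               # AND: cartesian union of children's sets
        sets = [frozenset()]
        for cs in child_sets:
            sets = [a | b for a in sets for b in cs]
    # keep only minimal sets (no proper subset also present)
    return [s for s in sets if not any(o < s for o in sets)]

# Hypothetical tree: TOP fails if C fails, or if A and B both fail
tree = {'TOP': ('OR', ['G1', 'C']),
        'G1': ('AND', ['A', 'B'])}
mcs = cut_sets('TOP', tree)
```

    On this toy tree the minimal cut sets are {A, B} and {C}; a real implementation adds deduplication and quantitative evaluation on top of the same expansion.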

  7. Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case

    Energy Technology Data Exchange (ETDEWEB)

    Dongiovanni, Danilo Nicola, E-mail: danilo.dongiovanni@enea.it [ENEA, Nuclear Fusion and Safety Technologies Department, via Enrico Fermi 45, Frascati 00040 (Italy); Iesmantas, Tomas [LEI, Breslaujos str. 3 Kaunas (Lithuania)

    2016-11-01

    Highlights: • RAMI (Reliability, Availability, Maintainability and Inspectability) assessment of the secondary heat transfer loop for a DEMO nuclear fusion plant. • Definition of a fault tree for a nuclear steam turbine operated in pulsed mode. • Turbine failure rate models updated by means of a Bayesian network reflecting the fault tree analysis in the considered scenario. • Sensitivity analysis on system availability performance. - Abstract: Availability will play an important role in the success of the Demonstration Power Plant (DEMO) from an economic and safety perspective. Availability performance is commonly assessed by Reliability Availability Maintainability Inspectability (RAMI) analysis, which relies strongly on the accurate definition of system components' failure modes (FM) and failure rates (FR). Little component experience is available in fusion applications, therefore requiring the adaptation of literature FR to fusion plant operating conditions, which may differ in several aspects. As a possible solution to this problem, a new methodology to extrapolate/estimate component failure rates under different operating conditions is presented. The DEMO balance-of-plant nuclear steam turbine operated in pulsed mode is considered as a study case. The methodology starts from the definition of a fault tree that takes into account failure modes possibly enhanced by pulsed operation. The fault tree is then translated into a Bayesian network. A statistical model for the turbine system failure rate in terms of subcomponents' FR is hence obtained, allowing for sensitivity analyses on the structured mixture of literature and unknown FR data, for which plausible value intervals are investigated to assess their impact on the whole turbine system FR. Finally, the impact of the resulting turbine system FR on plant availability is assessed by exploiting a Reliability Block Diagram (RBD) model for a typical secondary cooling system implementing a Rankine cycle. Mean inherent availability
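
    The idea of updating a failure rate with operating evidence can be illustrated, in its simplest conjugate form, by a Gamma-Poisson Bayesian update (hypothetical prior and evidence values; the paper's actual Bayesian network is far richer than this single-node case):

```python
# Prior: lambda ~ Gamma(alpha0, beta0), e.g. elicited from literature data
alpha0, beta0 = 2.0, 1000.0      # hypothetical prior: mean 2e-3 failures/hour

# Evidence: k failures observed over T operating hours in the new regime
k, T = 3, 5000.0                 # hypothetical pulsed-operation experience

# Conjugate update: posterior is Gamma(alpha0 + k, beta0 + T)
alpha_post, beta_post = alpha0 + k, beta0 + T
posterior_mean = alpha_post / beta_post   # updated failure-rate estimate
```

    The posterior mean blends the literature prior with the plant-specific evidence; with more operating hours the estimate is dominated by the observed data.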

  8. An approach to siting nuclear power plants: the relevance of earthquakes, faults and decision analysis

    International Nuclear Information System (INIS)

    Nair, K.; Brogan, G.E.; Cluff, L.S.; Idriss, I.M.; Mao, K.T.

    1975-01-01

    The regional approach to nuclear power plant siting described in this paper identifies candidate sites within the region and ranks these sites by using decision-analysis concepts. The approach uses exclusionary criteria to eliminate areas from consideration and to identify those areas which are most likely to contain candidate sites. These areas are then examined in greater detail to identify candidate sites, and the number of sites under consideration is reduced to a reasonably manageable number, approximately 15. These sites are then ranked using concepts of decision analysis. The exclusionary criteria applied relate primarily to regulatory-agency safety requirements and essential functional requirements. Examples of such criteria include proximity to population centres, presence of active faults, and the availability of cooling water. In many areas of the world, the presence of active faults and the potential negative effects of earthquakes are dominant exclusionary criteria. To apply the 'active fault' criterion, the region must be studied to locate and assess the activity of all potentially active faults. This requires complementary geologic (including geomorphic), historical, seismological, geodetic and geophysical investigations of the entire region. Site response studies or empirical attenuation correlations can be used to determine the relevant parameters of anticipated shaking from postulated earthquakes, and analytical testing and evaluation can be used to assess the potential extent of ground failure during an earthquake. After candidate sites are identified, an approach based on decision analysis is used to rank them. This approach uses the preferences and judgements of consumers, utility companies, the government, and other groups concerned with siting and licensing issues in the ranking process.
Both subjective and objective factors are processed in a logical manner, as are the monetary and non-monetary factors and achievement of competing environmental

  9. Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case

    International Nuclear Information System (INIS)

    Dongiovanni, Danilo Nicola; Iesmantas, Tomas

    2016-01-01

    Highlights: • RAMI (Reliability, Availability, Maintainability and Inspectability) assessment of the secondary heat transfer loop for a DEMO nuclear fusion plant. • Definition of a fault tree for a nuclear steam turbine operated in pulsed mode. • Turbine failure rate models updated by means of a Bayesian network reflecting the fault tree analysis in the considered scenario. • Sensitivity analysis on system availability performance. - Abstract: Availability will play an important role in the success of the Demonstration Power Plant (DEMO) from an economic and safety perspective. Availability performance is commonly assessed by Reliability Availability Maintainability Inspectability (RAMI) analysis, which relies strongly on the accurate definition of system components' failure modes (FM) and failure rates (FR). Little component experience is available in fusion applications, therefore requiring the adaptation of literature FR to fusion plant operating conditions, which may differ in several aspects. As a possible solution to this problem, a new methodology to extrapolate/estimate component failure rates under different operating conditions is presented. The DEMO balance-of-plant nuclear steam turbine operated in pulsed mode is considered as a study case. The methodology starts from the definition of a fault tree that takes into account failure modes possibly enhanced by pulsed operation. The fault tree is then translated into a Bayesian network. A statistical model for the turbine system failure rate in terms of subcomponents' FR is hence obtained, allowing for sensitivity analyses on the structured mixture of literature and unknown FR data, for which plausible value intervals are investigated to assess their impact on the whole turbine system FR. Finally, the impact of the resulting turbine system FR on plant availability is assessed by exploiting a Reliability Block Diagram (RBD) model for a typical secondary cooling system implementing a Rankine cycle. Mean inherent availability

  10. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    Science.gov (United States)

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  11. RADYBAN: A tool for reliability analysis of dynamic fault trees through conversion into dynamic Bayesian networks

    International Nuclear Information System (INIS)

    Montani, S.; Portinale, L.; Bobbio, A.; Codetta-Raiteri, D.

    2008-01-01

    In this paper, we present RADYBAN (Reliability Analysis with DYnamic BAyesian Networks), a software tool which allows one to analyze a dynamic fault tree by converting it into a dynamic Bayesian network. The tool implements a modular algorithm for automatically translating a dynamic fault tree into the corresponding dynamic Bayesian network and exploits classical inference algorithms on dynamic Bayesian networks in order to compute reliability measures. After describing the basic features of the tool, we show how it operates on a real-world example and we compare the unreliability results it generates with those returned by other methodologies, in order to verify the correctness and consistency of the results obtained
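
    A minimal sketch of the kind of time-sliced computation a dynamic Bayesian network performs on a fault tree gate: a discrete-time unreliability recursion for a two-component AND gate, with hypothetical failure rates (this is not RADYBAN's algorithm, just the underlying idea of stepping component state probabilities through time):

```python
import math

lam_a, lam_b = 1e-3, 2e-3     # hypothetical component failure rates (per hour)
dt, steps = 1.0, 1000         # time slice and horizon

p_a = p_b = 0.0               # P(component failed) at t = 0
for _ in range(steps):
    # One DBN time slice: a working component fails with probability lam*dt
    p_a += (1 - p_a) * lam_a * dt
    p_b += (1 - p_b) * lam_b * dt

system_unrel = p_a * p_b      # AND gate: system fails only if both have failed

# Closed-form check for independent exponential components
analytic = (1 - math.exp(-lam_a * steps * dt)) * (1 - math.exp(-lam_b * steps * dt))
```

    The slice-by-slice recursion converges to the closed-form exponential result as dt shrinks; dynamic gates (spares, sequence enforcing) extend the per-slice transition model rather than the recursion itself.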

  12. Fault analysis and strategy of high pulsed power supply for high power laser

    International Nuclear Information System (INIS)

    Liu Kefu; Qin Shihong; Li Jin; Pan Yuan; Yao Zonggan; Zheng Wanguo; Guo Liangfu; Zhou Peizhang; Li Yizheng; Chen Dehuai

    2001-01-01

    According to the requirements of driving flash-lamps, a high pulsed power supply (PPS) based on capacitors as energy storage elements is designed. The authors analyze in detail the faults of the high pulsed power supply for a high power laser, such as capacitor internal short-circuits, main bus breakdown to ground, and flashlamp sudden shorts or breaks. The fault current and voltage waveforms were given by circuit simulations. Based on the analysis and computation, a protection strategy using fast fuses and ZnO was put forward, which can reduce damage to the PPS and protect personnel and collateral property from these threats. Preliminary experiments demonstrated that the design of the PPS can satisfy the project requirements
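
    The worst-case fault current such a capacitor-based PPS must survive can be estimated from the underdamped series-RLC discharge equations. The following sketch uses hypothetical bank parameters, not values from this work:

```python
import math

# Hypothetical capacitor bank discharging into a short circuit (fault)
V0 = 5e3        # initial charge voltage, V
C = 2e-3        # bank capacitance, F
L = 50e-6       # loop inductance, H
R = 0.05        # loop resistance, ohm

alpha = R / (2 * L)                 # damping factor
w0 = 1 / math.sqrt(L * C)           # undamped natural frequency
wd = math.sqrt(w0**2 - alpha**2)    # damped frequency (underdamped case)

def i(t):
    """Fault current for an initially charged capacitor, zero initial current."""
    return (V0 / (wd * L)) * math.exp(-alpha * t) * math.sin(wd * t)

t_peak = math.atan2(wd, alpha) / wd   # time of first current maximum
I_peak = i(t_peak)
```

    The peak current and its timing size the fast fuse and the ZnO clamp: the fuse must interrupt before the energy deposited in the fault exceeds the damage threshold.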

  13. Analysis of fault tolerance and reliability in distributed real-time system architectures

    International Nuclear Information System (INIS)

    Philippi, Stephan

    2003-01-01

    Safety critical real-time systems are becoming ubiquitous in many areas of our everyday life. Failures of such systems potentially have catastrophic consequences on different scales, in the worst case even the loss of human life. Therefore, safety critical systems have to meet maximum fault tolerance and reliability requirements. As the design of such systems is far from being trivial, this article focuses on concepts to specifically support the early architectural design. In detail, a simulation based approach for the analysis of fault tolerance and reliability in distributed real-time system architectures is presented. With this approach, safety related features can be evaluated in the early development stages and thus prevent costly redesigns in later ones

  14. Impossible meet-in-the-middle fault analysis on the LED lightweight cipher in VANETs

    DEFF Research Database (Denmark)

    Li, Wei; Rijmen, Vincent; Tao, Zhi

    2018-01-01

    With the expansion of wireless technology, vehicular ad-hoc networks (VANETs) are emerging as a promising approach for realizing smart cities and addressing many serious traffic problems, such as road safety, convenience, and efficiency. To avoid any possible rancorous attacks, employing lightweight ... A detailed analysis of the expected number of faults is used to uniquely determine the secret key. It is based on the propagation of truncated differentials and is surprisingly reminiscent of the computation of the complexity of a rectangle attack. It shows that the impossible meet-in-the-middle fault...

  15. Analysis of a flux-coupling type superconductor fault current limiter with pancake coils

    Science.gov (United States)

    Liu, Shizhuo; Xia, Dong; Zhang, Zhifeng; Qiu, Qingquan; Zhang, Guomin

    2017-10-01

    The characteristics of a flux-coupling type superconductor fault current limiter (SFCL) with pancake coils are investigated in this paper. The conventional double-wound non-inductive pancake coil used in AC power systems has an inevitable defect in Voltage Sourced Converter Based High Voltage DC (VSC-HVDC) power systems: due to its special structure, flashover can easily occur during a fault in a high-voltage environment. Considering the shortcomings of conventional resistive SFCLs with non-inductive coils, a novel flux-coupling type SFCL with pancake coils is proposed. The module connections of the pancake coils are designed, and the electromagnetic field and force of the module are compared under different parameters. To ensure proper operation of the module, the impedance of the module under representative operating conditions is calculated. Finally, the feasibility of the flux-coupling type SFCL in VSC-HVDC power systems is discussed.
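
    A much-simplified illustration of the flux-coupling principle, with assumed lumped parameters (the real device behavior depends on the superconductor and the specific coil geometry): two tightly coupled coils connected so their fluxes cancel present almost no impedance in normal operation, while an effective reversal of the coupling during a fault inserts a large limiting impedance.

```python
import math

f = 50.0                      # system frequency, Hz (assumed)
w = 2 * math.pi * f
L1 = L2 = 0.1                 # hypothetical self-inductances, H
k = 0.98                      # hypothetical coupling coefficient
M = k * math.sqrt(L1 * L2)    # mutual inductance

# Anti-series (flux-cancelling) connection: impedance is nearly zero
Z_normal = w * (L1 + L2 - 2 * M)

# During a fault the coupling is effectively disturbed; the limiting
# bound is the fully aiding connection
Z_fault = w * (L1 + L2 + 2 * M)
```

    With k close to 1 the normal-state impedance is two orders of magnitude below the fault-state bound, which is the property that makes the limiter "invisible" until it is needed.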

  16. Ground-Fault Characteristic Analysis of Grid-Connected Photovoltaic Stations with Neutral Grounding Resistance

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2017-11-01

    Full Text Available Centralized grid-connected photovoltaic (PV) stations widely adopt neutral grounding through a resistance, which can render pre-existing protection systems invalid and threaten the safety of power grids. Therefore, studying the fault characteristics of grid-connected PV systems and their impact on power-grid protection is of great importance. Based on an analysis of the grid structure of a grid-connected PV system and of the low-voltage ride-through control characteristics of a photovoltaic power supply, this paper proposes a short-circuit calculation model and a fault-calculation method for this kind of system. With respect to changes in the system parameters, particularly the resistance connected to the neutral point, and their possible impact on protective actions, this paper derives the general behavior of the short-circuit current characteristics through simulation, which provides a reference for devising protection configurations.
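
    The effect of a neutral grounding resistance on a single-line-to-ground fault current can be sketched with the standard symmetrical-components formula; all network values below are hypothetical, chosen only to show the trend the abstract describes:

```python
# Single-line-to-ground fault current with neutral grounding resistance Rn:
# If = 3*E / (Z1 + Z2 + Z0 + 3*Rn), with E the pre-fault phase voltage.
E = 35e3 / 3 ** 0.5           # phase voltage of a hypothetical 35 kV collector grid
Z1 = Z2 = complex(0.5, 5.0)   # positive/negative sequence impedances (assumed)
Z0 = complex(1.0, 8.0)        # zero-sequence impedance (assumed)

def slg_current(Rn):
    return abs(3 * E / (Z1 + Z2 + Z0 + 3 * Rn))

I_solid = slg_current(0.0)        # solidly grounded neutral
I_resistive = slg_current(100.0)  # resistance grounding sharply limits the current
```

    This is exactly why resistance grounding can defeat overcurrent-based ground protection: the fault current can drop below relay pickup levels, motivating the dedicated analysis in the cited paper.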

  17. Parallel Enhancements of the General Mission Analysis Tool, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The General Mission Analysis Tool (GMAT) is a state of the art spacecraft mission design tool under active development at NASA's Goddard Space Flight Center (GSFC)....

  18. Reliability Analysis of Core Protection Calculator System by Combining Petri Net and Fault Tree

    International Nuclear Information System (INIS)

    Kim, Hyejin; Kim, Jonghyun

    2013-01-01

    This paper proposes an approach to analyzing the reliability of digital systems by combining Petri nets (PN) and fault trees. The Petri net allows modeling event dependencies and interactions, representing time sequences, and modeling assumptions for dynamic events. The Petri net model can be straightforwardly transformed into a fault tree using gates, and the fault tree can then be integrated into the existing PSA. This paper applies the approach to the reliability analysis of the Core Protection Calculator System (CPCS). Digital technology is replacing the analog instrumentation and control (I and C) systems in both new and upgraded nuclear power plants. As digital systems are introduced to nuclear power plants, issues related to the reliability analyses of these digital systems are being raised. One of these issues is that the static fault tree (FT) and event tree (ET) approach cannot properly account for dynamic interactions in digital systems, such as multiple top events, logic loops, and time delays. Many methods have been proposed to solve these problems, but no single method is universally accepted for application to current-generation probabilistic safety analysis (PSA)
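
    The Petri net building block this record relies on is the token-firing rule, which can be sketched in a few lines (hypothetical place and transition names; the CPCS model itself is far larger):

```python
# Minimal Petri net firing rule. Places hold tokens; a transition fires
# when every input place holds at least the required number of tokens.
def enabled(marking, transition):
    pre, _ = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, transition):
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m[p] - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical two-step failure sequence: a sensor fault must occur
# before the system-level fault can occur (a dependency a static FT misses)
t1 = ({'ok': 1}, {'sensor_fault': 1})
t2 = ({'sensor_fault': 1}, {'system_fault': 1})

m = {'ok': 1}
for tr in (t1, t2):
    if enabled(m, tr):
        m = fire(m, tr)
```

    Because t2 is only enabled after t1 has fired, the net encodes the event ordering; translating reachable failure markings into gates is what yields the fault tree for the PSA.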

  19. Reliability Analysis of Core Protection Calculator System by Combining Petri Net and Fault Tree

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyejin; Kim, Jonghyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    This paper proposes an approach to analyzing the reliability of digital systems by combining Petri nets (PN) and fault trees. The Petri net allows modeling event dependencies and interactions, representing time sequences, and modeling assumptions for dynamic events. The Petri net model can be straightforwardly transformed into a fault tree using gates, and the fault tree can then be integrated into the existing PSA. This paper applies the approach to the reliability analysis of the Core Protection Calculator System (CPCS). Digital technology is replacing the analog instrumentation and control (I and C) systems in both new and upgraded nuclear power plants. As digital systems are introduced to nuclear power plants, issues related to the reliability analyses of these digital systems are being raised. One of these issues is that the static fault tree (FT) and event tree (ET) approach cannot properly account for dynamic interactions in digital systems, such as multiple top events, logic loops, and time delays. Many methods have been proposed to solve these problems, but no single method is universally accepted for application to current-generation probabilistic safety analysis (PSA)

  20. Gear fault diagnosis based on the structured sparsity time-frequency analysis

    Science.gov (United States)

    Sun, Ruobin; Yang, Zhibo; Chen, Xuefeng; Tian, Shaohua; Xie, Yong

    2018-03-01

    Over the last decade, sparse representation has become a powerful paradigm in mechanical fault diagnosis due to its excellent descriptive capability and high flexibility for complex signals. Structured sparsity time-frequency analysis (SSTFA) is a novel signal processing method which utilizes mixed-norm priors on time-frequency coefficients to obtain a fine match for the structure of signals. In order to extract transient features from gear vibration signals, a gear fault diagnosis method based on SSTFA is proposed in this work. The steady modulation components and impulsive components of defective gear vibration signals can be extracted simultaneously by choosing different time-frequency neighborhoods and generalized thresholding operators. Besides, a time-frequency distribution with high resolution is obtained by superimposing the different components in the same diagram. The diagnostic conclusion can be made according to the envelope spectrum of the impulsive components or the periodicity of the impulses. The effectiveness of the method is verified by numerical simulations and by vibration signals collected from a gearbox fault simulator and a wind turbine. To validate the efficiency of the presented methodology, comparisons are made with some state-of-the-art vibration separation methods and traditional time-frequency analysis methods. The comparisons show that the proposed method possesses advantages in separating feature signals under strong noise and in accounting for the inner time-frequency structure of gear vibration signals.

  1. Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.

    Science.gov (United States)

    Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin

    2018-02-01

    Periodic transient impulses are key indicators of rolling element bearing defects. Efficient acquisition of the impact impulses associated with the defects is crucial for the precise detection of bearing defects. However, the transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, MCMFH with the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, the fault type of the bearing is confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulation analysis and bearing vibration data from a laboratory test bench. The results indicate that the proposed method has a good capability to recognize localized faults on rolling element bearings from vibration signals. The study supplies a novel technique for the detection of faulty bearings. Copyright © 2018. Published by Elsevier Ltd.
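
    The basic morphological-filtering building block behind such methods can be sketched with a flat structuring element: grayscale erosion/dilation are sliding minima/maxima, and averaging the opening and closing suppresses noise while keeping impulses. This is a simplified fixed-scale combination filter, not the adaptive AMCMFH of the paper:

```python
import numpy as np

def erode(x, k):
    """Grayscale erosion with a flat structuring element of length k."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + k].min() for i in range(len(x))])

def dilate(x, k):
    """Grayscale dilation with a flat structuring element of length k."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + k].max() for i in range(len(x))])

def combination_filter(x, k):
    """Average of morphological opening and closing (noise smoother)."""
    opening = dilate(erode(x, k), k)
    closing = erode(dilate(x, k), k)
    return 0.5 * (opening + closing)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000, endpoint=False)
impulses = np.zeros_like(t)
impulses[::125] = 2.0                      # periodic bearing-defect impacts
x = impulses + 0.3 * rng.normal(size=t.size)
filtered = combination_filter(x, 5)
```

    The adaptive method in the paper additionally searches over SE scales (guided by the feature energy factor) and applies a hat transform before the envelope-spectrum stage.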

  2. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    Science.gov (United States)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interacts, and whether, from a performance perspective, a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing the gross and dry vehicle masses of a closed vehicle. The propellant combinations studied were a LOX:LH2 propelled orbiter and booster (HH) and a LOX:kerosene booster with a LOX:LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point; 2) a detailed structural model is essential to accurate architecture analysis and evaluation; 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7; 3a) HH architectures can achieve a mass growth relative to PBw/cf of ratio and to the position of the orbiter required to align the nozzle heights at liftoff; 5) thrust-to-weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles; 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases and the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both of these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of

  3. Nonlinear analysis of r.c. framed buildings retrofitted with elastomeric and friction bearings under near-fault earthquakes

    Science.gov (United States)

    Mazza, Mirko

    2015-12-01

    Reinforced concrete (r.c.) framed buildings designed in compliance with inadequate seismic classifications and code provisions are in many cases highly vulnerable and need to be retrofitted. To this end, the insertion of a base isolation system allows a considerable reduction of the seismic loads transmitted to the superstructure. However, strong near-fault ground motions, which are characterised by long-duration horizontal pulses, may amplify the inelastic response of the superstructure and induce a failure of the isolation system. The above considerations point out the importance of checking the effectiveness of different isolation systems for retrofitting an r.c. framed structure. For this purpose, a numerical investigation is carried out with reference to a six-storey r.c. framed building which, originally designed as a fixed-base structure in compliance with the previous Italian code (DM96) for a medium-risk seismic zone, has to be retrofitted by the insertion of an isolation system at the base to attain the performance levels imposed by the current Italian code (NTC08) in a high-risk seismic zone. Besides the (fixed-base) original structure, three cases of base isolation are studied: elastomeric bearings acting alone (high-damping laminated rubber bearings, HDLRBs); an in-parallel combination of elastomeric and friction bearings (HDLRBs and steel-PTFE sliding bearings, SBs); and friction bearings acting alone (friction pendulum bearings, FPBs). The nonlinear analysis of the fixed-base and base-isolated structures subjected to horizontal components of near-fault ground motions is performed to check plastic conditions at the potential critical (end) sections of the girders and columns, as well as critical conditions of the isolation systems. Unexpectedly high values of ductility demand are highlighted at the lower floors of all the base-isolated structures, while re-centring problems of the base isolation systems under near-fault earthquakes are

  4. Non linear stability analysis of parallel channels with natural circulation

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Ashish Mani; Singh, Suneet, E-mail: suneet.singh@iitb.ac.in

    2016-12-01

    Highlights: • Nonlinear instabilities in a natural circulation loop are studied. • Generalized Hopf points and sub- and supercritical Hopf bifurcations are identified. • A Bogdanov–Takens point (BT point) is observed by nonlinear stability analysis. • The effect of parameters on the stability of the system is studied. - Abstract: Linear stability analysis of two-phase flow in a natural circulation loop has been quite extensively studied by many researchers in the past few years. It can be noted that linear stability analysis is limited to small perturbations only. It is pointed out that such systems typically undergo a Hopf bifurcation. If the Hopf bifurcation is subcritical, then for relatively large perturbations the system has unstable limit cycles in the (linearly) stable region of the parameter space. Hence, linear stability analysis, capturing only infinitesimally small perturbations, is not sufficient. In this paper, bifurcation analysis is carried out to capture the nonlinear instability of the dynamical system, and both subcritical and supercritical bifurcations are observed. The regions in the parameter space for which subcritical and supercritical bifurcations exist are identified. These regions are verified by numerical simulation of the time-dependent, nonlinear ODEs for selected points in the operating parameter space using the MATLAB ODE solver.
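The distinction the abstract draws between linear and nonlinear stability can be illustrated on the Hopf normal form rather than the paper's natural-circulation model. In the supercritical case below, trajectories started near the (linearly unstable) origin settle onto a stable limit cycle of amplitude sqrt(mu/|a|), which only a nonlinear analysis predicts; the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hopf normal form in Cartesian coordinates:
#   x' = mu*x - y + a*x*(x^2 + y^2),  y' = x + mu*y + a*y*(x^2 + y^2)
def hopf(t, z, mu, a):
    x, y = z
    r2 = x * x + y * y
    return [mu * x - y + a * x * r2, x + mu * y + a * y * r2]

mu, a = 0.2, -1.0      # a < 0: supercritical Hopf, stable limit cycle r = sqrt(mu)
sol = solve_ivp(hopf, (0.0, 200.0), [0.01, 0.0], args=(mu, a),
                rtol=1e-8, atol=1e-10)
r_end = np.hypot(*sol.y[:, -1])   # final orbit radius, expected ~ sqrt(0.2)
```

Flipping the sign of `a` gives the subcritical case, where the same analysis reveals an unstable limit cycle bounding the basin of attraction of the linearly stable equilibrium.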

  5. Analysis of the growth of strike-slip faults using effective medium theory

    Energy Technology Data Exchange (ETDEWEB)

    Aydin, A.; Berryman, J.G.

    2009-10-15

    Increases in the dimensions of strike-slip faults including fault length, thickness of fault rock and the surrounding damage zone collectively provide quantitative definition of fault growth and are commonly measured in terms of the maximum fault slip. The field observations indicate that a common mechanism for fault growth in the brittle upper crust is fault lengthening by linkage and coalescence of neighboring fault segments or strands, and fault rock-zone widening into highly fractured inner damage zone via cataclastic deformation. The most important underlying mechanical reason in both cases is prior weakening of the rocks surrounding a fault's core and between neighboring fault segments by faulting-related fractures. In this paper, using field observations together with effective medium models, we analyze the reduction in the effective elastic properties of rock in terms of density of the fault-related brittle fractures and fracture intersection angles controlled primarily by the splay angles. Fracture densities or equivalent fracture spacing values corresponding to the vanishing Young's, shear, and quasi-pure shear moduli were obtained by extrapolation from the calculated range of these parameters. The fracture densities or the equivalent spacing values obtained using this method compare well with the field data measured along scan lines across the faults in the study area. These findings should be helpful for a better understanding of the fracture density/spacing distribution around faults and the transition from discrete fracturing to cataclastic deformation associated with fault growth and the related instabilities.

  6. Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review

    Science.gov (United States)

    Liu, Wen-Lan; Xu, Yun-Dou; Yao, Jian-Tao; Zhao, Yong-Sheng

    2017-11-01

    The force analysis of overconstrained parallel mechanisms (PMs) is relatively complex and difficult, and the methods for it have long been a research hotspot. However, few publications analyze the characteristics and application scopes of the various methods, which makes it inconvenient for researchers and engineers to master and adopt them properly. A review of the methods for the force analysis of both passive and active overconstrained PMs is presented. The existing force analysis methods for these two kinds of overconstrained PMs are classified according to their main ideas. Each category is briefly demonstrated and evaluated with respect to such aspects as the amount of calculation, the comprehensiveness with which the limbs' deformation is considered, and the existence of explicit expressions for the solutions, which provides an important reference for researchers and engineers to quickly find a suitable method. The similarities and differences between the statically indeterminate problem of passive overconstrained PMs and that of active overconstrained PMs are discussed, and a universal method for these two kinds of overconstrained PMs is pointed out. The existing deficiencies and development directions of the force analysis methods for overconstrained systems are indicated based on the overview.

  7. Human Factors Reliability Analysis for Assuring Nuclear Safety Using Fuzzy Fault Tree

    International Nuclear Information System (INIS)

    Eisawy, E.A.-F. I.; Sallam, H.

    2016-01-01

    In order to ensure the effective prevention of harmful events, the risk assessment process cannot ignore the role of humans in the dynamics of accidental events, and thus the seriousness of the consequences that may derive from them. Human reliability analysis (HRA) involves the use of qualitative and quantitative methods to assess the human contribution to risk. HRA techniques have been developed in order to provide human error probability values associated with operators' tasks, to be included within the broader context of system risk assessment, and are aimed at reducing the probability of accidental events. Fault tree analysis (FTA) is a graphical model that displays the various combinations of equipment failures and human errors that can result in the main system failure of interest. FTA is a risk analysis technique for assessing the likelihood (in a probabilistic context) of an event. The objective data available to estimate the likelihood are often missing and, even if available, are subject to incompleteness and imprecision or vagueness. Without addressing incompleteness and imprecision in the available data, FTA and the subsequent risk analysis give a false impression of precision and correctness that undermines the overall credibility of the process. To solve this problem, qualitative justification in the context of failure possibilities can be used as an alternative to quantitative justification. In this paper, we introduce the approach of fuzzy reliability as a solution to the drawbacks of fault tree analysis. A new fuzzy fault tree method is proposed for the analysis of human reliability, based on fuzzy sets and fuzzy operations (t-norms, co-norms, and defuzzification) and fuzzy failure probability. (author)
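The gate algebra behind a fuzzy fault tree can be sketched with triangular fuzzy probabilities. This is a generic first-order approximation (componentwise product t-norm for AND, probabilistic co-norm for OR, centroid defuzzification), not the paper's specific operators; all event probabilities below are invented for illustration.

```python
# A triangular fuzzy number is (lower, modal, upper).
def f_and(*events):
    # AND gate: product t-norm applied componentwise.
    out = (1.0, 1.0, 1.0)
    for (a, m, b) in events:
        out = (out[0] * a, out[1] * m, out[2] * b)
    return out

def f_or(*events):
    # OR gate: 1 - prod(1 - p); monotone, so bounds map to bounds.
    pa = pm = pb = 1.0
    for (a, m, b) in events:
        pa *= (1 - a); pm *= (1 - m); pb *= (1 - b)
    return (1 - pa, 1 - pm, 1 - pb)

def defuzz(tfn):
    # Centroid of a triangular fuzzy number.
    a, m, b = tfn
    return (a + m + b) / 3.0

# Hypothetical basic events (expert-elicited fuzzy failure probabilities).
human_error = (0.01, 0.02, 0.04)
valve_fail  = (0.001, 0.005, 0.01)
pump_fail   = (0.002, 0.004, 0.008)

# Top event: (human error AND valve failure) OR pump failure.
top = f_or(f_and(human_error, valve_fail), pump_fail)
crisp = defuzz(top)   # single representative probability for ranking
```

The fuzzy bounds propagate the expert's imprecision to the top event instead of pretending the inputs are exact, which is the abstract's main point.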

  8. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, an essential part of the safety and operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulation quantitative data onto the qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design, as well as to support recovery and shutdown operation and disaster management.

  9. TSimpleAnalysis: histogramming many trees in parallel

    CERN Document Server

    Giommi, Luca

    2016-01-01

    I worked inside the ROOT team of the EP-SFT group. My project focused on writing a ROOT class that creates histograms from a TChain. The name of the class is TSimpleAnalysis, and it is already integrated in ROOT. The work I did was to write the source and header files of the class, as well as a Python script that allows the user to use the class from the command line. This represents a great improvement with respect to the usual user code, which requires many lines of code to do the same thing. (Link for the class: https://root.cern.ch/doc/master/classTSimpleAnalysis.html)

  10. Comparative Study of Time-Frequency Decomposition Techniques for Fault Detection in Induction Motors Using Vibration Analysis during Startup Transient

    Directory of Open Access Journals (Sweden)

    Paulo Antonio Delgado-Arredondo

    2015-01-01

    Full Text Available Induction motors are critical components in most industries, and condition monitoring has become necessary to detect faults. There are several techniques for the fault diagnosis of induction motors, and analyzing the startup transient vibration signals is not as widely used as other techniques such as motor current signature analysis. Vibration analysis gives a fault diagnosis focused on the location of the spectral components associated with faults. Therefore, this paper presents a comparative study of different time-frequency analysis methodologies that can be used for detecting faults in induction motors by analyzing vibration signals during the startup transient. The studied methodologies are the time-frequency distribution of Gabor (TFDG), the time-frequency Morlet scalogram (TFMS), multiple signal classification (MUSIC), and the fast Fourier transform (FFT). The analyzed vibration signals correspond to one broken rotor bar, two broken bars, unbalance, and bearing defects. The obtained results show the feasibility of detecting faults in induction motors using time-frequency spectral analysis applied to vibration signals, and the proposed methodology is applicable when current signals are unavailable and only vibration signals are at hand. The methodology also applies to motors that are not fed directly from the supply line, in which case the analysis of current signals is not recommended due to poor current signal quality.
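The reason time-frequency methods beat a plain FFT on startup transients, as argued above, can be shown on a toy example: during startup the dominant vibration frequency ramps with speed, so a Gaussian-window spectrogram (close in spirit to the Gabor distribution) exposes a rising ridge that a single FFT averages away. The chirp parameters below are illustrative, not motor data.

```python
import numpy as np
from scipy import signal

# Startup-like signal: the dominant frequency ramps from 5 Hz to 60 Hz.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x = signal.chirp(t, f0=5, t1=2.0, f1=60)

# Gaussian-window spectrogram (Gabor-like time-frequency distribution).
f, tt, S = signal.spectrogram(x, fs, window=("gaussian", 32),
                              nperseg=256, noverlap=224)
ridge = f[np.argmax(S, axis=0)]   # dominant frequency per time slice

# The ridge rises as the "motor" speeds up; a plain FFT of x would smear
# this energy over the whole 5-60 Hz band and hide the evolution.
```

Tracking where fault-related components sit on (or beside) this ridge is the localization idea the abstract describes.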

  11. Automatic supervision and fault detection of PV systems based on power losses analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chouder, A.; Silvestre, S. [Electronic Engineering Department, Universitat Politecnica de Catalunya, C/Jordi Girona 1-3, Campus Nord UPC, 08034 Barcelona (Spain)

    2010-10-15

    In this work, we present a new automatic supervision and fault detection procedure for PV systems based on power losses analysis. This automatic supervision system has been developed in the Matlab and Simulink environment. It includes parameter extraction techniques to calculate the main PV system parameters from monitoring data under real working conditions, taking into account the evolution of the environmental irradiance and module temperature, allowing simulation of the PV system behaviour in real time. The automatic supervision method analyses the output power losses present on the DC side of the PV generator, the capture losses. Two new power losses indicators are defined: thermal capture losses (L{sub ct}) and miscellaneous capture losses (L{sub cm}). The processing of these indicators allows the supervision system to generate a faulty signal as an indicator of fault detection in the PV system operation. Two new indicators of the deviation of the DC variables with respect to the simulated ones have also been defined. These indicators are the current and voltage ratios: R{sub C} and R{sub V}. By analysing both the faulty signal and the current/voltage ratios, the type of fault can be identified. The automatic supervision system has been successfully tested experimentally. (author)
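The decision logic the abstract outlines (compare measured DC quantities against a real-time simulation, flag excess capture losses, then classify via current/voltage ratios) can be sketched as follows. The indicator names mirror the abstract, but the threshold, the 10% tolerance, and the fault labels are illustrative assumptions, not the paper's calibrated rules.

```python
def pv_supervision(p_dc_meas, p_dc_sim, i_meas, i_sim, v_meas, v_sim, tol=0.1):
    """Toy supervision rule: flag and classify DC-side losses."""
    l_c = p_dc_sim - p_dc_meas      # capture losses on the DC side
    r_c = i_meas / i_sim            # current ratio (R_C)
    r_v = v_meas / v_sim            # voltage ratio (R_V)
    faulty = l_c > tol * p_dc_sim   # assumed rule: losses exceed 10% of simulation
    if not faulty:
        kind = "normal"
    elif r_c < 1 - tol:
        kind = "current fault (e.g. shaded or disconnected string)"
    elif r_v < 1 - tol:
        kind = "voltage fault (e.g. short-circuited modules)"
    else:
        kind = "unclassified losses"
    return faulty, kind

# Example reading: simulated 2 kW, measured 1.5 kW with depressed current.
faulty, kind = pv_supervision(1500.0, 2000.0, 5.0, 7.0, 300.0, 305.0)
```

Here the low current ratio, with a near-nominal voltage ratio, points to a current-side fault, which is the kind of discrimination the two ratios enable.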

  12. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of Uav Thermography

    Science.gov (United States)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel serving as parameters for fault diagnosis. One characteristic of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from an individual array automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.
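The per-array ("local") detection rule can be sketched as a simple outlier test on module mean intensities within one array, which sidesteps the distance-dependent temperature bias the abstract mentions. The k-sigma threshold and the temperatures are illustrative, not the paper's tuned values.

```python
import numpy as np

def flag_modules(panel_means, k=2.0):
    """Flag modules whose mean intensity deviates from the array mean
    by more than k standard deviations (local detection rule sketch)."""
    arr = np.asarray(panel_means, dtype=float)
    mu, sigma = arr.mean(), arr.std()
    return np.where(np.abs(arr - mu) > k * sigma)[0]

# One array of 10 modules (mean surface temperatures, deg C); module 7 runs hot.
temps = [31.0, 30.5, 31.2, 30.8, 31.1, 30.9, 31.0, 38.5, 30.7, 31.0]
hot = flag_modules(temps)
```

Because the statistics are computed per array, a uniformly cooler-looking array imaged from higher altitude does not trigger false alarms, unlike a plant-wide global threshold.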

  13. AUTOMATIC FAULT RECOGNITION OF PHOTOVOLTAIC MODULES BASED ON STATISTICAL ANALYSIS OF UAV THERMOGRAPHY

    Directory of Open Access Journals (Sweden)

    D. Kim

    2017-08-01

    Full Text Available As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel serving as parameters for fault diagnosis. One characteristic of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from an individual array automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.

  14. Automatic supervision and fault detection of PV systems based on power losses analysis

    International Nuclear Information System (INIS)

    Chouder, A.; Silvestre, S.

    2010-01-01

    In this work, we present a new automatic supervision and fault detection procedure for PV systems based on power losses analysis. This automatic supervision system has been developed in the Matlab and Simulink environment. It includes parameter extraction techniques to calculate the main PV system parameters from monitoring data under real working conditions, taking into account the evolution of the environmental irradiance and module temperature, allowing simulation of the PV system behaviour in real time. The automatic supervision method analyses the output power losses present on the DC side of the PV generator, the capture losses. Two new power losses indicators are defined: thermal capture losses (L_ct) and miscellaneous capture losses (L_cm). The processing of these indicators allows the supervision system to generate a faulty signal as an indicator of fault detection in the PV system operation. Two new indicators of the deviation of the DC variables with respect to the simulated ones have also been defined. These indicators are the current and voltage ratios: R_C and R_V. By analysing both the faulty signal and the current/voltage ratios, the type of fault can be identified. The automatic supervision system has been successfully tested experimentally.

  15. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the strongest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The hazard curve of PGA has been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with an uncertainty between 0.0847 and 0.2389 g and a COV between 17.7% and 29.8%.
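The sensitivity analysis described above can be sketched generically: draw slip-rate samples, push each through a hazard function, and summarize the spread of the resulting PGA with a coefficient of variation. The slip-rate distribution and the square-root "hazard curve" below are toy assumptions for illustration, not the Cimandiri/Sukabumi model.

```python
import numpy as np

# Monte Carlo sketch: propagate slip-rate uncertainty into a toy PGA estimate.
rng = np.random.default_rng(42)
slip = rng.normal(4.0, 1.0, 100_000)   # assumed slip-rate samples, mm/yr
slip = slip[slip > 0]                  # discard non-physical draws

# Toy hazard mapping: PGA grows with the square root of slip rate (assumption).
pga = 0.5 * np.sqrt(slip / 4.0)        # g

mean, std = pga.mean(), pga.std()
cov = std / mean                       # coefficient of variation of the hazard
```

The COV is the dimensionless sensitivity measure quoted in the abstract (17.7-29.8% for the real study); here it simply reflects how the assumed slip-rate scatter propagates through the toy hazard curve.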

  16. Analysis of Droop Controlled Parallel Inverters in Islanded Microgrids

    DEFF Research Database (Denmark)

    Mariani, Valerio; Vasca, Francesco; Guerrero, Josep M.

    2014-01-01

    Three-phase droop controlled inverters are widely used in islanded microgrids to interface distributed energy resources and to supply the loads' active and reactive power demand. The assessment of microgrid stability, which is affected by the control and line parameters, is a stringent issue. This paper shows a systematic approach to derive a closed-loop model of the microgrid and then to perform an eigenvalue analysis that highlights how the system's parameters affect the stability of the network. It is also shown that by means of a singular perturbation approach the resulting reduced order
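The eigenvalue analysis the abstract describes can be illustrated on a toy reduced-order droop loop rather than the paper's full microgrid model. Below, a second-order characteristic polynomial s^2 + wc*s + kp*wc/X (power filter cutoff wc, active-power droop gain kp, line reactance X, all invented numbers) is put in companion form and its eigenvalues are screened as the droop gain is swept.

```python
import numpy as np

def companion(kp, wc=31.4, xline=0.5):
    """Companion matrix of the toy loop s^2 + wc*s + kp*wc/xline = 0."""
    return np.array([[0.0, 1.0],
                     [-kp * wc / xline, -wc]])

# Sweep the droop gain: both cases are stable (negative real parts), but a
# large gain turns the real modes into an underdamped oscillatory pair.
eig_lo = np.linalg.eigvals(companion(1.0))    # small gain: overdamped
eig_hi = np.linalg.eigvals(companion(10.0))   # large gain: oscillatory modes
```

Plotting such eigenvalue loci against control and line parameters is the kind of parametric stability picture the paper's closed-loop model produces.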

  17. Climate change and daily press : Italy vs Usa parallel analysis

    International Nuclear Information System (INIS)

    Borrelli, G.; Mazzotta, V.; Falconi, C.; Grossi, R.; Farabollini, F.

    1996-06-01

    Among the activities of ENEA (Italian National Agency for New Technologies, Energy, and the Environment), one deals with the analysis and strategies of environmental information. A survey of the coverage of an issue belonging to this area (Global Climate Change) in four daily newspapers has been carried out. The newspapers involved are two Italian ones, 'La Repubblica' and 'Il Corriere della Sera', and two North-American ones, the 'New York Times' and the 'Washington Post'. The purpose of the work was to gauge, qualitatively and quantitatively, the level of awareness of the Italian press via a comparison with the North-American press, which is notoriously sensitive to and careful about environmental issues. The number of articles analyzed breaks down as follows: 319 for the 'New York Times', 309 for the 'Washington Post', 146 for the 'Corriere della Sera', and 81 for 'La Repubblica'. The time period covered by the analysis spans from 1989, the year in which organization of the 1992 Rio Conference began, to December 1994, the deadline for the submission of national

  18. Scientific data analysis on data-parallel platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Craig D.; Bayer, Gregory W.; Choe, Yung Ryn; Roe, Diana C.

    2010-09-01

    As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.

  19. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-09-15

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting the improvement of measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. The load calibration and data acquisition experiment system were built, and calibration experiments were carried out. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors.
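The load-calibration step mentioned above can be sketched as a least-squares fit: applied force vectors F and raw channel readings V are related by V = C·F, the calibration matrix C is recovered from calibration data, and its off-diagonal terms quantify residual dimension coupling. The matrix, load range, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "true" sensitivity matrix with small off-diagonal coupling.
C_true = np.array([[1.00, 0.02, 0.01],
                   [0.03, 0.95, 0.02],
                   [0.01, 0.02, 1.05]])

F = rng.uniform(-100.0, 100.0, (200, 3))             # applied 3-D loads, N
V = F @ C_true.T + 0.01 * rng.standard_normal((200, 3))  # noisy channel readings

# Least-squares calibration: solve F @ X = V, so X = C^T.
X, *_ = np.linalg.lstsq(F, V, rcond=None)
C_est = X.T

# A simple coupling figure: largest off-diagonal term relative to the
# smallest main sensitivity (a stand-in for the paper's coupling degree).
coupling = (np.abs(C_est - np.diag(np.diag(C_est))).max()
            / np.abs(np.diag(C_est)).min())
```

A mechanically decoupled structure shows up here as small off-diagonal entries in the recovered C, i.e. a small `coupling` figure.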

  20. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular, the DELPHY collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are interconnected by means of the VME standard bus. A MicroVAX-11 host computer organizes the operation of the system. Data input and output are realized via the microVAX-11 peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users get access to the suggested system

  1. Functional efficiency comparison between split- and parallel-hybrid using advanced energy flow analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Guttenberg, Philipp; Lin, Mengyan [Romax Technology, Nottingham (United Kingdom)

    2009-07-01

    The following paper presents a comparative efficiency analysis of the Toyota Prius versus the Honda Insight using advanced Energy Flow Analysis methods. The sample study shows that even very different hybrid concepts, such as a split- and a parallel-hybrid, can be compared at a high level of detail, and demonstrates the benefits with exemplary results. (orig.)

  2. Modeling and Grid impedance Variation Analysis of Parallel Connected Grid Connected Inverter based on Impedance Based Harmonic Analysis

    DEFF Research Database (Denmark)

    Kwon, JunBum; Wang, Xiongfei; Bak, Claus Leth

    2014-01-01

    This paper addresses the harmonic compensation error problem that arises with parallel connected inverters under the same grid interface conditions by means of impedance-based analysis and modeling. Unlike the single grid connected inverter case, it is found that multiple parallel connected inverters and the grid impedance can influence each other if they each have a harmonic compensation function. The analysis method proposed in this paper is based on the relationship between the overall output impedance and the input impedance of the parallel connected inverters, where controller gain design method, which can

  3. Double-layer rotor magnetic shield performance analysis in high temperature superconducting synchronous generators under short circuit fault conditions

    Science.gov (United States)

    Hekmati, Arsalan; Aliahmadi, Mehdi

    2016-12-01

    High temperature superconducting, HTS, synchronous machines benefit from a rotor magnetic shield that protects the superconducting coils against asynchronous magnetic fields. This magnetic shield, however, suffers from the Lorentz forces generated by eddy currents induced during transient conditions, e.g. a stator winding short-circuit fault. In addition to the exerted electromagnetic forces, eddy current losses and their effects on the cryogenic system are further consequences of shielding HTS coils. This study aims at investigating the Rotor Magnetic Shield, RMS, performance in HTS synchronous generators under stator winding short-circuit fault conditions. The eddy currents induced at different circumferential positions of the rotor magnetic shield, along with the associated Joule heating losses, are studied using 2-D time-stepping Finite Element Analysis, FEA. The investigation of the Lorentz forces exerted on the magnetic shield during transient conditions has also been performed in this paper. The obtained results show that the double line-to-ground fault is the most important among the different types of short-circuit faults. It was revealed that, when it comes to the design of rotor magnetic shields, in addition to the eddy current distribution and the associated ohmic losses, the double line-to-ground fault should be taken into account, since the electromagnetic forces produced under fault conditions are most severe for this fault type.

  4. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
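
The bitmap-index idea behind FastBit can be sketched in a few lines. The code below is an illustrative numpy reimplementation of binned bitmap indexing with a candidate check, not FastQuery's actual API; the energy column, bin count, and query range are invented for the example.

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    """One boolean bitmap per bin; bitmap b marks records whose value falls in bin b."""
    bin_ids = np.digitize(values, bin_edges)
    return {b: bin_ids == b for b in np.unique(bin_ids)}

def range_query(index, values, bin_edges, lo, hi):
    """Indices of records with lo <= value < hi, pruning whole bins via bitmaps."""
    hit = np.zeros(len(values), dtype=bool)
    for b, bitmap in index.items():
        left = bin_edges[b - 1] if b > 0 else -np.inf        # bin b covers [left, right)
        right = bin_edges[b] if b < len(bin_edges) else np.inf
        if right <= lo or left >= hi:
            continue                                         # bin cannot contain hits
        hit |= bitmap
    candidates = np.where(hit)[0]                            # exact check on edge bins
    return candidates[(values[candidates] >= lo) & (values[candidates] < hi)]

rng = np.random.default_rng(0)
energies = rng.uniform(0.0, 10.0, 100_000)                   # toy "particle energy" column
edges = np.linspace(0.0, 10.0, 11)                           # ten equal-width bins
index = build_bitmap_index(energies, edges)
hits = range_query(index, energies, edges, 7.5, 9.0)
```

The index is built once; each query then touches only the bitmaps for bins overlapping the range, which is what makes selective searches over massive columns fast.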

  5. Parallelization and scheduling of data intensive particle physics analysis jobs on clusters of PCs

    CERN Document Server

    Ponce, S

    2004-01-01

    Summary form only given. Scheduling policies are proposed for parallelizing data-intensive particle physics analysis applications on computer clusters. Particle physics analysis jobs require the analysis of tens of thousands of particle collision events, each event typically requiring 200 ms of processing time and 600 KB of data. Many jobs are launched concurrently by a large number of physicists. At first view, particle physics jobs seem easy to parallelize, since particle collision events can be processed independently of one another. However, since large amounts of data need to be accessed, the real challenge lies in making efficient use of the underlying computing resources. We propose several job parallelization and scheduling policies aimed at reducing job processing times and at increasing the sustainable load of a cluster server. Since particle collision events are usually reused by several jobs, cache-based job splitting strategies considerably increase cluster utilization and reduce job ...

  6. Bearing faults identification and resonant band demodulation based on wavelet de-noising methods and envelope analysis

    Science.gov (United States)

    Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali

    2017-07-01

    The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but fault severity assessment is a challenging task. The Wavelet Transform (WT), as a multivariate analysis tool, is able to compromise between time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions to the discretized signal, with very fine resolution at low scales but coarser resolution at higher scales; the computational cost increases, however, because different signal resolutions must be produced. The DWT has a lower computational cost, as its dilation function allows the signal to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were provided by Case Western Reserve University. The analysis results showed that the proposed method is effective in detecting bearing faults, identifying the exact fault location, and assessing severity, especially for inner-race and outer-race faults.
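
The envelope-analysis step at the end of the pipeline can be sketched numerically (this is an illustrative reconstruction, not the authors' code): a simulated bearing defect excites structural ringing at a resonance frequency once per fault period, and the Hilbert envelope spectrum recovers the fault repetition frequency. Sampling rate, resonance, and fault frequency below are invented round values.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, f_res, f_fault = 12_000, 3_000, 100            # Hz, illustrative values
t = np.arange(0, 1.0, 1 / fs)

comb = np.zeros_like(t)
comb[:: fs // f_fault] = 1.0                       # one impact every 1/f_fault seconds
ring = np.exp(-800 * t[:60]) * np.sin(2 * np.pi * f_res * t[:60])   # decaying resonance
x = np.convolve(comb, ring, mode="full")[: t.size] + 0.05 * rng.standard_normal(t.size)

envelope = np.abs(hilbert(x))                      # demodulate the resonant band
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[spectrum.argmax()]                 # expected near f_fault
```

In the paper's method, wavelet de-noising precedes this step; here the raw signal is clean enough that the envelope spectrum's dominant line already sits at the simulated fault frequency.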

  7. Analysis of jacobian and singularity of planar parallel robots using screw theory

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jung Hyun; Lee, Jeh Won; Lee, Hyuk Jin [Yeungnam Univ., Gyeongsan (Korea, Republic of)

    2012-11-15

    The Jacobian and singularity analysis of parallel robots is necessary to analyze robot motion. The derivations of the Jacobian matrix and singularity configurations are complicated, and the velocity form of the Jacobian matrix has no geometrical meaning. In this study, screw theory is used to derive the Jacobian of parallel robots. The statics form of the Jacobian has a geometrical meaning, and singularity analysis can be performed using geometrical quantities. Furthermore, this study shows that screw theory is applicable to redundantly actuated robots as well as non-redundant robots.
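
The geometrical content of the statics Jacobian can be illustrated for a planar case (the geometry below is invented, not taken from the paper): each limb applies a force along a line, represented by the planar line screw (direction, moment), and the platform is singular exactly when the limb line screws become linearly dependent, e.g. when all force lines pass through one point.

```python
import numpy as np

def line_screw(point, direction):
    """Planar line screw (dx, dy, m): unit direction plus scalar moment m = p x d."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    m = point[0] * d[1] - point[1] * d[0]
    return np.array([d[0], d[1], m])

def is_singular(screws, tol=1e-9):
    """Platform is singular when the limb line screws are linearly dependent."""
    return bool(abs(np.linalg.det(np.vstack(screws))) < tol)

# generic pose: three limb force lines in general position
generic = [line_screw((0.0, 0.0), (1.0, 0.2)),
           line_screw((1.0, 0.0), (0.0, 1.0)),
           line_screw((0.5, 1.0), (-1.0, 0.3))]

# singular pose: all three limb forces pass through the point (1, 1)
concurrent = [line_screw((0.0, 0.0), (1.0, 1.0)),
              line_screw((2.0, 0.0), (-1.0, 1.0)),
              line_screw((1.0, -1.0), (0.0, 1.0))]
```

The determinant test here is the geometrical singularity condition the abstract refers to: it depends only on the lines of action of the limb forces, not on a velocity-level derivation.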

  8. Screw-System-Based Mobility Analysis of a Family of Fully Translational Parallel Manipulators

    Directory of Open Access Journals (Sweden)

    Ernesto Rodriguez-Leal

    2013-01-01

    This paper investigates the mobility of a family of fully translational parallel manipulators based on screw system analysis, identifying the common constraint and redundant constraints and providing a case study of this approach. The paper presents the branch motion-screws for the 3-RP̲C-Y parallel manipulator, the 3-RCC-Y (or 3-RP̲RC-Y) parallel manipulator, and a newly proposed 3-RP̲C-T parallel manipulator, and then determines the sets of platform constraint-screws for each of these three manipulators. The constraints exerted on the platforms of the 3-RP̲C architectures and the 3-RCC-Y manipulator are analyzed using the screw system approach and have been identified as couples. A similarity has been identified in the axes of the couples: they are perpendicular to the R joint axes, but in the former the axes are coplanar with the base while in the latter they are perpendicular to the limb; the remaining couples act about the axis normal to the base. The motion-screw and constraint-screw system analysis leads to an insightful understanding of the mobility of the platform, which is then obtained by determining the screws reciprocal to the platform constraint-screw sets, resulting in three independent instantaneous translational degrees of freedom. To validate the mobility analysis of the three parallel manipulators, the paper includes motion simulations using commercially available kinematics software.

  9. Study on Parallel Processing for Efficient Flexible Multibody Analysis based on Subsystem Synthesis Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jong-Boo; Song, Hajun; Kim, Sung-Soo [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-06-15

    Flexible multibody simulations are widely used in the industry to design mechanical systems. In flexible multibody dynamics, deformation coordinates are described either relatively in the body reference frame that is floating in the space or in the inertial reference frame. Moreover, these deformation coordinates are generated based on the discretization of the body according to the finite element approach. Therefore, the formulation of the flexible multibody system always deals with a huge number of degrees of freedom and the numerical solution methods require a substantial amount of computational time. Parallel computational methods are a solution for efficient computation. However, most of the parallel computational methods are focused on the efficient solution of large-sized linear equations. For multibody analysis, we need to develop an efficient formulation that could be suitable for parallel computation. In this paper, we developed a subsystem synthesis method for a flexible multibody system and proposed efficient parallel computational schemes based on the OpenMP API in order to achieve efficient computation. Simulations of a rotating blade system, which consists of three identical blades, were carried out with two different parallel computational schemes. Actual CPU times were measured to investigate the efficiency of the proposed parallel schemes.
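
The parallel scheme described above can be sketched in Python, standing in for the paper's OpenMP/C implementation: each identical blade contributes an independent effective subsystem matrix that can be computed concurrently, and only the small assembled base system is handled serially. The 3x3 matrices are stand-ins for real condensed flexible-body terms.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def blade_effective_matrix(blade_id):
    """Stand-in for one blade's condensed flexible-body contribution (SPD 3x3)."""
    rng = np.random.default_rng(blade_id)
    B = rng.normal(size=(3, 3))
    return B @ B.T + 3.0 * np.eye(3)

def assemble(blade_ids, parallel=True):
    """Compute per-blade subsystem matrices (in parallel or serially) and sum
    them into the base-body equation, mirroring the subsystem synthesis structure."""
    if parallel:
        with ThreadPoolExecutor(max_workers=len(blade_ids)) as pool:
            mats = list(pool.map(blade_effective_matrix, blade_ids))
    else:
        mats = [blade_effective_matrix(i) for i in blade_ids]
    return sum(mats)

A_par = assemble([0, 1, 2], parallel=True)   # the three-blade rotor of the paper
A_ser = assemble([0, 1, 2], parallel=False)
```

The design point is that the subsystems share no state until the final assembly, so the per-blade work parallelizes without locking; the parallel and serial paths must, of course, produce identical assembled matrices.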

  10. PC-based support programs coupled with the sets code for large fault tree analysis

    International Nuclear Information System (INIS)

    Hioki, K.; Nakai, R.

    1989-01-01

    Power Reactor and Nuclear Fuel Development Corporation (PNC) has developed four PC programs: IEIQ (Initiating Event Identification and Quantification), MODESTY (Modular Event Description for a Variety of Systems), FAUST (Fault Summary Tables Generation Program) and ETAAS (Event Tree Analysis Assistant System). These programs prepare the input data for the SETS (Set Equation Transformation System) code and construct and quantify event trees (E/Ts) using the output of the SETS code. The capability of these programs is described and some examples of the results are presented in this paper. With these PC programs and the SETS code, PSA can now be performed with more consistency and less manpower.

  11. Failure mode analysis using state variables derived from fault trees with application

    International Nuclear Information System (INIS)

    Bartholomew, R.J.

    1982-01-01

    Fault Tree Analysis (FTA) is used extensively to assess both the qualitative and quantitative reliability of engineered nuclear power systems employing many subsystems and components. FTA is very useful, but the method is limited by its inability to account for failure mode rate-of-change interdependencies (coupling) of statistically independent failure modes. The state variable approach (using FTA-derived failure modes as states) overcomes these difficulties and is applied to the determination of the lifetime distribution function for a heat pipe-thermoelectric nuclear power subsystem. Analyses are made using both Monte Carlo and deterministic methods and compared with a Markov model of the same subsystem

  12. Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery.

    Science.gov (United States)

    Manger, Ryan P; Paxton, Adam B; Pawlicki, Todd; Kim, Gwe-Ya

    2015-05-01

    Surface image guided, Linac-based radiosurgery (SIG-RS) is a modern approach for delivering radiosurgery that utilizes optical stereoscopic imaging to monitor the surface of the patient during treatment in lieu of using a head frame for patient immobilization. Considering the novelty of the SIG-RS approach and the severity of errors associated with the delivery of large doses per fraction, a risk assessment should be conducted to identify potential hazards, determine their causes, and formulate mitigation strategies. The purpose of this work is to investigate SIG-RS using the combined application of failure modes and effects analysis (FMEA) and fault tree analysis (FTA), report on the effort required to complete the analysis, and evaluate the use of FTA in conjunction with FMEA. A multidisciplinary team was assembled to conduct the FMEA on the SIG-RS process. A process map detailing the steps of SIG-RS was created to guide the FMEA. Failure modes were determined for each step in the SIG-RS process, and risk priority numbers (RPNs) were estimated for each failure mode to facilitate risk stratification. The failure modes were ranked by RPN, and FTA was used to determine the root factors contributing to the riskiest failure modes. Using the FTA, mitigation strategies were formulated to address the root factors and reduce the risk of the process. The RPNs were re-estimated based on the mitigation strategies to determine the margin of risk reduction. The FMEA and FTAs for the top two failure modes required an effort of 36 person-hours (30 person-hours for the FMEA and 6 person-hours for the two FTAs). The SIG-RS process consisted of 13 major subprocesses and 91 steps, which amounted to 167 failure modes. Of the 91 steps, 16 were directly related to surface imaging. Twenty-five failure modes resulted in an RPN of 100 or greater, and only one of these top 25 failure modes was specific to surface imaging; the riskiest surface-imaging failure mode had an overall RPN rank of eighth.
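
The RPN-based ranking used in the study follows the standard FMEA recipe, RPN = severity x occurrence x detectability, each scored on a 1-10 scale. A minimal sketch with invented failure modes and scores (not the paper's actual 167 modes):

```python
# Each failure mode gets (severity S, occurrence O, detectability D) scores, 1-10.
# All mode names and scores below are hypothetical, for illustration only.
failure_modes = {
    "wrong isocenter shift applied": (9, 3, 4),
    "surface reference image outdated": (7, 2, 5),
    "camera occluded during delivery": (6, 4, 3),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
ranked = sorted(rpn.items(), key=lambda kv: kv[1], reverse=True)
# modes with RPN >= 100 would be candidates for a follow-up fault tree analysis
high_risk = [name for name, score in ranked if score >= 100]
```

Ranking by RPN is what lets a team spend the (limited) FTA effort only on the few modes at the top of the list, as done for the top two modes in the study.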

  13. State-space-based harmonic stability analysis for paralleled grid-connected inverters

    DEFF Research Database (Denmark)

    Wang, Yanbo; Wang, Xiongfei; Chen, Zhe

    2016-01-01

    This paper addresses a state-space-based harmonic stability analysis of a paralleled grid-connected inverter system. A small-signal model of an individual inverter is developed, in which the LCL filter, the equivalent delay of the control system, and the current controller are modeled. Then, the overall small signal...... model of the paralleled grid-connected inverters is built. Finally, the state-space-based stability analysis approach is developed to explain the harmonic resonance phenomenon. The eigenvalue traces associated with time delay and coupled grid impedance are obtained, which account for how an unstable...... inverter produces harmonic resonance and leads to the instability of the whole paralleled system. The proposed approach reveals the contributions of the grid impedance as well as the coupled effect on other grid-connected inverters under different grid conditions. Simulation and experimental results......
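
The eigenvalue criterion at the heart of such state-space analyses can be shown on a toy model (a single illustrative LC stage with and without damping, not the paper's full paralleled-inverter model): the system is stable only if every eigenvalue of the closed-loop state matrix lies in the open left half-plane.

```python
import numpy as np

def is_stable(A, tol=1e-9):
    """Continuous-time stability test: all eigenvalues strictly in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < -tol))

L_f, C_f, R_d = 1e-3, 10e-6, 0.5        # illustrative filter inductance, capacitance, damping
A_damped = np.array([[-R_d / L_f, -1.0 / L_f],
                     [1.0 / C_f,   0.0      ]])
A_undamped = np.array([[0.0,       -1.0 / L_f],
                       [1.0 / C_f,  0.0      ]])
# with R_d = 0 the resonant poles sit on the imaginary axis (sustained resonance);
# adding damping moves them into the left half-plane
```

In the paper this test is applied to the assembled multi-inverter state matrix, and the eigenvalue traces are followed as time delay and grid impedance vary.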

  14. An investigation of the relations between fault tree analysis and cause consequence analysis with special reference to a photometry and conductimetry system

    International Nuclear Information System (INIS)

    Weber, G.

    1980-02-01

    For an automated photometry and conductimetry system, the relations between cause consequence analysis and fault tree analysis have been investigated. It has been shown how failure combinations of a cause consequence diagram and minimal cut sets of a fault tree can be identified. This procedure allows a mutual check between fault tree analysis and cause consequence analysis. A representation of all failure combinations of the system by means of a matrix provides a further check on the analysis. Moreover, heuristic rules improving and simplifying the cause consequence analysis can be found; the assumptions necessary for the validity of these rules are discussed. Methodologically, the relation of a fault tree and a cause consequence diagram can be represented (under certain conditions) as the relation of a Boolean function and a binary decision tree. (orig.)

  15. Interactive system design using the complementarity of axiomatic design and fault tree analysis

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Do, Sung Hee; Lee, Tae Sik

    2007-01-01

    To efficiently design safety-critical systems such as nuclear power plants, which require high reliability, methodologies allowing for rigorous interactions between the synthesis and analysis processes have been proposed. This paper attempts to develop a reliability-centered design framework through an interactive process between Axiomatic Design (AD) and Fault Tree Analysis (FTA). Integrating AD and FTA into a single framework appears to be a viable solution, as they complement each other with their unique advantages: AD provides a systematic synthesis tool, while FTA is commonly used as a safety analysis tool. These methodologies build a design process that is less subjective, and they enable designers to develop insights that lead to solutions with improved reliability. Due to the nature of the two methodologies, the information involved in each process is complementary: a success tree versus a fault tree. Thus, at each step a system is synthesized using AD, and its reliability is then quantified using the fault tree derived from the AD synthesis process. The converted fault tree provides an opportunity to examine the completeness of the outcome of the synthesis process. This study presents an example of the design of a Containment Heat Removal System (CHRS). A case study illustrates the process of designing the CHRS with an interactive design framework, focusing on the conversion of the AD process to FTA.

  16. Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA

    Science.gov (United States)

    Gallina, B.; Haider, Z.; Carlsson, A.

    2018-05-01

    Attitude Control Systems (ACSs) maintain the orientation of a satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and must ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques, and it is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS methodology is one of them; it incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus the generation of FTA results. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.

  17. Measurement and analysis of workload effects on fault latency in real-time systems

    Science.gov (United States)

    Woodbury, Michael H.; Shin, Kang G.

    1990-01-01

    The authors demonstrate the need to address fault latency in highly reliable real-time control computer systems. It is noted that the effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. The authors present experimental evidence indicating that the duration of fault latency is dependent on workload. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method makes it possible to derive the distribution of fault latency duration. Experimental results obtained from the fault-tolerant multiprocessor at the NASA Airlab are presented and discussed.

  18. Dynamic and Control Analysis of Modular Multi-Parallel Rectifiers (MMR)

    DEFF Research Database (Denmark)

    Zare, Firuz; Ghosh, Arindam; Davari, Pooya

    2017-01-01

    This paper presents dynamic analysis of a Modular Multi-Parallel Rectifier (MMR) based on state-space modelling and analysis. The proposed topology is suitable for high power application which can reduce line current harmonics emissions significantly. However, a proper controller is required...... to share and control current through each rectifier. Mathematical analysis and preliminary simulations have been carried out to verify the proposed controller under different operating conditions....

  19. Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.

    Science.gov (United States)

    1995-06-01

    (Garbled scanned-record fragment; the recoverable content indicates mixed-mode hierarchical fault description and fault simulation with automatic injection of transient and stuck-at faults at specified locations and times, and cites: J. Sosnowski, "Evaluation of transient hazards in microprocessor controllers," Digest, FTCS-16, The Sixteenth ...)

  20. Calculation of critical fault recovery time for nonlinear systems based on region of attraction analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Blanke, Mogens

    2014-01-01

    of a system. It must be guaranteed that the trajectory of a system subject to fault remains in the region of attraction (ROA) of the post-fault system during this time. This paper proposes a new algorithm to compute the critical fault recovery time for nonlinear systems with polynomial vector fields using sum...

  1. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments

    NARCIS (Netherlands)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode,

  2. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    National Research Council Canada - National Science Library

    Steed, Chad A; Fitzpatrick, Patrick J; Jankun-Kelly, T. J; Swan II, J. E

    2008-01-01

    ... for a particular dependent variable. These capabilities are combined into a unique visualization system that is demonstrated via a North Atlantic hurricane climate study using a systematic workflow. This research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.

  3. Stiffness analysis and comparison of a Biglide parallel grinder with alternative spatial modular parallelograms

    DEFF Research Database (Denmark)

    Wu, Guanglei; Zou, Ping

    2017-01-01

    This paper deals with the stiffness modeling, analysis and comparison of a Biglide parallel grinder with two alternative modular parallelograms. It turns out that the Cartesian stiffness matrix of the manipulator has the property that it can be decoupled into two homogeneous matrices, correspondi...

  4. Operation States Analysis of the Series-Parallel resonant Converter Working Above Resonance Frequency

    Directory of Open Access Journals (Sweden)

    Peter Dzurko

    2007-01-01

    Operation states analysis of a series-parallel resonant converter working above the resonance frequency is described in the paper. Principal equations are derived for the individual operation states, and diagrams are constructed on their basis. The diagrams give a complete picture of the converter behaviour for the individual circuit parameters. The waveforms may be utilised in designing the individual parts of the inverter.

  5. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    Science.gov (United States)

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  6. Sparse Probabilistic Parallel Factor Analysis for the Modeling of PET and Task-fMRI Data

    DEFF Research Database (Denmark)

    Beliveau, Vincent; Papoutsakis, Georgios; Hinrich, Jesper Løve

    2017-01-01

    Modern datasets are often multiway in nature and can contain patterns common to a mode of the data (e.g. space, time, and subjects). Multiway decomposition methods such as parallel factor analysis (PARAFAC) take into account the intrinsic structure of the data, and sparse versions of these methods improv...

  7. Operation Analysis of the Series-Parallel Resonant Converter Working above Resonance Frequency

    Directory of Open Access Journals (Sweden)

    Peter Dzurko

    2006-01-01

    The present article deals with a theoretical analysis of the operation of a series-parallel resonant converter working above the resonance frequency. Principal equations are derived for the individual operation intervals, and based on these, waveforms of the individual quantities are constructed for both inverter operation at load and at no load. The waveforms may be utilised in designing the individual parts of the inverter.

  8. Developing a PC-based expert system for fault analysis of reactor instruments

    International Nuclear Information System (INIS)

    Diwakar, M.P.; Rathod, N.C.; Bairi, B.R.; Darbhe, M.D.; Joglekar, S.S.

    1989-01-01

    This paper describes the development of an expert system for fault analysis of electronic instruments in the CIRUS nuclear reactor. The system was developed in Prolog on an IBM PC-XT compatible computer. A 'model-based' approach (Button et al, 1986) was adopted, combining 'frames' and 'rules' to provide flexible control over the inferencing mechanisms. Frames represent the domain objects as well as the inter-object relationships, and include 'demons' or 'active values' for triggering actions. Rules, along with frames, are used for fault analysis; the rules can be activated either in a data-driven or a goal-driven manner, and the use of frames makes rule management easier. Developing an in-house shell proved advantageous compared to using commercially available shells, and the model-based approach was more efficient than a production-system architecture; the use of hybrid representations for diagnostic applications is therefore advocated. Based on this experience, some general recommendations for developing such systems are presented. The expert system helps novice operators to understand the process of diagnosis and to achieve a significant level of competence. The system may not achieve the required level of proficiency by itself, but it can be used to train operators to become experts. (author). 12 refs

  9. Fault detection of flywheel system based on clustering and principal component analysis

    Directory of Open Access Journals (Sweden)

    Wang Rixin

    2015-12-01

    Considering the nonlinear, multifunctional properties of a double flywheel with closed-loop control, a two-step method combining clustering and principal component analysis is proposed to detect the two faults in multifunctional flywheels. In the first step of the proposed algorithm, clustering is used for feature recognition to check the instructions of the "integrated power and attitude control" system, such as attitude control, energy storage, or energy discharge. These commands make the flywheel system work in different operation modes, so the relationships of the parameters in the different operations define the cluster structure of the training data. Ordering points to identify the clustering structure (OPTICS) can automatically identify these clusters from the reachability plot, and the K-means algorithm then divides the training data into the corresponding operations. Finally, the second step of the proposed model defines the relationship of the parameters in each operation through the principal component analysis (PCA) method. Compared with a plain PCA model, the proposed approach is capable of identifying new clusters and learning the new behavior of incoming data. The simulation results show that it can effectively detect the faults in the multifunctional flywheel system.
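
A numpy-only sketch of the two-step scheme: samples are first grouped by operating mode (the paper uses OPTICS and K-means; here the mode labels are assumed known), then a per-mode PCA model flags faults by their squared prediction error (SPE). The two synthetic modes, the threshold rule, and the fault sample are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_pca(X, k):
    """Return the mode mean and the top-k principal directions (rows of P)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def spe(x, mu, P):
    """Squared prediction error: energy of x outside the principal subspace."""
    r = (x - mu) - P.T @ (P @ (x - mu))
    return float(r @ r)

# two operating modes with different parameter correlations
speed = rng.normal(0.0, 1.0, 400)
mode_a = np.c_[speed[:200],  2.0 * speed[:200] + 0.05 * rng.normal(0.0, 1.0, 200)]
mode_b = np.c_[speed[200:], -1.0 * speed[200:] + 0.05 * rng.normal(0.0, 1.0, 200)]

model_a = fit_pca(mode_a, 1)
threshold = 3.0 * np.quantile([spe(x, *model_a) for x in mode_a], 0.99)

normal = np.array([1.0, 2.0])     # obeys the mode-a correlation
faulty = np.array([1.0, -2.0])    # violates it -> large SPE
```

The per-mode modeling is the point of the first step: a sample that is perfectly normal for one operation mode can be a fault signature in another, so a single global PCA model would blur the two.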

  10. The completeness of fault tree analysis in the presence of dependencies

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-02-01

    Existing standard fault tree assessments of systems do not include an assessment of the effects of dependencies in an integrated fashion, but simply add on a "common cause cut-off". To support the values used for this cut-off, cut-sets involving certain groups of components susceptible to dependent failure can be assessed using the Distributed Failure Probability method. These rank one contributions do not cover all the possibilities, however, so there is an outstanding need for an integrated procedure for dependent failure assessment of systems which allows for all ranks of cut-set. The purpose of this note is to provide such a procedure which builds upon the standard approach to fault tree analysis. In this standard approach, only a limited number of cut-sets is found, and they are evaluated assuming independence of their components. So, some cut-sets are neglected which could be important contributors to the system failure probability if their components are not independent of each other. The procedure developed therefore deals with this truncation problem and with dependency together. The result is a practical and efficient method for bounding system failure probabilities. The method is a progressive one, whereby this bound is reduced as necessary by a more refined analysis. A simple example is used to illustrate the procedure. (author)
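
Under the independence assumption that this note critiques, each minimal cut set's probability is the product of its component probabilities, their sum gives the usual upper bound on system failure probability, and truncation discards low-probability cut sets, which is exactly where unmodeled dependencies can hide. A toy illustration with invented component probabilities and cut sets:

```python
# component failure probabilities (illustrative)
p = {"pumpA": 1e-3, "pumpB": 1e-3, "valve": 5e-4, "power": 1e-4}

# minimal cut sets of a hypothetical fault tree
cut_sets = [("pumpA", "pumpB"), ("valve",), ("pumpA", "power")]

def cut_prob(cs):
    """Probability of a cut set assuming independent components."""
    prob = 1.0
    for comp in cs:
        prob *= p[comp]
    return prob

cutoff = 1e-6                                              # truncation threshold
kept = [cs for cs in cut_sets if cut_prob(cs) >= cutoff]
upper_bound_kept = sum(cut_prob(cs) for cs in kept)        # rare-event upper bound
neglected = sum(cut_prob(cs) for cs in cut_sets if cs not in kept)
# 'neglected' bounds the truncation error only under independence; dependent
# failures can make the discarded cut sets far more probable than computed here
```

The procedure proposed in the note addresses exactly this gap: it bounds the contribution of the truncated cut sets without assuming their components are independent.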

  11. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    Science.gov (United States)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to address this reliability challenge is component-level redundant design, for which there are four possible configurations. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter, with the aim of determining the more reliable configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the steady-state junction temperature of the semiconductor switches, which is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated, showing that under different conditions, different configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
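
For exponentially distributed lifetimes, the MTTF comparison between a single switch and an active redundant pair reduces to a short calculation: the pair first loses one switch at rate 2λ, then the survivor fails at rate λ, so MTTF = 1/(2λ) + 1/λ. A sketch with an illustrative failure rate (not the article's device data):

```python
def mttf_single(lam):
    """MTTF of one switch with constant failure rate lam (exponential lifetime)."""
    return 1.0 / lam

def mttf_redundant_pair(lam):
    """Active redundant pair tolerating one fault: first failure at rate 2*lam,
    then the surviving switch fails at rate lam."""
    return 1.0 / (2.0 * lam) + 1.0 / lam

lam = 1e-5                                           # failures per hour, illustrative
gain = mttf_redundant_pair(lam) / mttf_single(lam)   # 1.5x, independent of lam
```

Note the idealization: this counts only one fault type. As the article stresses, a parallel pair tolerates an open-circuit fault but is defeated by a short, and a series pair the reverse, which is why the temperature-dependent failure-mode mix decides the better configuration.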

  12. Multi-state system in a fault tree analysis of a nuclear based thermochemical hydrogen plant

    International Nuclear Information System (INIS)

    Zhang, Y.

    2008-01-01

    Nuclear-based hydrogen generation is a promising way to supply hydrogen for this large market in the future. This thesis focuses on one of the most promising methods, a thermochemical Cu-Cl cycle, which is currently under development by UOIT, Atomic Energy of Canada Limited (AECL) and the Argonne National Laboratory (ANL). The safety issues of the Cu-Cl cycle are addressed in this thesis. An investigation of major accident scenarios shows that potential tragedies can be avoided with effective risk analysis and safety management programs. As a powerful and systematic tool, fault tree analysis (FTA) is adapted to the particular needs of the Cu-Cl system. This thesis develops a new method that combines FTA with a reliability analysis tool, multi-state system (MSS), to improve the accuracy of FTA and also improve system reliability. (author)

  13. SALP-PC, a computer program for fault tree analysis on personal computers

    International Nuclear Information System (INIS)

    Contini, S.; Poucet, A.

    1987-01-01

    The paper presents the main characteristics of the SALP-PC computer code for fault tree analysis. The program has been developed in Fortran 77 on an Olivetti M24 personal computer (IBM compatible) in order to reach a high degree of portability. It is composed of six processors implementing the different phases of the analysis procedure. This particular structure presents some advantages like, for instance, the restart facility and the possibility to develop an event tree analysis code. The set of allowed logical operators, i.e. AND, OR, NOT, K/N, XOR, INH, together with the possibility to define boundary conditions, make the SALP-PC code a powerful tool for risk assessment. (orig.)

  14. Constraints on the stress state of the San Andreas Fault with analysis based on core and cuttings from San Andreas Fault Observatory at Depth (SAFOD) drilling phases 1 and 2

    Science.gov (United States)

    Tembe, S.; Lockner, D.; Wong, T.-F.

    2009-01-01

    Analysis of field data has led different investigators to conclude that the San Andreas Fault (SAF) has either anomalously low frictional sliding strength (?? 0.6). Arguments for the apparent weakness of the SAF generally hinge on conceptual models involving intrinsically weak gouge or elevated pore pressure within the fault zone. Some models assert that weak gouge and/or high pore pressure exist under static conditions while others consider strength loss or fluid pressure increase due to rapid coseismic fault slip. The present paper is composed of three parts. First, we develop generalized equations, based on and consistent with the Rice (1992) fault zone model to relate stress orientation and magnitude to depth-dependent coefficient of friction and pore pressure. Second, we present temperature-and pressure-dependent friction measurements from wet illite-rich fault gouge extracted from San Andreas Fault Observatory at Depth (SAFOD) phase 1 core samples and from weak minerals associated with the San Andreas Fault. Third, we reevaluate the state of stress on the San Andreas Fault in light of new constraints imposed by SAFOD borehole data. Pure talc (?????0.1) had the lowest strength considered and was sufficiently weak to satisfy weak fault heat flow and stress orientation constraints with hydrostatic pore pressure. Other fault gouges showed a systematic increase in strength with increasing temperature and pressure. In this case, heat flow and stress orientation constraints would require elevated pore pressure and, in some cases, fault zone pore pressure in excess of vertical stress. Copyright 2009 by the American Geophysical Union.

  15. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    Science.gov (United States)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Based on the working principle of the binocular photoelectric instrument optical-axis-parallelism digital calibration instrument, the factors affecting system precision are analyzed for each component of the instrument, and a precision analysis model is established. Given the error distributions, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the centre coordinate of the circle target image. The method can further guide error allocation, help control the factors that have the greater influence on the comprehensive error, and improve the measurement accuracy of the optical-axis-parallelism digital calibration instrument.
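
    The Monte Carlo idea reads roughly as follows in code. This is only a sketch: the error sources, their distributions and the linearised sensitivities to the circle-target centre coordinate are all assumed for illustration, not taken from the paper.

```python
import math
import random

random.seed(0)

def centre_shift(tilt_mrad, focal_err_mm, pixel_noise_px):
    # Assumed linearised sensitivities (px per unit error), for illustration.
    return 2.0 * tilt_mrad + 0.8 * focal_err_mm + 1.0 * pixel_noise_px

# Sample each error source from its assumed distribution and observe the
# resulting spread of the circle-centre coordinate.
samples = []
for _ in range(100_000):
    tilt = random.gauss(0.0, 0.05)       # mrad, Gaussian
    focal = random.uniform(-0.1, 0.1)    # mm, uniform tolerance band
    noise = random.gauss(0.0, 0.2)       # px, detector noise
    samples.append(centre_shift(tilt, focal, noise))

mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
print(round(std, 3))  # combined 1-sigma centre error, in pixels
```

    Rerunning the simulation with one error source tightened at a time reveals which factor dominates the comprehensive error, which is exactly the error-allocation guidance the abstract describes.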

  16. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation

    Science.gov (United States)

    2018-01-01

    Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault types, but utilisation of artificial intelligence methods with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previously reported works. Data reduction was also applied using stepwise regression prior to the training process of the SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site. PMID:29370230

  17. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    Directory of Open Access Journals (Sweden)

    Hazlee Azil Illias

    Full Text Available Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault types, but utilisation of artificial intelligence methods with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previously reported works. Data reduction was also applied using stepwise regression prior to the training process of the SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.

  18. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    Science.gov (United States)

    Illias, Hazlee Azil; Zhao Liang, Wee

    2018-01-01

    Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault types, but utilisation of artificial intelligence methods with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previously reported works. Data reduction was also applied using stepwise regression prior to the training process of the SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.
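
    The paper's modified EPSO with time-varying acceleration coefficients is not reproduced here, but the textbook PSO core it builds on is short. A hedged sketch, tuning a single mock "hyper-parameter" against a toy validation loss (the loss function and all constants are invented for illustration):

```python
import random

random.seed(1)

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f on [lo, hi] with plain particle swarm optimisation."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's best position
    pbest_val = [f(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]   # swarm's best so far
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# Toy use: the mock validation loss is minimised at parameter value 2.5.
best_x, best_loss = pso(lambda x: (x - 2.5) ** 2 + 0.1, 0.0, 10.0)
print(round(best_x, 2))  # converges towards 2.5
```

    The MEPSO-TVAC variant in the paper additionally varies c1 and c2 over the iterations and adds evolutionary operators; in an SVM context, f(x) would be cross-validation error as a function of the SVM hyper-parameters.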

  19. Validity of active fault identification through magnetic anomalous using earthquake mechanism, microgravity and topography structure analysis in Cisolok area

    Science.gov (United States)

    Setyonegoro, Wiko; Kurniawan, Telly; Ahadi, Suaidi; Rohadi, Supriyanto; Hardy, Thomas; Prayogo, Angga S.

    2017-07-01

    Research was conducted to determine magnetic anomaly values in order to identify normal and reverse faulting of the Meratus type, trending northeast-southwest, in Cisolok, Sukabumi. Data collection was performed on a measurement grid at 5 m intervals, with measurements made using a GSM-19T Precision Proton Magnetometer (PPM). Identifying an active fault from magnetic data alone requires additional parameters. The purpose of this study is therefore to identify the active fault using magnetic anomalies, relating them to the subsurface structure through validation against earthquake mechanism, microgravity and topographic structure analyses in Java Island. Qualitative interpretation is done by analyzing the residual anomaly after reduction to the pole, while quantitative interpretation is done by analyzing the pattern of residual anomalies through computation. In the quantitative interpretation, the reduced-to-the-pole magnetic field anomaly ranges from -700 nT to 700 nT, while the qualitative interpretation from modeling along paths AA', BB' and CC' shows magnetic anomalies at the coordinates of liquefaction sources with values of 1028.04, 1416.21, -1565 and -1686.91. The measurements in Cisolok yield magnetic anomalies indicating a high content of alumina (Al) and iron (Fe), identified as appearing through the fault gap towards the northeast through the Rajamandala-Lembang Fault, related to a normal-fault mechanism with a slip rate of 2 mm/year.

  20. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator, containing a PMDW generator, a correlation filtering analysis module, and a signal resampler, is developed to eliminate the Doppler effect embedded in the recorded acoustic signal of the bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified from the signal itself. The signal resampler is then applied to eliminate the Doppler effect using the identified parameters. With its ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to the diagnosis of locomotive roller bearing defects. PMID:24803197
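
    The resampling step alone can be sketched as below. This assumes a constant-speed, straight-line pass-by whose parameters (speed, closest distance, pass time) would in the paper come from the PMDW-based correlation filtering, which is not reproduced here; all numbers are illustrative.

```python
import math

C = 340.0  # speed of sound in air, m/s

def emission_time(t_recv, speed, closest_dist, t_pass):
    """Fixed-point solve of t_recv = t_emit + distance(t_emit)/C."""
    t = t_recv
    for _ in range(60):  # contraction factor ~ speed/C, converges quickly
        x = speed * (t - t_pass)               # source position along track
        t = t_recv - math.hypot(x, closest_dist) / C
    return t

def remove_doppler(signal, fs, speed, closest_dist, t_pass):
    """Resample a received signal onto a uniform grid in emission time."""
    t_emit = [emission_time(i / fs, speed, closest_dist, t_pass)
              for i in range(len(signal))]     # monotonically increasing
    t0, t1 = t_emit[0], t_emit[-1]
    out, j = [], 0
    for i in range(len(signal)):
        t = t0 + (t1 - t0) * i / (len(signal) - 1)
        while j < len(t_emit) - 2 and t_emit[j + 1] < t:
            j += 1                             # bracket t between samples
        w = (t - t_emit[j]) / (t_emit[j + 1] - t_emit[j])
        out.append((1 - w) * signal[j] + w * signal[j + 1])
    return out

# Simulate a 200 Hz source passing a microphone 5 m off the track at 30 m/s,
# then undo the Doppler shift.
fs, f0 = 4000, 200.0
recv = [math.sin(2 * math.pi * f0 * emission_time(i / fs, 30.0, 5.0, 0.5))
        for i in range(fs)]                    # 1 s of received signal
fixed = remove_doppler(recv, fs, 30.0, 5.0, 0.5)
```

    After resampling, the tone is restored to a constant 200 Hz in emission time, which is the precondition for the transient model analysis applied next in the paper.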

  1. Analysis and Modeling of Circulating Current in Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand

    2015-01-01

    Parallel-connected inverters are gaining attention for high power applications because of the limited power handling capability of the power modules. Moreover, the parallel-connected inverters may have low total harmonic distortion of the ac current if they are operated with interleaved pulse-width modulation (PWM). However, the interleaved PWM causes a circulating current between the inverters, which in turn causes additional losses. A model describing the dynamics of the circulating current is presented in this study, which shows that the circulating current depends on the common-mode voltage. Using this model, the circulating current between two parallel-connected inverters is analysed. The peak and root mean square (rms) values of the normalised circulating current are calculated for different PWM methods, which makes this analysis a valuable tool to design a filter for the circulating current.

  2. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. 
Using 3D seismic reflection data and new

  3. Analysis of gamma irradiator dose rate using spent fuel elements with parallel configuration

    International Nuclear Information System (INIS)

    Setiyanto; Pudjijanto MS; Ardani

    2006-01-01

    To enhance the utilization of RSG-GAS reactor spent fuel, a gamma irradiator using spent fuel elements as the gamma source is a suitable choice. Such an irradiator can be used for food sterilization and preservation. As a first step before realization, it is necessary to determine the gamma dose rate theoretically. The assessment was carried out for a parallel configuration of fuel elements, with the irradiation space placed between the rows of fuel elements. The parallel model was chosen for comparison with the circular model, and because it offers more space for irradiation and easier manipulation of the irradiation target. Dose rate calculations were done with MCNP, while the gamma activities of the fuel elements were estimated with the ORIGEN code assuming an average delay time of 1 year. The calculation results show that the gamma dose rate of the parallel model is up to 50% lower than that of the circular model, but the value is still sufficient for sterilization and preservation. For food preservation in particular, the parallel model is more flexible, as the gamma dose rate can be adjusted to the irradiation needs. The conclusion of this assessment is that a gamma irradiator using reactor spent fuel in the parallel configuration offers more advantages than the circular model. (author)

  4. Research of influence of open-winding faults on properties of brushless permanent magnets motor

    Science.gov (United States)

    Bogusz, Piotr; Korkosz, Mariusz; Powrózek, Adam; Prokop, Jan; Wygonik, Piotr

    2017-12-01

    The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or a few parallel branches, operation of the motor can be continued. Waveforms of currents, voltages and electromagnetic torque in the discussed fault states were determined using the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.

  5. Research of influence of open-winding faults on properties of brushless permanent magnets motor

    Directory of Open Access Journals (Sweden)

    Bogusz Piotr

    2017-12-01

    Full Text Available The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or a few parallel branches, operation of the motor can be continued. Waveforms of currents, voltages and electromagnetic torque in the discussed fault states were determined using the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.

  6. Analysis of Fault Permeability Using Mapping and Flow Modeling, Hickory Sandstone Aquifer, Central Texas

    Energy Technology Data Exchange (ETDEWEB)

    Nieto Camargo, Jorge E., E-mail: jorge.nietocamargo@aramco.com; Jensen, Jerry L., E-mail: jjensen@ucalgary.ca [University of Calgary, Department of Chemical and Petroleum Engineering (Canada)

    2012-09-15

    Reservoir compartments, typical targets for infill well locations, are commonly created by faults that may reduce permeability. A narrow fault may consist of a complex assemblage of deformation elements that result in spatially variable and anisotropic permeabilities. We report on the permeability structure of a km-scale fault sampled through drilling a faulted siliciclastic aquifer in central Texas. Probe and whole-core permeabilities, serial CAT scans, and textural and structural data from the selected core samples are used to understand permeability structure of fault zones and develop predictive models of fault zone permeability. Using numerical flow simulation, it is possible to predict permeability anisotropy associated with faults and evaluate the effect of individual deformation elements in the overall permeability tensor. We found relationships between the permeability of the host rock and those of the highly deformed (HD) fault-elements according to the fault throw. The lateral continuity and predictable permeability of the HD fault elements enhance capability for estimating the effects of subseismic faulting on fluid flow in low-shale reservoirs.
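
    The anisotropy such layered fault elements produce follows from the standard series/parallel permeability averages. A sketch with invented thicknesses and permeabilities (not the paper's measurements):

```python
def k_across(thicknesses, perms):
    """Effective permeability for flow across the layers (harmonic average)."""
    return sum(thicknesses) / sum(t / k for t, k in zip(thicknesses, perms))

def k_along(thicknesses, perms):
    """Effective permeability for flow along the layers (arithmetic average)."""
    return sum(t * k for t, k in zip(thicknesses, perms)) / sum(thicknesses)

# A 1 m slice of fault zone: host rock plus a thin highly deformed (HD) element.
t = [0.9, 0.1]      # thicknesses, m
k = [100.0, 0.01]   # permeabilities, mD (illustrative values)
print(round(k_across(t, k), 3))  # -> 0.1  (the HD element dominates)
print(round(k_along(t, k), 1))   # -> 90.0 (the host rock dominates)
```

    The three-orders-of-magnitude contrast between the two directions is the kind of fault-parallel versus fault-normal anisotropy that the flow simulations in the abstract quantify from core data.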

  7. Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan

    Directory of Open Access Journals (Sweden)

    Yu-Wen Chang

    2017-01-01

    Full Text Available Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of the active fault during a particular lifetime up to the present time. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source, and serves as a reference for a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the characteristic behavior of the fault. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than current hazard estimates when the time-dependent model is used on those faults, particularly where the elapsed time since the last event of the fault (such as the Chelungpu fault) is short.
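
    The contrast between the two renewal models can be illustrated directly. The sketch below computes the conditional probability of rupture in the next 50 years under a Poisson model and under a Brownian passage time (BPT) model with a short elapsed time since the last event; the recurrence interval and aperiodicity are illustrative values, not the study's.

```python
import math

def bpt_pdf(t, mu, alpha):
    """BPT (inverse Gaussian) density; mu = mean recurrence, alpha = aperiodicity."""
    return (math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3))
            * math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t)))

def bpt_cdf(t, mu, alpha, n=20000):
    """CDF by trapezoid-rule integration of the density from ~0 to t."""
    h = t / n
    s = 0.5 * (bpt_pdf(1e-9, mu, alpha) + bpt_pdf(t, mu, alpha))
    s += sum(bpt_pdf(i * h, mu, alpha) for i in range(1, n))
    return s * h

def cond_prob_bpt(elapsed, dt, mu, alpha):
    """P(rupture within dt | quiet for `elapsed` years), BPT renewal model."""
    f0, f1 = bpt_cdf(elapsed, mu, alpha), bpt_cdf(elapsed + dt, mu, alpha)
    return (f1 - f0) / (1.0 - f0)

def cond_prob_poisson(dt, mu):
    """Memoryless model: the elapsed time is irrelevant."""
    return 1.0 - math.exp(-dt / mu)

mu, alpha = 250.0, 0.5   # assumed: 250 yr mean recurrence, aperiodicity 0.5
p_pois = cond_prob_poisson(50.0, mu)
p_bpt = cond_prob_bpt(30.0, 50.0, mu, alpha)   # only 30 yr since last event
print(round(p_pois, 3), round(p_bpt, 3))
```

    With a short elapsed time the BPT probability is far below the Poisson value, which is the mechanism behind the abstract's finding that time-dependent hazard near recently ruptured faults (such as the Chelungpu fault) is lower than the time-independent estimate.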

  8. Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis

    Science.gov (United States)

    Lents, Charles; Hardin, Larry; Rheaume, Jonathan; Kohlman, Lee

    2016-01-01

    The conceptual design of a parallel gas-electric hybrid propulsion system for a conventional single aisle twin engine tube and wing vehicle has been developed. The study baseline vehicle and engine technology are discussed, followed by results of the hybrid propulsion system sizing and performance analysis. The weights analysis for the electric energy storage & conversion system and thermal management system is described. Finally, the potential system benefits are assessed.

  9. Near-surface clay authigenesis in exhumed fault rock of the Alpine Fault Zone (New Zealand); O-H-Ar isotopic, XRD and chemical analysis of illite and chlorite

    Science.gov (United States)

    Boles, Austin; Mulch, Andreas; van der Pluijm, Ben

    2018-06-01

    Exhumed fault rock of the central Alpine Fault Zone (South Island, New Zealand) shows extensive clay mineralization, and it has been the focus of recent research that aims to describe the evolution and frictional behavior of the fault. Using Quantitative X-ray powder diffraction, 40Ar/39Ar geochronology, hydrogen isotope (δD) geochemistry, and electron microbeam analysis, we constrain the thermal and fluid conditions of deformation that produced two predominant clay phases ubiquitous to the exposed fault damage zone, illite and chlorite. Illite polytype analysis indicates that most end-member illite and chlorite material formed in equilibrium with meteoric fluid (δD = -55 to -75‰), but two locations preserve a metamorphic origin of chlorite (δD = -36 to -45‰). Chlorite chemical geothermometry constrains crystal growth to T = 210-296 °C. Isotopic analysis also constrains illite growth to T < 100 °C, consistent with the mineralogy, with Ar ages <0.5 Ma. High geothermal gradients in the study area promoted widespread, near-surface mineralization, and limited the window of clay authigenesis in the Alpine Fault Zone to <5 km for chlorite and <2 km for illite. This implies a significant contrast between fault rock exposed at the surface and that at depth, and informs discussions about fault strength, clays and frictional behavior.

  10. Seismic investigation of the Kunlun Fault: Analysis of the INDEPTH IV 2-D active-source seismic dataset

    Science.gov (United States)

    Seelig, William George

    The Tibetan Plateau has experienced significant crustal thickening and deformation since the continental subduction and collision of the Asian and Indian plates in the Eocene. Deformation of the northern Tibetan Plateau is largely accommodated by strike-slip faulting. The Kunlun Fault is a 1000-km long strike-slip fault near the northern boundary of the Plateau that has experienced five magnitude 7.0 or greater earthquakes in the past 100 years and represents a major rheological boundary. Active-source, 2-D seismic reflection/refraction data, collected as part of project INDEPTH IV (International Deep Profiling of Tibet and the Himalaya, phase IV) in 2007, was used to examine the structure and the dip of the Kunlun fault. The INDEPTH IV data was acquired to better understand the tectonic evolution of the northeastern Tibetan Plateau, such as the far-field deformation associated with the continent-continent collision and the potential subduction of the Asian continent beneath northern Tibet. Seismic reflection common depth point (CDP) stacks were examined to look for reflectivity patterns that may be associated with faulting. A possible reflection from the buried North Kunlun Thrust (NKT) is identified at 18-21 km underneath the East Kunlun Mountains, with an estimated apparent dip of 15°S and thrusting to the north. Minimally-processed shot gathers were also inspected for reflections off near-vertical structures such as faults and information on first-order velocity structure. Shot offset and nearest receiver number to reflection was catalogued to increase confidence of picks. Reflections off the North Kunlun (NKF) and South Kunlun Faults (SKF) were identified and analyzed for apparent dip and subsurface geometry. Fault reflection analysis found that the North Kunlun Fault had an apparent dip of approximately 68ºS to an estimated depth of 5 km, while the South Kunlun Fault dipped at approximately 78ºN to an estimated 3.5 km depth. Constraints on apparent dip and

  11. The fault tree as a tool in safety analysis in nuclear power plants

    International Nuclear Information System (INIS)

    Waddington, J.G.; Wild, A.

    1981-01-01

    Modern safety analysis must be able to identify realistic failure modes based on realistic operation and system malfunction, demonstrate rigorously that adequate independence exists between a malfunctioning system and those other systems required to mitigate the effects of the malfunction, design adequate reliability into systems important to plant safety and to demonstrate rigorously that the design reliability is met in operation, and identify the realistic actions expected of the operator. Fault trees, which have proved to be a powerful tool to achieve these objectives, are inevitably large and must be computerized. However, the computerized system must be simple, must allow merging of branches developed independently, must provide for easy modification and the processing must be economical and easily accessible. A new system for displaying, plotting and analysing fault trees has been developed and implemented on a small computer at AECB to demonstrate the viability of the approach to designers, and to provide a tool to assess licensee's submissions on failure modes of support systems such as electrical, service water and air, and to assess reliability predictions for special safety systems. (author)

  12. A systematic fault tree analysis based on multi-level flow modeling

    International Nuclear Information System (INIS)

    Gofuku, Akio; Ohara, Ai

    2010-01-01

    The fault tree analysis (FTA) is widely applied for the safety evaluation of large-scale and mission-critical systems. Because the quality of an FTA strongly depends on the skill of the analyst, however, problems arise with respect to (1) education and training, (2) unreliable quality, (3) the necessity of expert knowledge, and (4) updating FTA results after the reconstruction of a target system. To address these problems, many techniques that systematize FTA activities by applying computer technologies have been proposed. However, these techniques use only structural information of a target system and do not use functional information, which is one of the important properties of an artifact. The principle of FTA is to comprehensively trace cause-effect relations from a top undesirable effect to anomaly causes. This tracing is similar to the causality estimation technique that the authors previously proposed to find plausible counteractions to prevent or mitigate undesirable plant behavior, based on models built with a functional modeling technique, Multilevel Flow Modeling (MFM). The authors have extended this systematic technique to construct fault trees (FTs). This paper presents an algorithm for the systematic construction of FTs based on MFM models and demonstrates its applicability with the FT construction result for a nitric acid cooling plant. (author)

  13. Differential Fault Analysis on CLEFIA with 128, 192, and 256-Bit Keys

    Science.gov (United States)

    Takahashi, Junko; Fukunaga, Toshinori

    This paper describes a differential fault analysis (DFA) attack against CLEFIA. The proposed attack can be applied to CLEFIA with all supported keys: 128, 192, and 256-bit keys. DFA is a type of side-channel attack. This attack enables the recovery of secret keys by injecting faults into a secure device during its computation of the cryptographic algorithm and comparing the correct ciphertext with the faulty one. CLEFIA is a 128-bit blockcipher with 128, 192, and 256-bit keys developed by the Sony Corporation in 2007. CLEFIA employs a generalized Feistel structure with four data lines. We developed a new attack method that uses this characteristic structure of the CLEFIA algorithm. On the basis of the proposed attack, only 2 pairs of correct and faulty ciphertexts are needed to retrieve the 128-bit key, and 10.78 pairs on average are needed to retrieve the 192 and 256-bit keys. The proposed attack is more efficient than any previously reported. In order to verify the proposed attack and estimate the calculation time to recover the secret key, we conducted an attack simulation using a PC. The simulation results show that we can obtain each secret key within three minutes on average. This result shows that we can obtain the entire key within a feasible computational time.

  14. Application of fault tree methodology in the risk analysis of complex systems

    International Nuclear Information System (INIS)

    Vasconcelos, V. de.

    1984-01-01

    This study describes the fault tree methodology and applies it to the risk assessment of complex facilities. The description of the methodology attempts to provide all the pertinent basic information, pointing out its more important aspects such as fault tree construction, evaluation techniques, and their use in risk and reliability assessment of a system. In view of their importance, topics such as common-mode failures, human errors, the data bases used in the calculations, and uncertainty evaluation of the results are discussed separately, each in its own chapter. To apply the methodology, it was necessary to implement computer codes normally used for this kind of analysis. The computer codes PREP, KITT and SAMPLE, written in FORTRAN IV, were chosen due to their availability and to the fact that they have been used in important studies of the nuclear area, such as WASH-1400. With these codes, the probability of occurrence of excessive pressure in the main system of the component test loop (CTC) of CDTN was evaluated. (Author) [pt

  15. An application of the fault tree analysis for the power system reliability estimation

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2007-01-01

    The power system is a complex system whose main function is to produce, transfer and deliver electrical energy to consumers. Combinations of component failures in the system can result in a failure of power delivery to certain load points and, in some cases, in a full blackout of the power system. Power system reliability directly affects the safe and reliable operation of nuclear power plants, because loss of offsite power is a significant contributor to the core damage frequency in probabilistic safety assessments of nuclear power plants. A method based on the integration of fault tree analysis with analysis of the power flows in the power system was developed and implemented for power system reliability assessment. The main contributors to power system reliability are identified, both quantitatively and qualitatively. (author)
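
    The quantitative side of such an assessment typically reduces to evaluating the top-event probability (e.g., loss of supply at a load point) from minimal cut sets. A sketch by inclusion-exclusion, assuming independent component failures; the cut sets and probabilities are invented for illustration, not taken from the paper.

```python
from itertools import combinations

def top_event_prob(cut_sets, p):
    """Exact top-event probability from minimal cut sets by inclusion-exclusion.

    cut_sets: list of sets of component names; p: failure probability per
    component. Components are assumed independent.
    """
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)     # components in the intersection event
            term = 1.0
            for c in union:
                term *= p[c]
            total += (-1) ** (r + 1) * term
    return total

p = {"line1": 0.01, "line2": 0.01, "gen": 0.001}
cut_sets = [{"line1", "line2"}, {"gen"}]    # both lines fail, or generator fails
print(round(top_event_prob(cut_sets, p), 6))  # -> 0.0011
```

    For large trees the exponential inclusion-exclusion sum is replaced by the rare-event approximation (the sum of the cut-set probabilities), which here would give 0.0011 as well to four significant figures.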

  16. Fault Analysis of Wind Turbines Based on Error Messages and Work Orders

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Larsen, Jesper Abildgaard; Stoustrup, Jakob

    2012-01-01

    In this paper, data describing the operation and maintenance of an offshore wind farm are presented and analysed. Two different sets of data are presented: the first is auto-generated error messages from the Supervisory Control and Data Acquisition (SCADA) system; the other is the work orders describing the service performed at the individual turbines. The auto-generated alarms are analysed by applying a cleaning procedure to identify the alarms related to components. A severity, occurrence, and detection analysis is performed on the work orders. The outcomes of the two analyses are then compared to identify common fault types and areas where further data analysis would be beneficial for improving the operation and maintenance of wind turbines in the future.

  17. Determination of minimum sample size for fault diagnosis of automobile hydraulic brake system using power analysis

    Directory of Open Access Journals (Sweden)

    V. Indira

    2015-03-01

    Full Text Available The hydraulic brake is considered one of the most important components in automobile engineering. Condition monitoring and fault diagnosis of such a component are essential for the safety of passengers and vehicles, and to minimize unexpected maintenance time. The vibration-based machine learning approach to condition monitoring of hydraulic brake systems is gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to obtain good classification accuracy. Descriptive statistical features have been used, and the most contributing features have been selected using the C4.5 decision tree algorithm. The results of the power analysis have also been verified using the C4.5 decision tree algorithm.
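
    The normal-approximation power calculation behind such minimum-sample-size estimates can be sketched as follows. The effect size, significance level, and power below are illustrative defaults, not values taken from the study:

```python
from math import ceil
from statistics import NormalDist  # Python >= 3.8

def min_sample_size(effect_size, alpha=0.05, power=0.90):
    """Minimum n for a two-sided one-sample z-test to detect a
    standardized effect `effect_size` with the given power:
    n = ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Medium standardized effect (d = 0.5) at 90% and 80% power
print(min_sample_size(0.5))             # 43
print(min_sample_size(0.5, power=0.80)) # 32
```

    Larger detectable effects or lower power targets shrink the required training set, which is why power analysis gives a principled lower bound on the number of vibration samples to collect.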

  18. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients of each processor. It was shown that even when a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
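
    As a rough illustration of the kind of parallel uncertainty propagation PAPIRUS performs (the toolkit itself wraps real engineering simulation codes; the toy model and the Normal(1.0, 0.1) input distribution below are assumptions made for the sketch):

```python
import random
from concurrent.futures import ProcessPoolExecutor
from statistics import mean, stdev

def model(theta):
    """Stand-in for an expensive simulation code: a simple
    nonlinear response of one uncertain parameter."""
    return theta ** 2 + 0.5 * theta

def run_batch(args):
    """One Monte Carlo batch: propagate the assumed Normal(1.0, 0.1)
    input uncertainty through the model with a private seeded RNG."""
    seed, n = args
    rng = random.Random(seed)
    return [model(rng.gauss(1.0, 0.1)) for _ in range(n)]

if __name__ == "__main__":
    # Split the sampling across worker processes, one batch each,
    # mirroring the server/client split described in the abstract.
    batches = [(seed, 25_000) for seed in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        out = [y for batch in pool.map(run_batch, batches) for y in batch]
    print(f"output mean={mean(out):.3f}  sd={stdev(out):.3f}")
```

    Because the batches are independent, the wall-clock cost scales down with the number of workers, which is the point of parallelizing sampling-based uncertainty propagation.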

  19. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  20. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr

    2015-10-15

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients of each processor. It was shown that even when a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  1. Decadal strain along creeping faults in the Needles District, Paradox Basin Utah determined with InSAR Time Series Analysis

    Science.gov (United States)

    Kravitz, K.; Furuya, M.; Mueller, K. J.

    2013-12-01

    The Needles District, in Canyonlands National Park, Utah, exposes an array of actively creeping normal faults that accommodate gravity-driven extension above a plastically deforming substrate of evaporite deposits. Previous interferogram stacking and InSAR analysis of faults in the Needles District, using 35 ERS satellite scenes from 1992 to 2002, showed line-of-sight deformation rates of ~1-2 mm/yr along active normal faults, with a wide strain gradient along the eastern margin of the deforming region. More rapid subsidence of ~2-2.5 mm/yr was also evident south of the main fault array across a broad platform bounded by the Colorado River and a single fault scarp to the south. In this study, time series analysis was performed on SAR scenes from the Envisat, PALSAR, and ERS satellites ranging from 1992 to 2010 to expand upon previous results. Both persistent scatterer and small baseline methods were implemented using StaMPS. Preliminary results from Envisat data indicate equally distributed slip rates along the length of faults within the Needles District and very little subsidence in the broad region further southwest identified in previous work. A phase ramp that appears to be present within the initial interferograms creates uncertainty in the current analysis, and future work is aimed at removing this artifact. Our new results suggest, however, that a clear deformation signal is present along a number of large grabens in the northern part of the region at higher rates of up to 3-4 mm/yr. Little to no creep is evident along the single fault zone that bounds the southern Needles, in spite of the presence of a large and apparently active fault. This includes a segment of this fault that is instrumented by a creepmeter yielding slip rates on the order of ~1 mm/yr. Further work using time series analysis and a larger sampling of SAR scenes will be used in an effort to determine why differences exist between previous and current work and to test mechanics-based modeling.

  2. Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate

    International Nuclear Information System (INIS)

    Sarwi, S; Linuwih, S; Supardi, K I

    2017-01-01

    The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions among physics teacher candidates. The method used was qualitative research of the cross-sectional design type. The subjects were five students from the third-semester basic physics course and six from the fifth-semester wave course. Data collection techniques used think-aloud protocols and written tests. Quantitative data were analysed with a descriptive percentage technique. The data analysis technique for belief and awareness of answers uses an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory, and phenomena that occur daily; 2) a trend of parallel conceptions of wave physics was identified for stationary waves, resonance of sound, and the propagation of transverse electromagnetic waves; 3) the influences on the parallel conceptions are that reading of textbooks is less than comprehensive and that partial understanding forms the structure of the theory. (paper)

  3. Constraints on the stress state of the San Andreas fault with analysis based on core and cuttings from SAFOD drilling phases I and II

    Science.gov (United States)

    Lockner, David A.; Tembe, Cheryl; Wong, Teng-fong

    2009-01-01

    Analysis of field data has led different investigators to conclude that the San Andreas Fault (SAF) has either an anomalously low frictional sliding strength or a strength consistent with typical laboratory friction values (μ ≈ 0.6). Arguments for the apparent weakness of the SAF generally hinge on conceptual models involving intrinsically weak gouge or elevated pore pressure within the fault zone. Some models assert that weak gouge and/or high pore pressure exist under static conditions, while others consider strength loss or fluid pressure increase due to rapid coseismic fault slip. The present paper is composed of three parts. First, we develop generalized equations, based on and consistent with the Rice (1992) fault zone model, to relate stress orientation and magnitude to depth-dependent coefficient of friction and pore pressure. Second, we present temperature- and pressure-dependent friction measurements from wet illite-rich fault gouge extracted from San Andreas Fault Observatory at Depth (SAFOD) phase 1 core samples and from weak minerals associated with the San Andreas Fault. Third, we reevaluate the state of stress on the San Andreas Fault in light of new constraints imposed by SAFOD borehole data. Pure talc (μ ≈ 0.1) had the lowest strength considered and was sufficiently weak to satisfy weak-fault heat flow and stress orientation constraints with hydrostatic pore pressure. Other fault gouges showed a systematic increase in strength with increasing temperature and pressure. In this case, heat flow and stress orientation constraints would require elevated pore pressure and, in some cases, fault zone pore pressure in excess of the vertical stress.

  4. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab, with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1, 2013 through December 1, 2014. Two previous reports covered the periods from summer 2010 through September 2011 and from October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets, and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final two years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  5. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume aims at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in Big Data analysis by adopting parallel, grid, and cloud computing environments.

  6. Fault Tree Analysis for Safety/Security Verification in Aviation Software

    Directory of Open Access Journals (Sweden)

    Andrew J. Kornecki

    2013-01-01

    Full Text Available The Next Generation Air Traffic Management system (NextGen) is a blueprint of the future National Airspace System. Supporting NextGen is a nation-wide Aviation Simulation Network (ASN), which allows integration of a variety of real-time simulations to facilitate development and validation of NextGen software by simulating a wide range of operational scenarios. The ASN system is an environment including both simulated and human-in-the-loop real-life components (pilots and air traffic controllers). Real Time Distributed Simulation (RTDS), developed at Embry-Riddle Aeronautical University, a suite of applications providing low- and medium-fidelity en-route simulation capabilities, is one of the simulations contributing to the ASN. To support the interconnectivity with the ASN, we designed and implemented a dedicated gateway acting as an intermediary, providing logic for two-way communication and transfer of messages between RTDS and ASN, and storage for the exchanged data. It was necessary to develop and analyze safety/security requirements for the gateway software based on analysis of system assets, hazards, threats and attacks related to the ultimate real-life future implementation. Due to the nature of the system, the focus was placed on communication security and the related safety of the impacted aircraft in the simulation scenario. To support the development of safety/security requirements, the well-established fault tree analysis technique was used. This fault tree model-based analysis, supported by a commercial tool, was the foundation for proposing mitigations assuring the gateway system's safety and security.

  7. Analysis of the fault geometry of a Cenozoic salt-related fault close to the D-1 well, Danish North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Roenoe Clausen, O.; Petersen, K.; Korstgaard, A.

    1995-12-31

    A normal detaching fault in the Norwegian-Danish Basin around the D-1 well (the D-1 fault) has been mapped using seismic sections. The fault has been analysed in detail by constructing backstripped, decompacted sections across the fault, contoured displacement diagrams along the fault, and vertical displacement maps. The results show that the listric D-1 fault follows the displacement patterns for blind normal faults. Deviations from the ideal displacement pattern are suggested to be caused by salt movements, which are the main driving mechanism for the faulting. Zechstein salt moves primarily from the hanging wall to the footwall, superposed by later minor lateral flow beneath the footwall. Back-stripping of depth-converted and decompacted sections results in an estimation of the salt surface and the shape of the fault through time. This procedure then enables simple modelling of the hanging wall deformation using a Chevron model with hanging wall collapse along dipping surfaces. The modelling indicates that the fault followed the salt surface until the Middle Miocene, after which the offset on the fault may also have been accommodated along the Top Chalk surface. (au) 16 refs.

  8. Fault-Slip Data Analysis and Cover Versus Basement Fracture Patterns - Implications for Subsurface Technical Processes in Thuringia, Germany

    Science.gov (United States)

    Kasch, N.; Kley, J.; Navabpour, P.; Siegburg, M.; Malz, A.

    2014-12-01

    Recent investigations in Thuringia, Central Germany, focus on the potential for carbon sequestration, groundwater supply and geothermal energy. We report on the results of an integrated fault-slip data analysis to characterize the geometries and kinematics of systematic fractures in contrasting basement and cover rock lithologies. The lithostratigraphy of the area comprises locally exposed crystalline rocks and intermittently overlying Permian volcanic and clastic sedimentary rocks, together referred to as basement. A Late Permian sequence of evaporites, carbonates and shale constitutes the transition to the continuous sedimentary cover of Triassic age. Major NW-SE-striking fault zones and minor NNE-SSW-striking faults affect this stratigraphic succession. These characteristic narrow deforming areas separate wide (>15 km) non-deforming areas, suggesting localized zones of mechanical weakness, which is confirmed by the frequent reactivation of single fault strands. Along the major fault zones, the basement and cover contain dominant inclined to sub-vertical NW-SE-striking fractures. These fractures indicate successive normal, dextral strike-slip and reverse senses of slip, evidencing events of NNE-SSW extension and contraction. Another system of mostly sub-vertical NNW-SSE- and NE-SW-striking conjugate strike-slip faults, mainly developed within the cover, implies NNE-SSW contraction and WNW-ESE extension. Earthquake focal mechanisms and in-situ stress measurements reveal a NW-SE trend for the modern SHmax. Nevertheless, fractures and fault-slip indicators are rare in the non-deforming areas, which characterizes Thuringia as a dual domain of (1) large unfractured areas and (2) narrow zones of high potential for technical applications. Our data therefore provide a basis for estimating the slip and dilation tendency of the contrasting fractures in the basement and cover under the present-day stress field, which must be taken into account for different subsurface technical processes.
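
    The slip and dilation tendency estimation mentioned at the end of this record uses the standard definitions Ts = τ/σn and Td = (σ1 − σn)/(σ1 − σ3). A minimal sketch with a hypothetical stress state and fracture orientation (not values from the Thuringia data):

```python
from math import sqrt

def slip_dilation_tendency(normal, principal):
    """Slip tendency Ts = tau/sigma_n and dilation tendency
    Td = (s1 - sigma_n)/(s1 - s3) for a fracture whose unit `normal`
    is given in the principal stress frame (s1 >= s2 >= s3)."""
    s1, s2, s3 = principal
    n1, n2, n3 = normal
    sigma_n = s1 * n1**2 + s2 * n2**2 + s3 * n3**2
    # Traction vector in the principal frame is (s1*n1, s2*n2, s3*n3);
    # the shear component follows from |t|^2 = sigma_n^2 + tau^2.
    t2 = (s1 * n1)**2 + (s2 * n2)**2 + (s3 * n3)**2
    tau = sqrt(max(0.0, t2 - sigma_n**2))
    ts = tau / sigma_n
    td = (s1 - sigma_n) / (s1 - s3)
    return ts, td

# Assumed stresses in MPa; fracture normal at 60 deg to sigma1
ts, td = slip_dilation_tendency((0.5, 0.0, 0.8660254), (100.0, 60.0, 40.0))
print(f"Ts = {ts:.2f}, Td = {td:.2f}")
```

    Fractures with high Ts under the modern NW-SE SHmax are those most prone to reactivation, which is why tendency maps feed directly into siting decisions for the subsurface applications listed above.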

  9. Stability of tapered and parallel-walled dental implants: A systematic review and meta-analysis.

    Science.gov (United States)

    Atieh, Momen A; Alsabeeha, Nabeel; Duncan, Warwick J

    2018-05-15

    Clinical trials have suggested that dental implants with a tapered configuration have improved stability at placement, allowing immediate placement and/or loading. The aim of this systematic review and meta-analysis was to evaluate the implant stability of tapered dental implants compared to standard parallel-walled dental implants. Applying the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, randomized controlled trials (RCTs) were searched for in electronic databases, complemented by hand searching. The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias tool, and data were analyzed using statistical software. A total of 1199 studies were identified, of which five trials were included, with 336 dental implants in 303 participants. Overall meta-analysis showed that tapered dental implants had higher implant stability values than parallel-walled dental implants at insertion and at 8 weeks, but the difference was not statistically significant. Tapered dental implants had significantly less marginal bone loss compared to parallel-walled dental implants. No significant differences in implant failure rate were found between tapered and parallel-walled dental implants. There is limited evidence to demonstrate the effectiveness of tapered dental implants in achieving greater implant stability compared to parallel-walled dental implants. Superior short-term results in maintaining peri-implant marginal bone with tapered dental implants are possible. Further properly designed RCTs are required to endorse the supposed advantages of tapered dental implants in immediate loading protocols and other complex clinical scenarios. © 2018 Wiley Periodicals, Inc.
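
    The pooling step in such a meta-analysis is commonly done with an inverse-variance random-effects model. A minimal DerSimonian-Laird sketch with made-up study data (not the five trials analysed in this review):

```python
from math import sqrt

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird).
    `effects` are per-study effect sizes, `variances` their within-study
    variances. Returns (pooled effect, its standard error, tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and the between-study variance
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each within-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical mean differences in implant stability (and variances)
effects = [2.1, 0.4, 3.6, 0.2, 1.9]
variances = [0.8, 0.5, 1.2, 0.6, 0.9]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled={pooled:.2f}  95% CI=({pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f})")
```

    When tau² is large relative to the within-study variances, the weights even out across studies, which is one way such an analysis can find a positive mean difference that is nonetheless not statistically significant.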

  10. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
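
    A serial sketch of the preconditioned conjugate gradient solver at the core of the parallel scheme, here with simple Jacobi (diagonal) preconditioning on a tiny dense system, whereas the thesis applies it to sparse Schur-complement systems:

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD systems A x = b.
    `A` and `M_inv` are callables returning matrix-vector products;
    `M_inv` applies the preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    z = M_inv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Small SPD test system with Jacobi preconditioning
A_mat = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
matvec = lambda v: [sum(a * vj for a, vj in zip(row, v)) for row in A_mat]
jacobi = lambda v: [vi / A_mat[i][i] for i, vi in enumerate(v)]
x = pcg(matvec, [1.0, 2.0, 3.0], jacobi)
print(x)  # solution of A x = b
```

    The method parallelizes naturally because each iteration only needs matrix-vector products and dot products, both of which can be distributed over row partitions of the matrix.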

  11. Progress in Analysis to Remote Sensed Thermal Abnormity with Fault Activity and Seismogenic Process

    Directory of Open Access Journals (Sweden)

    WU Lixin

    2017-10-01

    Full Text Available Research on remote sensed thermal abnormity associated with fault activity and the seismogenic process is a vital topic of Earth observation and remote sensing application. A systematic review is presented of international research on this topic during the past 30 years, with respect to remote sensing data applications, anomaly analysis methods, and mechanism understanding. Firstly, an outline of remote sensing data applications is given, including infrared brightness temperature, microwave brightness temperature, outgoing longwave radiation, and assimilated data from multiple Earth observations. Secondly, three development phases are summarized: qualitative analysis based on visual interpretation, quantitative analysis based on image processing, and multi-parameter spatio-temporal correlation analysis. Thirdly, the theoretical hypotheses presented for mechanism understanding are introduced, including Earth degassing, stress-induced heat, crustal rock battery conversion, latent heat release due to radon decay, and multi-sphere coupling effects. Finally, three key directions for future research on this topic are proposed: anomaly recognition by remote sensing monitoring and data analysis for typical tectonic activity areas; anomaly mechanism understanding based on earthquake-related Earth system responses; and spatio-temporal correlation analysis of air-based, space-based and ground-based stereoscopic observations.

  12. Risk management of PPP project in the preparation stage based on Fault Tree Analysis

    Science.gov (United States)

    Xing, Yuanzhi; Guan, Qiuling

    2017-03-01

    Risk management of a PPP (Public-Private Partnership) project can improve the level of risk control between government departments and private investors, enabling more beneficial decisions, reducing investment losses, and achieving mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to the different parts and to quantify the degree of influence of each risk on the basis of the risk identification. In addition, the order of importance of the risk factors is determined by calculating the structural importance of each unit in the PPP project preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation, and the instability of market returns are the main factors generating the shared risk in the project.

  13. Analisa Penyebab Keterlambatan Proyek Pembangunan Sidoarjo Town Square Menggunakan Metode Fault Tree Analysis (FTA

    Directory of Open Access Journals (Sweden)

    Ridhati Amalia

    2012-09-01

    Full Text Available Every construction project generally has a specific implementation plan and schedule: when the project must start, when it must be completed, how it will be carried out, and how its resources will be provided. It is expected that no delays occur during execution, because any delay will increase project costs. However, the construction of Sidoarjo Town Square experienced delays. The methods used to determine the factors influencing the delays were Fault Tree Analysis (FTA) and the Method of Obtaining Cut Sets (MOCUS). It was found that the work items that experienced delays were: the GWT-STP structural work, the facade and canopy finishing work, and the roof work. From the FTA results for the three top events, it was found that the delays occurred because of design changes and permitting, both of which are delay factors attributable to the owner.
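
    The MOCUS procedure named in this record expands the tree top-down, splitting a cut set on each OR gate and merging on each AND gate until only basic events remain. A minimal sketch with a hypothetical delay tree (not the actual tree from the study):

```python
def mocus(gates, top):
    """Minimal cut sets by top-down MOCUS expansion.
    `gates` maps a gate name to ("AND" | "OR", children); names not in
    `gates` are basic events. Assumes an acyclic tree."""
    cut_sets = [{top}]
    expanded = True
    while expanded:
        expanded = False
        next_sets = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in gates), None)
            if gate is None:
                next_sets.append(cs)      # only basic events left
                continue
            expanded = True
            kind, children = gates[gate]
            rest = cs - {gate}
            if kind == "AND":             # all children must fail together
                next_sets.append(rest | set(children))
            else:                         # OR: one new cut set per child
                next_sets.extend(rest | {c} for c in children)
        cut_sets = next_sets
    # Drop non-minimal supersets
    return [c for c in cut_sets if not any(o < c for o in cut_sets)]

# Hypothetical delay tree: TOP = OR(DesignChange, AND(Permit, OwnerDecision))
gates = {"TOP": ("OR", ["DesignChange", "G1"]),
         "G1": ("AND", ["Permit", "OwnerDecision"])}
print(mocus(gates, "TOP"))
```

    Each resulting cut set is a combination of basic delay causes that is by itself sufficient to trigger the top event, which is what the study's per-top-event analysis enumerates.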

  14. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    Science.gov (United States)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach is intended to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
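
    Quantifying the impact of critical component failures, as proposed here, is often done with importance measures. A sketch of the Birnbaum importance for a toy ERP failure tree; the structure and probabilities are invented for illustration, not taken from the paper:

```python
from itertools import product

def top_prob(structure, probs, fixed=None):
    """Top-event probability by enumerating basic-event states.
    `structure` maps a dict of 0/1 event states to truthy/falsy;
    events in `fixed` are conditioned on, not sampled."""
    fixed = fixed or {}
    events = [e for e in probs if e not in fixed]
    total = 0.0
    for states in product([0, 1], repeat=len(events)):
        x = dict(zip(events, states), **fixed)
        if structure(x):
            w = 1.0
            for e in events:
                w *= probs[e] if x[e] else 1 - probs[e]
            total += w
    return total

def birnbaum(structure, probs, event):
    """Birnbaum importance: P(top | event failed) - P(top | event works),
    i.e. the sensitivity of the top event to that one component."""
    return (top_prob(structure, probs, {event: 1})
            - top_prob(structure, probs, {event: 0}))

# Hypothetical tree: usage failure = DB failure OR (network AND training gaps)
probs = {"DB": 0.02, "Net": 0.10, "Training": 0.30}
top = lambda x: x["DB"] or (x["Net"] and x["Training"])
for e in probs:
    print(e, round(birnbaum(top, probs, e), 4))
```

    Ranking components by such a measure highlights where mitigation spending reduces the usage-failure probability the most.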

  15. Analysis of Back-to-Back MMC for Medium Voltage Applications under Faulted Condition

    DEFF Research Database (Denmark)

    Bose, Anurag; Martins, Joäo Pedro Rodrigues; Chaudhary, Sanjay K.

    2017-01-01

    This paper analyzes a 10 MW medium-voltage Back-to-Back (BTB) Modular Multilevel Converter (MMC) with half-bridge submodules and no DC-link capacitor. It focuses on the system behavior under a single-line-to-ground (SLG) fault when there is no capacitor on the DC link. The fault current is computed [...] to prevent DC overvoltages in the sub-modules during faults.

  16. Ten kilometer vertical Moho offset and shallow velocity contrast along the Denali fault zone from double-difference tomography, receiver functions, and fault zone head waves

    Science.gov (United States)

    Allam, A. A.; Schulte-Pelkum, V.; Ben-Zion, Y.; Tape, C.; Ruppert, N.; Ross, Z. E.

    2017-11-01

    We examine the structure of the Denali fault system in the crust and upper mantle using double-difference tomography, P-wave receiver functions, and analysis (spatial distribution and moveout) of fault zone head waves. The three methods have complementary sensitivity; tomography is sensitive to 3D seismic velocity structure but smooths sharp boundaries, receiver functions are sensitive to (quasi) horizontal interfaces, and fault zone head waves are sensitive to (quasi) vertical interfaces. The results indicate that the Mohorovičić discontinuity is vertically offset by 10 to 15 km along the central 600 km of the Denali fault in the imaged region, with the northern side having shallower Moho depths around 30 km. An automated phase picker algorithm is used to identify 1400 events that generate fault zone head waves only at near-fault stations. At shorter hypocentral distances head waves are observed at stations on the northern side of the fault, while longer propagation distances and deeper events produce head waves on the southern side. These results suggest a reversal of the velocity contrast polarity with depth, which we confirm by computing average 1D velocity models separately north and south of the fault. Using teleseismic events with M ≥ 5.1, we obtain 31,400 P receiver functions and apply common-conversion-point stacking. The results are migrated to depth using the derived 3D tomography model. The imaged interfaces agree with the tomography model, showing a Moho offset along the central Denali fault and also the sub-parallel Hines Creek fault, a suture zone boundary 30 km to the north. To the east, this offset follows the Totschunda fault, which ruptured during the M7.9 2002 earthquake, rather than the Denali fault itself. The combined results suggest that the Denali fault zone separates two distinct crustal blocks, and that the Totschunda and Hines Creek segments are important components of the fault and Cretaceous-aged suture zone structure.
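
    The moveout analysis of fault zone head waves rests on the differential travel time between the head wave, which propagates along the faster side of the bimaterial interface, and the direct arrival in the slower block: Δt ≈ r(1/v_slow − 1/v_fast). A sketch with assumed velocities representing a few-percent contrast (not the actual Denali values):

```python
def headwave_moveout(r_km, v_slow, v_fast):
    """Differential time (s) between the direct P arrival in the slower
    block and the fault zone head wave refracted along the bimaterial
    interface, for along-fault propagation distance r_km (velocities
    in km/s)."""
    return r_km * (1.0 / v_slow - 1.0 / v_fast)

# Assumed P velocities on the two sides of a vertical fault interface
for r in (20, 50, 100):
    dt = headwave_moveout(r, v_slow=5.8, v_fast=6.1)
    print(f"r = {r:3d} km  dt = {dt:.2f} s")
```

    Because Δt grows linearly with propagation distance along the fault, the moveout of the head-wave/direct-wave delay with hypocentral distance constrains both the velocity contrast and the side of the fault on which each station sits.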

  17. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)
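    The row-wise parallel decomposition described in this record, where the system of equations is divided by rows of the global coefficient matrix so that each node's update is an independent task, can be sketched as follows. This is illustrative code under our own assumptions, not the authors' implementation; a Jacobi iteration stands in for their solver because its row updates are trivially independent.

```python
# Sketch of row-wise parallel decomposition: each "node" owns one row of the
# global coefficient matrix, and a Jacobi-style update of row i needs only
# that row plus the shared solution vector, so rows map naturally to processors.
import numpy as np

def jacobi_row_partitioned(A, b, iters=200):
    """Solve A x = b with Jacobi iteration; every row update is independent."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):  # one independent task per matrix row / mesh node
            off_diag = A[i] @ x - A[i, i] * x[i]
            x_new[i] = (b[i] - off_diag) / A[i, i]
        x = x_new
    return x

# Small diagonally dominant system (guarantees Jacobi convergence)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_row_partitioned(A, b)
```

    In a real distributed run, the inner loop over rows would be split across processors and `x` exchanged between iterations; the serial loop here only exposes the data dependence structure.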

  18. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  19. A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities

    International Nuclear Information System (INIS)

    2015-01-01

    ACE3P is a 3D massively parallel simulation suite developed at SLAC National Accelerator Laboratory that can perform coupled electromagnetic, thermal and mechanical studies. Effectively utilizing supercomputer resources, ACE3P has become a key simulation tool for particle accelerator R and D. A new frequency domain solver to perform mechanical harmonic response analysis of accelerator components has been developed within the existing parallel framework. This solver is designed to determine the frequency response of the mechanical system to external harmonic excitations for time-efficient, accurate analysis of large-scale problems. Coupled with the ACE3P electromagnetic modules, this capability complements a set of multi-physics tools for a comprehensive study of microphonics in superconducting accelerating cavities in order to understand the RF response and feedback requirements for the operational reliability of a particle accelerator. (auth)
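    The core linear algebra of a mechanical harmonic response solver is solving (K + iωC − ω²M)x = F at each excitation frequency ω. The dense single-machine sketch below shows that system assembled and solved with NumPy; the record's solver of course uses distributed sparse solvers, and all names here are ours.

```python
import numpy as np

def harmonic_response(M, C, K, F, omegas):
    """Solve (K + i*omega*C - omega^2*M) x = F for each excitation frequency;
    this is the linear system a frequency-domain harmonic solver assembles."""
    return [np.linalg.solve(K + 1j * w * C - w**2 * M, F) for w in omegas]

# Single-DOF sanity check: the static response (omega = 0) is F / K
M = np.array([[1.0]])
C = np.array([[0.1]])
K = np.array([[4.0]])
F = np.array([1.0])
x_static = harmonic_response(M, C, K, F, [0.0])[0]
```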

  20. Parallel computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, D. C.; Murthy, D. V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic analysis capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a three-dimensional unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent are demonstrated using 32 processors. The effects of subtask ordering, problem size and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.
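    Efficiency figures like the 85 percent quoted in this record follow the standard definitions: speedup S = T_serial / T_parallel and efficiency E = S / N for N processors. A minimal helper (our own illustration, not the paper's code):

```python
def speedup(t_serial, t_parallel):
    """Ratio of single-processor time to N-processor time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processors):
    """Parallel efficiency E = speedup / N; 1.0 is ideal scaling."""
    return speedup(t_serial, t_parallel) / n_processors

# E.g., a 32-transputer run finishing in 1/27.2 of the serial time
e = efficiency(100.0, 100.0 / 27.2, 32)  # 27.2/32 = 0.85
```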

  1. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
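    The parallelization that this record exploits is embarrassingly simple in structure: statistical model checking estimates the probability that a random run satisfies a property, and independent simulation batches can be farmed out to workers and their means combined. The sketch below uses a toy model in place of a Maude/QuaTEx query and runs the batches in a loop; everything here is our illustration, not PVeStA's code.

```python
import math
import random
from statistics import mean

def run_once(rng):
    """Toy probabilistic model standing in for one simulation run: the
    property holds if an exponential 'failure time' exceeds a deadline of
    0.5 (true satisfaction probability e^-0.5)."""
    return rng.expovariate(1.0) > 0.5

def estimate_probability(n_samples, n_batches=4, seed=1):
    """Split the sample budget into independent batches; a parallel checker
    would dispatch each batch to a worker and average the batch means."""
    rng = random.Random(seed)
    per_batch = n_samples // n_batches
    batch_means = [mean(run_once(rng) for _ in range(per_batch))
                   for _ in range(n_batches)]
    return mean(batch_means)

p_hat = estimate_probability(40000)  # close to exp(-0.5) ~ 0.6065
```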

  2. Application of improved degree of grey incidence analysis model in fault diagnosis of steam generator

    International Nuclear Information System (INIS)

    Zhao Xinwen; Ren Xin

    2014-01-01

    In order to further reduce misoperation after faults occur in a marine nuclear-powered system, a model based on the weighted degree of grey incidence with optimized entropy and a corresponding fault diagnosis system are proposed, and simulation experiments on typical faults of the steam generator of a marine nuclear-powered system are conducted. The results show that the diagnosis system based on the improved degree of grey incidence model is stable, reaches correct conclusions, satisfies real-time diagnosis requirements, and achieves higher resolving power for fault subjection degrees. (authors)
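    The record does not give its formulas, but the baseline that grey incidence diagnosis builds on is Deng's grey relational grade: measured symptom sequences are compared against fault templates, and the template with the highest grade names the fault. A minimal sketch of that baseline (the paper's weighted, entropy-optimized variant modifies the averaging step; all data below are invented):

```python
def grey_relational_grades(reference, templates, rho=0.5):
    """Deng's grey relational grades of a measured symptom sequence against
    several fault templates; dmin/dmax are taken over all templates jointly,
    as in the standard formulation, and rho is the distinguishing coefficient."""
    all_diffs = [[abs(r - c) for r, c in zip(reference, t)] for t in templates]
    flat = [d for row in all_diffs for d in row]
    dmin, dmax = min(flat), max(flat)

    def coeff(d):
        return 1.0 if dmax == 0 else (dmin + rho * dmax) / (d + rho * dmax)

    return [sum(coeff(d) for d in row) / len(row) for row in all_diffs]

# Invented symptom vector and two fault templates
symptom = [0.9, 0.7, 0.4]
templates = [[0.88, 0.72, 0.38],   # close match
             [0.20, 0.90, 0.90]]   # poor match
g_close, g_far = grey_relational_grades(symptom, templates)
```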

  3. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    Science.gov (United States)

    Juhasz, Albert J.; Bloomfield, Harvey S.

    1987-01-01

    A combinatorial reliability approach was used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis was also performed, specifically for a 100-kWe nuclear Brayton power conversion system with parallel redundancy. Although this study was done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
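    The combinatorial core of such a redundancy study reduces to two textbook formulas: a series chain works only if every unit works (R = ∏Rᵢ), while a parallel-redundant group fails only if every unit fails (R = 1 − ∏(1 − Rᵢ)). A sketch with invented reliability values:

```python
def series_reliability(unit_reliabilities):
    """All units must work: R = product of R_i."""
    r = 1.0
    for ri in unit_reliabilities:
        r *= ri
    return r

def parallel_reliability(unit_reliabilities):
    """At least one unit must work: R = 1 - product of (1 - R_i)."""
    q = 1.0
    for ri in unit_reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

# Two redundant converters of reliability 0.9 each: 0.99 vs. 0.81 in series
r_parallel = parallel_reliability([0.9, 0.9])
```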

  4. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    Science.gov (United States)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.

  5. Numerical analysis of the stability of HTS power cable under fault current considering the gaps in the cable

    International Nuclear Information System (INIS)

    Fang, J.; Li, H.F.; Zhu, J.H.; Zhou, Z.N.; Li, Y.X.; Shen, Z.; Dong, D.L.; Yu, T.; Li, Z.M.; Qiu, M.

    2013-01-01

    Highlights: •The equivalent circuit equations and the heat balance equations were established. •The current distributions of the HTS cable under fault current were obtained. •The temperature curves of conductor layers under fault current were obtained. •The effect of the gap liquid nitrogen on the thermal characteristics was studied. -- Abstract: During the operation of a high temperature superconducting power cable in a real grid, the power cable can be impacted inevitably by large fault current. The study on current distribution and thermal characteristics in the cable under fault current is the foundation to analyze its stability. To analyze the operation situation of 110 kV/3 kA class superconducting cable under the fault current of 25 kA rms for 3 s, the equivalent circuit equations and heat balance equations were established. The current distribution curves and the temperature distribution curves were obtained. The liquid nitrogen which exists in the gaps of HTS cable was taken into consideration, and the influence of gap liquid nitrogen on the thermal characteristics was investigated. The analysis results can be used to estimate the security and stability of the superconducting cable.
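    The simplest member of the heat balance equations referenced in this record is an adiabatic Joule-heating bound for one conductor layer: dT/dt = I²R/C_th with cooling neglected, so the result upper-bounds the temperature rise during the fault. The sketch below integrates that with explicit Euler; every parameter value is illustrative, not the cable's actual data.

```python
def adiabatic_temperature_rise(i_fault, duration, resistance,
                               heat_capacity, t_start=77.0, dt=1e-3):
    """Explicit-Euler integration of a one-layer lumped heat balance,
    dT/dt = I^2 * R / C_th, with cooling neglected (upper bound on T)."""
    t = t_start
    for _ in range(int(round(duration / dt))):
        t += (i_fault**2 * resistance / heat_capacity) * dt
    return t

# A 25 kA fault for 3 s through an assumed 0.1 mOhm layer, C_th = 5 kJ/K:
# heating rate 12.5 K/s, so 77 K + 37.5 K = 114.5 K
t_end = adiabatic_temperature_rise(25e3, 3.0, 1e-4, 5000.0)
```

    The full model in the record couples several such layers through the equivalent circuit equations, which redistribute the fault current as layer resistances change with temperature.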

  6. Numerical analysis of the stability of HTS power cable under fault current considering the gaps in the cable

    Energy Technology Data Exchange (ETDEWEB)

    Fang, J., E-mail: fangseer@sina.com [School of Electrical Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li, H.F. [School of Electrical Engineering, Beijing Jiaotong University, Beijing 100044 (China); Zhu, J.H.; Zhou, Z.N. [China Electric Power Research Institute, Beijing 100192 (China); Li, Y.X.; Shen, Z.; Dong, D.L.; Yu, T. [School of Electrical Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li, Z.M.; Qiu, M. [China Electric Power Research Institute, Beijing 100192 (China)

    2013-11-15

    Highlights: •The equivalent circuit equations and the heat balance equations were established. •The current distributions of the HTS cable under fault current were obtained. •The temperature curves of conductor layers under fault current were obtained. •The effect of the gap liquid nitrogen on the thermal characteristics was studied. -- Abstract: During the operation of a high temperature superconducting power cable in a real grid, the power cable can be impacted inevitably by large fault current. The study on current distribution and thermal characteristics in the cable under fault current is the foundation to analyze its stability. To analyze the operation situation of 110 kV/3 kA class superconducting cable under the fault current of 25 kA{sub rms} for 3 s, the equivalent circuit equations and heat balance equations were established. The current distribution curves and the temperature distribution curves were obtained. The liquid nitrogen which exists in the gaps of HTS cable was taken into consideration, the influence of gap liquid nitrogen on the thermal characteristics was investigated. The analysis results can be used to estimate the security and stability of the superconducting cable.

  7. Numerical Methods for the Analysis of Power Transformer Tank Deformation and Rupture Due to Internal Arcing Faults.

    Science.gov (United States)

    Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao

    2015-01-01

    Power transformer rupture and fire resulting from an arcing fault inside the tank usually leads to significant security risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside transformer tank. To illustrate the effectiveness of the proposed method, a 17.3 MJ and a 6.3 MJ arcing fault were simulated on a real full-scale 360MVA/220kV oil-immersed transformer model, respectively. By employing the finite element method, the transformer internal overpressure distribution, wave propagation and von-Mises stress were solved. The numerical results indicate that the increase of pressure and mechanical stress distribution are non-uniform and the stress tends to concentrate on connecting parts of the tank as the fault time evolves. Given this feature, it becomes possible to reduce the risk of transformer tank rupture through limiting the fault energy and enhancing the mechanical strength of the local stress concentrative areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers.

  8. Numerical Methods for the Analysis of Power Transformer Tank Deformation and Rupture Due to Internal Arcing Faults

    Science.gov (United States)

    Yan, Chenguang; Hao, Zhiguo; Zhang, Song; Zhang, Baohui; Zheng, Tao

    2015-01-01

    Power transformer rupture and fire resulting from an arcing fault inside the tank usually leads to significant security risks and serious economic loss. In order to reveal the essence of tank deformation or explosion, this paper presents a 3-D numerical computational tool to simulate the structural dynamic behavior due to overpressure inside transformer tank. To illustrate the effectiveness of the proposed method, a 17.3 MJ and a 6.3 MJ arcing fault were simulated on a real full-scale 360MVA/220kV oil-immersed transformer model, respectively. By employing the finite element method, the transformer internal overpressure distribution, wave propagation and von-Mises stress were solved. The numerical results indicate that the increase of pressure and mechanical stress distribution are non-uniform and the stress tends to concentrate on connecting parts of the tank as the fault time evolves. Given this feature, it becomes possible to reduce the risk of transformer tank rupture through limiting the fault energy and enhancing the mechanical strength of the local stress concentrative areas. The theoretical model and numerical simulation method proposed in this paper can be used as a substitute for risky and costly field tests in fault overpressure analysis and tank mitigation design of transformers. PMID:26230392

  9. Fault identification in crystalline silicon PV modules by complementary analysis of the light and dark current-voltage characteristics

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Sera, Dezso; Hacke, Peter

    2016-01-01

    This article proposes a fault identification method, based on the complementary analysis of the light and dark current-voltage (I-V) characteristics of the photovoltaic (PV) module, to distinguish between four important degradation modes that lead to power loss in PV modules: (a) degradation of t...

  10. FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.

    Science.gov (United States)

    A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...

  11. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Full Text Available Abstract Background The work presented here investigates parallel imaging applied to T1-weighted high resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD and Mild Cognitive Impairment (MCI patients. This was in an effort to shorten acquisition times to minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principle question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods Optimisation studies were performed on a young healthy volunteer and the selected protocol (including the use of two different parallel imaging acceleration factors was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results Intraclass correlation tests show that almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurement of hippocampi. The protocol is suitable for both global and regional volumetric measurement dementia patients. Conclusion In summary, these results indicate that parallel imaging can be used without detrimental effect to brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.
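    The repeatability statistic used in studies of this kind, the intraclass correlation coefficient, compares between-subject to within-subject variance of the repeated scans. A sketch of the one-way random-effects ICC(1,1), with invented hippocampus volumes; the record does not state which ICC variant it used, so this is a plausible stand-in:

```python
def icc_oneway(rows):
    """One-way random-effects ICC(1,1): each row holds one subject's k
    repeated measurements (e.g., volumes from scan 1 and scan 2)."""
    n = len(rows)
    k = len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for r, m in zip(rows, row_means) for x in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented hippocampus volumes (mL) at scan 1 and scan 2
perfect = [[3.1, 3.1], [2.8, 2.8], [3.5, 3.5]]   # perfectly repeatable
noisy = [[3.1, 2.6], [2.8, 3.4], [3.5, 3.0]]     # poor repeatability
```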

  12. Structure of Suasselkä Postglacial Fault in northern Finland obtained by analysis of ambient seismic noise

    Science.gov (United States)

    Afonin, Nikita; Kozlovskaya, Elena

    2016-04-01

    Understanding the inner structure of seismogenic faults and their ability to reactivate is particularly important in investigating the continental intraplate seismicity regime. In our study we address this problem using analysis of ambient seismic noise recorded by the temporary DAFNE array in the northern Fennoscandian Shield. The main purpose of the DAFNE/FINLAND passive seismic array experiment was to characterize the present-day seismicity of the Suasselkä post-glacial fault (SPGF), which was proposed as one potential target for the DAFNE (Drilling Active Faults in Northern Europe) project. The DAFNE/FINLAND array comprised an area of about 20 to 100 km and consisted of 8 short-period and 4 broad-band 3-component autonomous seismic stations installed in the close vicinity of the fault area. The array recorded continuous seismic data during September 2011-May 2013. Recordings of the array have been analyzed in order to identify and locate natural earthquakes from the fault area and to discriminate them from the blasts in the Kittilä Gold Mine. As a result, we found several dozen natural seismic events originating from the fault area, which proves that the fault is still seismically active. In order to study the inner structure of the SPGF we use cross-correlation of ambient seismic noise recorded by the array. Analysis of the azimuthal distribution of noise sources demonstrated that during the time interval under consideration the distribution of noise sources is close to uniform. The continuous data were processed in several steps including single station data analysis, instrument response removal and time-domain stacking. The data were used to estimate empirical Green's functions between pairs of stations in the frequency band of 0.1-1 Hz and to calculate corresponding surface wave dispersion curves. After that, S-wave velocity models were obtained as a result of dispersion curve inversion using Geopsy software. The results suggest that the area of
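    The central operation in this record, cross-correlating noise records from two stations so that the peak lag approximates the inter-station travel time of the empirical Green's function, can be sketched on synthetic data as follows (our illustration; real processing adds the whitening, instrument correction and long-window stacking described above):

```python
import numpy as np

def crosscorrelation_lag(rec_a, rec_b, dt):
    """Lag (seconds) by which rec_b trails rec_a, read off the peak of the
    full cross-correlation; stacked over long windows this peak traces the
    empirical Green's function between two stations."""
    xcorr = np.correlate(rec_b, rec_a, mode="full")
    return (int(np.argmax(xcorr)) - (len(rec_a) - 1)) * dt

# Synthetic test: station B records the same noise 25 samples later
rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
rec_a = noise
rec_b = np.zeros_like(noise)
rec_b[25:] = noise[:-25]
dt = 0.01  # 100 Hz sampling, so the expected lag is 0.25 s
lag = crosscorrelation_lag(rec_a, rec_b, dt)
```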

  13. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard root analysis

    Science.gov (United States)

    Grzeszczuk, A.; Kowalski, S.

    2015-04-01

    Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia for increase speed of graphics by usage of parallel mode for processes calculation. The success of this solution has opened technology General-Purpose Graphic Processor Units (GPGPUs) for applications not coupled with graphics. The GPGPUs system can be applying as effective tool for reducing huge number of data for pulse shape analysis measures, by on-line recalculation or by very quick system of compression. The simplified structure of CUDA system and model of programming based on example Nvidia GForce GTX580 card are presented by our poster contribution in stand-alone version and as ROOT application.

  14. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? 
Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  15. Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2

    Science.gov (United States)

    2016-06-01

    specification of fault propagation in EMV2 corresponds to the Fault Propagation and Transformation Calculus (FPTC) [Paige 2009]. The following concepts...definition of security includes accidental malicious indication of anomalous behavior either from outside a system or by unauthorized crossing of a

  16. Modularization of fault trees: a method to reduce the cost of analysis

    International Nuclear Information System (INIS)

    Chatterjee, P.

    1975-01-01

    The problem of analyzing large fault trees is considered. The concept of the finest modular representation of a fault tree is introduced and an algorithm is presented for finding this representation. The algorithm will also identify trees which cannot be modularized. Applications of such modularizations are discussed.
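    The idea behind modularization is that a gate whose subtree shares no basic events with the rest of the tree can be analyzed once and replaced by a single pseudo-event. The sketch below applies that definition directly by brute force; it is our illustration of the concept, not Chatterjee's algorithm, and the event names are invented.

```python
def find_modules(gates, root):
    """Return the gates that are modules: their subtree shares no basic
    events with the rest of the tree. `gates` maps gate names to child
    lists; any name not in `gates` is a basic event."""
    def basic_events(node, skip=None):
        if node == skip:
            return set()
        if node not in gates:
            return {node}
        events = set()
        for child in gates[node]:
            events |= basic_events(child, skip)
        return events

    return [g for g in gates
            if basic_events(g).isdisjoint(basic_events(root, skip=g))]

# 'valve_sticks' appears under both gates, so neither G1 nor G2 is a module
tree = {"TOP": ["G1", "G2"],
        "G1": ["pump_a_fails", "valve_sticks"],
        "G2": ["valve_sticks", "sensor_drift"]}
```

    An efficient algorithm finds all modules in a single traversal rather than re-walking the tree per gate, but the brute-force version makes the defining property explicit.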

  17. Robust fault detection in bond graph framework using interval analysis and Fourier-Motzkin elimination technique

    Science.gov (United States)

    Jha, Mayank Shekhar; Chatti, Nizar; Declerck, Philippe

    2017-09-01

    This paper addresses the fault diagnosis problem of uncertain systems in the context of the Bond Graph modelling technique. The main objective is to enhance the fault detection step based on Interval valued Analytical Redundancy Relations (named I-ARR) in order to overcome the problems related to false alarms, missed alarms and robustness issues. These I-ARRs are a set of fault indicators that generate interval bounds called thresholds. A fault is detected once the nominal residuals (the point-valued part of the I-ARRs) exceed the thresholds. However, the existing fault detection method is limited to parametric faults and presents various limitations with regard to the estimation of measurement signal derivatives, to which I-ARRs are sensitive. The novelties and scientific interest of the proposed methodology are: (1) to improve the accuracy of measurement derivative estimation by using a dedicated sliding-mode differentiator proposed in this work, and (2) to suitably integrate the Fourier-Motzkin Elimination (FME) technique within the I-ARR based diagnosis so that measurement faults can be detected successfully. The latter provides interval bounds over the derivatives which are included in the thresholds. The proposed methodology is studied under various scenarios (parametric and measurement faults) via simulations of a mechatronic torsion bar system.
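    The detection logic of an interval-valued ARR can be illustrated on a much simpler plant than the paper's torsion bar: a tank whose mass balance Qin − Qout − A·dh/dt should be zero, with the uncertain area A widening the residual into interval thresholds. All names and the tank model below are our own illustrative assumptions.

```python
def i_arr_check(flow_in, flow_out, level_rate, area_nominal, area_tol):
    """Toy interval-valued ARR for a tank: the nominal residual
    r = Qin - Qout - A*dh/dt must stay inside thresholds produced by the
    +/- tolerance on the uncertain tank area A. Returns True on alarm."""
    residual = flow_in - flow_out - area_nominal * level_rate
    threshold = area_tol * abs(level_rate)  # half-width of the interval
    return abs(residual) > threshold

# Healthy behaviour: the mass balance holds within the uncertainty band
alarm_healthy = i_arr_check(2.0, 1.0, 1.0, area_nominal=1.0, area_tol=0.1)
# Faulty outflow sensor: residual 0.5 exceeds the 0.1 threshold
alarm_faulty = i_arr_check(2.0, 0.5, 1.0, area_nominal=1.0, area_tol=0.1)
```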

  18. Wind Turbine Fault Detection based on Artificial Neural Network Analysis of SCADA Data

    DEFF Research Database (Denmark)

    Herp, Jürgen; S. Nadimi, Esmaeil

    2015-01-01

    Slowly developing faults in wind turbine can, when not detected and fixed on time, cause severe damage and downtime. We are proposing a fault detection method based on Artificial Neural Networks (ANN) and the recordings from Supervisory Control and Data Acquisition (SCADA) systems installed in wind...

  19. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2011-01-01

    This paper presents the research results of a comparison of three different model based approaches for wind turbine fault detection in online SCADA data, by applying developed models to five real measured faults and anomalies. The regression based model as the simplest approach to build a normal...

  20. Geometric analysis of alternative models of faulting at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Young, S.R.; Stirewalt, G.L.; Morris, A.P.

    1993-01-01

    Realistic cross section tectonic models must be retrodeformable to geologically reasonable pre-deformation states. Furthermore, it must be shown that geologic structures depicted on cross section tectonic models can have formed by kinematically viable deformation mechanisms. Simple shear (i.e., listric fault models) is consistent with extensional geologic structures and fault patterns described at Yucca Mountain, Nevada. Flexural slip models yield results similar to oblique simple shear mechanisms, although there is no strong geological evidence for flexural slip deformation. Slip-line deformation is shown to generate fault block geometries that are a close approximation to observed fault block structures. However, slip-line deformation implies a degree of general ductility for which there is no direct geological evidence. Simple and hybrid 'domino' (i.e., planar fault) models do not adequately explain observed variations of fault block dip or the development of 'rollover' folds adjacent to major bounding faults. Overall tectonic extension may be underestimated because of syn-tectonic deposition (growth faulting) of the Tertiary pyroclastic rocks that comprise Yucca Mountain. A strong diagnostic test of the applicability of the domino model may be provided by improved knowledge of Tertiary volcanic stratigraphy.

  1. Open-Phase Fault Tolerance Techniques of Five-Phase Dual-Rotor Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    2015-11-01

    Full Text Available Multi-phase motors are gaining more attention due to the advantages of good fault tolerance capability and high power density, etc. By applying dual-rotor technology to multi-phase machines, a five-phase dual-rotor permanent magnet synchronous motor (DRPMSM) is researched in this paper to further promote their torque density and fault tolerance capability. It has two rotors and two sets of stator windings, and it can adopt a series drive mode or parallel drive mode. The fault-tolerance capability of the five-phase DRPMSM is researched. All open circuit fault types and corresponding fault tolerance techniques in different drive modes are analyzed. A fault-tolerance control strategy of injecting currents containing a certain third harmonic component is proposed for the five-phase DRPMSM to ensure performance after faults in the motor or drive circuit. For adjacent double-phase faults in the motor, based on where the additional degrees of freedom are used, two different fault-tolerance current calculation schemes are adopted and the torque results are compared. Decoupling of the inner motor and outer motor is investigated under fault-tolerant conditions in parallel drive mode. The finite element analysis (FEA) results and co-simulation results based on Simulink-Simplorer-Maxwell verify the effectiveness of the techniques.

  2. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

  3. cudaBayesreg: Parallel Implementation of a Bayesian Multilevel Model for fMRI Data Analysis

    Directory of Open Access Journals (Sweden)

    Adelino R. Ferreira da Silva

    2011-10-01

    Full Text Available Graphic processing units (GPUs) are rapidly gaining maturity as powerful general parallel computing devices. A key feature in the development of modern GPUs has been the advancement of the programming model and programming tools. Compute Unified Device Architecture (CUDA) is a software platform for massively parallel high-performance computing on Nvidia many-core GPUs. In functional magnetic resonance imaging (fMRI), the volume of data to be processed and the type of statistical analysis to be performed call for high-performance computing strategies. In this work, we present the main features of the R-CUDA package cudaBayesreg which implements in CUDA the core of a Bayesian multilevel model for the analysis of brain fMRI data. The statistical model implements a Gibbs sampler for multilevel/hierarchical linear models with a normal prior. The main contribution for the increased performance comes from the use of separate threads for fitting the linear regression model at each voxel in parallel. The R-CUDA implementation of the Bayesian model proposed here has been able to reduce significantly the run-time processing of Markov chain Monte Carlo (MCMC) simulations used in Bayesian fMRI data analyses. Presently, cudaBayesreg is only configured for Linux systems with Nvidia CUDA support.
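    The per-voxel work that cudaBayesreg parallelizes, a Gibbs sampler for a linear regression, alternates between drawing the slope given the noise variance and the noise variance given the slope. The single-voxel sketch below shows only that sampler, with a flat prior on the slope rather than the package's multilevel normal prior; all names and data are our own illustration.

```python
import math
import random

def gibbs_regression(x, y, n_iter=2000, burn_in=500, seed=0):
    """Minimal Gibbs sampler for y = beta*x + eps, eps ~ N(0, sigma^2):
    alternate beta | sigma^2 (normal) and sigma^2 | beta (scaled
    inverse-gamma). Returns the posterior mean of beta after burn-in."""
    rng = random.Random(seed)
    n = len(x)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    beta, sigma2 = 0.0, 1.0
    draws = []
    for it in range(n_iter):
        # beta | sigma^2, y  ~  N(sxy/sxx, sigma^2/sxx)
        beta = rng.gauss(sxy / sxx, math.sqrt(sigma2 / sxx))
        # sigma^2 | beta, y  ~  Inv-Gamma(n/2, SSE/2)
        sse = sum((yi - beta * xi) ** 2 for xi, yi in zip(x, y))
        sigma2 = (sse / 2 + 1e-12) / rng.gammavariate(n / 2, 1.0)
        if it >= burn_in:
            draws.append(beta)
    return sum(draws) / len(draws)

# Noise-free data with true slope 2: the posterior concentrates near 2
x_data = [i / 10 for i in range(1, 21)]
y_data = [2.0 * xi for xi in x_data]
beta_hat = gibbs_regression(x_data, y_data)
```

    On a GPU, each voxel's time series gets its own copy of this loop in a separate thread, which is where the package's speedup comes from.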

  4. Quantitative analysis of pulmonary perfusion using time-resolved parallel 3D MRI - initial results

    International Nuclear Information System (INIS)

    Fink, C.; Buhmann, R.; Plathow, C.; Puderbach, M.; Kauczor, H.U.; Risse, F.; Ley, S.; Meyer, F.J.

    2004-01-01

Purpose: to assess the use of time-resolved parallel 3D MRI for a quantitative analysis of pulmonary perfusion in patients with cardiopulmonary disease. Materials and methods: eight patients with pulmonary embolism or pulmonary hypertension were examined with a time-resolved 3D gradient echo pulse sequence with parallel imaging techniques (FLASH 3D, TE/TR: 0.8/1.9 ms; flip angle: 40°; GRAPPA). A quantitative perfusion analysis based on indicator dilution theory was performed using dedicated software. Results: patients with pulmonary embolism or chronic thromboembolic pulmonary hypertension revealed characteristic wedge-shaped perfusion defects at perfusion MRI. These were characterized by a decreased pulmonary blood flow (PBF) and pulmonary blood volume (PBV) and an increased mean transit time (MTT). Patients with primary pulmonary hypertension or Eisenmenger syndrome showed a more homogeneous perfusion pattern. The mean MTT of the patients ranged from 3.3 to 4.7 s. The mean PBF and PBV showed a broader interindividual variation (PBF: 104-322 ml/100 ml/min; PBV: 8-21 ml/100 ml). Conclusion: time-resolved parallel 3D MRI allows at least a semi-quantitative assessment of lung perfusion. Future studies will have to assess the clinical value of this quantitative information for the diagnosis and management of cardiopulmonary disease. (orig.)
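The indicator-dilution quantities named above (PBF, PBV, MTT) are linked by the central volume theorem, which a short sketch can make concrete. The synthetic curves, the area-ratio PBV estimate, and the first-moment MTT estimate are illustrative simplifications (clinical software typically uses deconvolution), not the dedicated software used in the study.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def perfusion_parameters(t, c_tissue, c_artery):
    """Indicator-dilution estimates from concentration-time curves.

    PBV ~ area(tissue curve) / area(arterial input function),
    MTT ~ first moment of the tissue curve (a common simplification),
    PBF = PBV / MTT (central volume theorem).
    """
    area_t = _trapz(c_tissue, t)
    pbv = area_t / _trapz(c_artery, t)        # fractional blood volume
    mtt = _trapz(t * c_tissue, t) / area_t    # seconds
    return pbv, mtt, pbv / mtt

# Synthetic bolus curves: arterial peak at 2 s, tissue peak at 4 s.
t = np.linspace(0.0, 10.0, 2001)
c_a = np.exp(-((t - 2.0) ** 2) / 0.5)
c_t = 0.1 * np.exp(-((t - 4.0) ** 2) / 0.5)
pbv, mtt, pbf = perfusion_parameters(t, c_t, c_a)
```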

  5. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  6. Numerical analysis of the effects induced by normal faults and dip angles on rock bursts

    Science.gov (United States)

    Jiang, Lishuai; Wang, Pu; Zhang, Peipeng; Zheng, Pengqiang; Xu, Bin

    2017-10-01

The study of mining effects under the influence of a normal fault and its dip angle is significant for the prediction and prevention of rock bursts. Based on the geological conditions of panel 2301N in a coal mine, the evolution of strata behavior at the working face under the influence of a fault, and the mining-induced instability of the fault as working faces in the footwall and hanging wall advance towards it, are studied using UDEC numerical simulation. The mechanism that induces rock bursts is revealed, and the influence characteristics of the fault dip angle are analyzed. The results of the numerical simulation are verified by a case study of the microseismic events. The results of this study serve as a reference for the prediction of rock bursts and the classification of hazardous areas under similar conditions.

  7. CRISP. Fault detection, analysis and diagnostics in high-DG distribution systems

    International Nuclear Information System (INIS)

    Fontela, M.; Bacha, S.; Hadsjaid, N.; Andrieu, C.; Raison, B.; Penkov, D.

    2004-04-01

The fault, in the electrotechnical sense, is defined in the document. Most faults on overhead lines are non-permanent, which obliges the network operator to maintain the existing techniques for clearing these faults as quickly as possible. When a permanent fault occurs, the operator has to detect it and limit the risks as soon as possible. Different axes are followed: limitation of the fault current, clearing of the faulted feeder, and locating the fault by test-and-try under possible fault conditions. Fault detection, fault clearing and fault localization are thus important functions of an EPS (electric power system) that allow secure and safe operation of the system. These functions may be improved in the future by a better use of ICT components, conveniently sharing the intelligence needed between the distributed devices and a defined centralized intelligence. This improvement becomes necessary in distribution EPS with a high penetration of DR (distributed resources). The transmission and sub-transmission protection systems are already installed to manage power flow in all directions, so the DR issue is less critical for this part of the power system in terms of fault clearing and diagnosis. Nevertheless, the massive introduction of RES imposes other constraints on the transmission system, namely bottlenecks caused by substantial local and rapidly installed production such as wind power plants. In the distribution power system, when facing a permanent fault, two main actions must be achieved: quickly identify the faulted elementary EPS area, and allow the field crew to locate and repair the fault as soon as possible. The introduction of DR in distribution EPS involves some changes in fault location methods or equipment. The different existing neutral grounding systems make it difficult to devise a general method relevant for any distribution EPS in Europe. Some solutions are studied in the CRISP project in order to improve the

  8. Design and Analysis of Cooperative Cable Parallel Manipulators for Multiple Mobile Cranes

    Directory of Open Access Journals (Sweden)

    Bin Zi

    2012-11-01

Full Text Available This paper presents the design, dynamic modelling, and workspace analysis of cooperative cable parallel manipulators for multiple mobile cranes (CPMMCs). The CPMMCs can handle complex tasks that are more difficult, or even impossible, for a single mobile crane. The kinematics and dynamics of the CPMMCs are studied on the basis of geometric methodology and d'Alembert's principle, and a mathematical model of the CPMMCs is developed and presented with dynamic simulation. A constant-orientation workspace analysis of the CPMMCs is carried out as well. As an example, a cooperative cable parallel manipulator for triple mobile cranes with 6 degrees of freedom is investigated on the basis of the above design objectives.

  9. Instantaneous Kinematics Analysis via Screw-Theory of a Novel 3-CRC Parallel Mechanism

    Directory of Open Access Journals (Sweden)

    Hussein de la Torre

    2016-06-01

Full Text Available This paper presents the mobility and kinematics analysis of a novel parallel mechanism that is composed of one base, one platform and three identical limbs with CRC joints. The paper obtains closed-form solutions to the direct and inverse kinematics problems, and determines the mobility of the mechanism and its instantaneous kinematics by applying screw theory. The obtained results show that this parallel robot belongs to the family 2R1T, since the platform exhibits 3 DOF, i.e., one translation perpendicular to the base and two rotations about skew axes. In order to calculate the direct instantaneous kinematics, this paper introduces the vector mh, which is part of the joint velocity vector that multiplies the overall inverse Jacobian matrix. The paper compares the results of simulations and numerical examples in Mathematica and SolidWorks in order to prove the accuracy of the analytical results.
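The reported mobility (3 DOF, family 2R1T) is consistent with the classical Grübler–Kutzbach count, sketched below under the assumption that each CRC limb contributes two cylindrical joints (2 DOF each) and one revolute joint (1 DOF), giving eight links and nine joints in total. The criterion can miscount for overconstrained mechanisms, so the screw-theory analysis in the paper remains the authoritative check.

```python
def kutzbach_mobility(n_links, joint_freedoms):
    """Grübler–Kutzbach criterion for spatial mechanisms:
    M = 6*(n - j - 1) + sum(f_i), with j joints of freedom f_i."""
    j = len(joint_freedoms)
    return 6 * (n_links - j - 1) + sum(joint_freedoms)

# 3-CRC: base + platform + 2 intermediate links per limb -> 8 links;
# each limb contributes C (2 DOF), R (1 DOF), C (2 DOF) joints.
limb = [2, 1, 2]
mobility = kutzbach_mobility(8, limb * 3)
```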

  10. An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems

    KAUST Repository

    Kuwahara, Hiroyuki

    2011-01-01

Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in the general setting, the proposed parallel simulation method is able to substantially accelerate the next-reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This makes computationally intensive analyses such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability substantially more practical than would otherwise be possible with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
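For reference, the baseline algorithm being parallelized is Gillespie's SSA. A minimal serial sketch for a two-step delivery chain (roughly: extracellular DNA → cytoplasm → nucleus) is shown below; the species and rate constants are invented, and the paper's parallel reaction-selection and update schemes are not reproduced.

```python
import random

def ssa(rates, x0, t_end, seed=0):
    """Minimal Gillespie SSA for a linear chain x1 -> x2 -> x3.

    rates: per-molecule rate constants for the two conversion steps.
    Returns (time, final state) when no reaction can fire or t_end hits.
    """
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    while True:
        props = [rates[0] * x[0], rates[1] * x[1]]  # propensities
        a0 = sum(props)
        if a0 == 0.0:
            return t, x                             # chain exhausted
        t += rng.expovariate(a0)                    # time to next event
        if t > t_end:
            return t_end, x
        # choose the reaction proportionally to its propensity
        if rng.random() * a0 < props[0]:
            x[0] -= 1; x[1] += 1                    # step 1 fires
        else:
            x[1] -= 1; x[2] += 1                    # step 2 fires

# 100 molecules start outside the cell; run until the chain empties.
t_final, state = ssa([1.0, 1.0], [100, 0, 0], t_end=1e9)
```

With a long enough horizon every molecule ends in the final compartment, which makes the sketch easy to sanity-check.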

  11. Fault tree analysis of Project S-4404, Upgrade Canyon Exhaust System

    International Nuclear Information System (INIS)

    Browne, E.V.; Low, J.M.; Lux, C.R.

    1992-01-01

Project S-4404, Upgrade Canyon Exhaust Systems, is a $177 million project with the purpose of upgrading the exhaust systems for both the F and H Canyon Facilities. This upgrade will replace major portions of the F and H Canyon exhaust systems, downstream of their respective sand filters, with higher capacity and more reliable systems. Because of the high cost, DOE requested Program Control & Integration (PC&I) to examine specific deletions to the project. PC&I requested Nuclear Processes Safety Research (NPSR) to perform an analysis to compare failure rates for the existing F&H Canyon exhaust systems with the proposed exhaust system and specific proposed exhaust system alternatives. The objective of this work was to perform an analysis and compare failure rates for the existing F&H Canyon exhaust systems with the proposed project exhaust system and proposed project alternatives. Based on fault tree analysis, two conclusions are made. First, D&D activities can be eliminated from the project with no significant decrease in exhaust system safety. Deletion of D&D activities would result in a cost savings of $29 million. Second, deletion of DOE Order 6430.1A requirements regarding DBAs would decrease exhaust system safety by a factor of 12.

  12. Derailment-based Fault Tree Analysis on Risk Management of Railway Turnout Systems

    Science.gov (United States)

    Dindar, Serdar; Kaewunruen, Sakdirat; An, Min; Gigante-Barrera, Ángel

    2017-10-01

Railway turnouts are fundamental mechanical infrastructures that allow rolling stock to divert from one direction to another. Because they comprise a large number of engineering subsystems, e.g. track, signalling and earthworks, these particular sub-systems can develop high hazard potential through various kinds of failure mechanisms, which could be the cause of a catastrophic event. A derailment, one of the undesirable events in railway operation, often results, albeit rarely, in damage to rolling stock and railway infrastructure and in disrupted service, and has the potential to cause casualties and even loss of life. It is therefore quite significant that a well-designed risk analysis is performed to create awareness of hazards and to identify which parts of the system may be at risk. This study focuses on all types of environment-based failures resulting from the numerous contributing factors noted officially in accident reports. The risk analysis is designed to help industry minimise the occurrence of accidents at railway turnouts. The methodology relies on accurate assessment of derailment likelihood, and is based on statistical, multiple-factor-integrated accident rate analysis. The study establishes product risks and faults, and shows the impact of potential failure processes by means of Boolean algebra.

  13. Spectral analysis to detection of short circuit fault of solar photovoltaic modules in strings

    International Nuclear Information System (INIS)

    Sevilla-Camacho, P.Y.; Robles-Ocampo, J.B.; Zuñiga-Reyes, Marco A.

    2017-01-01

This research work presents a method to detect the number of short-circuit-faulted solar photovoltaic modules in the strings of a photovoltaic system, taking into account speed, safety, and the non-use of sensors and of specialized, expensive equipment. The method consists of applying spectral analysis and statistical techniques to the alternating-current output voltage of a string and detecting the number of failed modules through changes in the amplitude of the 12 kHz frequency component. To this end, the analyzed string is disconnected from the array, and a small pulsed voltage signal with a frequency of 12 kHz is injected into it under dark conditions and controlled temperature. Prior to the analysis, the signal is analog-filtered in order to reduce the direct-current component. The spectral analysis technique used is the Fast Fourier Transform. The experimental results were validated through simulation of the alternating-current equivalent circuit of a solar cell. In all experimental and simulated tests, the method correctly identified the number of photovoltaic modules with short circuits in the analyzed string. (author)
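The core measurement, the amplitude of the 12 kHz component of the string's output voltage, can be sketched with a plain FFT. The sampling rate, tone amplitude, and signal below are assumptions chosen so that the probe frequency falls on an exact FFT bin; the paper's analog filtering and statistical thresholds are omitted.

```python
import numpy as np

def tone_amplitude(signal, fs, f0):
    """Amplitude of the spectral component nearest f0 (Hz) via FFT."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n    # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f0))]

# Synthetic string response: a 12 kHz probe tone whose amplitude would
# change as modules in the string are short-circuited.
fs = 192_000                                  # assumed sampling rate, Hz
t = np.arange(4096) / fs
sig = 1.5 * np.sin(2 * np.pi * 12_000 * t)    # assumed 1.5 V amplitude
amp = tone_amplitude(sig, fs, 12_000)
```

With 4096 samples at 192 kHz the bin spacing is 46.875 Hz, so 12 kHz lands exactly on bin 256 and the recovered amplitude equals the input amplitude.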

  14. Multiple-Fault Detection Methodology Based on Vibration and Current Analysis Applied to Bearings in Induction Motors and Gearboxes on the Kinematic Chain

    Directory of Open Access Journals (Sweden)

    Juan Jose Saucedo-Dorantes

    2016-01-01

Full Text Available Gearboxes and induction motors are important components in industrial applications, and their condition monitoring is critical in the industrial sector to reduce costs and maintenance downtime. There are several techniques associated with fault diagnosis in rotating machinery; however, vibration and stator current analysis are commonly used due to their proven reliability. Indeed, vibration and current analysis provide fault condition information by means of the identification of fault-related spectral components. This work presents a methodology based on vibration and current analysis for the diagnosis of wear in a gearbox and the detection of bearing defects in an induction motor, both linked to the same kinematic chain; in addition, the location of the fault-related components for analysis is supported by the corresponding theoretical models. The theoretical models are based on the calculation of characteristic gearbox and bearing fault frequencies, in order to locate the spectral components of the faults. In this work, the influence of vibrations on the system is observed by performing motor current signal analysis to detect the presence of faults. The obtained results show the feasibility of detecting multiple faults in a kinematic chain, making the proposed methodology suitable for use in industrial machinery diagnosis.
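The characteristic bearing fault frequencies that such methodologies locate in the spectra follow from standard bearing geometry formulas, sketched below. The bearing dimensions are hypothetical; the identities BPFO + BPFI = n·fr and BPFO = n·FTF hold by construction and make useful sanity checks.

```python
import math

def bearing_fault_frequencies(fr, n, d, D, phi_deg=0.0):
    """Classical bearing defect frequencies (Hz).

    fr: shaft speed (Hz), n: number of rolling elements,
    d: rolling-element diameter, D: pitch diameter,
    phi_deg: contact angle in degrees.
    """
    c = (d / D) * math.cos(math.radians(phi_deg))
    bpfo = fr * n / 2.0 * (1.0 - c)            # outer-race defect
    bpfi = fr * n / 2.0 * (1.0 + c)            # inner-race defect
    ftf = fr / 2.0 * (1.0 - c)                 # cage (train) frequency
    bsf = fr * D / (2.0 * d) * (1.0 - c * c)   # ball spin frequency
    return {"BPFO": bpfo, "BPFI": bpfi, "FTF": ftf, "BSF": bsf}

# Hypothetical bearing: 30 Hz shaft, 9 balls, d = 7.94 mm, D = 38.5 mm.
freqs = bearing_fault_frequencies(30.0, 9, 7.94, 38.5)
```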

  15. Pteros 2.0: Evolution of the fast parallel molecular analysis library for C++ and python.

    Science.gov (United States)

    Yesylevskyy, Semen O

    2015-07-15

Pteros is a high-performance open-source library for molecular modeling and analysis of molecular dynamics trajectories. Starting from version 2.0, Pteros is available for the C++ and Python programming languages with very similar interfaces. This makes it suitable both for writing complex reusable programs in C++ and for simple interactive scripts in Python. The new version improves the facilities for asynchronous trajectory reading and parallel execution of analysis tasks by introducing analysis plugins, which can be written in either C++ or Python in a completely uniform way. The high level of abstraction provided by analysis plugins greatly simplifies the prototyping and implementation of complex analysis algorithms. Pteros is available for free under the Artistic License from http://sourceforge.net/projects/pteros/. © 2015 Wiley Periodicals, Inc.

  16. Possible origin and significance of extension-parallel drainages in Arizona's metamorphic core complexes

    Science.gov (United States)

    Spencer, J.E.

    2000-01-01

    -temperature, surface conditions. An alternative hypothesis, that drainages were localized by small fault grooves as footwalls were uncovered, is not supported by analysis of a down-plunge fault projection for the southern Rincon Mountains that shows a linear drainage aligned with the crest of a small antiformal groove on the detachment fault, but this process could have been effective elsewhere. Lineation-parallel drainages now plunge gently southwestward on the southwest ends of antiformal corrugations in the South and Buckskin Mountains, but these drainages must have originally plunged northeastward if they formed by either of the two alternative processes proposed here. Footwall exhumation and incision by northeast-flowing streams was apparently followed by core-complex arching and drainage reversal.

  17. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
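With the master and remote clocks synchronized as described, the fault distance can be computed from the difference in arrival times of the fault transient at the two units, in the manner of classical two-ended travelling-wave location. The sketch below uses an invented line length and propagation speed; the patent's actual calculation is not reproduced.

```python
def fault_distance(line_length_km, v_km_per_s, t_master_s, t_remote_s):
    """Two-ended travelling-wave fault location.

    The fault surge reaches the master unit at d/v and the remote unit
    at (L - d)/v, so d = (L + v*(t_master - t_remote)) / 2.
    Assumes the two clocks are synchronized, as in the abstract.
    """
    return (line_length_km + v_km_per_s * (t_master_s - t_remote_s)) / 2.0

# Fault 30 km from the master on a 100 km line; wave speed ~2.9e5 km/s.
v = 2.9e5
d = fault_distance(100.0, v, 30.0 / v, 70.0 / v)
```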

  18. Probabilistic Risk Assessment of Hydraulic Fracturing in Unconventional Reservoirs by Means of Fault Tree Analysis: An Initial Discussion

    Science.gov (United States)

    Rodak, C. M.; McHugh, R.; Wei, X.

    2016-12-01

The development and combination of horizontal drilling and hydraulic fracturing have unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating the potential risks posed by directional hydraulic fracturing activities. These studies, balanced with the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantifications of the overall risk posed by hydraulic fracturing at the system level are rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to the relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and sub-divided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities to groundwater resources.
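The "straightforward equation for risk calculation based on Boolean logic" amounts to combining independent basic-event probabilities through AND and OR gates. A minimal sketch with invented probabilities (not values from the study):

```python
def and_gate(probs):
    """All inputs must occur (independent events): prod(p_i)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """At least one input occurs: 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Illustrative tree: failure = (spill AND failed containment) OR a
# well-casing leak. All probabilities are invented for the sketch.
p_spill, p_containment_fail, p_casing_leak = 0.05, 0.1, 0.002
p_top = or_gate([and_gate([p_spill, p_containment_fail]), p_casing_leak])
```

Here the top-event probability is 1 − (1 − 0.005)(1 − 0.002) ≈ 0.00699; real analyses would propagate site-specific data through the same gate structure.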

  19. Kinematics and dynamics analysis of a quadruped walking robot with parallel leg mechanism

    Science.gov (United States)

    Wang, Hongbo; Sang, Lingfeng; Hu, Xing; Zhang, Dianfan; Yu, Hongnian

    2013-09-01

It is desirable for a walking robot for the elderly and the disabled to have large load capacity, high stiffness, stability, etc. However, existing walking robots cannot meet these requirements because of their weight-payload ratio and simple functions. Enhancing the capacity and functions of walking robots is therefore an important research issue. According to walking requirements, and combining modularization and reconfigurable ideas, a quadruped/biped reconfigurable walking robot with a parallel leg mechanism is proposed. The proposed robot can be used as either a biped or a quadruped walking robot. The kinematics and performance analysis of a 3-UPU parallel mechanism, the basic leg mechanism of the quadruped walking robot, are conducted and the structural parameters are optimized. The results show that the performance of the walking robot is optimal when the circumradii R and r of the upper and lower platforms of the leg mechanism are 161.7 mm and 57.7 mm, respectively. Based on the optimal results, the kinematics and dynamics of the quadruped walking robot in the static walking mode are derived with the application of parallel mechanism and influence coefficient theory, and the optimal coordination distribution of the dynamic load for the quadruped walking robot with over-determinate inputs is analyzed, which solves the dynamic load coupling caused by the branches' constraints on the robot during walking. Besides laying a theoretical foundation for development of the prototype, the kinematics and dynamics studies on the quadruped walking robot also advance the theoretical research on quadruped walking and the practical applications of parallel mechanisms.

  20. A decision-making framework for protecting process plants from flooding based on fault tree analysis

    International Nuclear Information System (INIS)

    Hauptmanns, Ulrich

    2010-01-01

The protection of process plants from external events is mandatory under the Seveso Directive. Among these events figures the possibility of inundation of a plant, which may cause a hazard by disabling technical components and preventing operator interventions. A methodological framework for dealing with hazards from potential flooding events is presented. It combines an extension of the fault tree method with generic properties of flooding events in rivers and of dikes, which should be adapted to site-specific characteristics in a concrete case. Thus, a rational basis is provided for deciding whether upgrading is required and, if so, which of the components should be upgraded. Both the deterministic and the probabilistic approaches are compared, with preference given to the probabilistic one. The conclusions drawn naturally depend on the scope and detail of the model calculations and the decision criterion adopted. The latter has to be supplied from outside the analysis, e.g. by the analyst himself, the plant operator or the competent authority. It turns out that decision-making is only viable if the boundary conditions for both the procedure of analysis and the decision criterion are clear.