WorldWideScience

Sample records for parallel fault-secure encoders

  1. Fault isolation in parallel coupled wind turbine converters

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Thøgersen, Paul Bach; Stoustrup, Jakob

    2010-01-01

    Parallel converters in wind turbines give a number of advantages, such as fault tolerance due to the redundant converters. However, it can be difficult to isolate a gain fault in one of the converters if only a combined power measurement is available. In this paper a scheme using orthogonal power references to the converters is proposed. Simulations on a wind turbine with 5 parallel converters show a clear potential of this scheme for isolating the gain fault to the correct converter, i.e., the one in which the fault occurs.
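
    A minimal numerical sketch of the isolation idea summarized above, under stated assumptions: the combined power is taken as the sum of the converter powers, each converter receives a small orthogonal perturbation (sinusoids at distinct integer frequencies) on top of its base reference, and one converter is given a 20 % gain fault. The converter count, amplitudes and fault size are illustrative, not taken from the paper.

```python
import numpy as np

# Toy isolation of a converter gain fault using orthogonal power references.
# Assumptions (not from the paper): 5 converters, sinusoidal perturbations at
# distinct integer frequencies as the "orthogonal references", 20 % gain fault.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
n_conv, base, amp = 5, 100.0, 2.0               # kW-scale numbers, illustrative
freqs = np.array([3.0, 5.0, 7.0, 11.0, 13.0])   # Hz, mutually orthogonal on [0, 1)
refs = amp * np.sin(2 * np.pi * freqs[:, None] * t)   # perturbation per converter

gains = np.ones(n_conv)
gains[2] = 0.8                                  # gain fault in converter 2
combined = gains @ (base + refs) + 0.5 * rng.standard_normal(t.size)

# Correlate the combined power with each reference; orthogonality makes the
# projection onto reference i proportional to gain_i.
est_gains = (combined @ refs.T) / np.sum(refs * refs, axis=1)
print("estimated gains:", np.round(est_gains, 3))
print("faulty converter:", int(np.argmin(est_gains)))
```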

  2. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method for fast encoding and for determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and an MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  3. High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    Directory of Open Access Journals (Sweden)

    H. Y. Su

    2012-04-01

    Full Text Available This article presents two highly efficient parallel realizations of context-based adaptive variable length coding (CAVLC) on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence and the control dependence. The CAVLC pipeline is divided into three stages (two scans, coding, and lag packing) and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM. The other is a component-oriented SIMT parallel encoder on a massively parallel GPU architecture. Both of them exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70 times is obtained on STORM and of over 50 times on the GPU. The STORM implementation achieves real-time processing for 1080p@30fps, and the GPU-based version satisfies the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times higher than that of published software encoders on DSP and multicore platforms.

  4. The role of bed-parallel slip in the development of complex normal fault zones

    Science.gov (United States)

    Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros

    2017-04-01

    Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.

  5. Dual beam encoded extended fractional Fourier transform security ...

    Indian Academy of Sciences (India)

    This paper describes a simple method for making dual beam encoded extended fractional Fourier transform (EFRT) security holograms. The hologram possesses different stages of encoding so that security features are concealed and remain invisible to the counterfeiter. These concealed and encoded anticounterfeit ...

  6. A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders

    Science.gov (United States)

    Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu

    2018-03-01

    Automatic and accurate identification of rolling bearing fault categories, especially of the fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as the hidden-layer functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed with the various auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze experimental bearing vibration signals. The results confirm that the proposed method removes the dependence on manual feature extraction and overcomes the limitations of individual deep learning models, making it more effective than existing intelligent diagnosis methods.
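
    A loose, hedged illustration of the ensemble idea (not the authors' EDAE algorithm): several single-hidden-layer auto-encoders, each with a different activation function, are trained per fault class on synthetic "vibration feature" vectors, and a test sample is assigned to the class whose auto-encoders reconstruct it best on average. The synthetic data, layer sizes and the reconstruction-error voting rule are all assumptions made for this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy ensemble of auto-encoders with different activations (illustrative only).
rng = np.random.default_rng(1)
dim, n_per_class = 20, 200
centers = {"normal": 0.0, "inner-race fault": 1.5, "outer-race fault": -1.5}
train = {c: mu + rng.standard_normal((n_per_class, dim)) for c, mu in centers.items()}

activations = ["tanh", "relu", "logistic"]
models = {c: [MLPRegressor(hidden_layer_sizes=(8,), activation=a,
                           max_iter=3000, random_state=0).fit(X, X)
              for a in activations]
          for c, X in train.items()}

def diagnose(x):
    """Assign x to the class whose auto-encoder ensemble reconstructs it best."""
    x = x.reshape(1, -1)
    err = {c: np.mean([np.mean((m.predict(x) - x) ** 2) for m in ms])
           for c, ms in models.items()}
    return min(err, key=err.get)

test = centers["inner-race fault"] + rng.standard_normal(dim)
print("diagnosis:", diagnose(test))
```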

  7. Security enhanced BioEncoding for protecting iris codes

    Science.gov (United States)

    Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya

    2011-06-01

    Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. In addition, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from mounting this type of attack. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.

  8. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

    Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user-interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.
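
    To make the Write-All problem mentioned above concrete, here is a toy synchronous simulation (not one of the algorithms animated by Raft): P processors must set every cell of an N-element array to 1 while an adversary crashes processors between rounds. The simple "claim the first unset cell" strategy and the random crash adversary are assumptions chosen for illustration.

```python
import random

# Toy Write-All simulation: P processors, N cells, a crash adversary.
# Strategy and adversary are illustrative, not Raft's built-in algorithms.
def write_all(n_cells=32, n_procs=8, crash_prob=0.3, seed=2):
    rng = random.Random(seed)
    cells = [0] * n_cells
    alive = set(range(n_procs))
    work = rounds = 0
    while 0 in cells:
        rounds += 1
        for _ in sorted(alive):
            # Each live processor scans for the first unset cell and sets it.
            try:
                idx = cells.index(0)
            except ValueError:
                break
            cells[idx] = 1
            work += 1
        # Adversary crashes processors between rounds, but leaves at least one.
        for p in sorted(alive):
            if len(alive) > 1 and rng.random() < crash_prob:
                alive.discard(p)
    return rounds, work, len(alive)

rounds, work, survivors = write_all()
print(f"array filled in {rounds} rounds, total work {work}, survivors {survivors}")
```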

  9. Fault tolerant deterministic secure quantum communication using logical Bell states against collective noise

    International Nuclear Information System (INIS)

    Wang Chao; Liu Jian-Wei; Shang Tao; Chen Xiu-Bo; Bi Ya-Gang

    2015-01-01

    This study proposes two novel fault tolerant deterministic secure quantum communication (DSQC) schemes resistant to collective noise using logical Bell states. Each DSQC scheme is constructed from a new coding function, which is designed by exploiting the property of the corresponding logical Bell states immune to collective-dephasing noise and collective-rotation noise, respectively. The secret message can be encoded by two simple unitary operations and decoded by merely performing Bell measurements, which can make the proposed scheme more convenient in practical applications. Moreover, the strategy of one-step quanta transmission, together with the technique of decoy-logical-qubit checking, not only reduces the influence of other noise existing in a quantum channel, but also guarantees the security of the communication between two legitimate users. The final analysis shows that the proposed schemes are feasible and robust against various well-known attacks over the collective noise channel. (paper)

  10. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve the video encoding speed, reducing the execution time of the motion estimation process is essential. Parallel implementation of video encoding systems

  11. Multiple-stage pure phase encoding with biometric information

    Science.gov (United States)

    Chen, Wen

    2018-01-01

    In recent years, many optical systems have been developed for securing information, and optical encryption/encoding has attracted more and more attention due to the marked advantages, such as parallel processing and multiple-dimensional characteristics. In this paper, an optical security method is presented based on pure phase encoding with biometric information. Biometric information (such as fingerprint) is employed as security keys rather than plaintext used in conventional optical security systems, and multiple-stage phase-encoding-based optical systems are designed for generating several phase-only masks with biometric information. Subsequently, the extracted phase-only masks are further used in an optical setup for encoding an input image (i.e., plaintext). Numerical simulations are conducted to illustrate the validity, and the results demonstrate that high flexibility and high security can be achieved.

  12. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures have been impacted by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest to forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  13. Security of BB84 with weak randomness and imperfect qubit encoding

    Science.gov (United States)

    Zhao, Liang-Yuan; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Fang, Xi; Han, Zheng-Fu; Huang, Wei

    2018-03-01

    The main threats for the well-known Bennett-Brassard 1984 (BB84) practical quantum key distribution (QKD) systems are that their encoding is inaccurate and their measurement devices may be vulnerable to particular attacks. Thus, a general physical model or security proof to tackle these loopholes simultaneously and quantitatively is highly desired. Here we give a framework on the security of BB84 when imperfect qubit encoding and vulnerability of the measurement device are both considered. In our analysis, the potential attacks on the measurement device are generalized by the recently proposed weak randomness model, which assumes the input random numbers are partially biased depending on a hidden variable planted by an eavesdropper. The inevitable encoding inaccuracy is also introduced into the analysis. From a fundamental view, our work reveals the potential information leakage due to encoding inaccuracy and weak randomness input. For applications, our result can be viewed as a useful tool to quantitatively evaluate the security of a practical QKD system.

  14. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  15. Fault detection for hydraulic pump based on chaotic parallel RBF network

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2011-01-01

    Full Text Available In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF network) is presented and applied to practical fault detection for a hydraulic pump, which is a critical component in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the CPRBF network is a weighted sum of all RBF subnets. The network was first trained using a dataset from the normal, fault-free state, and a residual error generator was then designed to detect failures based on the trained CPRBF network, so that failure detection is achieved by analyzing the residual error. Finally, two case studies are introduced to compare the proposed CPRBF network with traditional RBF networks in terms of prediction and detection accuracy.
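
    A compact sketch of the parallel-RBF idea under stated assumptions: several RBF regressors, each fed a delay embedding of a different dimension (standing in for the chaotic phase-space reconstruction), predict the next sample; their average gives a CPRBF-style output, and the residual against the measurement flags a fault. The synthetic "pump" signal, the ridge-regression fit of the RBF weights and the simple threshold are all assumptions, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(3)

def embed(x, m, tau=1):
    """Delay embedding: rows [x[i], x[i-tau], ..., x[i-(m-1)tau]] -> target x[i+1]."""
    idx = np.arange((m - 1) * tau, len(x) - 1)
    X = np.stack([x[idx - k * tau] for k in range(m)], axis=1)
    return X, x[idx + 1]

def fit_rbf(X, y, n_centers=30, gamma=2.0, ridge=1e-3):
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
    return centers, gamma, w

def rbf_predict(model, X):
    centers, gamma, w = model
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    return Phi @ w

# Synthetic "normal" pump signal and a faulty copy with an injected drift.
t = np.arange(3000)
normal = np.sin(0.07 * t) + 0.4 * np.sin(0.19 * t) + 0.05 * rng.standard_normal(t.size)
faulty = normal.copy()
faulty[2000:] += 0.01 * (t[2000:] - 2000)        # injected fault from sample 2000

dims = [3, 5, 8]                                 # different embedding dimension per subnet
models = [fit_rbf(*embed(normal[:1500], m)) for m in dims]

def cprbf_residual(x):
    n = len(x) - max(dims) - 1                   # common evaluation length (aligned at the end)
    fused = np.zeros(n)
    for m, model in zip(dims, models):
        X, _ = embed(x, m)
        fused += rbf_predict(model, X)[-n:] / len(dims)
    return np.abs(fused - x[-n:])                # residual against the measured signal

thr = 1.2 * cprbf_residual(normal).max()         # conservative threshold from normal data
alarm = np.flatnonzero(cprbf_residual(faulty) > thr)
print("first alarm at sample:", int(alarm[0]) + max(dims) + 1 if alarm.size else "none")
```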

  16. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
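
    A small sketch of the localization rule quoted above, under assumptions: the tree network is represented as a parent-to-children dict, the "test suite" is simulated as a check that every link inside a subtree is healthy, and a node is reported when its own test fails while every child test passes. The node names and the injected fault are illustrative.

```python
# Toy localization of a defective link in a tree network (illustrative model:
# a "test suite" on a subtree passes iff every link inside that subtree works).
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
defective_links = {(2, 6)}          # injected fault: link from node 2 to node 6

def links_in_subtree(root):
    out = []
    for child in children[root]:
        out.append((root, child))
        out.extend(links_in_subtree(child))
    return out

def test_suite(root):
    """Pretend test suite: passes iff no defective link lies inside the subtree."""
    return not any(link in defective_links for link in links_in_subtree(root))

def locate():
    for parent, kids in children.items():        # treat each compute node in turn as parent
        if kids and not test_suite(parent) and all(test_suite(c) for c in kids):
            return parent                        # defective link is parent -> one of its children
    return None

print("node with a defective outgoing link:", locate())
```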

  17. Encoding methods for B1+ mapping in parallel transmit systems at ultra high field

    Science.gov (United States)

    Tse, Desmond H. Y.; Poole, Michael S.; Magill, Arthur W.; Felder, Jörg; Brenner, Daniel; Jon Shah, N.

    2014-08-01

    Parallel radiofrequency (RF) transmission, either in the form of RF shimming or pulse design, has been proposed as a solution to the B1+ inhomogeneity problem in ultra high field magnetic resonance imaging. As a prerequisite, accurate B1+ maps from each of the available transmit channels are required. In this work, four different encoding methods for B1+ mapping, namely 1-channel-on, all-channels-on-except-1, all-channels-on-1-inverted and Fourier phase encoding, were evaluated using dual refocusing acquisition mode (DREAM) at 9.4 T. Fourier phase encoding was demonstrated in both phantom and in vivo to be the least susceptible to artefacts caused by destructive RF interference at 9.4 T. Unlike the other two interferometric encoding schemes, Fourier phase encoding showed negligible dependency on the initial RF phase setting and therefore no prior B1+ knowledge is required. Fourier phase encoding also provides a flexible way to increase the number of measurements to increase SNR, and to allow further reduction of artefacts by weighted decoding. These advantages of Fourier phase encoding suggest that it is a good choice for B1+ mapping in parallel transmit systems at ultra high field.
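
    A small numerical sketch of the Fourier phase encoding idea, with simplifying assumptions: each "measurement" is treated as the complex combined transmit field of N channels driven with DFT phase weights, and the per-channel fields are recovered by an inverse DFT across measurements. The magnitude-only nature of real B1+ mapping sequences such as DREAM is deliberately ignored here, and the array sizes are arbitrary.

```python
import numpy as np

# Fourier phase encoding across transmit channels (simplified complex-field model).
rng = np.random.default_rng(4)
n_ch, nx, ny = 4, 32, 32
b1_true = rng.standard_normal((n_ch, nx, ny)) + 1j * rng.standard_normal((n_ch, nx, ny))

# Measurement k: all channels on, channel c weighted by exp(2*pi*i*k*c/N).
k = np.arange(n_ch)
weights = np.exp(2j * np.pi * np.outer(k, np.arange(n_ch)) / n_ch)   # DFT matrix
meas = np.tensordot(weights, b1_true, axes=(1, 0))                   # (n_meas, nx, ny)
meas += 0.01 * (rng.standard_normal(meas.shape) + 1j * rng.standard_normal(meas.shape))

# Decode each channel's field by an inverse DFT over the measurement index.
b1_est = np.tensordot(weights.conj().T / n_ch, meas, axes=(1, 0))
print("max decoding error:", float(np.abs(b1_est - b1_true).max()))
```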

  18. Frame-Based and Subpicture-Based Parallelization Approaches of the HEVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Héctor Migallón

    2018-05-01

    Full Text Available The most recent video coding standard, High Efficiency Video Coding (HEVC), is able to significantly improve the compression performance at the expense of a huge computational complexity increase with respect to its predecessor, H.264/AVC. Parallel versions of the HEVC encoder may help to reduce the overall encoding time in order to make it more suitable for practical applications. In this work, we study two parallelization strategies. One of them follows a coarse-grain approach, where parallelization is based on frames, and the other one follows a fine-grain approach, where parallelization is performed at subpicture level. Two different frame-based approaches have been developed. The first one only uses MPI and the second one is a hybrid MPI/OpenMP algorithm. An exhaustive experimental test was carried out to study the performance of both approaches in order to find out the best setup in terms of parallel efficiency and coding performance. Both frame-based and subpicture-based approaches are compared under the same hardware platform. Although subpicture-based schemes provide an excellent performance with high-resolution video sequences, scalability is limited by resolution, and the coding performance worsens by increasing the number of processes. Conversely, the proposed frame-based approaches provide the best results with respect to both parallel performance (increasing scalability) and coding performance (not degrading the rate/distortion behavior).
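
    A toy illustration of the coarse-grain, frame-based approach, with Python's multiprocessing standing in for MPI processes and zlib compression standing in for an HEVC encoder; the synthetic frame generator, the "encoder" and the pool size are all stand-ins, not the article's implementation.

```python
import multiprocessing as mp
import zlib

# Frame-level parallel "encoding": zlib stands in for the HEVC encoder and a
# process pool stands in for the MPI processes described in the article.
def make_frame(i, width=320, height=240):
    # Synthetic grayscale frame: a moving gradient, purely illustrative.
    return bytes(((x + y + 3 * i) % 256) for y in range(height) for x in range(width))

def encode_frame(args):
    index, frame = args
    return index, zlib.compress(frame, level=6)

if __name__ == "__main__":
    frames = [(i, make_frame(i)) for i in range(16)]
    with mp.Pool(processes=4) as pool:
        encoded = dict(pool.map(encode_frame, frames))
    total_in = sum(len(f) for _, f in frames)
    total_out = sum(len(b) for b in encoded.values())
    print(f"encoded {len(frames)} frames in parallel, ratio {total_in / total_out:.1f}:1")
```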

  19. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  20. Parallel-Bit Stream for Securing Iris Recognition

    OpenAIRE

    Elsayed Mostafa; Maher Mansour; Heba Saad

    2012-01-01

    Biometrics-based authentication schemes have usability advantages over traditional password-based authentication schemes. However, biometrics raises several privacy concerns, and it has the disadvantage, compared to traditional passwords, that it is neither secret nor revocable. In this paper, we propose a fast method for securing a revocable iris template using parallel-bit stream watermarking to overcome these problems. Experimental results prove that the proposed method has low computation time ...

  1. Quantitative security and safety analysis with attack-fault trees

    NARCIS (Netherlands)

    Kumar, Rajesh; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

    Cyber physical systems, like power plants, medical devices and data centers have to meet high standards, both in terms of safety (i.e. absence of unintentional failures) and security (i.e. no disruptions due to malicious attacks). This paper presents attack fault trees (AFTs), a formalism that

  2. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  3. A Fault-Tolerant Parallel Structure of Single-Phase Full-Bridge Rectifiers for a Wound-Field Doubly Salient Generator

    DEFF Research Database (Denmark)

    Chen, Zhihui; Chen, Ran; Chen, Zhe

    2013-01-01

    The fault-tolerance design is widely adopted for high-reliability applications. In this paper, a parallel structure of single-phase full-bridge rectifiers (FBRs) (PS-SPFBR) is proposed for a wound-field doubly salient generator. The analysis shows the potential fault-tolerance capability of the PS...

  4. Security enhancement of double random phase encoding using rear-mounted phase masking

    Science.gov (United States)

    Chen, Junxin; Zhang, Yu; Li, Jinchang; Zhang, Li-bo

    2018-02-01

    In this paper, a security enhancement for double random phase encoding (DRPE) obtained by introducing a rear-mounted phase masking procedure is presented. Based on an exhaustive study of published cryptanalyses of DRPE and its variants, it is concluded that the second lens, which plays a critical role in the cryptanalysis processes, can be invalidated. The improved system can exploit the security potential of the second lens and consequently strengthen the security of DRPE. Experimental results and security analyses are presented in detail to demonstrate the security potential of the proposed cryptosystem.

  5. Online Diagnosis for the Capacity Fade Fault of a Parallel-Connected Lithium Ion Battery Group

    Directory of Open Access Journals (Sweden)

    Hua Zhang

    2016-05-01

    Full Text Available In a parallel-connected battery group (PCBG), capacity degradation is usually caused by the inconsistency between a faulty cell and other normal cells, and the inconsistency occurs due to two potential causes: an aging inconsistency fault or a loose contacting fault. In this paper, a novel method is proposed to perform online and real-time capacity fault diagnosis for PCBGs. Firstly, based on the analysis of parameter variation characteristics of a PCBG with different fault causes, it is found that PCBG resistance can be taken as an indicator for both seeking the faulty PCBG and distinguishing the fault causes. On one hand, the faulty PCBG can be identified by comparing the PCBG resistance among PCBGs; on the other hand, two fault causes can be distinguished by comparing the variance of the PCBG resistances. Furthermore, for online applications, a novel recursive-least-squares algorithm with restricted memory and constraint (RLSRMC), in which the constraint is added to eliminate the “imaginary number” phenomena of parameters, is developed and used in PCBG resistance identification. Lastly, fault simulation and validation results demonstrate that the proposed methods have good accuracy and reliability.
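
    A hedged sketch of the identification step: plain recursive least squares with a forgetting factor (standing in for the authors' RLSRMC; the restricted-memory and constraint refinements are omitted) estimating a PCBG's resistance from a simple V = OCV - I*R measurement model with synthetic data. The parameter values and noise levels are invented for illustration.

```python
import numpy as np

# Recursive least squares with forgetting (simplified stand-in for RLSRMC).
# Model: V_k = OCV - I_k * R + noise, parameters theta = [OCV, R].
rng = np.random.default_rng(5)
n, lam = 500, 0.98
true_ocv, true_r = 3.3, 0.02                      # volts, ohms (illustrative)
I = 5.0 + 2.0 * rng.standard_normal(n)            # load current samples
V = true_ocv - I * true_r + 0.002 * rng.standard_normal(n)

theta = np.zeros(2)                               # [OCV, R] estimate
P = 1e3 * np.eye(2)                               # covariance of the estimate
for k in range(n):
    phi = np.array([1.0, -I[k]])                  # regressor
    K = P @ phi / (lam + phi @ P @ phi)           # gain
    theta = theta + K * (V[k] - phi @ theta)      # parameter update
    P = (P - np.outer(K, phi) @ P) / lam          # covariance update with forgetting

print(f"estimated OCV = {theta[0]:.3f} V, resistance = {theta[1]*1000:.1f} mOhm")
```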

  6. Fault Tree Analysis for Safety/Security Verification in Aviation Software

    Directory of Open Access Journals (Sweden)

    Andrew J. Kornecki

    2013-01-01

    Full Text Available The Next Generation Air Traffic Management system (NextGen) is a blueprint of the future National Airspace System. Supporting NextGen is a nation-wide Aviation Simulation Network (ASN), which allows integration of a variety of real-time simulations to facilitate development and validation of the NextGen software by simulating a wide range of operational scenarios. The ASN system is an environment, including both simulated and human-in-the-loop real-life components (pilots and air traffic controllers). Real Time Distributed Simulation (RTDS), developed at Embry Riddle Aeronautical University, a suite of applications providing low and medium fidelity en-route simulation capabilities, is one of the simulations contributing to the ASN. To support the interconnectivity with the ASN, we designed and implemented a dedicated gateway acting as an intermediary, providing logic for two-way communication and transfer messages between RTDS and ASN and storage for the exchanged data. It has been necessary to develop and analyze safety/security requirements for the gateway software based on analysis of system assets, hazards, threats and attacks related to ultimate real-life future implementation. Due to the nature of the system, the focus was placed on communication security and the related safety of the impacted aircraft in the simulation scenario. To support development of safety/security requirements, a well-established fault tree analysis technique was used. This fault tree model-based analysis, supported by a commercial tool, was a foundation to propose mitigations assuring the gateway system safety and security.

  7. Autonomous Voltage Security Regions to Prevent Cascading Trip Faults in Wind Turbine Generators

    DEFF Research Database (Denmark)

    Niu, Tao; Guo, Qinglai; Sun, Hongbin

    2016-01-01

    Cascading trip faults in large-scale wind power centralized integration areas bring new challenges to the secure operation of power systems. In order to deal with the complexity of voltage security regions and the computation difficulty, this paper proposes an autonomous voltage security region...... wind farm, an AVSR is determined to guarantee the normal operation of each wind turbine generator (WTG), while in the control center, each region is designed in order to guarantee secure operation both under normal conditions and after an N-1 contingency. A real system in Northern China was used...

  8. Open-circuit fault detection and tolerant operation for a parallel-connected SAB DC-DC converter

    DEFF Research Database (Denmark)

    Park, Kiwoo; Chen, Zhe

    2014-01-01

    This paper presents an open-circuit fault detection method and its tolerant control strategy for a Parallel-Connected Single Active Bridge (PCSAB) dc-dc converter. The structural and operational characteristics of the PCSAB converter lead to several advantages especially for high power applicatio...

  9. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally, a few applications using this package will be discussed.
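
    A rough Python sketch of the core idea behind JPARSS (which itself is a Java package): a buffer is split into partitions that are pushed over several sockets concurrently and reassembled on the receiving side. The socket pairs, stream count and chunking here are illustrative, and no certificate or SSL handling is shown.

```python
import socket
import threading

# Split a payload into partitions and send them over several parallel sockets.
def parallel_send(payload: bytes, n_streams: int = 4) -> bytes:
    pairs = [socket.socketpair() for _ in range(n_streams)]
    chunk = (len(payload) + n_streams - 1) // n_streams
    parts = [payload[i * chunk:(i + 1) * chunk] for i in range(n_streams)]
    received = [b""] * n_streams

    def sender(i):
        pairs[i][0].sendall(parts[i])
        pairs[i][0].shutdown(socket.SHUT_WR)

    def receiver(i):
        buf = b""
        while True:
            data = pairs[i][1].recv(65536)
            if not data:
                break
            buf += data
        received[i] = buf

    threads = [threading.Thread(target=f, args=(i,))
               for i in range(n_streams) for f in (sender, receiver)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for a, b in pairs:
        a.close()
        b.close()
    return b"".join(received)

payload = bytes(range(256)) * 4096            # ~1 MiB test payload
assert parallel_send(payload) == payload
print("payload reassembled correctly from parallel streams")
```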

  10. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2012-01-01

    According to regulatory building codes in the United States (for example, the 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHA analyses should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° for evaluating the FN/FP directions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
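
    A small worked example of the rotation being examined: two synthetic horizontal acceleration components are rotated through 0-180 degrees, a peak-response proxy (here simply the peak of the rotated component) is evaluated at every angle, and the values at the nominal FN/FP orientations are compared against the maximum over all angles. The synthetic records and the proxy EDP are assumptions for illustration.

```python
import numpy as np

# Rotate two horizontal components through 0-180 degrees and compare the
# peak of the rotated component (a stand-in EDP) with the FN/FP values.
rng = np.random.default_rng(6)
t = np.linspace(0, 20, 2000)
a1 = np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.1 * t) + 0.1 * rng.standard_normal(t.size)
a2 = 0.7 * np.sin(2 * np.pi * 0.9 * t + 1.0) * np.exp(-0.1 * t) + 0.1 * rng.standard_normal(t.size)

angles = np.deg2rad(np.arange(0, 180))
peaks = np.array([np.max(np.abs(a1 * np.cos(th) + a2 * np.sin(th))) for th in angles])

fn_fp = peaks[[0, 90]]                     # assume 0 deg = FN, 90 deg = FP
print(f"peak over all angles : {peaks.max():.3f} at {np.rad2deg(angles[peaks.argmax()]):.0f} deg")
print(f"peak at FN / FP      : {fn_fp[0]:.3f} / {fn_fp[1]:.3f}")
print("FN/FP envelopes all angles:", bool(fn_fp.max() >= peaks.max() - 1e-12))
```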

  11. On the use of non-coherent fault trees in safety and security studies

    International Nuclear Information System (INIS)

    Contini, S.; Cojazzi, G.G.M.; Renda, G.

    2008-01-01

    This paper gives some insights into the usefulness of non-coherent fault trees in system modelling from both the point of view of safety and security. A safety-related system can evolve from the working states to failed states through degraded states, i.e. working states, but in a degraded mode. In practical applications the degraded states may be of particular interest due e.g. to the associated risk increase or the different types of consequent actions. The top event definitions of such states contain the working conditions of some sub-systems/components. How the use of non-coherent fault trees can greatly simplify both the modelling and quantification of these states is shown in this paper. Some considerations about the interpretation of the importance indexes of negated basic events are also briefly described. When dealing with security applications, there is a need to cope not only with stochastic events, such as component failures and human errors, but also with deliberate intentional actions, whose successes might be characterised by high probability values. Different mutually exclusive attack scenarios may be envisaged for a given system. Hence, an essential feature of a fault tree analyser is the capability to determine the exact value of the top event probability when the model contains mutually exclusive events. It is also shown that in these cases the use of non-coherent fault trees allows solving the problem with limited effort
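
    A tiny worked example of a non-coherent top event (one containing negated basic events, e.g. "component A failed while component B is still working"), evaluated exactly by enumerating basic-event states. The event names, the structure function and the probabilities are invented for illustration.

```python
from itertools import product

# Exact top-event probability for a small non-coherent fault tree.
# Basic events (invented): A, B, C with failure probabilities p[...].
p = {"A": 0.1, "B": 0.2, "C": 0.05}

def top(state):
    # Non-coherent structure function: (A AND NOT B) OR (B AND C).
    return (state["A"] and not state["B"]) or (state["B"] and state["C"])

prob = 0.0
for values in product([False, True], repeat=len(p)):
    state = dict(zip(p, values))
    weight = 1.0
    for event, failed in state.items():
        weight *= p[event] if failed else (1.0 - p[event])
    if top(state):
        prob += weight

print(f"P(top event) = {prob:.6f}")
# The two product terms are mutually exclusive (one needs B failed, the other B
# working), so the exact result is simply the sum of their probabilities.
expected = p["A"] * (1 - p["B"]) + p["B"] * p["C"]
print(f"closed form  = {expected:.6f}")
```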

  12. Hydraulic Fracture Induced Seismicity During A Multi-Stage Pad Completion in Western Canada: Evidence of Activation of Multiple, Parallel Faults

    Science.gov (United States)

    Maxwell, S.; Garrett, D.; Huang, J.; Usher, P.; Mamer, P.

    2017-12-01

    Following reports of injection induced seismicity in the Western Canadian Sedimentary Basin, regulators have imposed seismic monitoring and traffic light protocols for fracturing operations in specific areas. Here we describe a case study in one of these reservoirs, the Montney Shale in NE British Columbia, where induced seismicity was monitored with a local array during multi-stage hydraulic fracture stimulations on several wells from a single drilling pad. Seismicity primarily occurred during the injection time periods, and correlated with periods of high injection rates and wellhead pressures above fracturing pressures. Sequential hydraulic fracture stages were found to progressively activate several parallel, critically-stressed faults, as illuminated by multiple linear hypocenter patterns in the range between Mw 1 and 3. Moment tensor inversion of larger events indicated a double-couple mechanism consistent with the regional strike-slip stress state and the hypocenter lineations. The critically-stressed faults obliquely cross the well paths which were purposely drilled parallel to the minimum principal stress direction. Seismicity on specific faults started and stopped when fracture initiation points of individual injection stages were proximal to the intersection of the fault and well. The distance range over which the seismicity occurs is consistent with expected hydraulic fracture dimensions, suggesting that the induced fault slip only occurs when a hydraulic fracture grows directly into the fault and the faults are temporarily exposed to significantly elevated fracture pressures during the injection. Some faults crossed multiple wells and the seismicity was found to restart during injection of proximal stages on adjacent wells, progressively expanding the seismogenic zone of the fault. Progressive fault slip is therefore inferred from the seismicity migrating further along the faults during successive injection stages. An accelerometer was also deployed close

  13. Parallel, but Dissociable, Processing in Discrete Corticostriatal Inputs Encodes Skill Learning.

    Science.gov (United States)

    Kupferschmidt, David A; Juczewski, Konrad; Cui, Guohong; Johnson, Kari A; Lovinger, David M

    2017-10-11

    Changes in cortical and striatal function underlie the transition from novel actions to refined motor skills. How discrete, anatomically defined corticostriatal projections function in vivo to encode skill learning remains unclear. Using novel fiber photometry approaches to assess real-time activity of associative inputs from medial prefrontal cortex to dorsomedial striatum and sensorimotor inputs from motor cortex to dorsolateral striatum, we show that associative and sensorimotor inputs co-engage early in action learning and disengage in a dissociable manner as actions are refined. Disengagement of associative, but not sensorimotor, inputs predicts individual differences in subsequent skill learning. Divergent somatic and presynaptic engagement in both projections during early action learning suggests potential learning-related in vivo modulation of presynaptic corticostriatal function. These findings reveal parallel processing within associative and sensorimotor circuits that challenges and refines existing views of corticostriatal function and expose neuronal projection- and compartment-specific activity dynamics that encode and predict action learning. Published by Elsevier Inc.

  14. Transformation-based exploration of data parallel architecture for customizable hardware : a JPEG encoder case study

    NARCIS (Netherlands)

    Corvino, R.; Diken, E.; Gamatié, A.; Jozwiak, L.

    2012-01-01

    In this paper, we present a method for the design of MPSoCs for complex data-intensive applications. This method aims at a combined exploration of the communication, the memory system architecture and the computation resource parallelism. The proposed method is exemplified on a JPEG Encoder case study

  15. A simple component-connection method for building binary decision diagrams encoding a fault tree

    International Nuclear Information System (INIS)

    Way, Y.-S.; Hsia, D.-Y.

    2000-01-01

    A simple new method for building binary decision diagrams (BDDs) encoding a fault tree (FT) is provided in this study. We first decompose the FT into FT-components, each of which is a single-descendant (SD) gate-sequence. Following the node-connection rule, the BDD-component encoding each SD FT-component is found to be an SD node-sequence. By successively connecting the BDD-components one by one, the BDD for the entire FT is obtained. During the node-connection and component-connection, reduction rules might need to be applied. An example FT is used throughout the article to explain the procedure step by step. The proposed method is a hybrid one for FT analysis. Some algorithms or techniques used in conventional FT analysis or the newer BDD approach may be applied to our case, and the ideas presented in this article may in turn be useful to those two approaches

  16. Modeling Security Aspects of Network

    Science.gov (United States)

    Schoch, Elmar

    With more and more widespread usage of computer systems and networks, dependability becomes a paramount requirement. Dependability typically denotes tolerance or protection against all kinds of failures, errors and faults. Sources of failures can basically be accidental, e.g., in case of hardware errors or software bugs, or intentional due to some kind of malicious behavior. These intentional, malicious actions are subject of security. A more complete overview on the relations between dependability and security can be found in [31]. In parallel to the increased use of technology, misuse also has grown significantly, requiring measures to deal with it.

  17. Prioritizing multiple therapeutic targets in parallel using automated DNA-encoded library screening

    Science.gov (United States)

    Machutta, Carl A.; Kollmann, Christopher S.; Lind, Kenneth E.; Bai, Xiaopeng; Chan, Pan F.; Huang, Jianzhong; Ballell, Lluis; Belyanskaya, Svetlana; Besra, Gurdyal S.; Barros-Aguirre, David; Bates, Robert H.; Centrella, Paolo A.; Chang, Sandy S.; Chai, Jing; Choudhry, Anthony E.; Coffin, Aaron; Davie, Christopher P.; Deng, Hongfeng; Deng, Jianghe; Ding, Yun; Dodson, Jason W.; Fosbenner, David T.; Gao, Enoch N.; Graham, Taylor L.; Graybill, Todd L.; Ingraham, Karen; Johnson, Walter P.; King, Bryan W.; Kwiatkowski, Christopher R.; Lelièvre, Joël; Li, Yue; Liu, Xiaorong; Lu, Quinn; Lehr, Ruth; Mendoza-Losana, Alfonso; Martin, John; McCloskey, Lynn; McCormick, Patti; O'Keefe, Heather P.; O'Keeffe, Thomas; Pao, Christina; Phelps, Christopher B.; Qi, Hongwei; Rafferty, Keith; Scavello, Genaro S.; Steiginga, Matt S.; Sundersingh, Flora S.; Sweitzer, Sharon M.; Szewczuk, Lawrence M.; Taylor, Amy; Toh, May Fern; Wang, Juan; Wang, Minghui; Wilkins, Devan J.; Xia, Bing; Yao, Gang; Zhang, Jean; Zhou, Jingye; Donahue, Christine P.; Messer, Jeffrey A.; Holmes, David; Arico-Muendel, Christopher C.; Pope, Andrew J.; Gross, Jeffrey W.; Evindar, Ghotas

    2017-07-01

    The identification and prioritization of chemically tractable therapeutic targets is a significant challenge in the discovery of new medicines. We have developed a novel method that rapidly screens multiple proteins in parallel using DNA-encoded library technology (ELT). Initial efforts were focused on the efficient discovery of antibacterial leads against 119 targets from Acinetobacter baumannii and Staphylococcus aureus. The success of this effort led to the hypothesis that the relative number of ELT binders alone could be used to assess the ligandability of large sets of proteins. This concept was further explored by screening 42 targets from Mycobacterium tuberculosis. Active chemical series for six targets from our initial effort as well as three chemotypes for DHFR from M. tuberculosis are reported. The findings demonstrate that parallel ELT selections can be used to assess ligandability and highlight opportunities for successful lead and tool discovery.

  18. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  19. FPGAs and parallel architectures for aerospace applications soft errors and fault-tolerant design

    CERN Document Server

    Rech, Paolo

    2016-01-01

    This book introduces the concepts of soft errors in FPGAs, as well as the motivation for using commercial, off-the-shelf (COTS) FPGAs in mission-critical and remote applications, such as aerospace.  The authors describe the effects of radiation in FPGAs, present a large set of soft-error mitigation techniques that can be applied in these circuits, as well as methods for qualifying these circuits under radiation.  Coverage includes radiation effects in FPGAs, fault-tolerant techniques for FPGAs, use of COTS FPGAs in aerospace applications, experimental data of FPGAs under radiation, FPGA embedded processors under radiation, and fault injection in FPGAs. Since dedicated parallel processing architectures such as GPUs have become more desirable in aerospace applications due to high computational power, GPU analysis under radiation is also discussed. ·         Discusses features and drawbacks of reconfigurability methods for FPGAs, focused on aerospace applications; ·         Explains how radia...

  20. Improving the security of a parallel keyed hash function based on chaotic maps

    Energy Technology Data Exchange (ETDEWEB)

    Xiao Di, E-mail: xiaodi_cqu@hotmail.co [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Liao Xiaofeng [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Wang Yong [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)] [College of Economy and Management, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China)

    2009-11-23

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it retains the merit of parallelism and satisfies the other performance requirements of a hash function.
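
    A toy, insecure sketch of the general construction style (not the Letter's actual scheme or its fix): the message is split into blocks, each block is absorbed independently by iterating a keyed logistic map, and the per-block digests are combined. The block size, iteration count and combination rule are arbitrary choices made for this illustration.

```python
# Toy parallel keyed hash built on the logistic map (illustrative only, NOT secure
# and NOT the scheme analyzed in the Letter).
def logistic_bits(x, n_iter=64, r=3.99):
    """Iterate the logistic map and pack one bit per iteration into an integer."""
    bits = 0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        bits = (bits << 1) | (1 if x > 0.5 else 0)
    return bits

def block_digest(block: bytes, key: float, index: int) -> int:
    # Seed depends on the key, the block content and the block index.
    seed = (key + sum(block) / (255.0 * len(block) + 1.0) + index * 0.001) % 1.0
    seed = min(max(seed, 1e-9), 1.0 - 1e-9)       # keep strictly inside (0, 1)
    return logistic_bits(seed)

def parallel_chaotic_hash(message: bytes, key: float = 0.31415926, block_size: int = 16) -> str:
    blocks = [message[i:i + block_size] or b"\x00"
              for i in range(0, max(len(message), 1), block_size)]
    # Blocks are independent, so they could be processed in parallel; here the
    # per-block digests are simply XOR-folded into a 64-bit value.
    digest = 0
    for i, b in enumerate(blocks):
        digest ^= block_digest(b, key, i)
    return f"{digest:016x}"

print(parallel_chaotic_hash(b"parallel keyed hash based on chaotic maps"))
print(parallel_chaotic_hash(b"parallel keyed hash based on chaotic mapt"))
```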

  1. Improving the security of a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wang Yong

    2009-01-01

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it retains the merit of parallelism and satisfies the other performance requirements of a hash function.

  2. Extension parallel to the rift zone during segmented fault growth: application to the evolution of the NE Atlantic

    Directory of Open Access Journals (Sweden)

    A. Bubeck

    2017-11-01

    Full Text Available The mechanical interaction of propagating normal faults is known to influence the linkage geometry of first-order faults, and the development of second-order faults and fractures, which transfer displacement within relay zones. Here we use natural examples of growth faults from two active volcanic rift zones (Koa`e, island of Hawai`i, and Krafla, northern Iceland) to illustrate the importance of horizontal-plane extension (heave) gradients, and associated vertical axis rotations, in evolving continental rift systems. Second-order extension and extensional-shear faults within the relay zones variably resolve components of regional extension, and components of extension and/or shortening parallel to the rift zone, to accommodate the inherently three-dimensional (3-D) strains associated with relay zone development and rotation. Such a configuration involves volume increase, which is accommodated at the surface by open fractures; in the subsurface this may be accommodated by veins or dikes oriented obliquely and normal to the rift axis. To consider the scalability of the effects of relay zone rotations, we compare the geometry and kinematics of fault and fracture sets in the Koa`e and Krafla rift zones with data from exhumed contemporaneous fault and dike systems developed within a > 5×10⁴ km² relay system that developed during formation of the NE Atlantic margins. Based on the findings presented here we propose a new conceptual model for the evolution of segmented continental rift basins on the NE Atlantic margins.

  3. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  4. Extensions to the Parallel Real-Time Artificial Intelligence System (PRAIS) for fault-tolerant heterogeneous cycle-stealing reasoning

    Science.gov (United States)

    Goldstein, David

    1991-01-01

    Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.

  5. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  6. Design of parallel dual-energy X-ray beam and its performance for security radiography

    International Nuclear Information System (INIS)

    Kim, Kwang Hyun; Myoung, Sung Min; Chung, Yong Hyun

    2011-01-01

    A new concept of dual-energy X-ray beam generation and acquisition of dual-energy security radiography is proposed. Erbium (Er) and rhodium (Rh) with a copper filter were positioned in front of the X-ray tube to generate low- and high-energy X-ray spectra. Low- and high-energy X-rays were guided to separately enter two parallel detectors. The Monte Carlo code MCNPX was used to derive an optimum thickness of each filter for improved dual X-ray image quality. The aim was to provide the ability to separate organic and inorganic materials under the 140 kVp/0.8 mA condition used in the security application. The acquired dual-energy X-ray beams were evaluated by a dual-energy Z-map, yielding enhanced performance compared with a commercial dual-energy detector. A collimator for the parallel dual-energy X-ray beam was designed to minimize X-ray beam interference between the low- and high-energy parallel beams for a 500 mm source-to-detector distance.

  7. Double random phase spread spectrum spread space technique for secure parallel optical multiplexing with individual encryption key

    Science.gov (United States)

    Hennelly, B. M.; Javidi, B.; Sheridan, J. T.

    2005-09-01

    A number of methods have been recently proposed in the literature for the encryption of 2-D information using linear optical systems. In particular the double random phase encoding system has received widespread attention. This system uses two Random Phase Keys (RPKs) positioned in the input spatial domain and the spatial frequency domain and if these random phases are described by statistically independent white noises then the encrypted image can be shown to be a white noise. Decryption only requires knowledge of the RPK in the frequency domain. The RPKs may be implemented using Spatial Light Modulators (SLMs). In this paper we propose and investigate the use of SLMs for secure optical multiplexing. We show that in this case it is possible to encrypt multiple images in parallel and multiplex them for transmission or storage. The signal energy is effectively spread in the spatial frequency domain. As expected the number of images that can be multiplexed together and recovered without loss is proportional to the ratio of the input image resolution to the SLM resolution. Many more images may be multiplexed with some loss in recovery. Furthermore each individual encryption is more robust than traditional double random phase encoding since decryption requires knowledge of both RPKs and a lowpass filter in order to despread the spectrum and decrypt the image. Numerical simulations are presented and discussed.
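
    A compact numpy sketch of the underlying double random phase encoding operation referred to above (single image, purely digital simulation): encryption multiplies the image by a random phase mask, Fourier transforms, applies a second random phase mask and inverse transforms; decryption undoes the steps with the conjugate frequency-domain key. Multiplexing in the paper's sense would amount to summing several such encrypted fields, each with its own key pair; the image and sizes here are placeholders.

```python
import numpy as np

# Digital simulation of double random phase encoding (DRPE) of one image.
rng = np.random.default_rng(7)
n = 64
img = rng.random((n, n))                        # stand-in for the input image

rpk1 = np.exp(2j * np.pi * rng.random((n, n)))  # spatial-domain random phase key
rpk2 = np.exp(2j * np.pi * rng.random((n, n)))  # frequency-domain random phase key

def encrypt(f):
    return np.fft.ifft2(np.fft.fft2(f * rpk1) * rpk2)

def decrypt(c):
    # Undo the frequency-domain key, return to space, and drop the first key's phase.
    g = np.fft.ifft2(np.fft.fft2(c) * np.conj(rpk2))
    return np.abs(g)                            # |f * rpk1| = f for a non-negative image

cipher = encrypt(img)
print("ciphertext looks like white noise, std =", float(np.std(np.abs(cipher))))
print("max reconstruction error =", float(np.abs(decrypt(cipher) - img).max()))
```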

  8. Landforms along transverse faults parallel to axial zone of folded mountain front, north-eastern Kumaun Sub-Himalaya, India

    Science.gov (United States)

    Luirei, Khayingshing; Bhakuni, S. S.; Negi, Sanjay S.

    2017-02-01

    The shape of the frontal part of the Himalaya around the north-eastern corner of the Kumaun Sub-Himalaya, along the Kali River valley, is defined by folded hanging wall rocks of the Himalayan Frontal Thrust (HFT). Two parallel faults (Kalaunia and Tanakpur faults) trace along the axial zone of the folded HFT. Between these faults, the hinge zone of this transverse fold is relatively straight and along these faults, the beds abruptly change their attitudes and their widths are tectonically attenuated across the two hinge lines of the fold. The area consists of various surfaces of coalescing fans and terraces. The fans are composed predominantly of sandstone clasts laid down by the steep-gradient streams originating from the Siwalik range. The alluvial fans are characterised by compound and superimposed fans with high relief, which are generated by the tectonic activities associated with the thrusting along the HFT. The truncated fan along the HFT has formed a 100 m high escarpment running E-W for ~5 km. Quaternary terrace deposits suggest two phases of tectonic uplift in the basal part of the hanging wall block of the HFT dipping towards the north. The first phase is represented by tilting of the terrace sediments by ~30° towards the NW, while the second phase is evident from deformed structures in the terrace deposit comprising mainly reverse faults, fault propagation folds, convolute laminations, flower structures and back thrust faults. The second phase produced a ~1.0 m offset of stratification of the terrace along a thrust fault. Tectonic escarpments are recognised across the splay thrust just south of the HFT trace. The south facing hill slopes exhibit numerous landslides along active channels incising the hanging wall rocks of the HFT. The study area shows weak seismicity. The major Moradabad Fault crosses near the study area. This transverse fault may have suppressed the seismicity in the Tanakpur area, and the movement along the Moradabad and Kasganj

  9. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  10. Known-plaintext attack on the double phase encoding and its implementation with parallel hardware

    Science.gov (United States)

    Wei, Hengzheng; Peng, Xiang; Liu, Haitao; Feng, Songlin; Gao, Bruce Z.

    2008-03-01

    A known-plaintext attack on the double phase encryption scheme implemented with parallel hardware is presented. The double random phase encoding (DRPE) is one of the most representative optical cryptosystems; it was developed in the mid-1990s and has spawned quite a few variants since then. Although the DRPE encryption system strongly resists brute-force attack, its inherent architecture leaves a hidden weakness due to its linear nature. Recently the real security strength of this opto-cryptosystem has been questioned and analyzed from the cryptanalysis point of view. In this presentation, we demonstrate that optical cryptosystems based on the DRPE architecture are vulnerable to known-plaintext attack. With this attack the two encryption keys in the DRPE can be recovered with the help of a phase retrieval technique. In our approach, we adopt the hybrid input-output (HIO) algorithm to recover the random phase key in the object domain and then infer the key in the frequency domain. A single plaintext-ciphertext pair is sufficient to create the vulnerability, and the attack does not require a particular choice of plaintext. The phase retrieval technique based on HIO is an iterative process built on Fourier transforms, so it is well suited to implementation on a digital signal processor (DSP). We make use of a high-performance DSP to carry out the known-plaintext attack; compared with a software implementation, the hardware implementation is much faster. The performance of this DSP-based cryptanalysis system is also evaluated.
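
    The attack relies on phase retrieval; the sketch below shows only the generic hybrid input-output (HIO) iteration it builds on, assuming a real, non-negative object with a known support mask. The attack-specific step of inferring the DRPE keys from a plaintext-ciphertext pair is not reproduced, and all names are illustrative.

```python
# Minimal HIO phase-retrieval loop (Fienup-style), iterating between the
# measured Fourier magnitude and an object-domain support/non-negativity
# constraint.
import numpy as np

def hio(mag, support, beta=0.9, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support                # random initial estimate
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))             # impose measured magnitude
        g_prime = np.real(np.fft.ifft2(G))
        violate = (~support.astype(bool)) | (g_prime < 0)
        g = np.where(violate, g - beta * g_prime, g_prime)  # HIO update rule
    return g

# usage sketch: g_est = hio(np.abs(np.fft.fft2(true_img)), support_mask)
```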

  11. Integrating cyber attacks within fault trees

    International Nuclear Information System (INIS)

    Nai Fovino, Igor; Masera, Marcelo; De Cian, Alessio

    2009-01-01

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
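
    As an illustration of the kind of quantitative combination described above (not the authors' mathematical model), the sketch below evaluates a toy top event whose basic events mix random failure probabilities with an assumed attack-success probability, combined through independent AND/OR gates.

```python
# Toy fault-tree/attack-tree evaluation assuming independent basic events;
# all numbers are invented for illustration.
def or_gate(*p):   # at least one input event occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # all input events occur
    q = 1.0
    for pi in p:
        q *= pi
    return q

p_pump_fail   = 1e-3      # random hardware failure
p_valve_fail  = 5e-4      # random hardware failure
p_cyber_spoof = 2e-2      # attack-tree leaf: malicious command injection (assumed)

# Loss of cooling if (pump fails OR its controller is spoofed) AND valve fails
p_top = and_gate(or_gate(p_pump_fail, p_cyber_spoof), p_valve_fail)
print(f"top event probability = {p_top:.2e}")
```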

  12. Integrating cyber attacks within fault trees

    Energy Technology Data Exchange (ETDEWEB)

    Nai Fovino, Igor [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy)], E-mail: igor.nai@jrc.it; Masera, Marcelo [Joint Research Centre - EC, Institute for the Protection and Security of the Citizen, Ispra, VA (Italy); De Cian, Alessio [Department of Electrical Engineering, University di Genova, Genoa (Italy)

    2009-09-15

    In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.

  13. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference also included reliability, availability, safety and security issues in software and hardware systems. The conference sessions, complemented by an industrial presentation, were: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  14. Fault-tolerant Control of Inverter-fed Induction Motor Drives

    DEFF Research Database (Denmark)

    Thybo, C.

    . A description of the different frequency converter components, including models of the inverter, sensors and controllers was given, followed by a fault mode and effect analysis, which points out the potential fault modes of the design. Among the listed fault modes, two were found to be of particular practical...... University, was used as a framework for this work. A short review of the development cycle, including methods for generating and evaluating residuals, was presented. A cost-benefit analysis was proposed, as an extension to the FTC development cycle, to provide a better background for selecting the fault...... bilinear observers. A brief description of threshold- and statistical change detection was included with focus on mean value change detection in a noisy residual. The detection of encoder sensor faults was analysed and three approaches, for encoder fault detection, were proposed. The reference band...

  15. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  16. 20 CFR 410.561b - Fault.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see § 410...

  17. Fault-tolerant controlled quantum secure direct communication over a collective quantum noise channel

    International Nuclear Information System (INIS)

    Yang, Chun-Wei; Hwang, Tzonelih; Tsai, Chia-Wei

    2014-01-01

    This work proposes controlled quantum secure direct communication (CQSDC) over an ideal channel. Based on the proposed CQSDC, two fault-tolerant CQSDC protocols that are robust under two kinds of collective noises, collective-dephasing noise and collective-rotation noise, respectively, are constructed. Due to the use of quantum entanglement of the Bell state (or logical Bell state) as well as dense coding, the proposed protocols provide easier implementation as well as better qubit efficiency than other CQSDC protocols. Furthermore, the proposed protocols are also free from correlation-elicitation attack and other well-known attacks. (paper)

  18. Investigating the Influence of Regional Stress on Fault and Fracture Permeability at Pahute Mesa, Nevada National Security Site

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, Donald M. [Desert Research Inst. (DRI), Reno, NV (United States); Smith, Kenneth D. [Univ. of Nevada, Reno, NV (United States); Parashar, Rishi [Desert Research Inst. (DRI), Reno, NV (United States); Collins, Cheryl [Desert Research Inst. (DRI), Las Vegas, NV (United States); Heintz, Kevin M. [Desert Research Inst. (DRI), Las Vegas, NV (United States)

    2017-05-24

    Regional stress may exert considerable control on the permeability and hydraulic function (i.e., barrier to and/or conduit for fluid flow) of faults and fractures at Pahute Mesa, Nevada National Security Site (NNSS). In-situ measurements of the stress field are sparse in this area, and short period earthquake focal mechanisms are used to delineate principal horizontal stress orientations. Stress field inversion solutions to earthquake focal mechanisms indicate that Pahute Mesa is located within a transtensional faulting regime, represented by oblique slip on steeply dipping normal fault structures, with maximum horizontal stress ranging from N29°E to N63°E and average of N42°E. Average horizontal stress directions are in general agreement with large diameter borehole breakouts from Pahute Mesa analyzed in this study and with stress measurements from other locations on the NNSS.

  19. Fault-tolerant and QoS based Network Layer for Security Management

    Directory of Open Access Journals (Sweden)

    Mohamed Naceur Abdelkrim

    2013-07-01

    Full Text Available Wireless sensor networks have profound effects on many application fields, such as security management, which need immediate, fast and energy-efficient routes. In this paper, we define a fault-tolerant and QoS-based network layer for security management of a chemical products warehouse, which can be classified as a real-time and mission-critical application. This application generates routine data packets and alert packets caused by unusual events, which require high reliability, short end-to-end delay and a low packet loss rate. After each node computes its hop count and builds its neighbor table in the initialization phase, packets can be routed to the sink. We use the FELGossiping protocol for routine data packets and a node-disjoint multipath routing protocol for alert packets. Furthermore, we utilize the information-gathering phase of FELGossiping to update the neighbor tables and detect failed nodes, and we adapt to network topology changes by rerunning the initialization phase when chemical units are added to or removed from the warehouse. Analysis shows that the network layer is energy efficient and can meet the QoS constraints of unusual-event packets.
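
    The initialization phase described above can be pictured with the following sketch, which assigns hop counts by flooding from the sink over an assumed static connectivity graph and records, for each node, the neighbours one hop closer as candidate next hops; FELGossiping and the multipath alert routing themselves are not modelled, and the topology is invented.

```python
# Hop-count assignment by breadth-first flooding from the sink, plus a
# per-node neighbour table of candidate next hops toward the sink.
from collections import deque

def init_phase(neighbors, sink):
    """neighbors: dict node -> set of radio neighbours."""
    hops = {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    # neighbour table: for each node, the neighbours strictly closer to the sink
    table = {u: {v for v in neighbors[u] if hops.get(v, float("inf")) < hops[u]}
             for u in neighbors if u != sink}
    return hops, table

net = {"sink": {"a", "b"}, "a": {"sink", "c"}, "b": {"sink", "c"}, "c": {"a", "b"}}
hops, table = init_phase(net, "sink")
print(hops["c"], table["c"])   # 2 hops; two next-hop options -> node-disjoint paths
```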

  20. Co-ordination of directional overcurrent protection with load current for parallel feeders

    Energy Technology Data Exchange (ETDEWEB)

    Wright, J.W.; Lloyd, G.; Hindle, P.J. [Alstom, Inc., Stafford (United Kingdom). T and D Protection and Control

    1999-11-01

    Directional phase overcurrent relays are commonly applied at the receiving ends of parallel feeders or transformer feeders. Their purpose is to ensure full discrimination of main or back-up power system overcurrent protection for a fault near the receiving end of one feeder. This paper reviews this type of relay application and highlights load current setting constraints for directional protection. Such constraints have not previously been publicized in well-known text books. A directional relay current setting constraint that is suggested in some text books is based purely on thermal rating considerations for older technology relays. This constraint may not exist with modern numerical relays. In the absence of any apparent constraint, there is a temptation to adopt lower current settings with modern directional relays in relation to reverse load current at the receiving ends of parallel feeders. This paper identifies the danger of adopting very low current settings without any special relay feature to ensure protection security with load current during power system faults. A system incident recorded by numerical relays is also offered to highlight this danger. In cases where the identified constraints must be infringed, an implemented and tested relaying technique is proposed.

  1. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

    Full Text Available This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for the context-based adaptive arithmetic encoder is proposed. The encoder can operate at 180 MHz, encoding one symbol per cycle. Compared with previous context-modeling architectures, our parallel architecture improves the throughput rate by up to 25%.

  2. Secure Route Structures for Parallel Mobile Agents Based Systems Using Fast Binary Dispatch

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2005-01-01

    Full Text Available In a distributed environment, where a large number of computers are connected together to enable the large-scale sharing of data and computing resources, agents, especially mobile agents, are the tools for autonomously completing tasks on behalf of their owners. For applications of large-scale mobile agents, security and efficiency are of great concern. In this paper, we present a fast binary dispatch model and corresponding secure route structures for mobile agents dispatched in parallel to protect the dispatch routes of agents while ensuring the dispatch efficiency. The fast binary dispatch model is simple but efficient with a dispatch complexity of O(log2 n). The secure route structures adopt the combination of public-key encryption and digital signature schemes and expose minimal route information to hosts. The nested structure can help detect attacks as early as possible. We evaluated the various models both analytically and empirically.
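
    The O(log2 n) dispatch complexity quoted above follows from the doubling behaviour of a binary dispatch tree; the sketch below counts dispatch rounds for that doubling model only and does not reproduce the paper's encrypted route structures.

```python
# In each round every already-activated agent dispatches one child, so the
# number of active agents doubles; n agents are deployed in ceil(log2 n) rounds.
import math

def dispatch_rounds(n_agents):
    active, rounds = 1, 0
    while active < n_agents:
        active = min(2 * active, n_agents)   # each active agent spawns one child
        rounds += 1
    return rounds

for n in (2, 16, 1000):
    print(n, dispatch_rounds(n), math.ceil(math.log2(n)))   # rounds match ceil(log2 n)
```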

  3. Transmission grid security

    CERN Document Server

    Haarla, Liisa; Hirvonen, Ritva; Labeau, Pierre-Etienne

    2011-01-01

    In response to the growing importance of power system security and reliability, ""Transmission Grid Security"" proposes a systematic and probabilistic approach for transmission grid security analysis. The analysis presented uses probabilistic safety assessment (PSA) and takes into account the power system dynamics after severe faults. In the method shown in this book the power system states (stable, not stable, system breakdown, etc.) are connected with the substation reliability model. In this way it is possible to: estimate the system-wide consequences of grid faults; identify a chain of eve

  4. Pros and cons of rotating ground motion records to fault-normal/parallel directions for response history analysis of buildings

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2014-01-01

    According to the regulatory building codes in the United States (e.g., 2010 California Building Code), at least two horizontal ground motion components are required for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here, for the first time, using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak values of engineering demand parameters (EDPs) were computed for rotation angles ranging from 0 through 180° to quantify the difference between peak values of EDPs over all rotation angles and those due to FN/FP direction rotated motions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
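
    The FN/FP rotation discussed above is a simple change of horizontal axes; a minimal sketch is given below, assuming the two recorded components are oriented north-south and east-west and that the fault strike is measured clockwise from north. The response history analysis itself is not reproduced, and the signals are synthetic.

```python
# Rotate NS/EW acceleration histories into fault-parallel (along strike) and
# fault-normal (90 degrees clockwise from strike) components.
import numpy as np

def rotate_to_fn_fp(acc_ns, acc_ew, strike_deg):
    theta = np.radians(strike_deg)
    fp = acc_ns * np.cos(theta) + acc_ew * np.sin(theta)    # projection onto strike
    fn = -acc_ns * np.sin(theta) + acc_ew * np.cos(theta)   # projection onto normal
    return fn, fp

t = np.linspace(0.0, 10.0, 1001)
acc_ns, acc_ew = np.sin(2 * np.pi * t), 0.5 * np.cos(2 * np.pi * t)
fn, fp = rotate_to_fn_fp(acc_ns, acc_ew, strike_deg=35.0)
# fn and fp now hold the fault-normal and fault-parallel time histories
```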

  5. What is Security? A perspective on achieving security

    Energy Technology Data Exchange (ETDEWEB)

    Atencio, Julian J.

    2014-05-05

    This presentation provides a perspective on achieving security in an organization. It touches upon security as a mindset, ability to adhere to rules, cultivating awareness of the reason for a security mindset, the quality of a security program, willingness to admit fault or acknowledge failure, peer review in security, science as a model that can be applied to the security profession, the security vision, security partnering, staleness in the security program, security responsibilities, and achievement of success over time despite the impossibility of perfection.

  6. Test generation for digital circuits using parallel processing

    Science.gov (United States)

    Hartmann, Carlos R.; Ali, Akhtar-Uz-Zaman M.

    1990-12-01

    The problem of test generation for digital logic circuits is an NP-Hard problem. Recently, the availability of low cost, high performance parallel machines has spurred interest in developing fast parallel algorithms for computer-aided design and test. This report describes a method of applying a 15-valued logic system for digital logic circuit test vector generation in a parallel programming environment. A concept called fault site testing allows for test generation, in parallel, that targets more than one fault at a given location. The multi-valued logic system allows results obtained by distinct processors and/or processes to be merged by means of simple set intersections. A machine-independent description is given for the proposed algorithm.
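
    The merging step mentioned above can be illustrated as follows, using plain value sets per primary input rather than the 15-valued logic system; each processor's constraints for its target fault are intersected input by input, and a non-empty result means one vector can detect all targeted faults. All names and values are illustrative.

```python
# Per-input set intersection of the candidate logic values returned by
# processors targeting different faults at the same fault site.
def merge_candidates(per_fault_constraints):
    merged = {}
    for constraints in per_fault_constraints:
        for pin, values in constraints.items():
            merged[pin] = merged.get(pin, set(values)) & set(values)
    return merged

proc_a = {"in0": {0}, "in1": {0, 1}, "in2": {1, "X"}}
proc_b = {"in0": {0, 1}, "in1": {1}, "in2": {1}}
print(merge_candidates([proc_a, proc_b]))
# {'in0': {0}, 'in1': {1}, 'in2': {1}} -> one vector detects both faults
```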

  7. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  8. Fault attacks, injection techniques and tools for simulation

    NARCIS (Netherlands)

    Piscitelli, R.; Bhasin, S.; Regazzoni, F.

    2015-01-01

    Faults attacks are a serious threat to secure devices, because they are powerful and they can be performed with extremely cheap equipment. Resistance against fault attacks is often evaluated directly on the manufactured devices, as commercial tools supporting fault evaluation do not usually provide

  9. BLAST in Gid (BiG): A Grid-Enabled Software Architecture and Implementation of Parallel and Sequential BLAST

    International Nuclear Information System (INIS)

    Aparicio, G.; Blanquer, I.; Hernandez, V.; Segrelles, D.

    2007-01-01

    The integration of high-performance computing tools is a key issue in biomedical research. Many computer-based applications with demanding computing and storage needs, such as BLAST, have been migrated to high-performance computers. However, the use of clusters and computing farms presents scalability problems. A higher layer of parallelism that splits the task into highly independent long jobs executable in parallel can improve performance while maintaining efficiency. Grid technologies combined with parallel computing resources are an important enabling technology. This work presents a software architecture for executing BLAST on an international Grid infrastructure that guarantees security, scalability and fault tolerance. The architecture is modular and adaptable to many other high-throughput applications, both inside and outside the field of biocomputing. (Author)

  10. Quaternary Fault Lines

    Data.gov (United States)

    Department of Homeland Security — This data set contains locations and information on faults and associated folds in the United States that are believed to be sources of M>6 earthquakes during the...

  11. Stress near geometrically complex strike-slip faults - Application to the San Andreas fault at Cajon Pass, southern California

    Science.gov (United States)

    Saucier, Francois; Humphreys, Eugene; Weldon, Ray, II

    1992-01-01

    A model is presented to rationalize the state of stress near a geometrically complex major strike-slip fault. Slip on such a fault creates residual stresses that, with the occurrence of several slip events, can dominate the stress field near the fault. The model is applied to the San Andreas fault near Cajon Pass. The results are consistent with the geological features, seismicity, the existence of left-lateral stress on the Cleghorn fault, and the in situ stress orientation in the scientific well, found to be sinistral when resolved on a plane parallel to the San Andreas fault. It is suggested that the creation of residual stresses caused by slip on a wiggly San Andreas fault is the dominating process there.

  12. Problems with earth fault detecting relays assigned to parallel cables or overhead lines; Probleme bei der Erdschlussortung mit wattmetrischen Erdschlussrichtungsrelais bei parallelen Kabeln oder Leitungen

    Energy Technology Data Exchange (ETDEWEB)

    Birkner, P.; Foerg, R. [Lech-Elektrizitaetswerke AG, Augsburg (Germany)

    1998-06-29

    In practice, currents can flow in buried electrical conductors such as cable sheaths earthed at both ends. Such currents may originate, for example, from the railway traction supply or from the alternating current of a Petersen coil seeking the path of minimum resistance from the transformer station to the location of the earth fault. These currents induce a series voltage in the cable through inductive coupling; its magnitude depends on the type and length of the cable. The series voltages of the three phases form a zero-sequence system. Where two cable systems run in parallel, circulating zero-sequence currents can arise under certain circumstances. In addition, a displacement voltage exists between the neutral point and earth when an earth fault occurs elsewhere in the grid. The combination of these two factors can cause the earth fault detecting relays assigned to the parallel cable system to malfunction. (orig.)

  13. Research of influence of open-winding faults on properties of brushless permanent magnets motor

    Science.gov (United States)

    Bogusz, Piotr; Korkosz, Mariusz; Powrózek, Adam; Prokop, Jan; Wygonik, Piotr

    2017-12-01

    The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or a few parallel branches, operation of the motor can be continued. Waveforms of currents, voltages and electromagnetic torque were determined for the discussed fault states based on the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.

  14. Research of influence of open-winding faults on properties of brushless permanent magnets motor

    Directory of Open Access Journals (Sweden)

    Bogusz Piotr

    2017-12-01

    Full Text Available The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or a few parallel branches, operation of the motor can be continued. Waveforms of currents, voltages and electromagnetic torque were determined for the discussed fault states based on the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.

  15. Active strike-slip faulting in El Salvador, Central America

    Science.gov (United States)

    Corti, Giacomo; Carminati, Eugenio; Mazzarini, Francesco; Oziel Garcia, Marvyn

    2005-12-01

    Several major earthquakes have affected El Salvador, Central America, during the past 100 yr as a consequence of oblique subduction of the Cocos plate under the Caribbean plate, which is partitioned between trench-orthogonal compression and strike-slip deformation parallel to the volcanic arc. Focal mechanisms and the distribution of the most destructive earthquakes, together with geomorphologic evidence, suggest that this transcurrent component of motion may be accommodated by a major strike-slip fault (El Salvador fault zone). We present field geological, structural, and geomorphological data collected in central El Salvador that allow us to constrain the kinematics and the Quaternary activity of this major seismogenic strike-slip fault system. Data suggest that the El Salvador fault zone consists of at least two main ˜E-W fault segments (San Vicente and Berlin segments), with associated secondary synthetic (WNW-ESE) and antithetic (NNW-SSE) Riedel shears and NW-SE tensional structures. The two main fault segments overlap in a dextral en echelon style with the formation of an intervening pull-apart basin. Our original geological and geomorphologic data suggest a late Pleistocene-Holocene slip rate of ˜11 mm/yr along the Berlin segment, in contrast with low historical seismicity. The kinematics and rates of deformation suggested by our new data are consistent with models involving slip partitioning during oblique subduction, and support the notion that a trench-parallel component of motion between the Caribbean and Cocos plates is concentrated along E-W dextral strike-slip faults parallel to the volcanic arc.

  16. Middle Miocene E-W tectonic horst structure of Crete through extensional detachment faults

    International Nuclear Information System (INIS)

    Papanikolaou, D; Vassilakis, E

    2008-01-01

    Two east-west trending extensional detachment faults have been recognized in Crete, one with top-to-the-north motion of the hanging wall toward the Cretan Sea and one with top-to-the-south motion of the hanging wall toward the Libyan Sea. The east-west trending zone between these two detachment faults, which forms their common footwall, comprises a tectonic horst formed during Middle Miocene slip on the detachment faults. The detachment faults disrupt the overall tectono-stratigraphic succession of Crete and are localized along pre-existing thrust faults and along particular portions of the stratigraphic sequence, including the transition between the Permo-Triassic Tyros Beds and the base of the Upper Triassic-Eocene carbonate platform of the Tripolis nappe. By recognizing several different tectono-stratigraphic formations within what is generally termed the 'phyllite-quartzite', it is possible to distinguish these extensional detachment faults from thrust faults and minor discontinuities in the sequence. The deformation history of units within Crete can be summarized as: (i) compressional deformation producing arc-parallel east-west trending south-directed thrust faults in Oligocene to Early Miocene time; (ii) extensional deformation along arc-parallel, east-west trending detachment faults in Middle Miocene time, with hanging wall motion to the north and south; (iii) Late Miocene-Quaternary extensional deformation along high-angle normal and oblique normal faults that disrupt the older arc-parallel structures.

  17. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
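
    As a concrete illustration of the reconstruction problem parallel imaging solves, the sketch below performs the SENSE-style unfolding step for a single aliased pixel location, assuming uniform undersampling by a factor R and known complex coil sensitivities; g-factor and regularization issues discussed in the review are ignored, and the data are synthetic.

```python
# SENSE unfolding for one aliased location: solve sens @ rho = aliased in the
# least-squares sense for the R overlapping true pixel values.
import numpy as np

def sense_unfold(sens, aliased):
    rho, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
    return rho

rng = np.random.default_rng(1)
R, n_coils = 2, 4
true_pixels = np.array([1.0 + 0.5j, 0.3 - 0.2j])          # the R overlapping pixels
sens = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
aliased = sens @ true_pixels                               # noise-free folding model
print(np.allclose(sense_unfold(sens, aliased), true_pixels))   # True
```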

  18. Motion in the north Iceland volcanic rift zone accommodated by bookshelf faulting

    Science.gov (United States)

    Green, Robert G.; White, Robert S.; Greenfield, Tim

    2014-01-01

    Along mid-ocean ridges the extending crust is segmented on length scales of 10-1,000km. Where rift segments are offset from one another, motion between segments is accommodated by transform faults that are oriented orthogonally to the main rift axis. Where segments overlap, non-transform offsets with a variety of geometries accommodate shear motions. Here we use micro-seismic data to analyse the geometries of faults at two overlapping rift segments exposed on land in north Iceland. Between the rift segments, we identify a series of faults that are aligned sub-parallel to the orientation of the main rift. These faults slip through left-lateral strike-slip motion. Yet, movement between the overlapping rift segments is through right-lateral motion. Together, these motions induce a clockwise rotation of the faults and intervening crustal blocks in a motion that is consistent with a bookshelf-faulting mechanism, named after its resemblance to a tilting row of books on a shelf. The faults probably reactivated existing crustal weaknesses, such as dyke intrusions, that were originally oriented parallel to the main rift and have since rotated about 15° clockwise. Reactivation of pre-existing, rift-parallel weaknesses contrasts with typical mid-ocean ridge transform faults and is an important illustration of a non-transform offset accommodating shear motion between overlapping rift segments.

  19. Fault zone processes in mechanically layered mudrock and chalk

    Science.gov (United States)

    Ferrill, David A.; Evans, Mark A.; McGinnis, Ronald N.; Morris, Alan P.; Smart, Kevin J.; Wigginton, Sarah S.; Gulliver, Kirk D. H.; Lehrmann, Daniel; de Zoeten, Erich; Sickmann, Zach

    2017-04-01

    A 1.5 km long natural cliff outcrop of nearly horizontal Eagle Ford Formation in south Texas exposes northwest and southeast dipping normal faults with displacements of 0.01-7 m cutting mudrock, chalk, limestone, and volcanic ash. These faults provide analogs for both natural and hydraulically-induced deformation in the productive Eagle Ford Formation - a major unconventional oil and gas reservoir in south Texas, U.S.A. - and other mechanically layered hydrocarbon reservoirs. Fault dips are steep to vertical through chalk and limestone beds, and moderate through mudrock and clay-rich ash, resulting in refracted fault profiles. Steeply dipping fault segments contain rhombohedral calcite veins that cross the fault zone obliquely, parallel to shear segments in mudrock. The vertical dimensions of the calcite veins correspond to the thickness of offset competent beds with which they are contiguous, and the slip parallel dimension is proportional to fault displacement. Failure surface characteristics, including mixed tensile and shear segments, indicate hybrid failure in chalk and limestone, whereas shear failure predominates in mudrock and ash beds - these changes in failure mode contribute to variation in fault dip. Slip on the shear segments caused dilation of the steeper hybrid segments. Tabular sheets of calcite grew by repeated fault slip, dilation, and cementation. Fluid inclusion and stable isotope geochemistry analyses of fault zone cements indicate episodic reactivation at 1.4-4.2 km depths. The results of these analyses document a dramatic bed-scale lithologic control on fault zone architecture that is directly relevant to the development of porosity and permeability anisotropy along faults.

  20. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    Science.gov (United States)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    Securing video data for network transmission requires cryptography to make the data unreadable to unauthorized users. In this paper, we propose a holographic-frame encryption technique based on a cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm is used to address the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.
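
    A minimal sketch of pixel-permutation encryption and decryption is given below; a keyed pseudo-random permutation stands in for the cellular-automata-generated sequence used by the authors, so it only illustrates the scrambling and unscrambling mechanics.

```python
# Keyed pixel-permutation of a frame and its exact inverse.
import numpy as np

def permute_frame(frame, key, inverse=False):
    flat = frame.ravel()
    perm = np.random.default_rng(key).permutation(flat.size)
    if inverse:
        out = np.empty_like(flat)
        out[perm] = flat          # undo the scrambling
    else:
        out = flat[perm]          # scramble pixel positions
    return out.reshape(frame.shape)

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)   # one small "holographic frame"
cipher = permute_frame(frame, key=1234)
restored = permute_frame(cipher, key=1234, inverse=True)
assert np.array_equal(restored, frame)
```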

  1. The Analysis of The Fault of Electrical Power Steering

    Directory of Open Access Journals (Sweden)

    Zhang Li Wen

    2016-01-01

    Full Text Available This paper analyses the common fault types of the primary Electrical Power Steering (EPS) system and classifies each fault, providing a basis for further troubleshooting and maintenance. It also proposes a practical fault-tolerant operating principle that makes the EPS system more secure and durable.

  2. Dynamical instability produces transform faults at mid-ocean ridges.

    Science.gov (United States)

    Gerya, Taras

    2010-08-27

    Transform faults at mid-ocean ridges--one of the most striking, yet enigmatic features of terrestrial plate tectonics--are considered to be the inherited product of preexisting fault structures. Ridge offsets along these faults therefore should remain constant with time. Here, numerical models suggest that transform faults are actively developing and result from dynamical instability of constructive plate boundaries, irrespective of previous structure. Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become transform faults within a few million years. Fracture-related rheological weakening stabilizes ridge-parallel detachment faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps.

  3. Seismic anisotropy in the vicinity of the Alpine fault, New Zealand, estimated by seismic interferometry

    Science.gov (United States)

    Takagi, R.; Okada, T.; Yoshida, K.; Townend, J.; Boese, C. M.; Baratin, L. M.; Chamberlain, C. J.; Savage, M. K.

    2016-12-01

    We estimate shear wave velocity anisotropy in the shallow crust near the Alpine fault using seismic interferometry of borehole vertical arrays. We utilized four borehole observations: two sensors are deployed in two boreholes of the Deep Fault Drilling Project on the hanging wall side, and the other two sites are located on the footwall side. Surface sensors deployed just above each borehole are used to make vertical arrays. Cross-correlating rotated horizontal seismograms observed by the borehole and surface sensors, we extracted polarized shear waves propagating from the bottom to the surface of each borehole. The extracted shear waves show a polarization-angle dependence of travel time, indicating shear wave anisotropy between the two sensors. On the hanging wall side, the estimated fast shear wave directions are parallel to the Alpine fault. Strong anisotropy of 20% is observed at the site within 100 m of the Alpine fault. The hanging wall consists of mylonite and schist characterized by fault-parallel foliation. In addition, acoustic borehole imaging reveals fractures parallel to the Alpine fault. The fault-parallel anisotropy suggests that structural anisotropy is predominant in the hanging wall, demonstrating the consistency of geological and seismological observations. On the footwall side, on the other hand, the angle between the fast direction and the strike of the Alpine fault is 33-40 degrees. Since the footwall is composed of granitoid that may not have planar structure, stress-induced anisotropy is possibly predominant. The direction of maximum horizontal stress (SHmax) estimated from focal mechanisms of regional earthquakes is at 55 degrees to the Alpine fault. A possible interpretation of the difference between the fast direction and the SHmax direction is a rotation of the stress field with depth near the Alpine fault. A similar depth rotation of the stress field is also observed in the SAFOD borehole at the San Andreas fault.

  4. Field characterization of elastic properties across a fault zone reactivated by fluid injection

    Science.gov (United States)

    Jeanne, Pierre; Guglielmi, Yves; Rutqvist, Jonny; Nussbaum, Christophe; Birkholzer, Jens

    2017-08-01

    We studied the elastic properties of a fault zone intersecting the Opalinus Clay formation at 300 m depth in the Mont Terri Underground Research Laboratory (Switzerland). Four controlled water injection experiments were performed in borehole straddle intervals set at successive locations across the fault zone. A three-component displacement sensor, which allowed capturing the borehole wall movements during injection, was used to estimate the elastic properties of representative locations across the fault zone, from the host rock to the damage zone to the fault core. Young's moduli were estimated by both an analytical approach and numerical finite difference modeling. Results show a decrease in Young's modulus from the host rock to the damage zone by a factor of 5 and from the damage zone to the fault core by a factor of 2. In the host rock, our results are in reasonable agreement with laboratory data showing a strong elastic anisotropy characterized by the direction of the plane of isotropy parallel to the laminar structure of the shale formation. In the fault zone, strong rotations of the direction of anisotropy can be observed. The plane of isotropy can be oriented either parallel to bedding (when few discontinuities are present), parallel to the direction of the main fracture family intersecting the zone, and possibly oriented parallel or perpendicular to the fractures critically oriented for shear reactivation (when repeated past rupture along this plane has created a zone).

  5. Advanced Computational Methods for Security Constrained Financial Transmission Rights: Structure and Parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Elbert, Stephen T.; Kalsi, Karanjit; Vlachopoulou, Maria; Rice, Mark J.; Glaesemann, Kurt R.; Zhou, Ning

    2012-07-26

    Financial Transmission Rights (FTRs) help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, a novel non-linear dynamical system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on large-scale systems using data from the Western Electricity Coordinating Council (WECC). The NDS is demonstrated to outperform the widely used CPLEX algorithms while exhibiting superior scalability. Furthermore, the NDS based solver can be easily parallelized which results in significant computational improvement.
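
    The optimization problem described above is, at its core, a welfare-maximizing allocation under line-flow security constraints. The toy linear-programming baseline below is the kind of formulation the NDS solver is benchmarked against; the bids, power-transfer distribution factors (PTDFs) and limits are entirely made up for illustration, and this is not the NDS method or the WECC data.

```python
# Toy FTR auction: maximize bid-weighted awarded MW subject to line-flow limits.
import numpy as np
from scipy.optimize import linprog

bids = np.array([30.0, 25.0, 18.0])        # $/MW offered for three FTR paths (assumed)
ptdf = np.array([[0.6, -0.3, 0.1],         # flow on line 1 per MW of each FTR (assumed)
                 [0.2,  0.5, 0.4]])        # flow on line 2 per MW of each FTR (assumed)
line_limits = np.array([100.0, 80.0])      # MW security limits (assumed)

# linprog minimizes, so negate the welfare objective; awards are bounded per FTR
res = linprog(c=-bids,
              A_ub=np.vstack([ptdf, -ptdf]),            # |flow| <= limit
              b_ub=np.concatenate([line_limits, line_limits]),
              bounds=[(0.0, 200.0)] * 3,
              method="highs")
print(res.x, -res.fun)                      # awarded MW per FTR and total welfare
```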

  6. Analysis of Program Obfuscation Schemes with Variable Encoding Technique

    Science.gov (United States)

    Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi

    Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
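
    The general idea of encoding variables with XOR can be illustrated as follows; this is a minimal sketch of the principle, not the authors' exact transformation, and the key value and helper names are invented for the example.

```python
# A secret variable is stored only in encoded form x ^ KEY; operations are
# rewritten to work on the encoded value.
KEY = 0xA5

def encode(x):
    return x ^ KEY

def add_const_encoded(x_enc, c):
    # decode, operate, re-encode -- a real obfuscator would fold this into a
    # single expression so the plain value is not exposed as clearly as here
    return ((x_enc ^ KEY) + c) ^ KEY

x_enc = encode(42)
x_enc = add_const_encoded(x_enc, 8)
assert (x_enc ^ KEY) == 50
```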

  7. Structural setting and kinematics of Nubian fault system, SE Western Desert, Egypt: An example of multi-reactivated intraplate strike-slip faults

    Science.gov (United States)

    Sakran, Shawky; Said, Said Mohamed

    2018-02-01

    Detailed surface geological mapping and subsurface seismic interpretation have been integrated to unravel the structural style and kinematic history of the Nubian Fault System (NFS). The NFS consists of several E-W Principal Deformation Zones (PDZs) (e.g. Kalabsha fault). Each PDZ is defined by spectacular E-W, WNW and ENE dextral strike-slip faults, NNE sinistral strike-slip faults, NE to ENE folds, and NNW normal faults. Each fault zone has typical self-similar strike-slip architecture comprising multi-scale fault segments. Several multi-scale uplifts and basins were developed at the step-over zones between parallel strike-slip fault segments as a result of local extension or contraction. The NNE faults consist of right-stepping sinistral strike-slip fault segments (e.g. Sin El Kiddab fault). The NNE sinistral faults extend for long distances ranging from 30 to 100 kms and cut one or two E-W PDZs. Two nearly perpendicular strike-slip tectonic regimes are recognized in the NFS; an inactive E-W Late Cretaceous - Early Cenozoic dextral transpression and an active NNE sinistral shear.

  8. Bookshelf faulting and transform motion between rift segments of the Northern Volcanic Zone, Iceland

    Science.gov (United States)

    Green, R. G.; White, R. S.; Greenfield, T. S.

    2013-12-01

    Plate spreading is segmented on length scales from 10 - 1,000 kilometres. Where spreading segments are offset, extensional motion has to transfer from one segment to another. In classical plate tectonics, mid-ocean ridge spreading centres are offset by transform faults, but smaller 'non-transform' offsets exist between slightly overlapping spreading centres which accommodate shear by a variety of geometries. In Iceland the mid-Atlantic Ridge is raised above sea level by the Iceland mantle plume, and is divided into a series of segments 20-150 km long. Using microseismicity recorded by a temporary array of 26 three-component seismometers during 2009-2012 we map bookshelf faulting between the offset Askja and Kverkfjöll rift segments in north Iceland. The micro-earthquakes delineate a series of sub-parallel strike-slip faults. Well constrained fault plane solutions show consistent left-lateral motion on fault planes aligned closely with epicentral trends. The shear couple across the transform zone causes left-lateral slip on the series of strike-slip faults sub-parallel to the rift fabric, causing clockwise rotations about a vertical axis of the intervening rigid crustal blocks. This accommodates the overall right-lateral transform motion in the relay zone between the two overlapping volcanic rift segments. The faults probably reactivated crustal weaknesses along the dyke intrusion fabric (parallel to the rift axis) and have since rotated ˜15° clockwise into their present orientation. The reactivation of pre-existing rift-parallel weaknesses is in contrast with mid-ocean ridge transform faults, and is an important illustration of a 'non-transform' offset accommodating shear between overlapping spreading segments.

  9. A Study of Interactions Between Thrust and Strike-slip Faults

    Directory of Open Access Journals (Sweden)

    Jeng-Cheng Wang

    2013-01-01

    Full Text Available A 3-D finite difference method is applied in this study to investigate spontaneous rupture within a fault system that includes a primary thrust fault and two strike-slip sub-faults. When rupture occurs on a fault, the rupture condition follows Coulomb's friction law, with the stress-slip relation obeying a slip-weakening fracture criterion. To overcome the geometrical complexity of such a system, the finite difference method is encoded in two different coordinate systems; the calculated displacements are then connected between the two systems using a 2-D interpolation technique. The rupture is initiated at the center of the main fault under the compression of regional tectonic stresses and then propagates to the boundaries, whereby the main fault rupture triggers the two strike-slip sub-faults. Simulation results suggest that the triggering of the two sub-faults is attributable to two primary factors: the regional tectonic stresses and the relative distances between the two sub-faults and the main fault.

  10. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...
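
    In the spirit of the XOR-based constructions mentioned above (though not one of the paper's actual schemes), the sketch below stores two data blocks across four nodes so that any single erased node can be rebuilt by XOR and any single eavesdropped node reveals nothing, thanks to a uniformly random key node.

```python
# Toy XOR-based secure layout: key node, two key-padded data nodes, XOR parity.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(d1, d2):
    k = os.urandom(len(d1))                 # key node (uniformly random)
    n1, n2 = xor(d1, k), xor(d2, k)         # padded data nodes
    parity = xor(xor(k, n1), n2)            # XOR parity node
    return [k, n1, n2, parity]

def rebuild(nodes, lost):
    """Recover the single erased node `lost` from the other three."""
    others = [n for i, n in enumerate(nodes) if i != lost and n is not None]
    return xor(xor(others[0], others[1]), others[2])

d1, d2 = b"block A!", b"block B?"
nodes = encode(d1, d2)
missing, nodes[1] = nodes[1], None
assert rebuild(nodes, lost=1) == missing    # erased node restored
assert xor(missing, nodes[0]) == d1         # decoding d1 needs key node + data node
```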

  11. Automated fault tree analysis: the GRAFTER system

    International Nuclear Information System (INIS)

    Sancaktar, S.; Sharp, D.R.

    1985-01-01

    An inherent part of probabilistic risk assessment (PRA) is the construction and analysis of detailed fault trees. For this purpose, a fault tree computer graphics code named GRAFTER has been developed. The code system centers around the GRAFTER code. This code is used interactively to construct, store, update and print fault trees of small or large sizes. The SIMON code is used to provide data for the basic event probabilities. ENCODE is used to process the GRAFTER files to prepare input for the WAMCUT code. WAMCUT is used to quantify the top event probability and to identify the cutsets. This code system has been extensively used in various PRA projects. It has resulted in reduced manpower costs, increased QA capability, ease of documentation and it has simplified sensitivity analyses. Because of its automated nature, it is also suitable for LIVING PRA Studies which require updating and modifications during the lifetime of the plant. Brief descriptions and capabilities of the GRAFTER, SIMON and ENCODE codes are provided; an application of the GRAFTER system is outlined; and conclusions and comments on the code system are given

  12. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    Science.gov (United States)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.

  13. New evidence on the state of stress of the san andreas fault system.

    Science.gov (United States)

    Zoback, M D; Zoback, M L; Mount, V S; Suppe, J; Eaton, J P; Healy, J H; Oppenheimer, D; Reasenberg, P; Jones, L; Raleigh, C B; Wong, I G; Scotti, O; Wentworth, C

    1987-11-20

    Contemporary in situ tectonic stress indicators along the San Andreas fault system in central California show northeast-directed horizontal compression that is nearly perpendicular to the strike of the fault. Such compression explains recent uplift of the Coast Ranges and the numerous active reverse faults and folds that trend nearly parallel to the San Andreas and that are otherwise unexplainable in terms of strike-slip deformation. Fault-normal crustal compression in central California is proposed to result from the extremely low shear strength of the San Andreas and the slightly convergent relative motion between the Pacific and North American plates. Preliminary in situ stress data from the Cajon Pass scientific drill hole (located 3.6 kilometers northeast of the San Andreas in southern California near San Bernardino, California) are also consistent with a weak fault, as they show no right-lateral shear stress at approximately 2-kilometer depth on planes parallel to the San Andreas fault.

  14. Passive fault current limiting device

    Science.gov (United States)

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux density produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

  15. An improved low-voltage ride-through performance of DFIG based wind plant using stator dynamic composite fault current limiter.

    Science.gov (United States)

    Gayen, P K; Chatterjee, D; Goswami, S K

    2016-05-01

    In this paper, enhanced low-voltage ride-through (LVRT) performance of a grid-connected doubly fed induction generator (DFIG) is presented using a stator dynamic composite fault current limiter (SDCFCL). This protection circuit comprises a suitable series resistor-inductor combination and a parallel bidirectional semiconductor switch. The SDCFCL provides two benefits: a reduction of the rotor induced open-circuit voltage due to the increased total stator inductance, and a concurrent increase of the rotor impedance. Both effects limit rotor-circuit overcurrent and overvoltage more securely than conventional schemes such as the dynamic rotor current limiter (RCL) during any type of fault. The proposed concept is validated through a simulation study of a grid-integrated 2.0 MW DFIG. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Self-triggering superconducting fault current limiter

    Science.gov (United States)

    Yuan, Xing [Albany, NY; Tekletsadik, Kasegn [Rexford, NY

    2008-10-21

    A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state and also generate a uniform magnetic field in the trigger coil and simultaneously limit the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors and significantly reduces the burnout risk associated with non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

  17. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    Science.gov (United States)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long trench-parallel right-lateral strike-slip fault accommodating the lateral slip component of the oblique subduction of the Philippine Sea plate beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ, and its geomorphic and geological characteristics change remarkably along strike. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ, and a pop-up structure in the eastern part; this contrast is what we call "scissoring fault properties". We can point out two main factors that form these scissoring fault properties along the MTLFZ. One is the regional stress condition, and the other is the preexisting fault. The direction of σ1 rotates anticlockwise from N170°E [Famin et al., 2014] in the eastern Shikoku to Kinki areas, to N100°E [Research Group for Crustal Stress in Western Japan, 1980] in central Shikoku, to N85°E [Onishi et al., 2016] in western Shikoku. Following this rotation of principal stress directions, the western and eastern parts of the MTLFZ lie in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved through a long active history, with the fault style changing variously between left-lateral, thrust, normal and right-lateral. Where a preexisting fault is present, the rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45° dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle dipping (thrust) fault, though strike-slip faults generally have near-vertical planes [Avouac et al., 2014]. In this presentation, we, firstly, show deep subsurface

  18. Possible origin and significance of extension-parallel drainages in Arizona's metamorphic core complexes

    Science.gov (United States)

    Spencer, J.E.

    2000-01-01

    The corrugated form of the Harcuvar, South Mountains, and Catalina metamorphic core complexes in Arizona reflects the shape of the middle Tertiary extensional detachment fault that projects over each complex. Corrugation axes are approximately parallel to the fault-displacement direction and to the footwall mylonitic lineation. The core complexes are locally incised by enigmatic, linear drainages that parallel corrugation axes and the inferred extension direction and are especially conspicuous on the crests of antiformal corrugations. These drainages have been attributed to erosional incision on a freshly denuded, planar, inclined fault ramp followed by folding that elevated and preserved some drainages on the crests of rising antiforms. According to this hypothesis, corrugations were produced by folding after subaerial exposure of detachment-fault footwalls. An alternative hypothesis, proposed here, is as follows. In a setting where preexisting drainages cross an active normal fault, each fault-slip event will cut each drainage into two segments separated by a freshly denuded fault ramp. The upper and lower drainage segments will remain hydraulically linked after each fault-slip event if the drainage in the hanging-wall block is incised, even if the stream is on the flank of an antiformal corrugation and there is a large component of strike-slip fault movement. Maintenance of hydraulic linkage during sequential fault-slip events will guide the lengthening stream down the fault ramp as the ramp is uncovered, and stream incision will form a progressively lengthening, extension-parallel, linear drainage segment. This mechanism for linear drainage genesis is compatible with corrugations as original irregularities of the detachment fault, and does not require folding after early to middle Miocene footwall exhumations. This is desirable because many drainages are incised into nonmylonitic crystalline footwall rocks that were probably not folded under low

  19. A low-angle detachment fault revealed: Three-dimensional images of the S-reflector fault zone along the Galicia passive margin

    Science.gov (United States)

    Schuba, C. Nur; Gray, Gary G.; Morgan, Julia K.; Sawyer, Dale S.; Shillington, Donna J.; Reston, Tim J.; Bull, Jonathan M.; Jordan, Brian E.

    2018-06-01

    A new 3-D seismic reflection volume over the Galicia margin continent-ocean transition zone provides an unprecedented view of the prominent S-reflector detachment fault that underlies the outer part of the margin. This volume images the fault's structure from breakaway to termination. The filtered time-structure map of the S-reflector shows coherent corrugations parallel to the expected paleo-extension directions with an average azimuth of 107°. These corrugations maintain their orientations, wavelengths and amplitudes where overlying faults sole into the S-reflector, suggesting that the parts of the detachment fault containing multiple crustal blocks may have slipped as discrete units during its late stages. Another interface above the S-reflector, here named S′, is identified and interpreted as the upper boundary of the fault zone associated with the detachment fault. This layer, named the S-interval, thickens by tens of meters from SE to NW in the direction of transport. Localized thick accumulations also occur near overlying fault intersections, suggesting either non-uniform fault rock production, or redistribution of fault rock during slip. These observations have important implications for understanding how detachment faults form and evolve over time. 3-D seismic reflection imaging has enabled unique insights into fault slip history, fault rock production and redistribution.

  20. Entanglement enhances security in quantum communication

    International Nuclear Information System (INIS)

    Demkowicz-Dobrzanski, Rafal; Sen, Aditi; Sen, Ujjwal; Lewenstein, Maciej

    2009-01-01

    Secret sharing is a protocol in which a 'boss' wants to send a classical message secretly to two 'subordinates', such that none of the subordinates is able to know the message alone, while they can find it if they cooperate. Quantum mechanics is known to allow for such a possibility. We analyze tolerable quantum bit error rates in such secret sharing protocols in the physically relevant case when the eavesdropping is local with respect to the two channels of information transfer from the boss to the two subordinates. We find that using entangled encoding states is advantageous to legitimate users of the protocol. We therefore find that entanglement is useful for secure quantum communication. We also find that bound entangled states with positive partial transpose are not useful as a local eavesdropping resource. Moreover, we provide a criterion for security in secret sharing--a parallel of the Csiszar-Koerner criterion in single-receiver classical cryptography.

  1. Deformed Fluvial Terraces of Little Rock Creek Capture Off-Fault Strain Adjacent to the Mojave Section of the San Andreas Fault

    Science.gov (United States)

    Moulin, A.; Scharer, K. M.; Cowgill, E.

    2017-12-01

    Examining discrepancies between geodetic and geomorphic slip-rates along major strike-slip faults is essential for understanding both fault behavior and seismic hazard. Recent work on major strike-slip faults has highlighted off-fault deformation and its potential impact on fault slip rates. However, the extent of off-fault deformation along the San Andreas Fault (SAF) remains largely uncharacterized. Along the Mojave section of the SAF, Little Rock Creek drains from south to north across the fault and has cut into alluvial terraces abandoned between 15 and 30 ka. The surfaces offer a rare opportunity to both characterize how right-lateral slip has accumulated along the SAF over hundreds of seismic cycles, and investigate potential off-fault deformation along secondary structures, where strain accumulates at slower rates. Here we use both field observations and DEM analysis of B4 lidar data to map alluvial and tectonic features, including 9 terrace treads that stand up to 80 m above the modern channel. We interpret the abandonment and preservation of the fluvial terraces to result from episodic capture of Little Rock Creek through gaps in a shutter ridge north of the fault, followed by progressive right deflection of the river course during dextral slip along the SAF. Piercing lines defined by fluvial terrace risers suggest that the amount of right slip since riser formation ranges from 400 m for the 15 ka riser to 1200 m for the 30 ka riser. Where they are best-preserved NE of the SAF, terraces are also cut by NE-facing scarps that trend parallel to the SAF in a zone extending up to 2 km from the main fault. Exposures indicate these are fault scarps, with both reverse and normal stratigraphic separation. Geomorphic mapping reveals deflections of both channel and terrace risers (up to 20 m) along some of those faults suggesting they could have accommodated a component of right-lateral slip. We estimated the maximum total amount of strike-slip motion recorded by the

  2. Open-Phase Fault Tolerance Techniques of Five-Phase Dual-Rotor Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    2015-11-01

    Full Text Available Multi-phase motors are gaining more attention due to the advantages of good fault tolerance capability and high power density, etc. By applying dual-rotor technology to multi-phase machines, a five-phase dual-rotor permanent magnet synchronous motor (DRPMSM) is researched in this paper to further improve torque density and fault tolerance capability. It has two rotors and two sets of stator windings, and it can adopt a series drive mode or parallel drive mode. The fault-tolerance capability of the five-phase DRPMSM is researched. All open circuit fault types and corresponding fault tolerance techniques in different drive modes are analyzed. A fault-tolerance control strategy of injecting currents containing a certain third harmonic component is proposed for the five-phase DRPMSM to ensure performance after faults in the motor or drive circuit. For adjacent double-phase faults in the motor, based on where the additional degrees of freedom are used, two different fault-tolerance current calculation schemes are adopted and the torque results are compared. Decoupling of the inner motor and outer motor is investigated under fault-tolerant conditions in parallel drive mode. The finite element analysis (FEA) results and co-simulation results based on Simulink-Simplorer-Maxwell verify the effectiveness of the techniques.
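    The record does not spell out the fault-tolerant current calculation, and the sketch below is not the authors' third-harmonic injection strategy. It shows a generic alternative often used for open-phase faults: with one phase open, solve for the remaining phase currents that keep the fundamental MMF (current space vector) equal to its healthy value while minimizing copper loss, i.e. the minimum-norm solution of an underdetermined linear system. Amplitude, frequency and the faulted phase are assumptions.

```python
# Generic open-phase fault-tolerance sketch for a five-phase winding (not the
# paper's third-harmonic scheme): keep the fundamental MMF equal to the healthy
# value with minimum copper loss, solved per time instant as a minimum-norm
# solution of an underdetermined system.
import numpy as np

angles = 2 * np.pi * np.arange(5) / 5        # spatial angles of the 5 phases
I, w = 10.0, 2 * np.pi * 50                  # assumed amplitude [A] and frequency

def fault_tolerant_currents(t, open_phase=0):
    healthy = I * np.cos(w * t - angles)                 # healthy phase currents
    target_a = healthy @ np.cos(angles)                  # MMF projections to keep
    target_b = healthy @ np.sin(angles)
    keep = [k for k in range(5) if k != open_phase]
    A = np.vstack([np.ones(4),                           # zero neutral current
                   np.cos(angles[keep]),
                   np.sin(angles[keep])])
    b = np.array([0.0, target_a, target_b])
    i_keep, *_ = np.linalg.lstsq(A, b, rcond=None)       # minimum-norm solution
    currents = np.zeros(5)
    currents[keep] = i_keep
    return currents

print(fault_tolerant_currents(t=0.002))                  # phase 0 stays at zero
```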

  3. Enhancing Security of Double Random Phase Encoding Based on Random S-Box

    Science.gov (United States)

    Girija, R.; Singh, Hukum

    2018-06-01

    In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Because an S-Box used on its own is not reliable and DRPE does not provide non-linearity, our system unites the effectiveness of the S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box in our DRPE system. The randomness and scalability achieved by the applied technique are an additional feature of the proposed solution. The strength of the random S-Box is investigated in terms of performance parameters such as non-linearity, the strict avalanche criterion, the bit independence criterion, linear and differential approximation probabilities, etc. S-Boxes convey nonlinearity to cryptosystems, which is a significant parameter and essential for DRPE. The strength of the proposed cryptosystem has been analysed using various parameters such as MSE, PSNR, correlation coefficient analysis, noise analysis, SVD analysis, etc. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.
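    For readers unfamiliar with the DRPE stage that the proposed S-Box is combined with, the sketch below shows classical Fourier-domain double random phase encoding only; the S-Box layer and the asymmetric key handling of the paper are not reproduced, and the image is a random stand-in.

```python
# Classical double random phase encoding in the Fourier domain. The two
# unit-modulus phase masks m1 (input plane) and m2 (Fourier plane) act as keys.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                          # stand-in for a plaintext image

m1 = np.exp(2j * np.pi * rng.random(img.shape))     # input-plane phase mask
m2 = np.exp(2j * np.pi * rng.random(img.shape))     # Fourier-plane phase mask

cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)   # encryption

# Decryption with the correct keys recovers the image (up to numerical error)
recovered = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))
print(np.allclose(recovered, img))                  # True
```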

  4. Unexpected earthquake hazard revealed by Holocene rupture on the Kenchreai Fault (central Greece): Implications for weak sub-fault shear zones

    Science.gov (United States)

    Copley, Alex; Grützner, Christoph; Howell, Andy; Jackson, James; Penney, Camilla; Wimpenny, Sam

    2018-03-01

    High-resolution elevation models, palaeoseismic trenching, and Quaternary dating demonstrate that the Kenchreai Fault in the eastern Gulf of Corinth (Greece) has ruptured in the Holocene. Along with the adjacent Pisia and Heraion Faults (which ruptured in 1981), our results indicate the presence of closely-spaced and parallel normal faults that are simultaneously active, but at different rates. Such a configuration allows us to address one of the major questions in understanding the earthquake cycle, specifically what controls the distribution of interseismic strain accumulation? Our results imply that the interseismic loading and subsequent earthquakes on these faults are governed by weak shear zones in the underlying ductile crust. In addition, the identification of significant earthquake slip on a fault that does not dominate the late Quaternary geomorphology or vertical coastal motions in the region provides an important lesson in earthquake hazard assessment.

  5. Observer-Based Fault Estimation and Accomodation for Dynamic Systems

    CERN Document Server

    Zhang, Ke; Shi, Peng

    2013-01-01

    Due to the increasing security and reliability demands of actual industrial process control systems, the study of fault diagnosis and fault-tolerant control of dynamic systems has received considerable attention. Fault accommodation (FA) is one of the effective methods that can be used to enhance system stability and reliability, so it has been widely and deeply investigated and has become a hot topic in recent years. Fault detection is used to monitor whether a fault occurs, which is the first step in FA. On the basis of fault detection, fault estimation (FE) is utilized to determine online the magnitude of the fault, which is a very important step because the additional controller is designed using the fault estimate. Compared with fault detection, the design difficulties of FE increase considerably, so research on FE and accommodation is very challenging. Although there have been advancements reported on FE and accommodation for dynamic systems, the common methods at the present stage have design difficulties, whi...

  6. Fault strength in Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    Science.gov (United States)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data points out that the last major moment release on the Prince's islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area considering the fact that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment translates the motion between the two E-W trending branches of the North Anatolian fault zone; one extending from the Gulf of Izmit towards Çınarcık basin and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter, where a fault-normal and fault-parallel maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the coefficient of friction of the fault. Accordingly, the geometry between the Prince's islands fault segment and the maximum principal stress axis matches a weak fault model. In the frame of the presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above mentioned weak fault model.

  7. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip.

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.

  8. Two sides of a fault: Grain-scale analysis of pore pressure control on fault slip

    Science.gov (United States)

    Yang, Zhibing; Juanes, Ruben

    2018-02-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.

  9. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and the connections among them, are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  10. Negative base encoding in optical linear algebra processors

    Science.gov (United States)

    Perlee, C.; Casasent, D.

    1986-01-01

    In the digital multiplication by analog convolution algorithm, the bits of two encoded numbers are convolved to form the product of the two numbers in mixed binary representation; this output can be easily converted to binary. Attention is presently given to negative base encoding, treating base -2 initially, and then showing that the negative base system can be readily extended to any radix. In general, negative base encoding in optical linear algebra processors represents a more efficient technique than either sign magnitude or 2's complement encoding, when the additions of digitally encoded products are performed in parallel.
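    A short sketch of the negative base representation discussed above follows. It converts a non-negative integer to and from base −2 digits; the same routine generalizes to any negative radix, which is the extension the abstract mentions. The function names are illustrative.

```python
# Negative-base (here base -2) encoding and decoding of non-negative integers,
# illustrating the number representation used in the optical processor.
def to_negabase(n, base=-2):
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:                 # keep digits in the range 0 .. |base|-1
            n += 1
            r -= base
        digits.append(r)
    return digits[::-1]           # most significant digit first

def from_negabase(digits, base=-2):
    value = 0
    for d in digits:
        value = value * base + d
    return value

d = to_negabase(6)                # [1, 1, 0, 1, 0] since 16 - 8 - 2 = 6
print(d, from_negabase(d))
```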

  11. Fault Detection/Isolation Verification,

    Science.gov (United States)

    1982-08-01

    To test the performance of the fault detection/isolation algorithm on these networks, several different fault scenarios were designed for each network.

  12. Fault-Tolerate Three-Party Quantum Secret Sharing over a Collective-Noise Channel

    International Nuclear Information System (INIS)

    Li Chun-Yan; Li Yan-Song

    2011-01-01

    We present a fault-tolerant three-party quantum secret sharing (QSS) scheme over a collective-noise channel. Decoherence-free subspaces are used to tolerate two noise modes, a collective-dephasing channel and a collective-rotating channel, respectively. In this scheme, the boss uses two physical qubits to construct a logical qubit which acts as a quantum channel to transmit one bit of information to her two agents. The agents can get the information of the private key established by the boss only if they collaborate. The boss Alice encodes information with two unitary operations. Only single-photon measurements are required by the agents Bob and Charlie to rebuild Alice's information and to check security, not Bell-state measurements. Moreover, almost all of the photons are used to distribute information, and its success efficiency approaches 100% in theory. (general)
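    The decoherence-free-subspace idea underlying the scheme can be checked numerically: a logical qubit spanned by |01⟩ and |10⟩ acquires only a global phase under collective dephasing, so the encoded bit survives the noise. The sketch below is only this check, not the full QSS protocol; the dephasing angle and amplitudes are arbitrary.

```python
# Numerical check of a decoherence-free subspace against collective dephasing:
# the logical qubit spanned by |01> and |10> only picks up a global phase.
import numpy as np

phi = 0.7                                   # unknown collective dephasing angle
# Collective dephasing on two qubits in the basis |00>, |01>, |10>, |11>
U = np.diag([1, np.exp(1j * phi), np.exp(1j * phi), np.exp(2j * phi)])

ket01 = np.array([0, 1, 0, 0], dtype=complex)   # logical |0_L>
ket10 = np.array([0, 0, 1, 0], dtype=complex)   # logical |1_L>

a, b = 0.6, 0.8                             # arbitrary normalized logical amplitudes
psi = a * ket01 + b * ket10
psi_noisy = U @ psi

fidelity = abs(np.vdot(psi, psi_noisy))     # = 1: state preserved up to global phase
print(fidelity)
```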

  13. SAMPEG: a scene-adaptive parallel MPEG-2 software encoder

    NARCIS (Netherlands)

    Farin, D.S.; Mache, N.; With, de P.H.N.; Girod, B.; Bouman, C.A.; Steinbach, E.G.

    2001-01-01

    This paper presents a fully software-based MPEG-2 encoder architecture, which uses scene-change detection to optimize the Group-of-Picture (GOP) structure for the actual video sequence. This feature enables easy, lossless edit cuts at scene-change positions and it also improves overall picture
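    One simple way a scene-change detector can drive GOP placement is sketched below: a new GOP (forced I-frame) starts whenever the mean absolute luminance difference between consecutive frames exceeds a threshold, or when the GOP grows too long. The threshold and maximum GOP length are assumed values, not the SAMPEG encoder's actual decision logic.

```python
# Minimal sketch of scene-adaptive GOP placement via frame differencing.
import numpy as np

def gop_boundaries(frames, threshold=20.0, max_gop=15):
    """frames: iterable of 2-D uint8 luminance arrays. Returns I-frame indices."""
    boundaries = [0]                      # first frame is always an I-frame
    prev = None
    since_last_i = 0
    for idx, frame in enumerate(frames):
        if prev is not None:
            mad = np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16)))
            since_last_i += 1
            if mad > threshold or since_last_i >= max_gop:
                boundaries.append(idx)    # scene change (or GOP too long): new GOP
                since_last_i = 0
        prev = frame
    return boundaries

# Toy clip: a constant scene for 10 frames, then a cut to a brighter scene
clip = [np.full((48, 64), 60, dtype=np.uint8)] * 10 + \
       [np.full((48, 64), 200, dtype=np.uint8)] * 10
print(gop_boundaries(clip))               # [0, 10]
```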

  14. Design and Evaluation of a Protection Relay for a Wind Generator Based on the Positive- and Negative-Sequence Fault Components

    DEFF Research Database (Denmark)

    Zheng, T. Y.; Cha, Seung-Tae; Crossley, P. A.

    2013-01-01

    To avoid undesirable disconnection of healthy wind generators (WGs) or a wind power plant, a WG protection relay should discriminate among faults, so that it can operate instantaneously for WG, connected feeder or connection bus faults, it can operate after a delay for inter-tie or grid faults......, and it can avoid operating for parallel WG or adjacent feeder faults. A WG protection relay based on the positive- and negativesequence fault components is proposed in the paper. At stage 1, the proposed relay uses the magnitude of the positive-sequence component in the fault current to distinguish faults...... at a parallel WG connected to the same feeder or at an adjacent feeder, from other faults at a connected feeder, an inter-tie, or a grid. At stage 2, the fault type is first determined using the relationships between the positive- and negative-sequence fault components. Then, the relay differentiates between...
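    The relay's staged decision logic is not reproduced here; the sketch below only shows the standard Fortescue transform that yields the positive- and negative-sequence components such a relay operates on, applied to an illustrative set of unbalanced phase-current phasors.

```python
# Symmetrical (Fortescue) components of three-phase current phasors.
import numpy as np

a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator

def symmetrical_components(ia, ib, ic):
    T = np.array([[1, 1, 1],
                  [1, a, a**2],
                  [1, a**2, a]]) / 3
    zero, positive, negative = T @ np.array([ia, ib, ic])
    return zero, positive, negative

# Example: an unbalance resembling a phase-a fault (illustrative phasors)
ia = 5.0 * np.exp(1j * 0.0)
ib = 1.0 * np.exp(-2j * np.pi / 3)
ic = 1.0 * np.exp(+2j * np.pi / 3)

for name, comp in zip(("I0", "I1", "I2"), symmetrical_components(ia, ib, ic)):
    print(name, abs(comp))
```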

  15. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    Science.gov (United States)

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  16. Non-Andersonian conjugate strike-slip faults: Observations, theory, and tectonic implications

    International Nuclear Information System (INIS)

    Yin, A; Taylor, M H

    2008-01-01

    Formation of conjugate strike-slip faults is commonly explained by the Anderson fault theory, which predicts an X-shaped conjugate fault pattern with an intersection angle of ∼30 degrees between the maximum compressive stress and the faults. However, major conjugate faults in Cenozoic collisional orogens, such as the eastern Alps, western Mongolia, eastern Turkey, northern Iran, northeastern Afghanistan, and central Tibet, contradict the theory in that the conjugate faults exhibit a V-shaped geometry with intersection angles of 60-75 degrees, which is 30-45 degrees greater than that predicted by the Anderson fault theory. In Tibet and Mongolia, geologic observations can rule out bookshelf faulting, distributed deformation, and temporal changes in stress state as explanations for the abnormal fault patterns. Instead, the GPS-determined velocity field across the conjugate fault zones indicates that the fault formation may have been related to Hagen-Poiseuille flow in map view involving the upper crust and possibly the whole lithosphere based on upper mantle seismicity in southern Tibet and basaltic volcanism in Mongolia. Such flow is associated with two coeval and parallel shear zones having opposite shear sense; each shear zone produces a set of Riedel shears, and together the Riedel shears exhibit the observed non-Andersonian conjugate strike-slip fault pattern. We speculate that the Hagen-Poiseuille flow across the lithosphere that hosts the conjugate strike-slip zones was produced by basal shear traction related to asthenospheric flow, which moves parallel and away from the indented segment of the collisional fronts. The inferred asthenospheric flow pattern below the conjugate strike-slip fault zones is consistent with the magnitude and orientations of seismic anisotropy observed across the Tibetan and Mongolian conjugate fault zones, suggesting a strong coupling between lithospheric deformation and asthenospheric flow. The laterally moving

  17. Non-Andersonian conjugate strike-slip faults: Observations, theory, and tectonic implications

    Energy Technology Data Exchange (ETDEWEB)

    Yin, A [Department of Earth and Space Sciences and Institute of Geophysics and Planetary Physics, University of California, Los Angeles, Los Angeles, CA 90025-1567 (United States); Taylor, M H [Department of Geology, University of Kansas, 1475 Jayhawk Blvd., Lawrence, KS 66044 (United States)], E-mail: yin@ess.ucla.edu

    2008-07-01

    Formation of conjugate strike-slip faults is commonly explained by the Anderson fault theory, which predicts an X-shaped conjugate fault pattern with an intersection angle of approximately 30 degrees between the maximum compressive stress and the faults. However, major conjugate faults in Cenozoic collisional orogens, such as the eastern Alps, western Mongolia, eastern Turkey, northern Iran, northeastern Afghanistan, and central Tibet, contradict the theory in that the conjugate faults exhibit a V-shaped geometry with intersection angles of 60-75 degrees, which is 30-45 degrees greater than that predicted by the Anderson fault theory. In Tibet and Mongolia, geologic observations can rule out bookshelf faulting, distributed deformation, and temporal changes in stress state as explanations for the abnormal fault patterns. Instead, the GPS-determined velocity field across the conjugate fault zones indicates that the fault formation may have been related to Hagen-Poiseuille flow in map view involving the upper crust and possibly the whole lithosphere based on upper mantle seismicity in southern Tibet and basaltic volcanism in Mongolia. Such flow is associated with two coeval and parallel shear zones having opposite shear sense; each shear zone produces a set of Riedel shears, and together the Riedel shears exhibit the observed non-Andersonian conjugate strike-slip fault pattern. We speculate that the Hagen-Poiseuille flow across the lithosphere that hosts the conjugate strike-slip zones was produced by basal shear traction related to asthenospheric flow, which moves parallel and away from the indented segment of the collisional fronts. The inferred asthenospheric flow pattern below the conjugate strike-slip fault zones is consistent with the magnitude and orientations of seismic anisotropy observed across the Tibetan and Mongolian conjugate fault zones, suggesting a strong coupling between lithospheric deformation and asthenospheric flow. The laterally moving

  18. Security Information System Digital Simulation

    OpenAIRE

    Tao Kuang; Shanhong Zhu

    2015-01-01

    This study builds a simulation model for studying relay protection in food security information systems. MATLAB-based simulation technology can support the analysis and design of food security information systems. As examples, fault simulation, zero-sequence current protection simulation and transformer differential protection simulation for a food security information system are presented in this study. The case studies show that the simulation of food security information system relay protect...

  19. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

  20. Data Encoding using Periodic Nano-Optical Features

    Science.gov (United States)

    Vosoogh-Grayli, Siamack

    Successful trials have been made through a designed algorithm to quantize, compress and optically encode unsigned 8 bit integer values in the form of images using Nano optical features. The periodicity of the Nano-scale features (Nano-gratings) has been designed and investigated both theoretically and experimentally to create distinct states of variation (three on states and one off state). The use of easy to manufacture and machine readable encoded data in secured authentication media has been employed previously in bar-codes for bi-state (binary) models and in color barcodes for multiple state models. This work has focused on implementing 4 states of variation for unit information through periodic Nano-optical structures that separate an incident wavelength into distinct colors (variation states) in order to create an encoding system. Compared to barcodes and magnetic stripes in secured finite length storage media the proposed system encodes and stores more data. The benefits of multiple states of variation in an encoding unit are 1) increased numerically representable range 2) increased storage density and 3) decreased number of typical set elements for any ergodic or semi-ergodic source that emits these encoding units. A thorough investigation has targeted the effects of the use of multi-varied state Nano-optical features on data storage density and consequent data transmission rates. The results show that use of Nano-optical features for encoding data yields a data storage density of circa 800 Kbits/in2 via the implementation of commercially available high resolution flatbed scanner systems for readout. Such storage density is far greater than commercial finite length secured storage media such as the barcode family, with a maximum practical density of 1 Kbits/in2, and the highest density magnetic stripe cards, with a maximum density circa 3 Kbits/in2. The numerically representable range of the proposed encoding unit for 4 states of variation is [0, 255]. The number of
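    The arithmetic behind the 4-state encoding unit can be made concrete: each unit carries log2(4) = 2 bits, so four units cover the unsigned 8-bit range [0, 255]. The sketch below only works through that mapping; it is not the thesis' quantization or compression pipeline, and the function names are illustrative.

```python
# Mapping an unsigned 8-bit value onto four 4-state (quaternary) encoding units.
import math

states = 4
bits_per_unit = math.log2(states)            # 2 bits per nano-grating unit
units_per_byte = int(8 / bits_per_unit)      # 4 units encode one byte

def encode_byte(value):
    """Map an unsigned 8-bit value to four quaternary symbols (MSB first)."""
    assert 0 <= value <= 255
    symbols = []
    for _ in range(units_per_byte):
        symbols.append(value % states)
        value //= states
    return symbols[::-1]

def decode_byte(symbols):
    value = 0
    for s in symbols:
        value = value * states + s
    return value

print(encode_byte(173), decode_byte(encode_byte(173)))   # [2, 2, 3, 1] -> 173
```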

  1. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    Science.gov (United States)

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme in order to solve the weakness of the authentication scheme of Wei et al., in which off-line password guessing attacks cannot be resisted. This investigation indicates that the improved scheme of Zhu has some faults such that the authentication scheme cannot execute correctly and is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on the scheme of Zhu is proposed. The enhanced scheme not only avoids the weakness in the original scheme, but also provides users' anonymity and authenticated key agreements for secure data communications.

  2. Quaternary faulting in the Tatra Mountains, evidence from cave morphology and fault-slip analysis

    Directory of Open Access Journals (Sweden)

    Szczygieł Jacek

    2015-06-01

    Full Text Available Tectonically deformed cave passages in the Tatra Mts (Central Western Carpathians) indicate some fault activity during the Quaternary. Displacements occur in the youngest passages of the caves indicating (based on previous U-series dating of speleothems) an Eemian or younger age for those faults, and so one tectonic stage. On the basis of stress analysis and geomorphological observations, two different mechanisms are proposed as responsible for the development of these displacements. The first mechanism concerns faults that are located above the valley bottom and at a short distance from the surface, with fault planes oriented sub-parallel to the slopes. The radial, horizontal extension and vertical σ1 which is identical with gravity, indicate that these faults are the result of gravity sliding probably caused by relaxation after incision of valleys, and not directly from tectonic activity. The second mechanism is tilting of the Tatra Mts. The faults operated under WNW-ESE oriented extension with σ1 plunging steeply toward the west. Such a stress field led to normal dip-slip or oblique-slip displacements. The faults are located under the valley bottom and/or opposite or oblique to the slopes. The process involved the pre-existing weakest planes in the rock complex: (i) in massive limestone mostly faults and fractures, (ii) in thin-bedded limestone mostly inter-bedding planes. Thin-bedded limestones dipping steeply to the south are of particular interest. Tilting toward the N caused the hanging walls to move under the massif and not toward the valley, proving that the cause of these movements was tectonic activity and not gravity.

  3. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.

    Science.gov (United States)

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-10-13

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method using a large amount of chemical sensor data, which is a combination of deep learning and an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with deep architectures, instead of shallow ones, could be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE) and work through a layer-by-layer successive learning process. The features are added to the top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active learning criterion for the particularity of chemical processes, which is a combination of a Best vs. Second Best criterion (BvSB) and a Lowest False Positive criterion (LFP), for further fine-tuning of the diagnosis model in an active manner rather than a passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated in two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data by further active learning compared with existing methods.
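    Of the pipeline described above, only the BvSB ranking step lends itself to a compact sketch; the SDAE-based network and the LFP criterion are not reproduced here. BvSB queries the unlabeled samples whose two highest softmax probabilities are closest, i.e. the most ambiguous ones.

```python
# Best-vs-Second-Best (BvSB) sample selection for active learning.
import numpy as np

def bvsb_query(probabilities, n_queries):
    """probabilities: (n_samples, n_classes) softmax outputs for unlabeled data."""
    top2 = np.sort(probabilities, axis=1)[:, -2:]      # [second-best, best]
    margin = top2[:, 1] - top2[:, 0]                   # small margin = ambiguous
    return np.argsort(margin)[:n_queries]              # indices to send for labeling

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))                     # stand-in classifier outputs
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(bvsb_query(probs, n_queries=10))
```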

  4. Test results of fault current limiter using YBCO tapes with shunt protection

    Energy Technology Data Exchange (ETDEWEB)

    Baldan, Carlos A; Lamas, Jerika S; Shigue, Carlos Y [Escola de Engenharia de Lorena, EEL USP, Lorena - SP (Brazil); Filho, Ernesto Ruppert, E-mail: cabaldan@gmail.co [Faculdade de Engenharia Eletrica, FEEC Unicamp, Campinas - SP (Brazil)

    2010-06-01

    A Fault Current Limiter (FCL) based on high temperature superconducting elements with four tapes in parallel was designed and tested in a 220 V line for fault current peaks between 1 kA and 4 kA. The elements employed second generation (2G) HTS tapes of YBCO coated conductor with stainless steel reinforcement. The tapes were electrically connected in parallel with an effective length of 0.4 m per element (16 elements connected in series) constituting a single-phase unit. The FCL performance was evaluated through over-current tests and its recovery characteristics under load current were analyzed using an optimized value of the shunt protection. The projected limiting ratio achieved a factor higher than 4 during a 5-cycle fault without degradation. Construction details and further test results will be shown in the paper.

  5. Landforms along transverse faults parallel to axial zone of folded ...

    Indian Academy of Sciences (India)

    Himalaya, along the Kali River valley, is defined by folded hanging wall ... role of transverse fault tectonics in the formation of the curvature cannot be ruled out. ... Piedmont surface is made up of gravelliferous ... made to compute the wedge failure analysis (Hoek ... (∼T2) is at the elevation of ∼272 m asl measured.

  6. Efficient Synchronization Stability Metrics for Fault Clearing

    Energy Technology Data Exchange (ETDEWEB)

    Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bienstock, Daniel [Columbia Univ., New York, NY (United States); Krishnamurthy, Dvijotham [Univ. of Washington, Seattle, WA (United States)

    2015-02-12

    Direct methods can provide rapid screening of the dynamical security of large numbers of fault and contingency scenarios by avoiding extensive time simulation. We introduce a computationally-efficient direct method based on optimization that leverages efficient cutting plane techniques. The method considers both unstable equilibrium points and the effects of additional relay tripping on dynamical security [1]. Similar to other direct methods, our approach yields conservative results for dynamical security; however, the optimization formulation potentially lends itself to the inclusion of additional constraints to reduce this conservatism.

  7. EKF-based fault detection for guided missiles flight control system

    Science.gov (United States)

    Feng, Gang; Yang, Zhiyong; Liu, Yongjin

    2017-03-01

    The guided missile flight control system is essential for guidance accuracy and kill probability. It is complicated and fragile. Since actuator faults and sensor faults could seriously affect the security and reliability of the system, fault detection for the missile flight control system is of great significance. This paper deals with the problem of fault detection for the closed-loop nonlinear model of the guided missile flight control system in the presence of disturbance. First, the fault model of the flight control system is set up, and then residual generation based on the extended Kalman filter (EKF) is designed for the Euler-discretized fault model. After that, the Chi-square test is selected for residual evaluation, and the fault detection task for the guided missile closed-loop system is accomplished. Finally, simulation results are provided to illustrate the effectiveness of the proposed approach in the case of an elevator fault.
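    The residual-plus-chi-square idea can be illustrated with a much simpler setup than the paper's: a scalar linear Kalman filter tracking a noisy measurement, with a chi-square gate on the normalized innovation flagging an injected sensor bias. The missile dynamics, the EKF linearization and the fault model are not reproduced; every number below is an assumption.

```python
# Simplified residual-based fault detection: scalar Kalman filter + chi-square
# test on the normalized innovation squared (NIS).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
F, H, Q, R = 1.0, 1.0, 1e-4, 1e-2           # scalar model: x_k = x_{k-1} + w, z = x + v
threshold = chi2.ppf(0.99, df=1)            # 99% gate for a 1-D innovation

x_est, P, truth = 0.0, 1.0, 0.0
for k in range(60):
    truth = F * truth + rng.normal(0, np.sqrt(Q))
    bias = 0.5 if k >= 40 else 0.0          # injected sensor fault after step 40
    z = H * truth + bias + rng.normal(0, np.sqrt(R))

    x_pred, P_pred = F * x_est, F * P * F + Q
    innovation = z - H * x_pred
    S = H * P_pred * H + R                  # innovation covariance
    nis = innovation**2 / S                 # normalized innovation squared
    if nis > threshold:
        print(f"step {k}: fault detected (NIS = {nis:.1f})")
    K = P_pred * H / S
    x_est = x_pred + K * innovation
    P = (1 - K * H) * P_pred
```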

  8. Experiences of pathways, outcomes and choice after severe traumatic brain injury under no-fault versus fault-based motor accident insurance.

    Science.gov (United States)

    Harrington, Rosamund; Foster, Michele; Fleming, Jennifer

    2015-01-01

    To explore experiences of pathways, outcomes and choice after motor vehicle accident (MVA) acquired severe traumatic brain injury (sTBI) under fault-based vs no-fault motor accident insurance (MAI). In-depth qualitative interviews with 10 adults with sTBI and 17 family members examined experiences of pathways, outcomes and choice and how these were shaped by both compensable status and interactions with service providers and service funders under a no-fault and a fault-based MAI scheme. Participants were sampled to provide variation in compensable status, injury severity, time post-injury and metropolitan vs regional residency. Interviews were recorded, transcribed and thematically analysed to identify dominant themes under each scheme. Dominant themes emerging under the no-fault scheme included: (a) rehabilitation-focused pathways; (b) a sense of security; and (c) bounded choices. Dominant themes under the fault-based scheme included: (a) resource-rationed pathways; (b) pressured lives; and (c) unknown choices. Participants under the no-fault scheme experienced superior access to specialist rehabilitation services, greater surety of support and more choice over how rehabilitation and life-time care needs were met. This study provides valuable insights into individual experiences under fault-based vs no-fault MAI. Implications for an injury insurance scheme design to optimize pathways, outcomes and choice after sTBI are discussed.

  9. A distributed fault tolerant architecture for nuclear reactor control and safety functions

    International Nuclear Information System (INIS)

    Hecht, M.; Agron, J.; Hochhauser, S.

    1989-01-01

    This paper reports on a fault tolerance architecture, currently being developed, that provides tolerance to a broad scope of hardware, software, and communications faults. This architecture relies on widely available commercial operating systems, local area networks, and software standards. Thus, development time is significantly shortened, and modularity allows for continuous and inexpensive system enhancement throughout the expected 20-year life. The fault containment and parallel processing capabilities of computer networks are being exploited to provide a high performance, high availability network capable of tolerating a broad scope of hardware, software, and operating system faults. The system can tolerate all but one known (and avoidable) single fault, two known and avoidable dual faults, and will detect all higher order fault sequences and provide diagnostics to allow for rapid manual recovery

  10. Reset Tree-Based Optical Fault Detection

    Directory of Open Access Journals (Sweden)

    Howon Kim

    2013-05-01

    Full Text Available In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit’s reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool.

  11. Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2

    Science.gov (United States)

    2016-06-01

    specification of fault propagation in EMV2 corresponds to the Fault Propagation and Transformation Calculus (FPTC) [Paige 2009]. The following concepts...definition of security includes accidental malicious indication of anomalous behavior either from outside a system or by unauthorized crossing of a

  12. Tectono-denudation process of Nanxiong fault and its relations to uranium metallogenesis

    International Nuclear Information System (INIS)

    Chen Yuehui

    1994-01-01

    A large mylonite zone is distributed on the footwall of the Nanxiong fault and is parallel to the fault. On the hanging wall, there is a Meso-Cenozoic basin. As the sedimentation centers moved towards the fault, the strata become younger. By investigating the mylonite zone along a profile on the footwall of the fault, the author studies in detail various kinds of ductile deformation fabrics in the mylonite, such as S-C fabrics, rotational porphyroclasts and stretching lineation. In the light of the kinematic direction of the deformation fabrics, together with the characteristics of brittle tectonites in the fault and the distribution of normal faults in the basin, the author believes that the Nanxiong fault is a large denudation fault. Based on the formation and evolution of the denudation fault and its rock- and ore-controlling roles, the relations between the fault and uranium metallogenesis are also preliminarily discussed

  13. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Parallel Algorithms for Groebner-Basis Reduction: Technical Report. Productivity Engineering in the UNIX Environment.

  14. Gear-box fault detection using time-frequency based methods

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2015-01-01

    Gear-box fault monitoring and detection is important for optimization of power generation and availability of wind turbines. The current industrial approach is to use condition monitoring systems, which run in parallel with the wind turbine control system, using expensive additional sensors...... in the gear-box resonance frequency can be detected. Two different time–frequency based approaches are presented in this paper. One is a filter based approach and the other is based on a Karhunen–Loeve basis. Both of them detect the gear-box fault with an acceptable detection delay of at most 100 s, which...... is negligible compared with the fault developing time....
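    A generic band-energy monitor around an assumed resonance frequency gives a feel for the filter-based idea; it is not the paper's filter design or its Karhunen–Loeve approach, and the sampling rate, resonance frequency and test signals are all assumptions.

```python
# Band-pass the vibration signal around an assumed gear-box resonance and flag a
# fault when the band RMS drifts from its healthy baseline (generic sketch only).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                                   # sampling rate [Hz] (assumed)
f_res = 150.0                                 # assumed resonance frequency [Hz]
b, a = butter(4, [f_res - 10, f_res + 10], btype="bandpass", fs=fs)

t = np.arange(0, 10, 1 / fs)
noise = 0.5 * np.random.default_rng(0).normal(size=t.size)
healthy = np.sin(2 * np.pi * f_res * t) + noise          # energy at the resonance
faulty = np.sin(2 * np.pi * (f_res - 25) * t) + noise    # resonance has shifted

def band_rms(x):
    return np.sqrt(np.mean(filtfilt(b, a, x) ** 2))

print("healthy band RMS:", band_rms(healthy))
print("faulty  band RMS:", band_rms(faulty), "-> drop indicates a resonance shift")
```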

  15. A study on the short-circuit test by fault angle control and the recovery characteristics of the fault current limiter using coated conductor

    International Nuclear Information System (INIS)

    Park, D.K.; Kim, Y.J.; Ahn, M.C.; Yang, S.E.; Seok, B.-Y.; Ko, T.K.

    2007-01-01

    Superconducting fault current limiters (SFCLs) have been developed in many countries, and they are expected to be used in modern electric power systems because of their effectiveness in operating these power systems stably. It is necessary for resistive FCLs to generate resistance immediately and to have a fast recovery characteristic after fault clearance, because of re-closing operation. Short-circuit tests are performed to obtain the current limiting operational and recovery characteristics of the FCL by a fault controller using a power switching device. The power switching device consists of anti-parallel connected thyristors. The fault occurs at the desired angle by controlling the firing angle of the thyristors. Resistive SFCLs have different current limiting characteristics with respect to the fault angle in the first swing during the fault. This study deals with the short-circuit characteristics of FCL coils using two different YBCO coated conductors (CCs), 344 and 344s, obtained by controlling the fault angle, and with experimental studies on the recovery characteristic under a small current flowing through the SFCL after fault clearance. Tests are performed at various voltages applied to the SFCL in a saturated liquid nitrogen cooling system

  16. Sequence Algebra, Sequence Decision Diagrams and Dynamic Fault Trees

    International Nuclear Information System (INIS)

    Rauzy, Antoine B.

    2011-01-01

    Considerable attention has been focused on Dynamic Fault Trees in the past few years. By adding new gates to static (regular) Fault Trees, Dynamic Fault Trees aim to take into account dependencies among events. Merle et al. recently proposed an algebraic framework to give a formal interpretation to these gates. In this article, we extend Merle et al.'s work by adopting a slightly different perspective. We introduce Sequence Algebras, which can be seen as Algebras of Basic Events, representing failures of non-repairable components. We show how to interpret Dynamic Fault Trees within this framework. Finally, we propose a new data structure to encode sets of sequences of Basic Events: Sequence Decision Diagrams. Sequence Decision Diagrams are very much inspired by Minato's Zero-Suppressed Binary Decision Diagrams. We show that all operations of Sequence Algebras can be performed on this data structure.
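    To make the notion of "sets of sequences of Basic Events" concrete, the sketch below uses a naive explicit-set model with union and a sequential composition; it illustrates what Sequence Decision Diagrams encode compactly, but the shared-node, ZDD-like structure of real SDDs is not reproduced, and the gate interpretation shown is only an assumed example.

```python
# Naive explicit-set model of sequences of (non-repeated) basic-event failures.
def union(s1, s2):
    return s1 | s2

def then(s1, s2):
    """All sequences formed by a sequence of s1 followed by a sequence of s2,
    keeping only those in which no basic event is repeated."""
    return {a + b for a in s1 for b in s2 if not set(a) & set(b)}

# Basic events as singleton sequence sets
A, B, C = {("A",)}, {("B",)}, {("C",)}

# Sequences leading to failure of a priority-AND-like gate: A before B
pand_ab = then(A, B)                      # {("A", "B")}
# Either that ordered failure, or C failing on its own
top = union(pand_ab, C)
print(sorted(top))
```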

  17. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network

    Science.gov (United States)

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-01-01

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method using a large amount of chemical sensor data, which is a combination of deep learning and an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with deep architectures, instead of shallow ones, could be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE) and work through a layer-by-layer successive learning process. The features are added to the top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active learning criterion for the particularity of chemical processes, which is a combination of a Best vs. Second Best criterion (BvSB) and a Lowest False Positive criterion (LFP), for further fine-tuning of the diagnosis model in an active manner rather than a passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated in two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data by further active learning compared with existing methods. PMID:27754386

  18. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2016-10-01

    Full Text Available Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, is developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE), working through a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to available methods, we employ a novel active learning criterion suited to the particularity of chemical processes, a combination of the Best vs. Second Best criterion (BvSB) and a Lowest False Positive criterion (LFP), for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow the model to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method obtains superior diagnosis accuracy and provides significant improvement in accuracy and false positive rate with less labeled chemical sensor data through further active learning, compared with existing methods.

  19. Bathymetric Signatures of Oceanic Detachment Faulting and Potential Ultramafic Lithologies at Outcrop or in the Shallow Subseafloor

    Science.gov (United States)

    Cann, J. R.; Smith, D. K.; Escartin, J.; Schouten, H.

    2008-12-01

    For ten years, domal bathymetric features capped by corrugated and striated surfaces have been recognized as exposures of oceanic detachment faults, and hence potentially as exposures of plutonic rocks from lower crust or upper mantle. Associated with these domes are other bathymetric features that indicate the presence of detachment faulting. Taken together these bathymetric signatures allow the mapping of large areas of detachment faulting at slow and intermediate spreading ridges, both at the axis and away from it. These features are: 1. Smooth elevated domes corrugated parallel to the spreading direction, typically 10-30 km wide parallel to the axis; 2. Linear ridges with outward-facing slopes steeper than 20°, running parallel to the spreading axis, typically 10-30 km long; 3. Deep basins with steep sides and relatively flat floors, typically 10-20 km long parallel to the spreading axis and 5-10 km wide. This characteristic bathymetric association arises from the rolling over of long-lived detachment faults as they spread away from the axis. The faults dip steeply close to their origin at a few kilometers depth near the spreading axis, and rotate to shallow dips as they continue to evolve, with associated footwall flexure and rotation of rider blocks carried on the fault surface. The outward slopes of the linear ridges can be shown to be rotated volcanic seafloor transported from the median valley floor. The basins may be formed by the footwall flexure, and may be exposures of the detachment surface. Critical in this analysis is that the corrugated domes are not the only sites of detachment faulting, but are the places where higher parts of much more extensive detachment faults happen to be exposed. The fault plane rises and falls along axis, and in some places is covered by rider blocks, while in others it is exposed at the sea floor. We use this association to search for evidence for detachment faulting in existing surveys, identifying for example an area

  20. Implementations of a four-level mechanical architecture for fault-tolerant robots

    International Nuclear Information System (INIS)

    Hooper, Richard; Sreevijayan, Dev; Tesar, Delbert; Geisinger, Joseph; Kapoor, Chelan

    1996-01-01

    This paper describes a fault tolerant mechanical architecture with four levels devised and implemented in concert with NASA (Tesar, D. and Sreevijayan, D., Four-level fault tolerance in manipulator design for space operations. In First Int. Symp. Measurement and Control in Robotics (ISMCR '90), Houston, Texas, 20-22 June 1990). Subsequent work has clarified and revised the architecture. The four levels proceed from fault tolerance at the actuator level, to fault tolerance via in-parallel chains, to fault tolerance using serial kinematic redundancy, and finally to the fault tolerance provided by multiple-arm systems. This is a subsumptive architecture because each successive layer can incorporate the fault tolerance provided by all layers beneath; for instance, a serially-redundant robot can incorporate dual fault-tolerant actuators. Redundant systems provide the fault tolerance, but the guiding principle of this architecture is that functional redundancies actively increase the performance of the system; redundancies do not simply remain dormant until needed. This paper includes specific examples of hardware and/or software implementation at all four levels.

  1. Study on the scope of fault tree method applicability

    International Nuclear Information System (INIS)

    Ito, Taiju

    1980-03-01

    In fault tree analyses of the reliability of nuclear safety systems, including reliability analyses of nuclear protection systems, there appear to be some documents in which the fault tree method is applied unreasonably. In the fault tree method, the addition rule and the multiplication rule are usually used, and they must hold exactly or at least practically. The addition rule poses no problem, but the multiplication rule occasionally does. Whether or not the multiplication rule holds has been studied comprehensively for the unreliability, mean unavailability and instantaneous unavailability of the elements. The multiplication rule holds between the unreliabilities of elements without maintenance. It also holds between the instantaneous unavailabilities of elements, with or without maintenance. Between the unreliabilities of subsystems with maintenance, however, the multiplication rule does not hold, because the product value is larger than the unreliability of a parallel system consisting of the two subsystems with maintenance. Between the mean unavailabilities of elements without maintenance, the multiplication rule also does not hold, because the product value is smaller than the mean unavailability of a parallel system consisting of the two elements without maintenance. In these cases, therefore, the fault tree method may not be applied by rote for reliability analysis of the system. (author)
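
    The statement about mean unavailability can be checked numerically. For two identical, independent, non-repairable elements with a constant failure rate, the instantaneous unavailability of the parallel system equals the product of the element unavailabilities at every instant, but its time average over a mission time exceeds the product of the element time averages, so multiplying mean unavailabilities underestimates the parallel-system value. A short check under these assumptions (the failure rate and mission time below are arbitrary):

      import numpy as np

      lam, T = 1e-3, 1000.0                  # failure rate [1/h], mission time [h] (arbitrary)
      t = np.linspace(0.0, T, 200001)
      q = 1.0 - np.exp(-lam * t)             # unavailability of one non-repairable element

      mean_of_product = (q * q).mean()       # parallel system: product holds at each instant
      product_of_means = q.mean() ** 2       # product of the element mean unavailabilities

      print(f"mean unavailability of parallel system: {mean_of_product:.4f}")
      print(f"product of mean unavailabilities      : {product_of_means:.4f}")  # smaller value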

  2. Variations in strength and slip rate along the san andreas fault system.

    Science.gov (United States)

    Jones, C H; Wesnousky, S G

    1992-04-03

    Convergence across the San Andreas fault (SAF) system is partitioned between strike-slip motion on the vertical SAF and oblique-slip motion on parallel dip-slip faults, as illustrated by the recent magnitude Ms = 6.0 Palm Springs, Ms = 6.7 Coalinga, and Ms = 7.1 Loma Prieta earthquakes. If the partitioning of slip minimizes the work done against friction, the direction of slip during these recent earthquakes depends primarily on fault dip and indicates that the normal stress coefficient and frictional coefficient (μ) vary among the faults. Additionally, accounting for the active dip-slip faults reduces estimates of fault slip rates along the vertical trace of the SAF by about 50 percent in the Loma Prieta and 100 percent in the North Palm Springs segments.

  3. Early fault detection and diagnosis for nuclear power plants

    International Nuclear Information System (INIS)

    Berg, O.; Grini, R.; Masao Yokobayashi

    1988-01-01

    Fault detection based on a number of reference models is demonstrated. This approach is characterized by the possibility of detecting faults before a traditional alarm system is triggered, even in dynamic situations. Further, by a proper decomposition scheme and use of available process measurements, the problem area can be confined to the faulty process parts. A diagnosis system using knowledge engineering techniques is described. Typical faults are classified and described by rules involving alarm patterns and variations of important parameters. By structuring the fault hypotheses in a hierarchy, the search space is limited, which is important for real-time diagnosis. The introduction of certainty factors improves the flexibility and robustness of diagnosis by exploring parallel problems even when some data are missing. A new display proposal should facilitate the operator interface and the integration of fault detection and diagnosis tasks in disturbance handling. The techniques for early fault detection and diagnosis are presently being implemented and tested in the experimental control room of a full-scope PWR simulator in Halden.
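
    The rule-and-certainty-factor idea can be sketched generically: each rule maps an alarm pattern to a fault hypothesis with a certainty factor, and factors supporting the same hypothesis are combined so that conclusions can still be ranked when some alarms (data) are missing. The rules, alarm names and MYCIN-style combination formula below are illustrative assumptions, not the Halden system's actual knowledge base.

      # Toy rule-based fault diagnosis with certainty factors (illustrative only;
      # rule patterns, fault names and CF values are hypothetical).

      def combine(cf1, cf2):
          """Combine two positive certainty factors supporting the same hypothesis."""
          return cf1 + cf2 * (1.0 - cf1)

      # Each rule: (required alarm pattern, fault hypothesis, certainty factor of the rule)
      RULES = [
          ({"SG_level_low", "feed_flow_low"}, "feedwater_pump_trip", 0.7),
          ({"SG_level_low", "steam_flow_high"}, "steam_line_break", 0.6),
          ({"feed_flow_low"}, "feedwater_pump_trip", 0.3),
      ]

      def diagnose(active_alarms):
          belief = {}
          for pattern, fault, cf in RULES:
              if pattern <= active_alarms:          # all alarms of the pattern are present
                  belief[fault] = combine(belief.get(fault, 0.0), cf)
          # rank hypotheses by accumulated certainty
          return sorted(belief.items(), key=lambda kv: -kv[1])

      if __name__ == "__main__":
          print(diagnose({"SG_level_low", "feed_flow_low"}))
          # -> [('feedwater_pump_trip', 0.79)]  -- a ranking is produced even with partial data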

  4. Hybrid algorithm for rotor angle security assessment in power systems

    Directory of Open Access Journals (Sweden)

    D. Prasad Wadduwage

    2015-08-01

    Full Text Available Transient rotor angle stability assessment and oscillatory rotor angle stability assessment subsequent to a contingency are integral components of dynamic security assessment (DSA) in power systems. This study proposes a hybrid algorithm to determine whether the post-fault power system is secure with respect to both transient rotor angle stability and oscillatory rotor angle stability subsequent to a set of known contingencies. The hybrid algorithm first uses a new security measure, developed from the concept of Lyapunov exponents (LEs), to determine the transient security of the post-fault power system. The transient-secure power swing curves are then analysed using an improved Prony algorithm, which extracts the dominant oscillatory modes and estimates their damping ratios. The damping ratio serves as a measure of the oscillatory security of the post-fault power system subsequent to the contingency. The suitability of the proposed hybrid algorithm for DSA in power systems is illustrated using different contingencies of a 16-generator 68-bus test system and a 50-generator 470-bus test system. The accuracy of the stability conclusions and the acceptable computational burden indicate that the proposed hybrid algorithm is suitable for real-time security assessment with respect to both transient rotor angle stability and oscillatory rotor angle stability under multiple contingencies of the power system.
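
    The Prony step can be illustrated with a textbook linear-prediction implementation: fit an autoregressive model to the post-fault swing curve, take the roots of the prediction polynomial as discrete-time poles, and convert them to continuous-time modes whose damping ratios serve as the oscillatory security measure. This is a generic sketch, not the paper's improved Prony algorithm; the synthetic signal and model order are invented, and real swing curves would need a higher order plus rejection of spurious modes.

      import numpy as np

      def prony_modes(y, dt, order):
          """Estimate damped oscillatory modes of a uniformly sampled signal y
          via linear prediction (a basic Prony method)."""
          n = len(y)
          # y[k] = -a1*y[k-1] - ... - ap*y[k-p]  (least-squares fit of the AR coefficients)
          A = np.column_stack([y[order - 1 - i:n - 1 - i] for i in range(order)])
          b = y[order:]
          a, *_ = np.linalg.lstsq(A, -b, rcond=None)
          z = np.roots(np.concatenate(([1.0], a)))   # discrete-time poles
          s = np.log(z.astype(complex)) / dt         # continuous-time poles
          keep = s.imag > 1e-6                       # one pole of each conjugate pair
          freq = s.imag[keep] / (2 * np.pi)          # modal frequency [Hz]
          zeta = -s.real[keep] / np.abs(s[keep])     # damping ratio: the security measure
          return sorted(zip(freq, zeta))

      if __name__ == "__main__":
          dt = 0.01
          t = np.arange(0.0, 10.0, dt)
          f, zeta_true = 0.8, 0.05                   # synthetic 0.8 Hz swing mode, 5 % damping
          wn = 2 * np.pi * f
          y = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)
          # order = 2 suffices for this noiseless single-mode signal
          for f_hat, z_hat in prony_modes(y, dt, order=2):
              print(f"mode: {f_hat:.3f} Hz, damping ratio {z_hat:.3f}")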

  5. Application of ENN-1 for Fault Diagnosis of Wind Power Systems

    Directory of Open Access Journals (Sweden)

    Meng-Hui Wang

    2012-01-01

    Full Text Available Maintaining a wind turbine and ensuring its secure operation is not easy because of long-term exposure to the environment and high installation locations. Wind turbines need fully functional condition-monitoring and fault diagnosis systems that prevent accidents and reduce maintenance costs. This paper presents a simulator design for fault diagnosis of wind power systems and further proposes fault diagnosis technologies covering signal analysis, feature selection, and diagnosis methods. First, a wind power simulator is used to produce fault conditions and features from the monitoring sensors. Then an extension neural network type-1 (ENN-1) based method is proposed to develop the core of the fault diagnosis system. The proposed system will benefit the development of real fault diagnosis systems, with testing models that demonstrate satisfactory results.

  6. Protection algorithm for a wind turbine generator based on positive- and negative-sequence fault components

    DEFF Research Database (Denmark)

    Zheng, Tai-Ying; Cha, Seung-Tae; Crossley, Peter A.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on positive- and negative-sequence fault components is proposed in the paper. The relay uses the magnitude of the positive-sequence component in the fault current to detect a fault on a parallel WTG, connected to the same power collection feeder, or a fault on an adjacent feeder; but for these faults, the relay remains stable and inoperative. A fault on the power collection feeder or a fault on the collection bus, both of which require an instantaneous tripping response, are distinguished from an inter-tie fault or a grid fault, which ... in the fault current is used to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using EMTP-RV. The scenarios involve changes in the position and type of fault, and the faulted phases. Results confirm...
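
    The positive- and negative-sequence quantities such a relay works with are obtained from the three phase-current phasors through the standard symmetrical-component transformation. A minimal NumPy sketch of that transformation follows; the phasor values are invented and the relay's actual thresholds and tripping logic are not reproduced.

      import numpy as np

      a = np.exp(2j * np.pi / 3)                      # 120-degree rotation operator

      def sequence_components(ia, ib, ic):
          """Return (zero, positive, negative)-sequence phasors of a three-phase set."""
          i0 = (ia + ib + ic) / 3
          i1 = (ia + a * ib + a**2 * ic) / 3           # positive sequence
          i2 = (ia + a**2 * ib + a * ic) / 3           # negative sequence
          return i0, i1, i2

      if __name__ == "__main__":
          # Balanced set: only the positive-sequence component is present
          ia = 5.0 * np.exp(1j * 0.0)
          ib = 5.0 * np.exp(-2j * np.pi / 3)
          ic = 5.0 * np.exp(2j * np.pi / 3)
          print([abs(x) for x in sequence_components(ia, ib, ic)])   # ~[0, 5, 0]

          # Unbalanced set (e.g. a single-phase fault): a negative-sequence
          # component appears, which a relay can use to classify the fault
          print([abs(x) for x in sequence_components(8.0, ib, ic)])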

  7. Sensor and Actuator Fault-Hiding Reconfigurable Control Design for a Four-Tank System Benchmark

    DEFF Research Database (Denmark)

    Hameed, Ibrahim; El-Madbouly, Esam I; Abdo, Mohamed I

    2015-01-01

    Fault detection and compensation plays a key role to fulfill high demands for performance and security in today's technological systems. In this paper, a fault-hiding (i.e., tolerant) control scheme that detects and compensates for actuator and sensor faults in a four-tank system benchmark... Invariant (LTI) system where virtual sensors and virtual actuators are used to correct faulty performance through the use of a pre-fault performance. Simulation results showed that the developed approach can handle different types of faults and able to completely and instantly recover the original system...

  8. Demonstration of an optoelectronic interconnect architecture for a parallel modified signed-digit adder and subtracter

    Science.gov (United States)

    Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.

    1996-06-01

    A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed-digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.
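
    The carry-free arithmetic behind such an adder can be sketched in software: with the digit set {-1, 0, 1}, addition can be completed in three local steps, so every digit position can be processed in parallel regardless of word length. The truth tables below are one standard choice and may differ from those realized by the optoelectronic switches in the paper; the space-position-logic encoding itself is not modeled.

      # Radix-2 modified signed-digit (MSD) addition over digits {-1, 0, 1}.
      # Three local steps; no transfer propagates more than two positions,
      # so all digit positions can be processed in parallel.

      def msd_to_int(d):
          """Digits are least-significant first."""
          return sum(di * (1 << i) for i, di in enumerate(d))

      # Step-1 table: x+y = 2*t + w, chosen so |w| <= 1
      T1 = {2: (1, 0), 1: (1, -1), 0: (0, 0), -1: (-1, 1), -2: (-1, 0)}
      # Step-2 table: w+t = 2*t' + w', chosen so the final sum w' + t' stays in {-1, 0, 1}
      T2 = {2: (1, 0), 1: (0, 1), 0: (0, 0), -1: (0, -1), -2: (-1, 0)}

      def msd_add(x, y):
          n = max(len(x), len(y)) + 2                    # room for transfers
          x = x + [0] * (n - len(x))
          y = y + [0] * (n - len(y))
          t, w = [0] * (n + 1), [0] * n
          for i in range(n):                             # step 1 (independent per digit)
              t[i + 1], w[i] = T1[x[i] + y[i]]
          t2, w2 = [0] * (n + 1), [0] * n
          for i in range(n):                             # step 2 (independent per digit)
              t2[i + 1], w2[i] = T2[w[i] + t[i]]
          return [w2[i] + t2[i] for i in range(n)]       # step 3: final digits, no carries

      if __name__ == "__main__":
          import random
          for _ in range(1000):
              a = [random.choice([-1, 0, 1]) for _ in range(8)]
              b = [random.choice([-1, 0, 1]) for _ in range(8)]
              s = msd_add(a, b)
              assert all(d in (-1, 0, 1) for d in s)
              assert msd_to_int(s) == msd_to_int(a) + msd_to_int(b)
          print("carry-free MSD addition verified on 1000 random cases")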

  9. What does fault tolerant Deep Learning need from MPI?

    Energy Technology Data Exchange (ETDEWEB)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.; Daily, Jeffrey A.

    2017-09-25

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: What is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); the need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration of several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe to use a ULFM-based implementation. Our evaluation using the ImageNet dataset and the AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.
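
    On the data-parallelism point, the communication kernel that has to survive (or be re-established after) a process failure is the per-iteration gradient allreduce. The mpi4py sketch below shows only that synchronous averaging step; the network, the MaTEx-Caffe integration and the ULFM/Reinit recovery calls themselves are omitted, and the gradient function is a stand-in.

      # Run with e.g.: mpirun -np 4 python data_parallel_step.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_params = 1000
      weights = np.zeros(n_params)

      def local_gradient(w, rng):
          """Stand-in for a forward/backward pass on this worker's data shard."""
          return rng.standard_normal(w.shape)

      rng = np.random.default_rng(seed=rank)
      for step in range(10):
          g_local = local_gradient(weights, rng)
          g_avg = np.empty_like(g_local)
          # Average gradients across all workers; a ULFM-style runtime would
          # revoke/shrink the communicator on failure and re-issue this step.
          comm.Allreduce(g_local, g_avg, op=MPI.SUM)
          g_avg /= size
          weights -= 0.01 * g_avg          # simple SGD update, identical on every rank

      if rank == 0:
          print("finished", step + 1, "synchronous data-parallel steps on", size, "ranks")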

  10. Geometry and Kinematics of the Lopukangri Fault System: Implications for Internal Deformation of the Tibetan Plateau

    Science.gov (United States)

    Murphy, M. A.; Taylor, M. H.

    2006-12-01

    Karakoram fault between 32°N and 30°N shows that its slip direction swings to more easterly orientations from north to south, paralleling the trace of the Himalayan thrust belt. We present a preliminary kinematic model to explain the fault slip data and the regional geometry of these fault systems that incorporates eastward translation and counterclockwise rotation of a semi-triangular-shaped block. The Karakoram fault, the Dangardzong-Lopukangri fault system, and the Awong Co fault represent the major block boundaries. Although there is internal deformation within the block, inspection of satellite imagery and geologic maps suggests it is minor. We hypothesize that this strain pattern results from radial expansion of the Himalayan arc that causes regions within it to undergo arc-parallel stretching as well as arc-normal shortening. In this scenario rotation facilitates arc-normal shortening and arc-parallel stretching between the southwestern Tibetan plateau and the Himalayan fold-thrust belt.

  11. Fenix, A Fault Tolerant Programming Framework for MPI Applications

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-05

    Fenix provides APIs to allow the users to add fault tolerance capability to MPI-based parallel programs in a transparent manner. Fenix-enabled programs can run through process failures during program execution using a pool of spare processes accommodated by Fenix.

  12. Recent tectonic stress field, active faults and geothermal fields (hot-water type) in China

    Science.gov (United States)

    Wan, Tianfeng

    1984-10-01

    It is quite probable that geothermal fields of the hot-water type in China do not develop in the absence of recently active faults. Such active faults are all controlled by tectonic stress fields. Using the data of earthquake fault-plane solutions, active faults, and surface thermal manifestations, a map showing the recent tectonic stress field, and the location of active faults and geothermal fields in China is presented. Data collected from 89 investigated prospects with geothermal manifestations indicate that the locations of geothermal fields are controlled by active faults and the recent tectonic stress field. About 68% of the prospects are controlled by tensional or tensional-shear faults. The angle between these faults and the direction of maximum compressive stress is less than 45°, and both tend to be parallel. About 15% of the prospects are controlled by conjugate faults. Another 14% are controlled by compressive-shear faults where the angle between these faults and the direction of maximum compressive stress is greater than 45°.

  13. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale faults or faults approximately parallel with the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures regardless of whether the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  14. Fault ride-through enhancement using an enhanced field oriented control technique for converters of grid connected DFIG and STATCOM for different types of faults.

    Science.gov (United States)

    Ananth, D V N; Nagesh Kumar, G V

    2016-05-01

    With the increase in electric power demand, transmission lines are forced to operate close to their full load and, owing to drastic changes in weather conditions, closer to their thermal limits, so the system operates with a reduced security margin. To meet the increased power demand, a doubly fed induction generator (DFIG) based wind generation system is a good alternative. For improving power flow capability and increasing security, a STATCOM can be adopted. As per modern grid rules, a DFIG needs to operate without losing synchronism during severe grid faults, a capability called low voltage ride through (LVRT). Hence, an enhanced field oriented control technique (EFOC) is adopted in the rotor side converter of the DFIG to improve power transfer and to improve dynamic and transient stability. A STATCOM is coordinated with the system to obtain better stability and enhanced operation during grid faults. In the EFOC technique, the rotor flux reference changes its value from synchronous speed to zero during the fault so that current is injected at the rotor slip frequency. In this process the DC-offset component of the flux and its decomposition during symmetrical and asymmetrical faults are controlled. The offset decomposition of flux is oscillatory in a conventional field oriented control, whereas EFOC aims to damp it quickly. This work mitigates voltage variations and limits surge currents to enhance the operation of the DFIG during symmetrical and asymmetrical faults. System performance was compared, with and without a STATCOM at the point of common coupling, for different types of faults (single line to ground, double line to ground and triple line to ground) with a very small fault resistance of 0.001 Ω. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Implementation of a microcomputer based distance relay for parallel transmission lines

    International Nuclear Information System (INIS)

    Phadke, A.G.; Jihuang, L.

    1986-01-01

    Distance relaying for parallel transmission lines is a difficult application problem with conventional phase and ground distance relays. It is known that for cross-country faults involving dissimilar phases and ground, three phase tripping may result. This paper summarizes a newly developed microcomputer based relay which is capable of classifying the cross-country fault correctly. The paper describes the principle of operation and results of laboratory tests of this relay

  16. Role of N-S strike-slip faulting in structuring of north-eastern Tunisia; geodynamic implications

    Science.gov (United States)

    Arfaoui, Aymen; Soumaya, Abdelkader; Ben Ayed, Noureddine; Delvaux, Damien; Ghanmi, Mohamed; Kadri, Ali; Zargouni, Fouad

    2017-05-01

    Three major compressional events characterized by folding, thrusting and strike-slip faulting occurred in the Eocene, Late Miocene and Quaternary along the NE Tunisian domain between Bou Kornine-Ressas-Msella and the Cap Bon Peninsula. During the Plio-Quaternary, the Grombalia and Mornag grabens show a maximum of collapse parallel to the NNW-SSE SHmax direction and developed as third-order distensive zones within a global compressional regime. Using existing tectonic and geophysical data supplemented by new fault-kinematic observations, we show that Cenozoic deformation of the Mesozoic sedimentary sequences is dominated by reactivation of first-order N-S faults; this sinistral wrench system is responsible for the formation of strike-slip duplexes, thrusts, folds and grabens. Following our new structural interpretation, the major faults of the N-S Axis, Bou Kornine-Ressas-Messella (MRB) and Hammamet-Korbous (HK), form an N-S first-order compressive relay within a left-lateral strike-slip duplex. The N-S master MRB fault is dominated by contractional imbricate fans, while the parallel HK fault is characterized by trailing extensional imbricate fans. The Eocene and Miocene compression phases in the study area caused sinistral strike-slip reactivation of pre-existing N-S faults, reverse reactivation of NE-SW trending faults and normal-oblique reactivation of NW-SE faults, creating a NE-SW to N-S trending system of east-verging folds and overlaps. Existing seismic tomography images suggest a key role for the lithospheric subvertical tear or STEP fault (Slab Transfer Edge Propagator) evidenced below this region in the development of the MRB and HK relay zone. The presence of extensive syntectonic Pliocene deposits on top of this crustal-scale fault may be the result of recent lithospheric vertical kinematics of this STEP fault, due to the rollback and lateral eastward migration of the Calabrian slab.

  17. Vibration characteristics of a hydraulic generator unit rotor system with parallel misalignment and rub-impact

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhiwei; Zhou, Jianzhong; Yang, Mengqi; Zhang, Yongchuan [Huazhong University of Science and Technology, College of Hydraulic and Digitalization Engineering, Wuhan, Hubei Province (China)

    2011-07-15

    This research addresses the rotor system of a hydraulic generator unit. To deal with local rubbing of the generator rotor caused by parallel misalignment and mass eccentricity, a dynamic model of the rotor system coupled with misalignment and rub-impact is established. The dynamic behaviors of this system are investigated using a numerical integration method as the parallel misalignment, mass eccentricity and bearing stiffness vary. The nonlinear dynamic responses of the generator rotor and turbine rotor with coupling faults are analyzed by means of bifurcation diagrams, Poincare maps, axis orbits, time histories and amplitude spectrum diagrams. Various nonlinear phenomena in the system, such as periodic, three-periodic and quasi-periodic motions, are studied as the parallel misalignment changes. The results reveal that the vibration characteristics of the rotor system with coupling faults are extremely complex and that there are some low frequencies with large amplitude in the 0.3-0.4x components. As the mass eccentricity increases, the interval of nonperiodic motions moves continuously forward. This suggests that reducing the mass eccentricity or increasing the bearing stiffness could preclude nonlinear vibration. These results might provide important theoretical references for safe operation and exact identification of faults in rotating machinery. (orig.)

  18. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error caused by algebraic manipulation. In turn, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code when errors are injected into the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with lower computational complexity and a low probability of masking give the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It is shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results show that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking
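
    The error masking probability can be made concrete with a brute-force computation on a toy code: an injected error e is masked when it maps a codeword onto another codeword, so for a linear code Q(e) is either 0 or 1 for every error, which is the sharply peaked distribution that security-oriented (robust) codes try to flatten. The sketch below uses a small single-parity-check code as an example; the specific codes and encoding functions analyzed in the paper are not reproduced.

      from itertools import product

      def masking_probability(codewords, error):
          """Fraction of codewords c for which c XOR error is again a codeword
          (i.e., the injected error is masked and stays undetected)."""
          cw = set(codewords)
          masked = sum((c ^ error) in cw for c in cw)
          return masked / len(cw)

      def linear_code(gen_rows):
          """All codewords generated by XOR-combinations of the generator rows."""
          words = set()
          for bits in product([0, 1], repeat=len(gen_rows)):
              w = 0
              for b, row in zip(bits, gen_rows):
                  if b:
                      w ^= row
              words.add(w)
          return words

      if __name__ == "__main__":
          n = 4                                   # [4,3] single-parity-check code
          code = linear_code([0b1001, 0b0101, 0b0011])
          dist = {}
          for e in range(1, 2**n):                # every nonzero error pattern
              q = masking_probability(code, e)
              dist[q] = dist.get(q, 0) + 1
          print(dist)   # for a linear code, each error is masked with probability 0 or 1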

  19. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    Full Text Available The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidates efficiently based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain (the heat control unit of a spacecraft), where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
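
    What a strong-fault model buys can be seen on a tiny example: each component has a normal mode and explicit fault modes with their own behavior, and a diagnosis is a minimal assignment of fault modes consistent with the observation. The brute-force sketch below illustrates only that notion of consistency-based diagnosis with fault modes; it uses invented component behaviors and none of the paper's LTMS, CNF encoding or search strategies.

      from itertools import product

      # Strong-fault model of two inverters in series: each component has a normal
      # mode and explicit fault modes with their own behavior (invented for illustration).
      MODES = {
          "ok":     lambda x: 1 - x,
          "stuck0": lambda x: 0,
          "stuck1": lambda x: 1,
      }
      COMPONENTS = ["inv1", "inv2"]

      def simulate(assignment, inp):
          mid = MODES[assignment["inv1"]](inp)
          return MODES[assignment["inv2"]](mid)

      def diagnoses(inp, observed_out):
          """All mode assignments consistent with the observation, reduced to the
          minimal ones (no consistent assignment with strictly fewer faults)."""
          consistent = []
          for modes in product(MODES, repeat=len(COMPONENTS)):
              assignment = dict(zip(COMPONENTS, modes))
              if simulate(assignment, inp) == observed_out:
                  faulty = {c: m for c, m in assignment.items() if m != "ok"}
                  consistent.append(faulty)
          return [d for d in consistent
                  if not any(set(e.items()) < set(d.items()) for e in consistent)]

      if __name__ == "__main__":
          # Input 0 should give output 0 if everything is healthy; we observe 1.
          print(diagnoses(inp=0, observed_out=1))
          # -> [{'inv2': 'stuck1'}, {'inv1': 'stuck0'}]  (the fault mode matters, not just "faulty")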

  20. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidates efficiently based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain (the heat control unit of a spacecraft), where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  1. The accommodation of relative motion at depth on the San Andreas fault system in California

    Science.gov (United States)

    Prescott, W. H.; Nur, A.

    1981-01-01

    Plate motion below the seismogenic layer along the San Andreas fault system in California is assumed to occur either by aseismic slip along a deeper extension of the fault or by laterally distributed deformation below the seismogenic layer. The shallow depth of California earthquakes, the depth of coseismic slip during the 1906 San Francisco earthquake, and the presence of widely separated parallel faults indicate that relative motion is distributed below the seismogenic zone, occurring by inelastic flow rather than by aseismic slip on discrete fault planes.

  2. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  3. Fault and joint geometry at Raft River Geothermal Area, Idaho

    Science.gov (United States)

    Guth, L. R.; Bruhn, R. L.; Beck, S. L.

    1981-07-01

    Raft River geothermal reservoir is formed by fractures in sedimentary strata of the Miocene and Pliocene Salt Lake Formation. The fracturing is most intense at the base of the Salt Lake Formation, along a decollement that dips eastward at less than 5° on top of metamorphosed Precambrian and lower Paleozoic rocks. Core taken from less than 200 m above the decollement contains two sets of normal faults. The major set of faults dips between 50° and 70°. These faults occur as conjugate pairs that are bisected by vertical extension fractures. The second set of faults dips 10° to 20° and may parallel part of the basal decollement or reflect the presence of listric normal faults in the upper plate. Surface joints form two suborthogonal sets that dip vertically. East-northeast-striking joints are most frequent on the limbs of the Jim Sage anticline, a large fold that is associated with the geothermal field.

  4. Dependability evaluation of computing systems - physical faults, design faults, malicious faults

    International Nuclear Information System (INIS)

    Kaaniche, Mohamed

    1999-01-01

    The research summarized in this report focuses on the dependability of computer systems. It addresses several complementary, theoretical as well as experimental, issues that are grouped into four topics. The first topic concerns the definition of efficient methods that aim to assist the users in the construction and validation of complex dependability analysis and evaluation models. The second topic deals with the modeling of reliability and availability growth that mainly result from the progressive removal of design faults. A method is also defined to support the application of software reliability evaluation studies in an industrial context. The third topic deals with the development and experimentation of a new approach for the quantitative evaluation of operational security. This approach aims to assist the system administrators in the monitoring of operational security, when modifications, that are likely to introduce new vulnerabilities, occur in the system configuration, the applications, the user behavior, etc. Finally, the fourth topic addresses: a) the definition of a development model focused at the production of dependable systems, and b) the development of assessment criteria to obtain justified confidence that a system will achieve, during its operation and up to its decommissioning, its dependability objectives. (author) [fr

  5. QR-decomposition based SENSE reconstruction using parallel architecture.

    Science.gov (United States)

    Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad

    2018-04-01

    Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single and multicore CPU using openMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
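
    The per-pixel linear systems that SENSE inverts can be illustrated for a twofold-accelerated Cartesian acquisition: each aliased pixel is a superposition of two true pixels weighted by the coil sensitivities, and the small encoding matrix is solved here with a QR factorization, as in the paper, but serially in NumPy. All data below are synthetic; the GPU kernel organization, regularization and actual k-space handling are not shown.

      import numpy as np

      rng = np.random.default_rng(0)
      ny, nx, ncoils, R = 64, 64, 8, 2          # image size, coils, acceleration factor

      # Synthetic ground-truth image and smooth coil sensitivity maps (assumed known)
      truth = np.zeros((ny, nx))
      truth[16:48, 20:44] = 1.0
      yy, xx = np.mgrid[0:ny, 0:nx]
      sens = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2) / 800.0)
                       for cy, cx in rng.integers(0, ny, size=(ncoils, 2))])

      # R = 2 undersampling along y folds pixel (y, x) onto (y + ny/2, x)
      folded = sens[:, :ny // R, :] * truth[:ny // R, :] + \
               sens[:, ny // R:, :] * truth[ny // R:, :]      # aliased coil images

      recon = np.zeros((ny, nx))
      for y in range(ny // R):
          for x in range(nx):
              C = np.stack([sens[:, y, x], sens[:, y + ny // R, x]], axis=1)  # ncoils x R
              b = folded[:, y, x]
              Q, Rmat = np.linalg.qr(C)                 # QR-based solve of C rho = b
              rho = np.linalg.solve(Rmat, Q.conj().T @ b)
              recon[y, x], recon[y + ny // R, x] = rho

      print("max reconstruction error:", np.abs(recon - truth).max())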

  6. Tsunamis effects at coastal sites due to offshore faulting

    International Nuclear Information System (INIS)

    Miloh, T.; Striem, H.L.

    1976-07-01

    Unusual waves (tsunamis) triggered by submarine tectonic activity, such as a fault displacement in the sea bottom, may have considerable effect on a coastal site. The possibility of such phenomena occurring at the southern coast of Israel, due to a series of shore-parallel faults about twenty kilometers offshore, is examined in this paper. The analysis relates the energy or momentum imparted to the body of water by a fault displacement of the sea bottom to the energy or momentum of the water waves thus created. The faults off the Ashdod coast may cause surface waves with amplitudes of about five metres and periods of about one third of an hour. It is also considered that, because of the downward movement of the faulted blocks, a recession of the sea level rather than a flooding would be the first and predominant effect at the shore, and this is in agreement with some historical reports. The analysis presented here might be of interest to those designing coastal power plants. (author)

  7. Factors for simultaneous rupture assessment of active fault. Part 1. Fault geometry and slip-distribution based on tectonic geomorphological and paleoseismological investigations

    International Nuclear Information System (INIS)

    Sasaki, Toshinori; Ueta, Keiichi

    2012-01-01

    It is important to evaluate the magnitude of an earthquake caused by multiple active faults, taking into account simultaneous rupture effects. The simultaneity of adjacent active faults is often decided on the basis of geometric distance, except in cases where the paleoseismic records of these faults are well known. We have been studying the step area between the Nukumi fault and the Neodani fault, which appeared as consecutive ruptures in the 1891 Nobi earthquake, since 2009. The purpose of this study is to establish a new technique for evaluating the simultaneity of adjacent active faults, in addition to the techniques based on paleoseismic records and geometric distance. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault by high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and location surveys. The topographic survey revealed continuous tectonic landforms, namely left-lateral displacement of ridge and valley lines and reverse scarplets, along these faults in densely vegetated areas. We have found several new outcrops in this area where the surface ruptures of the 1891 Nobi earthquake had not previously been known. At several outcrops, a humic layer dated by 14C to between the 14th and 19th centuries is deformed by the active fault. We conclude that the surface rupture of the Nukumi fault in the 1891 Nobi earthquake is continuous to 12 km southeast of Nukumi village. In other words, these findings indicate that there is a 10-12 km parallel overlap zone between the surface rupture of the southeastern end of the Nukumi fault and the northwestern end of the Neodani fault. (author)

  8. Index for simultaneous rupture assessment of active faults. Part 3. Subsurface structure deduced from geophysical research

    International Nuclear Information System (INIS)

    Aoyagi, Yasuhira

    2012-01-01

    Tomographic inversion was carried out in the northern source region of the 1891 Nobi earthquake, the largest inland earthquake (M8.0) in Japan, to detect the subsurface structures that control simultaneous rupture of the active fault system. In the step-over between the two fault segments that ruptured in 1891, a remarkable low velocity zone is found between the Nukumi and Ibigawa faults at depths shallower than 3-5 km. The low velocity zone forms a prism-like body that narrows downward. Hypocenters below the low velocity zone connecting the two ruptured segments indicate the possibility of their convergence within the seismogenic zone. The northern tip of the Neodani fault is located in the low velocity zone. The results show that fault rupture propagates easily within a low velocity zone between two parallel faults. In contrast, an E-W cross-structure is found at seismogenic depth between the source regions of the Nobi earthquake and the 1948 Fukui earthquake (M7.1). It runs parallel to the Hida gaien belt, a major geologic structure in the district. P-wave velocity is lower and the hypocenter depths are clearly shallower north of the cross-structure. Since a few faults lie in an E-W direction just above it, a cross-structure zone including the Hida gaien belt might terminate fault rupture. The results indicate that fault rupture is difficult to propagate beyond a major cross-structure. The length ratio of cross-structure to fault segment (PL/FL) is proposed for use in simultaneous rupture assessment. Some examples show that fault ruptures perhaps (PL/FL>3-4), maybe (∼1), and probably (<1) cut through such cross-structures. (author)

  9. A Fault Oblivious Extreme-Scale Execution Environment

    Energy Technology Data Exchange (ETDEWEB)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today’s machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application tailored OS services optimized for multi → many core processors. We developed a new operating system NIX that supports role-based allocation of cores to processes which was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on distributed, fault tolerant key-value store and identified scaling issues. A second fault tolerant task parallel library was developed, based on the Linda tuple space model, that used low level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations

  10. Biometrics based key management of double random phase encoding scheme using error control codes

    Science.gov (United States)

    Saini, Nirmala; Sinha, Aloka

    2013-08-01

    In this paper, an optical security system has been proposed in which the key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to biometric variation is corrected by encoding the key using a BCH code. A user-specific shuffling key is used to increase the separation between the genuine and impostor Hamming distance distributions. This shuffling key is then further secured using RSA public key encryption to enhance the security of the system. An XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA-encoded shuffling key and the data obtained from the XOR operation are stored in a token. The main advantage of the present technique is that key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user, which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process using the live biometrics of the user.
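
    The key-binding step can be sketched as a fuzzy-commitment construction: the key is expanded with an error-correcting code, XORed with the enrolled biometric feature vector, and only that XOR is stored; at retrieval a fresh, slightly different feature vector is XORed back and decoding removes the biometric noise. In the sketch below a 3-fold repetition code stands in for the BCH code, and the shuffling key and RSA layers are omitted.

      import secrets

      def repeat_encode(bits, r=3):
          return [b for b in bits for _ in range(r)]          # stand-in for BCH encoding

      def repeat_decode(bits, r=3):
          return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

      def xor(a, b):
          return [x ^ y for x, y in zip(a, b)]

      def enroll(key_bits, biometric_bits):
          """Store only the XOR of the encoded key with the enrolled template."""
          return xor(repeat_encode(key_bits), biometric_bits)

      def retrieve(stored, fresh_biometric_bits):
          """Recover the key from a noisy re-acquisition of the biometric."""
          return repeat_decode(xor(stored, fresh_biometric_bits))

      if __name__ == "__main__":
          key = [secrets.randbelow(2) for _ in range(16)]
          template = [secrets.randbelow(2) for _ in range(48)]        # enrolled features
          stored = enroll(key, template)

          noisy = template.copy()
          noisy[5] ^= 1
          noisy[20] ^= 1                                              # small biometric variation
          assert retrieve(stored, noisy) == key
          print("key recovered despite biometric variation")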

  11. Development Ground Fault Detecting System for D.C Voltage Line

    Energy Technology Data Exchange (ETDEWEB)

    Kim Taek Soo; Song Ung Il; Gwon, Young Dong; Lee Hyoung Kee [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1996-12-31

    It is necessary to ensure reliability and to maximize maintenance efficiency by promptly detecting a D.C. feeder ground fault point at existing or under-construction power plants. At present, most power plants have ground fault indicator lamps installed in the monitoring room. If a ground fault occurs on a D.C. voltage feeder, the current through the ground fault relay is adjusted and the lamps light while current flows through the relay coil. In order to develop such a system, a D.C. feeder ground circuit was analyzed theoretically, and principles were studied for determining the ground fault point and for discriminating the polarity and phase of the line. The system developed on these principles can compute the resistive ground fault current and the capacitive ground fault current. It is shown that the system can detect a ground fault point or a badly insulated line by measuring the insulation resistance of the power plant D.C. feeder without interrupting the power supply, and therefore the power plant can be protected against unexpected service interruptions. (author). 18 refs., figs.

  12. Fault-tolerant linear optical quantum computing with small-amplitude coherent States.

    Science.gov (United States)

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads but has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that using error correction only small amplitudes (α > 1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The first encoding level resources are orders of magnitude lower than the best single photon scheme.

  13. Novel encoding and updating of positional, or directional, spatial cues are processed by distinct hippocampal subfields: Evidence for parallel information processing and the "what" stream.

    Science.gov (United States)

    Hoang, Thu-Huong; Aliane, Verena; Manahan-Vaughan, Denise

    2018-05-01

    The specific roles of hippocampal subfields in spatial information processing and encoding are, as yet, unclear. The parallel map theory postulates that whereas the CA1 processes discrete environmental features (positional cues used to generate a "sketch map"), the dentate gyrus (DG) processes large navigation-relevant landmarks (directional cues used to generate a "bearing map"). Additionally, the two-streams hypothesis suggests that hippocampal subfields engage in differentiated processing of information from the "where" and the "what" streams. We investigated these hypotheses by analyzing the effect of exploration of discrete "positional" features and large "directional" spatial landmarks on hippocampal neuronal activity in rats. As an indicator of neuronal activity we measured the mRNA induction of the immediate early genes (IEGs), Arc and Homer1a. We observed an increase of this IEG mRNA in CA1 neurons of the distal neuronal compartment and in proximal CA3, after novel spatial exploration of discrete positional cues, whereas novel exploration of directional cues led to increases in IEG mRNA in the lower blade of the DG and in proximal CA3. Strikingly, the CA1 did not respond to directional cues and the DG did not respond to positional cues. Our data provide evidence for both the parallel map theory and the two-streams hypothesis and suggest a precise compartmentalization of the encoding and processing of "what" and "where" information occurs within the hippocampal subfields. © 2018 The Authors. Hippocampus Published by Wiley Periodicals, Inc.

  14. Encoding qubits into oscillators with atomic ensembles and squeezed light

    Science.gov (United States)

    Motes, Keith R.; Baragiola, Ben Q.; Gilchrist, Alexei; Menicucci, Nicolas C.

    2017-05-01

    The Gottesman-Kitaev-Preskill (GKP) encoding of a qubit within an oscillator provides a number of advantages when used in a fault-tolerant architecture for quantum computing, most notably that Gaussian operations suffice to implement all single- and two-qubit Clifford gates. The main drawback of the encoding is that the logical states themselves are challenging to produce. Here we present a method for generating optical GKP-encoded qubits by coupling an atomic ensemble to a squeezed state of light. Particular outcomes of a subsequent spin measurement of the ensemble herald successful generation of the resource state in the optical mode. We analyze the method in terms of the resources required (total spin and amount of squeezing) and the probability of success. We propose a physical implementation using a Faraday-based quantum nondemolition interaction.

  15. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability. To be accurate, simulators must employ multi-valued logic to represent unknown signal values, impedance states and signal transitions; model circuit delays such as transport, rise/fall and inertial delays; and handle the relevant fault modes. Of the three basic fault simulation techniques now in use (parallel, deductive and concurrent), concurrent fault simulation appears most promising.
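
    Parallel fault simulation itself can be sketched compactly: bit 0 of every machine word carries the fault-free circuit and each remaining bit carries one faulty copy, so a single pass of bitwise gate evaluations simulates all machines at once. The toy netlist and fault list below are invented, and the multi-valued logic, delay models and other fault modes discussed above are not modeled (two-valued, single stuck-at only).

      # Bit-parallel (word-level) fault simulation for single stuck-at faults.
      # Bit 0 of every word is the fault-free machine; bit i is the circuit with fault i.

      FAULTS = [("a", 0), ("b", 1), ("n1", 0), ("out", 1)]   # (line, stuck-at value)
      WIDTH = len(FAULTS) + 1
      ALL = (1 << WIDTH) - 1

      def inject(line, word):
          """Force the stuck-at value on `line` in the bit positions of its faults."""
          for i, (fline, sval) in enumerate(FAULTS, start=1):
              if fline == line:
                  word = (word | (1 << i)) if sval else (word & ~(1 << i))
          return word

      def simulate(a, b, c):
          """Return the primary-output word for one input pattern (a, b, c in {0, 1})."""
          va = inject("a", ALL if a else 0)
          vb = inject("b", ALL if b else 0)
          vc = inject("c", ALL if c else 0)
          n1 = inject("n1", va & vb)          # AND gate, all machines at once
          out = inject("out", n1 | vc)        # OR gate
          return out

      def detected(out_word):
          """A fault is detected if its bit differs from the fault-free bit 0."""
          good = -(out_word & 1) & ALL        # replicate bit 0 across the word
          return [FAULTS[i - 1] for i in range(1, WIDTH) if ((out_word ^ good) >> i) & 1]

      if __name__ == "__main__":
          for pattern in [(1, 1, 0), (0, 0, 0), (0, 0, 1)]:
              print(pattern, "detects", detected(simulate(*pattern)))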

  16. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    Science.gov (United States)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  17. Secure it now or secure it later: the benefits of addressing cyber-security from the outset

    Science.gov (United States)

    Olama, Mohammed M.; Nutaro, James

    2013-05-01

    The majority of funding for research and development (R&D) in cyber-security is focused on the end of the software lifecycle, where systems have been deployed or are nearing deployment. Recruiting of cyber-security personnel is similarly focused on end-of-life expertise. By emphasizing cyber-security at these late stages, security problems are found and corrected when it is most expensive to do so, thus increasing the cost of owning and operating complex software systems. Worse, expenditures on expensive security measures often mean less money for innovative developments. These unwanted increases in cost and potential slowing of innovation are unavoidable consequences of an approach to security that finds and remediates faults after software has been implemented. We argue that software security can be improved and the total cost of a software system can be substantially reduced by an appropriate allocation of resources to the early stages of a software project. By adopting a similar allocation of R&D funds to the early stages of the software lifecycle, we propose that the costs of cyber-security can be better controlled and, consequently, the positive effects of this R&D on industry will be much more pronounced.

  18. Latency Performance of Encoding with Random Linear Network Coding

    DEFF Research Database (Denmark)

    Nielsen, Lars; Hansen, René Rydhof; Lucani Rötter, Daniel Enrique

    2018-01-01

    the encoding process can be parallelized based on system requirements to reduce data access time within the system. Using a counting argument, we focus on predicting the effect of changes of generation (number of original packets) and symbol size (number of bytes per data packet) configurations on the encoding...... latency on full vector and on-the-fly algorithms. We show that the encoding latency doubles when either the generation size or the symbol size double and confirm this via extensive simulations. Although we show that the theoretical speed gain of on-the-fly over full vector is two, our measurements show...
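
    As a concrete illustration of full-vector encoding, the sketch below builds one coded packet over GF(2) as the XOR of the original packets selected by a random coding vector; the generation size and symbol size are the two knobs whose effect on encoding latency the paper studies. The values used are illustrative, and neither the paper's measured implementation nor its on-the-fly variant is reproduced here.

    ```python
    # Minimal full-vector RLNC encoding sketch over GF(2) (coefficients are 0/1,
    # addition is XOR). Real deployments often use GF(2^8) instead.
    import os
    import random

    def rlnc_encode(generation, symbol_size):
        """Produce one coded packet (coding vector, payload) for a generation."""
        coding_vector = [random.randint(0, 1) for _ in generation]
        payload = bytearray(symbol_size)
        for coeff, packet in zip(coding_vector, generation):
            if coeff:                                  # include this original packet
                for i in range(symbol_size):
                    payload[i] ^= packet[i]            # GF(2) addition is XOR
        return coding_vector, bytes(payload)

    generation_size, symbol_size = 16, 1024            # packets per generation, bytes per packet
    generation = [os.urandom(symbol_size) for _ in range(generation_size)]
    vec, coded = rlnc_encode(generation, symbol_size)
    print(len(vec), len(coded))                         # 16 1024
    ```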

  19. Optical Encoding Technology for Viral Screening Panels Final Report CRADA No TC02132.0

    Energy Technology Data Exchange (ETDEWEB)

    Lenhoff, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haushalter, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC, Lawrence Livermore National Laboratory (LLNL) and Parallel Synthesis Technologies, Inc. (PSTI), to develop Optical Encoding Technology for Viral Screening Panels. The goal for this effort was to prepare a portable bead reader system that would enable the development of viral and bacterial screening panels which could be used for the detection of any desired set of bacteria or viruses in any location. The main objective was to determine if the combination of a bead-based, PCR suspension array technology, formulated from Parallume encoded beads and PSTI’s multiplex assay reader system (MARS), could provide advantages in terms of the number of simultaneously measured samples, portability, ruggedness, ease of use, accuracy, precision or cost as compared to the Luminex-based system developed at LLNL. The project underwent several no-cost extensions; however, the overall goal of demonstrating the utility of this new system was achieved. As a result of the project, a significant change to the type of bead PSTI used for the suspension system was implemented, allowing better performance than the commercial Luminex system.

  20. Using the GeoFEST Faulted Region Simulation System

    Science.gov (United States)

    Parker, Jay W.; Lyzenga, Gregory A.; Donnellan, Andrea; Judd, Michele A.; Norton, Charles D.; Baker, Teresa; Tisdale, Edwin R.; Li, Peggy

    2004-01-01

    GeoFEST (the Geophysical Finite Element Simulation Tool) simulates stress evolution, fault slip and plastic/elastic processes in realistic materials, and so is suitable for earthquake cycle studies in regions such as Southern California. Many new capabilities and means of access for GeoFEST are now supported. New abilities include MPI-based cluster parallel computing using automatic PYRAMID/Parmetis-based mesh partitioning, automatic mesh generation for layered media with rectangular faults, and results visualization that is integrated with remote sensing data. The parallel GeoFEST application has been successfully run on over a half-dozen computers, including Intel Xeon clusters, Itanium II and Altix machines, and the Apple G5 cluster. It is not separately optimized for different machines, but relies on good domain partitioning for load balance and low communication, and careful writing of the parallel diagonally preconditioned conjugate gradient solver to keep communication overhead low. Demonstrated thousand-step solutions for over a million finite elements on 64 processors require under three hours, and scaling tests show high efficiency when using more than (order of) 4000 elements per processor. The source code and documentation for GeoFEST are available at no cost from Open Channel Foundation. In addition, GeoFEST may be used through a browser-based portal environment available to approved users. That environment includes semi-automated geometry creation and mesh generation tools, GeoFEST, and RIVA-based visualization tools that include the ability to generate a flyover animation showing deformations and topography. Work is in progress to support simulation of a region with several faults using 16 million elements, using a strain energy metric to adapt the mesh to faithfully represent the solution in a region of widely varying strain.
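
    The solver referred to is a diagonally preconditioned conjugate gradient method. The serial NumPy sketch below shows the corresponding Jacobi-preconditioned CG iteration on a tiny symmetric positive-definite system; it is illustrative only, not GeoFEST's parallel, domain-partitioned implementation.

    ```python
    # Jacobi (diagonal) preconditioned conjugate gradient sketch.
    import numpy as np

    def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
        x = np.zeros_like(b)
        M_inv = 1.0 / np.diag(A)          # diagonal (Jacobi) preconditioner
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
    b = np.array([1.0, 2.0])
    print(jacobi_pcg(A, b))                   # approx. [0.0909, 0.6364]
    ```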

  1. Fault Diagnosis of Motor Bearing by Analyzing a Video Clip

    Directory of Open Access Journals (Sweden)

    Siliang Lu

    2016-01-01

    Full Text Available Conventional bearing fault diagnosis methods require specialized instruments to acquire signals that can reflect the health condition of the bearing. For instance, an accelerometer is used to acquire vibration signals, whereas an encoder is used to measure motor shaft speed. This study proposes a new method for simplifying the instruments for motor bearing fault diagnosis. Specifically, a video clip recording of a running bearing system is captured using a cellphone that is equipped with a camera and a microphone. The recorded video is subsequently analyzed to obtain the instantaneous frequency of rotation (IFR. The instantaneous fault characteristic frequency (IFCF of the defective bearing is obtained by analyzing the sound signal that is recorded by the microphone. The fault characteristic order is calculated by dividing IFCF by IFR to identify the fault type of the bearing. The effectiveness and robustness of the proposed method are verified by a series of experiments. This study provides a simple, flexible, and effective solution for motor bearing fault diagnosis. Given that the signals are gathered using an affordable and accessible cellphone, the proposed method is proven suitable for diagnosing the health conditions of bearing systems that are located in remote areas where specialized instruments are unavailable or limited.
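
    The fault-typing step described, dividing the instantaneous fault characteristic frequency (IFCF) by the instantaneous frequency of rotation (IFR) and matching the resulting order against the bearing's theoretical defect orders, can be sketched as follows. The order values, tolerance and input frequencies are illustrative placeholders rather than values from the paper.

    ```python
    # Classify a bearing fault from its fault characteristic order (IFCF / IFR).
    def classify_bearing_fault(ifcf_hz, ifr_hz, tolerance=0.05):
        theoretical_orders = {"outer race": 3.57, "inner race": 5.43, "ball": 4.64}
        order = ifcf_hz / ifr_hz                      # fault characteristic order
        for fault_type, ref in theoretical_orders.items():
            if abs(order - ref) / ref < tolerance:    # within a relative tolerance
                return fault_type, order
        return "unknown", order

    print(classify_bearing_fault(ifcf_hz=107.1, ifr_hz=30.0))   # -> ('outer race', ~3.57)
    ```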

  2. Detecting and diagnosing SSME faults using an autoassociative neural network topology

    Science.gov (United States)

    Ali, M.; Dietz, W. E.; Kiech, E. L.

    1989-01-01

    An effort is underway at the University of Tennessee Space Institute to develop diagnostic expert system methodologies based on the analysis of patterns of behavior of physical mechanisms. In this approach, fault diagnosis is conceptualized as the mapping or association of patterns of sensor data to patterns representing fault conditions. Neural networks are being investigated as a means of storing and retrieving fault scenarios. Neural networks offer several powerful features in fault diagnosis, including (1) general pattern matching capabilities, (2) resistance to noisy input data, (3) the ability to be trained by example, and (4) the potential for implementation on parallel computer architectures. This paper presents (1) an autoassociative neural network topology, i.e. the network input and output are identical when properly trained, and hence learning is unsupervised; (2) the training regimen used; and (3) the response of the system to inputs representing both previously observed and unknown fault scenarios. The effects of noise on the integrity of the diagnosis are also evaluated.
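
    The idea of an autoassociative (input equals target) network flagging unfamiliar patterns by their reconstruction error can be sketched as follows, here using scikit-learn's MLPRegressor with a bottleneck hidden layer as a stand-in for the paper's network. The synthetic sensor data, network size and the comparison shown are illustrative assumptions.

    ```python
    # Autoassociative fault detection sketch: train on nominal patterns only,
    # then flag inputs whose reconstruction error is unusually large.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 2))                      # nominal behaviour on a low-dimensional manifold
    mixing = rng.normal(size=(2, 8))
    nominal = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

    # Autoassociative training: the network reproduces its own input, so no fault labels are needed.
    net = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
    net.fit(nominal, nominal)

    def reconstruction_error(pattern):
        recon = net.predict(pattern.reshape(1, -1))[0]
        return float(np.linalg.norm(pattern - recon))

    normal_err = reconstruction_error(nominal[0])
    fault_err = reconstruction_error(nominal[0] + 5.0)      # pattern unlike anything seen in training
    print(normal_err, fault_err)                            # the faulted pattern gives the larger error
    ```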

  3. Security Situation Assessment of All-Optical Network Based on Evidential Reasoning Rule

    Directory of Open Access Journals (Sweden)

    Zhong-Nan Zhao

    2016-01-01

    Full Text Available It is important to determine the security situation of the all-optical network (AON), which is more vulnerable to hacker attacks and faults than other networks in some cases. A new approach to the security situation assessment of the all-optical network is developed in this paper. In the new assessment approach, the evidential reasoning (ER) rule is used to integrate various items of evidence on the security factors, including the optical faults and the special attacks in the AON. Furthermore, a new quantification method of the security situation is also proposed. A case study of an all-optical network is conducted to demonstrate the effectiveness and the practicability of the newly proposed approach.

  4. Paleomagnetic and structural evidence for oblique slip in a fault-related fold, Grayback monocline, Colorado

    Science.gov (United States)

    Tetreault, J.; Jones, C.H.; Erslev, E.; Larson, S.; Hudson, M.; Holdaway, S.

    2008-01-01

    Significant fold-axis-parallel slip is accommodated in the folded strata of the Grayback monocline, northeastern Front Range, Colorado, without visible large strike-slip displacement on the fold surface. In many cases, oblique-slip deformation is partitioned; fold-axis-normal slip is accommodated within folds, and fold-axis-parallel slip is resolved onto adjacent strike-slip faults. Unlike partitioning strike-parallel slip onto adjacent strike-slip faults, fold-axis-parallel slip has deformed the forelimb of the Grayback monocline. Mean compressive paleostress orientations in the forelimb are deflected 15°-37° clockwise from the regional paleostress orientation of the northeastern Front Range. Paleomagnetic directions from the Permian Ingleside Formation in the forelimb are rotated 16°-42° clockwise about a bedding-normal axis relative to the North American Permian reference direction. The paleostress and paleomagnetic rotations increase with the bedding dip angle and decrease along strike toward the fold tip. These measurements allow for 50-120 m of fold-axis-parallel slip within the forelimb, depending on the kinematics of strike-slip shear. This resolved horizontal slip is nearly equal in magnitude to the ~180 m vertical throw across the fold. For 200 m of oblique-slip displacement (120 m of strike slip and 180 m of reverse slip), the true shortening direction across the fold is N90°E, indistinguishable from the regionally inferred direction of N90°E and quite different from the S53°E fold-normal direction. Recognition of this deformational style means that significant amounts of strike slip can be accommodated within folds without axis-parallel surficial faulting. © 2008 Geological Society of America.

  5. Three-Dimensional Growth of Flexural Slip Fault-Bend and Fault-Propagation Folds and Their Geomorphic Expression

    Directory of Open Access Journals (Sweden)

    Asdrúbal Bernal

    2018-03-01

    latter mode of fold growth may be more common. The advective component of deformation (implicit in kink-band migration models of fault-bend and fault-propagation folding) exerts a strong control on drainage basin development. In particular, as drainage lengthens with fold growth, more linear, parallel drainage networks are developed as compared to the dendritic patterns developed above simple uplifting structures. Over the 1 Ma of their development the folds modelled here only attain partial topographic equilibrium, as new material is continually being advected through active axial surfaces on both fold limbs and faults are propagating in both the transport and strike directions. We also find that the position of drainage divides at the Earth’s surface has a complex relationship to the underlying fold axial surface locations.

  6. Fault zone architecture of a major oblique-slip fault in the Rawil depression, Western Helvetic nappes, Switzerland

    Science.gov (United States)

    Gasser, D.; Mancktelow, N. S.

    2009-04-01

    The Helvetic nappes in the Swiss Alps form a classic fold-and-thrust belt related to overall NNW-directed transport. In western Switzerland, the plunge of nappe fold axes and the regional distribution of units define a broad depression, the Rawil depression, between the culminations of Aiguilles Rouge massif to the SW and Aar massif to the NE. A compilation of data from the literature establishes that, in addition to thrusts related to nappe stacking, the Rawil depression is cross-cut by four sets of brittle faults: (1) SW-NE striking normal faults that strike parallel to the regional fold axis trend, (2) NW-SE striking normal faults and joints that strike perpendicular to the regional fold axis trend, and (3) WNW-ESE striking normal plus dextral oblique-slip faults as well as (4) WSW-ENE striking normal plus dextral oblique-slip faults that both strike oblique to the regional fold axis trend. We studied in detail a beautifully exposed fault from set 3, the Rezli fault zone (RFZ) in the central Wildhorn nappe. The RFZ is a shallow to moderately-dipping (ca. 30-60˚) fault zone with an oblique-slip displacement vector, combining both dextral and normal components. It must have formed in approximately this orientation, because the local orientation of fold axes corresponds to the regional one, as does the generally vertical orientation of extensional joints and veins associated with the regional fault set 2. The fault zone crosscuts four different lithologies: limestone, intercalated marl and limestone, marl and sandstone, and it has a maximum horizontal dextral offset component of ~300 m and a maximum vertical normal offset component of ~200 m. Its internal architecture strongly depends on the lithology in which it developed. In the limestone, it consists of veins, stylolites, cataclasites and cemented gouge, in the intercalated marls and limestones of anastomosing shear zones, brittle fractures, veins and folds, in the marls of anastomosing shear zones, pressure

  7. A New Method of Improving Transformer Restricted Earth Fault Protection

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2014-08-01

    Full Text Available A new method of avoiding malfunctioning of the transformer restricted earth fault (REF) protection is presented. Application of the proposed method would eliminate unnecessary operation of REF protection in the case of faults outside the protected zone of a transformer or a magnetizing inrush accompanied by current transformer (CT) saturation. On the basis of laboratory measurements and simulations, the paper presents a detailed performance assessment of the proposed method, which is based on a digital phase comparator. The obtained results show that the new method was stable and precise for all tested faults and that its application would allow a clear and precise distinction to be made between an internal fault and: (i) an external fault or (ii) a magnetizing inrush. The proposed method would improve the performance of REF protection and reduce the probability of maloperation due to CT saturation. The new method is robust and characterized by high speed of operation and high reliability and security.

  8. Which Fault Orientations Occur during Oblique Rifting? Combining Analog and Numerical 3d Models with Observations from the Gulf of Aden

    Science.gov (United States)

    Autin, J.; Brune, S.

    2013-12-01

    Oblique rift systems like the Gulf of Aden are intrinsically three-dimensional. In order to understand the evolution of these systems, one has to decode the fundamental mechanical similarities of oblique rifts. One way to accomplish this is to strip away the complexity that is generated by inherited fault structures. In doing so, we assume a laterally homogeneous segment of Earth's lithosphere and ask how many different fault populations are generated during oblique extension between initial deformation and final break-up. We combine results of an analog and a numerical model that feature a 3D segment of a layered lithosphere. In both cases, rift evolution is recorded quantitatively in terms of crustal fault geometries. For the numerical model, we adopt a novel post-processing method that allows us to infer small-scale crustal fault orientation from the surface stress tensor. Both models involve an angle of 40 degrees between the rift normal and the extensional direction, which allows comparison to the Gulf of Aden rift system. The resulting spatio-temporal fault pattern of our models shows three normal fault orientations: rift-parallel, extension-orthogonal, and intermediate, i.e. with a direction in between the two previous orientations. The rift evolution involves three distinct phases: (i) During the initial rift phase, wide-spread faulting with intermediate orientation occurs. (ii) Advanced lithospheric necking enables rift-parallel normal faulting at the rift flanks, while strike-slip faulting in the central part of the rift system indicates strain partitioning. (iii) During continental break-up, displacement-orthogonal as well as intermediate faults occur. We compare our results to the structural evolution of the Eastern Gulf of Aden. External parts of the rift exhibit intermediate and displacement-orthogonal faults while rift-parallel faults are present at the rift borders. The ocean-continent transition mainly features intermediate and displacement

  9. Fethiye-Burdur Fault Zone (SW Turkey): a myth?

    Science.gov (United States)

    Kaymakci, Nuretdin; Langereis, Cornelis; Özkaptan, Murat; Özacar, Arda A.; Gülyüz, Erhan; Uzel, Bora; Sözbilir, Hasan

    2017-04-01

    The Fethiye Burdur Fault Zone (FBFZ) was first proposed by Dumont et al. (1979) as a sinistral strike-slip fault zone forming the NE continuation of the Pliny-Strabo trench into the Anatolian Block. The fault zone was supposed to accommodate at least 100 km of sinistral displacement between the Menderes Massif and the Beydaǧları platform during the exhumation of the Menderes Massif, mainly during the late Miocene. Based on GPS velocities, Barka and Reilinger (1997) proposed that the fault zone is still active and accommodates sinistral displacement. In order to test its presence and to unravel its kinematics, we have conducted a rigorous paleomagnetic study comprising more than 3000 paleomagnetic samples collected from 88 locations and 11700 fault slip data collected from 198 locations distributed evenly all over SW Anatolia, spanning the Middle Miocene to the Late Pliocene. The obtained rotation senses and amounts indicate slight (around 20°) counter-clockwise rotations distributed uniformly over almost the whole of SW Anatolia, and there is no change in the rotation senses and amounts on either side of the FBFZ, implying no differential rotation within the zone. Additionally, the slickenside pitches and constructed paleostress configurations, along the so-called FBFZ and also within the 300 km diameter of the proposed fault zone, indicate that almost all the faults, oriented parallel to subparallel to the zone, are normal in character. The fault slip measurements are also consistent with earthquake focal mechanisms suggesting active extension in the region. We have not encountered any significant strike-slip motion in the region to support the presence and transcurrent nature of the FBFZ. On the contrary, the region is dominated by extensional deformation, and strike-slip components are observed only on the NW-SE striking faults, which are transfer faults that accommodated extension and normal motion. Therefore, we claim that the sinistral Fethiye Burdur Fault (Zone) is a myth and there is no tangible

  10. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    Science.gov (United States)

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Parallel optoelectronic trinary signed-digit division

    Science.gov (United States)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.

  12. Base drive for paralleled inverter systems

    Science.gov (United States)

    Nagano, S. (Inventor)

    1980-01-01

    In a paralleled inverter system, a positive feedback current derived from the total current from all of the modules of the inverter system is applied to the base drive of each of the power transistors of all modules, thereby to provide all modules protection against open or short circuit faults occurring in any of the modules, and force equal current sharing among the modules during turn on of the power transistors.

  13. A Hardware Accelerator for Fault Simulation Utilizing a Reconfigurable Array Architecture

    Directory of Open Access Journals (Sweden)

    Sungho Kang

    1996-01-01

    Full Text Available In order to reduce cost and to achieve high speed, a new hardware accelerator for fault simulation has been designed. The architecture of the new accelerator is based on a reconfigurable mesh-type processing element (PE) array. Circuit elements at the same topological level are simulated concurrently, as in a pipelined process. A new parallel simulation algorithm expands all of the gates to two-input gates in order to limit the number of faults to two at each gate, so that the faults can be distributed uniformly throughout the PE array. The PE array reconfiguration operation provides a simulation speed advantage by maximizing the use of each PE cell.

  14. Data-driven fault mechanics: Inferring fault hydro-mechanical properties from in situ observations of injection-induced aseismic slip

    Science.gov (United States)

    Bhattacharya, P.; Viesca, R. C.

    2017-12-01

    In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore-pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) pore-pressure increase activates slip on a pre-stressed planar fault due to reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore-pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent
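
    The fitting machinery described relies on Markov chain Monte Carlo sampling. The sketch below runs a plain random-walk Metropolis sampler on a toy one-parameter diffusion-front model to show the kind of posterior exploration involved; the toy model, synthetic data and prior are illustrative assumptions, not the paper's coupled axisymmetric diffusion-crack model or its adaptive sampler.

    ```python
    # Random-walk Metropolis sketch: infer a diffusivity-like parameter D from noisy
    # observations of a sqrt(D * t) "pressure front" curve.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.1, 10.0, 40)
    true_D = 0.5
    data = np.sqrt(true_D * t) + rng.normal(0, 0.05, t.size)     # synthetic observations

    def log_posterior(D, sigma=0.05):
        if D <= 0:
            return -np.inf                                       # positivity prior
        residual = data - np.sqrt(D * t)
        return -0.5 * np.sum((residual / sigma) ** 2)            # Gaussian likelihood

    samples, D = [], 1.0
    log_p = log_posterior(D)
    for _ in range(5000):
        proposal = D + rng.normal(0, 0.05)
        log_p_new = log_posterior(proposal)
        if np.log(rng.uniform()) < log_p_new - log_p:            # Metropolis acceptance test
            D, log_p = proposal, log_p_new
        samples.append(D)

    print(f"posterior mean D ~ {np.mean(samples[1000:]):.2f} (true value {true_D})")
    ```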

  15. Seismotectonics and fault structure of the California Central Coast

    Science.gov (United States)

    Hardebeck, Jeanne L.

    2010-01-01

    I present and interpret new earthquake relocations and focal mechanisms for the California Central Coast. The relocations improve upon catalog locations by using 3D seismic velocity models to account for lateral variations in structure and by using relative arrival times from waveform cross-correlation and double-difference methods to image seismicity features more sharply. Focal mechanisms are computed using ray tracing in the 3D velocity models. Seismicity alignments on the Hosgri fault confirm that it is vertical down to at least 12 km depth, and the focal mechanisms are consistent with right-lateral strike-slip motion on a vertical fault. A prominent, newly observed feature is an ~25 km long linear trend of seismicity running just offshore and parallel to the coastline in the region of Point Buchon, informally named the Shoreline fault. This seismicity trend is accompanied by a linear magnetic anomaly, and both the seismicity and the magnetic anomaly end where they obliquely meet the Hosgri fault. Focal mechanisms indicate that the Shoreline fault is a vertical strike-slip fault. Several seismicity lineations with vertical strike-slip mechanisms are observed in Estero Bay. Events greater than about 10 km depth in Estero Bay, however, exhibit reverse-faulting mechanisms, perhaps reflecting slip at the top of the remnant subducted slab. Strike-slip mechanisms are observed offshore along the Hosgri–San Simeon fault system and onshore along the West Huasna and Rinconada faults, while reverse mechanisms are generally confined to the region between these two systems. This suggests a model in which the reverse faulting is primarily due to restraining left-transfer of right-lateral slip.

  16. 3D Constraints On Fault Architecture and Strain Distribution of the Newport-Inglewood Rose Canyon and San Onofre Trend Fault Systems

    Science.gov (United States)

    Holmes, J. J.; Driscoll, N. W.; Kent, G. M.

    2017-12-01

    The Inner California Borderlands (ICB) is situated off the coast of southern California and northern Baja. The structural and geomorphic characteristics of the area record a middle Oligocene transition from subduction to microplate capture along the California coast. Marine stratigraphic evidence shows large-scale extension and rotation overprinted by modern strike-slip deformation. Geodetic and geologic observations indicate that approximately 6-8 mm/yr of Pacific-North American relative plate motion is accommodated by offshore strike-slip faulting in the ICB. The farthest inshore fault system, the Newport-Inglewood Rose Canyon (NIRC) Fault is a dextral strike-slip system that is primarily offshore for approximately 120 km from San Diego to the San Joaquin Hills near Newport Beach, California. Based on trenching and well data, the NIRC Fault Holocene slip rate is 1.5-2.0 mm/yr to the south and 0.5-1.0 mm/yr along its northern extent. An earthquake rupturing the entire length of the system could produce an Mw 7.0 earthquake or larger. West of the main segments of the NIRC Fault is the San Onofre Trend (SOT) along the continental slope. Previous work concluded that this is part of a strike-slip system that eventually merges with the NIRC Fault. Others have interpreted this system as deformation associated with the Oceanside Blind Thrust Fault purported to underlie most of the region. In late 2013, we acquired the first high-resolution 3D Parallel Cable (P-Cable) seismic surveys of the NIRC and SOT faults as part of the Southern California Regional Fault Mapping project. Analysis of stratigraphy and 3D mapping of this new data has yielded a new kinematic fault model of the area that provides new insight on deformation caused by interactions in both compressional and extensional regimes. For the first time, we can reconstruct fault interaction and investigate how strain is distributed through time along a typical strike-slip margin using 3D constraints on fault

  17. Noise and neuronal populations conspire to encode simple waveforms reliably

    Science.gov (United States)

    Parnas, B. R.

    1996-01-01

    Sensory systems rely on populations of neurons to encode information transduced at the periphery into meaningful patterns of neuronal population activity. This transduction occurs in the presence of intrinsic neuronal noise. This is fortunate. The presence of noise allows more reliable encoding of the temporal structure present in the stimulus than would be possible in a noise-free environment. Simulations with a parallel model of signal processing at the auditory periphery have been used to explore the effects of noise and a neuronal population on the encoding of signal information. The results show that, for a given set of neuronal modeling parameters and stimulus amplitude, there is an optimal amount of noise for stimulus encoding with maximum fidelity.
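
    The effect described can be illustrated with a toy population of noisy threshold units driven by a subthreshold sine wave: with no noise the stimulus is never encoded at all, while a moderate amount of intrinsic noise lets the population firing rate track the stimulus. All parameters below are illustrative and unrelated to the auditory-periphery model used in the study.

    ```python
    # Noise-aided population encoding of a subthreshold stimulus (toy demonstration).
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 2000)
    stimulus = 0.8 * np.sin(2 * np.pi * 5 * t)          # subthreshold: peak < threshold of 1.0

    def population_code(noise_sd, n_neurons=100, threshold=1.0):
        noise = rng.normal(0, noise_sd, size=(n_neurons, t.size))
        spikes = (stimulus + noise) > threshold          # each unit fires where its input crosses threshold
        return spikes.sum(axis=0)                        # population firing rate over time

    for sd in (0.0, 0.3, 3.0):
        rate = population_code(sd)
        corr = 0.0 if rate.std() == 0 else np.corrcoef(rate, stimulus)[0, 1]
        print(f"noise sd = {sd:.1f}: correlation with stimulus = {corr:.2f}")
    ```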

  18. Controls on fault zone structure and brittle fracturing in the foliated hanging wall of the Alpine Fault

    Science.gov (United States)

    Williams, Jack N.; Toy, Virginia G.; Massiot, Cécile; McNamara, David D.; Smith, Steven A. F.; Mills, Steven

    2018-04-01

    Three datasets are used to quantify fracture density, orientation, and fill in the foliated hanging wall of the Alpine Fault: (1) X-ray computed tomography (CT) images of drill core collected within 25 m of its principal slip zones (PSZs) during the first phase of the Deep Fault Drilling Project that were reoriented with respect to borehole televiewer images, (2) field measurements from creek sections up to 500 m from the PSZs, and (3) CT images of oriented drill core collected during the Amethyst Hydro Project at distances of ˜ 0.7-2 km from the PSZs. Results show that within 160 m of the PSZs in foliated cataclasites and ultramylonites, gouge-filled fractures exhibit a wide range of orientations. At these distances, fractures are interpreted to have formed at relatively high confining pressures and/or in rocks that had a weak mechanical anisotropy. Conversely, at distances greater than 160 m from the PSZs, fractures are typically open and subparallel to the mylonitic or schistose foliation, implying that fracturing occurred at low confining pressures and/or in rocks that were mechanically anisotropic. Fracture density is similar across the ˜ 500 m width of the field transects. By combining our datasets with measurements of permeability and seismic velocity around the Alpine Fault, we further develop the hierarchical model for hanging-wall damage structure that was proposed by Townend et al. (2017). The wider zone of foliation-parallel fractures represents an outer damage zone that forms at shallow depths. The distinct inner damage zone is interpreted to extend towards the base of the seismogenic crust given that its width is comparable to (1) the Alpine Fault low-velocity zone detected by fault zone guided waves and (2) damage zones reported from other exhumed large-displacement faults. In summary, a narrow zone of fracturing at the base of the Alpine Fault's hanging-wall seismogenic crust is anticipated to widen at shallow depths, which is

  19. Geophysical Characterization of the Hilton Creek Fault System

    Science.gov (United States)

    Lacy, A. K.; Macy, K. P.; De Cristofaro, J. L.; Polet, J.

    2016-12-01

    The Long Valley Caldera straddles the eastern edge of the Sierra Nevada Batholith and the western edge of the Basin and Range Province, and represents one of the largest caldera complexes on Earth. The caldera is intersected by numerous fault systems, including the Hartley Springs Fault System, the Round Valley Fault System, the Long Valley Ring Fault System, and the Hilton Creek Fault System, which is our main region of interest. The Hilton Creek Fault System appears as a single NW-striking fault, dipping to the NE, from Davis Lake in the south to the southern rim of the Long Valley Caldera. Inside the caldera, it splays into numerous parallel faults that extend toward the resurgent dome. Seismicity in the area increased significantly in May 1980, following a series of large earthquakes in the vicinity of the caldera and a subsequent large earthquake swarm which has been suggested to be the result of magma migration. A large portion of the earthquake swarms in the Long Valley Caldera occurs on or around the Hilton Creek Fault splays. We are conducting an interdisciplinary geophysical study of the Hilton Creek Fault System from just south of the onset of splay faulting, to its extension into the dome of the caldera. Our investigation includes ground-based magnetic field measurements, high-resolution total station elevation profiles, Structure-From-Motion derived topography and an analysis of earthquake focal mechanisms and statistics. Preliminary analysis of topographic profiles, of approximately 1 km in length, reveals the presence of at least three distinct fault splays within the caldera with vertical offsets of 0.5 to 1.0 meters. More detailed topographic mapping is expected to highlight smaller structures. We are also generating maps of the variation in b-value along different portions of the Hilton Creek system to determine whether we can detect any transition to more swarm-like behavior towards the North. We will show maps of magnetic anomalies, topography

  20. Physical Fault Injection and Monitoring Methods for Programmable Devices

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00510096; Ferencei, Jozef

    A method of detecting faults for evaluating the fault cross section of any field programmable gate array (FPGA) was developed and is described in the thesis. The incidence of single event effects in FPGAs was studied for different probe particles (proton, neutron, gamma) using this method. The existing accelerator infrastructure of the Nuclear Physics Institute in Rez was supplemented by a more sensitive beam monitoring system to ensure that the tests are done under well-defined beam conditions. The bit cross section of single event effects was measured for different types of configuration memories, clock signal phase, and beam energies and intensities. The extended infrastructure also served for radiation testing of components which are planned to be used in the new Inner Tracking System (ITS) detector of the ALICE experiment and for selecting optimal fault mitigation techniques used for securing the design of the FPGA-based ITS readout unit against faults induced by ionizing radiation.

  1. Model-based fault detection and isolation of a PWR nuclear power plant using neural networks

    International Nuclear Information System (INIS)

    Far, R.R.; Davilu, H.; Lucas, C.

    2008-01-01

    The proper and timely fault detection and isolation of industrial plants is of premier importance to guarantee the safe and reliable operation of industrial plants. The paper presents the application of a neural network-based scheme for fault detection and isolation for the pressurizer of a PWR nuclear power plant. The scheme is constituted by two components: residual generation and fault isolation. The first component generates residuals via the discrepancy between measurements coming from the plant and a nominal model. The neural network estimator is trained with healthy data collected from a full-scale simulator. For the second component, detection thresholds are used to encode the residuals as bipolar vectors which represent fault patterns. These patterns are stored in an associative memory based on a recurrent neural network. The proposed fault diagnosis tool is evaluated on-line via a full-scale simulator to detect and isolate the main faults appearing in the pressurizer of a PWR. (orig.)
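
    The two components described, residual generation against a nominal model and threshold-based encoding of the residual into a signed pattern that is matched against stored fault patterns, can be sketched as below. The sensor values, thresholds and fault dictionary are illustrative inventions, and a simple nearest-pattern lookup stands in for the recurrent associative memory.

    ```python
    # Residual generation + threshold encoding + pattern matching sketch.
    import numpy as np

    def residual_signature(measured, estimated, threshold=0.1):
        residual = measured - estimated
        # Encode each residual as -1, 0 or +1 depending on sign and magnitude.
        return np.where(np.abs(residual) < threshold, 0, np.sign(residual)).astype(int)

    fault_patterns = {                                          # hypothetical fault dictionary
        "pressurizer spray valve stuck open":  np.array([-1, -1, 0, +1]),
        "heater failure":                      np.array([-1, 0, -1, 0]),
        "level sensor drift":                  np.array([0, +1, 0, 0]),
    }

    def isolate(signature):
        # Pick the stored pattern most aligned with the observed signature.
        return max(fault_patterns, key=lambda name: signature @ fault_patterns[name])

    measured  = np.array([2.10, 14.8, 345.0, 61.0])             # plant measurements
    estimated = np.array([2.35, 15.1, 344.9, 60.2])             # nominal-model estimate
    sig = residual_signature(measured, estimated, threshold=0.15)
    print(sig, "->", isolate(sig))                              # [-1 -1 0 1] -> spray valve fault
    ```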

  2. A Combined Fault Diagnosis Method for Power Transformer in Big Data Environment

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2017-01-01

    Full Text Available The fault diagnosis method based on dissolved gas analysis (DGA) is of great significance for detecting potential faults in transformers and improving the security of the power system. The DGA data of transformers in a smart grid have the characteristics of large volume, multiple types, and low value density. In view of these characteristics of DGA big data, the paper first proposes a new combined fault diagnosis method for transformers, in which a variety of fault diagnosis models are used to make a preliminary diagnosis, and then a support vector machine is used to make the second diagnosis. The method adopts the idea of intelligent complementarity and blending, which overcomes the shortcomings of a single diagnosis model in transformer fault diagnosis and improves the diagnostic accuracy and the scope of application of the model. Then, the training and deployment strategy of the combined diagnosis model is designed based on the Storm and Spark platforms, which provides a solution for transformer fault diagnosis in a big data environment.

  3. Web-Services Development in Secure Way for Highload Systems

    Directory of Open Access Journals (Sweden)

    V. M. Nichiporchouk

    2011-12-01

    Full Text Available This paper describes an approach to the design of web services for mass message processing in a secure, high-load and fault-tolerant implementation. A multicomponent web-service architecture with support for a high-security zone is presented, together with a scalability evaluation of the architecture.

  4. V&V of Fault Management: Challenges and Successes

    Science.gov (United States)

    Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn

    2013-01-01

    This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012 entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems as well as clearly defining FM terminology within a project.

  5. Superconducting fault current-limiter with variable shunt impedance

    Science.gov (United States)

    Llambes, Juan Carlos H; Xiong, Xuming

    2013-11-19

    A superconducting fault current-limiter is provided, including a superconducting element configured to resistively or inductively limit a fault current, and one or more variable-impedance shunts electrically coupled in parallel with the superconducting element. The variable-impedance shunt(s) is configured to present a first impedance during a superconducting state of the superconducting element and a second impedance during a normal resistive state of the superconducting element. The superconducting element transitions from the superconducting state to the normal resistive state responsive to the fault current, and responsive thereto, the variable-impedance shunt(s) transitions from the first to the second impedance. The second impedance of the variable-impedance shunt(s) is a lower impedance than the first impedance, which facilitates current flow through the variable-impedance shunt(s) during a recovery transition of the superconducting element from the normal resistive state to the superconducting state, and thus, facilitates recovery of the superconducting element under load.

  6. A circuit-based photovoltaic module simulator with shadow and fault settings

    Science.gov (United States)

    Chao, Kuei-Hsiang; Chao, Yuan-Wei; Chen, Jyun-Ping

    2016-03-01

    The main purpose of this study was to develop a photovoltaic (PV) module simulator. The proposed simulator, using electrical parameters from solar cells, could simulate output characteristics not only under normal operating conditions, but also under partial shading and fault conditions. Such a simulator should possess the advantages of low cost, small size and being easily realizable. Experiments have shown that results from a proposed PV simulator of this kind are very close to those from simulation software under partial shading conditions, and with negligible differences during fault occurrence. Meanwhile, the PV module simulator, as developed, could be used with various types of series-parallel connections to form PV arrays, to conduct experiments on partial shading and fault events occurring in some of the modules. Such experiments are designed to explore the impact of shading and fault conditions on the output characteristics of the system as a whole.
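
    Circuit-based module simulators of this kind are usually built around a diode model of the cell. The sketch below uses a simplified single-diode model (series resistance neglected so the current is explicit) to sweep the module's power curve at two irradiance levels; all parameter values are illustrative, not those of the simulator described.

    ```python
    # Simplified single-diode PV module model (no series resistance) swept at two irradiances.
    import numpy as np

    def pv_current(v, irradiance=1000.0, n_cells=60, i_sc=8.0, i_0=7e-8,
                   n_ideality=1.3, r_sh=300.0, v_t=0.02585):
        i_ph = i_sc * irradiance / 1000.0                      # photocurrent scales with irradiance
        return (i_ph
                - i_0 * (np.exp(v / (n_ideality * n_cells * v_t)) - 1.0)   # diode term
                - v / r_sh)                                                 # shunt leakage

    v = np.linspace(0.0, 38.0, 200)
    for g in (1000.0, 400.0):                                  # full sun vs a heavily shaded module
        i = np.clip(pv_current(v, irradiance=g), 0.0, None)
        p = v * i
        print(f"G = {g:4.0f} W/m^2: Pmax ~ {p.max():.1f} W at {v[p.argmax()]:.1f} V")
    ```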

  7. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.

  8. Parallel imaging for first-pass myocardial perfusion

    NARCIS (Netherlands)

    Irwan, Roy; Lubbers, Daniel D.; van der Vleuten, Pieter A.; Kappert, Peter; Gotte, Marco J. W.; Sijens, Paul E.

    Two parallel imaging methods used for first-pass myocardial perfusion imaging were compared in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and image artifacts. One used adaptive Time-adaptive SENSitivity Encoding (TSENSE) and the other used GeneRalized Autocalibrating

  9. Phase-encoded measurement device independent quantum key distribution without a shared reference frame

    Science.gov (United States)

    Zhuo-Dan, Zhu; Shang-Hong, Zhao; Chen, Dong; Ying, Sun

    2018-07-01

    In this paper, a phase-encoded measurement device independent quantum key distribution (MDI-QKD) protocol without a shared reference frame is presented, which can generate secure keys between two parties while the quantum channel or interferometer introduces an unknown and slowly time-varying phase. The corresponding secret key rate and single-photon bit error rate are analysed, respectively, with a single-photon source (SPS) and a weak coherent source (WCS), taking finite-key analysis into account. The numerical simulations show that the modified phase-encoded MDI-QKD protocol has apparent superiority both in maximal secure transmission distance and key generation rate, while possessing improved robustness and practical security in the high-speed case. Moreover, the rejection of the frame-calibrating part will intrinsically reduce the consumption of resources as well as the potential security flaws of practical MDI-QKD systems.

  10. Character and Implications of a Newly Identified Creeping Strand of the San Andreas fault NE of Salton Sea, Southern California

    Science.gov (United States)

    Janecke, S. U.; Markowski, D.

    2015-12-01

    The overdue earthquake on the Coachella section, San Andreas fault (SAF), the model ShakeOut earthquake, and the conflict between cross-fault models involving the Extra fault array and mapped shortening in the Durmid Hill area motivate new analyses at the southern SAF tip. Geologic mapping, LiDAR, seismic reflection, magnetic and gravity datasets, and aerial photography confirm the existence of the East Shoreline strand (ESS) of the SAF southwest of the main trace of the SAF. We mapped the 15 km long ESS in a band on the northeast side of the Salton Sea. Other data suggest that the ESS continues N to the latitude of the Mecca Hills, and is >35 km long. The ESS cuts and folds upper Holocene beds and appears to creep, based on the discovery of large NW-striking cracks in modern beach deposits. The two traces of the SAF are parallel and ~0.5 to ~2.5 km apart. Groups of east, SE, and ENE-striking strike-slip cross-faults connect the master dextral faults of the SAF. There are few sinistral-normal faults that could be part of the Extra fault array. The 1-km wide ESS contains short, discontinuous traces of NW-striking dextral-oblique faults. These en-echelon faults bound steeply dipping Pleistocene beds, cut out section, parallel tight NW-trending folds, and produced growth folds. Beds commonly dip toward the ESS on both sides, in accord with persistent NE-SW shortening across the ESS. The dispersed fault-fold structural style of the ESS is due to decollements in faulted mud-rich Pliocene to Holocene sediment and ramps and flats along the strike-slip faults. A sheared ladder-like geometric model of the two master dextral strands of the SAF and their intervening cross-faults best explains the field relationships and geophysical datasets. Contraction across >40 km2 of the southernmost SAF zone in the Durmid Hills suggests that interaction of active structures in the SAF zone may inhibit the nucleation of large earthquakes in this region. The ESS may cross the northern Coachella

  11. Wind Power and Fault Clearance. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Vikesjoe, Johnny; Messing, Lars (Gothia Power (Sweden))

    2011-04-15

    The increased penetration of wind power will increase the impact of wind power on the grid and thereby increase the importance of clear guidance concerning the requirements on the protection system of the wind power units and the grid protection in connection to wind power units. The protection system should be able to satisfy the grid connection requirements, set by the TSO (Transmission System Operator) and the grid owners, as well as the general safety and security requirements, such as: personal safety, operational security and economic insurance, i.e. insurance against economic losses. Vindforsk has appointed Gothia Power AB to perform a study concerning the fault clearance function in connection to wind power installations. The study is divided into two parts. Part 1: The first stage of the project handled the present practice for protection, including investigation of the legal requirements, operational requirements and personal safety requirements applicable to wind power applications. Proposals for protection requirements for wind power units and the connecting grid are given. Basically, 'normal' fault clearance requirements regarding speed, selectivity and redundancy can also be used in applications in connection to wind power. Part 2: The second part of the project results in a guideline for the design of protection systems in connection to wind power. This report mainly covers Part 2. The main focus is given to clearance of faults in the grid connecting the wind power plants. Regarding internal faults and critical operation states within the wind power plant, a short discussion of feasible protection functions is given. Some critical fault cases in the grid have been identified and discussed: - Undetected islanding and failure of reclosing. There can be a risk of undetected island operation. In such cases it is recommended to use controlled autoreclosing in the vicinity of wind power generation. - Unwanted disconnection of a healthy feeder

  12. A hybrid bit-encoding for SAT planning based on clique-partitioning

    Science.gov (United States)

    Tapia, Cristóbal; San Segundo, Pablo; Galán, Ramón

    2017-09-01

    Planning as satisfiability is one of the most efficient ways to solve classic automated planning problems. In SAT planning, the encoding used to convert the problem to a SAT formula is critical for the performance of the SAT solver. This paper presents a novel bit-encoding that reduces the number of bits required to represent actions in a SAT-based automated planning problem. To obtain such encoding we first build a conflict graph, which represents incompatibilities of pairs of actions, and bitwise encode the subsets of actions determined by a clique partition. This reduces the number of Boolean variables and clauses of the SAT encoding, while preserving the possibility of parallel execution of compatible (non-neighbor) actions. The article also describes an appropriate algorithm for selecting the clique partition for this application and compares the new encodings obtained over some standard planning problems.
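
    The core idea, partitioning the action-conflict graph into cliques and giving each clique a compact binary code so that mutually exclusive actions share variables while compatible actions remain independently selectable, can be sketched as follows. The toy conflict graph and the greedy partitioning heuristic are illustrative assumptions, not the algorithm evaluated in the paper.

    ```python
    # Greedy clique partition of a conflict graph, then count bits needed per time step.
    from math import ceil, log2

    conflicts = {                      # hypothetical mutually exclusive action pairs (symmetric)
        "load-A": {"unload-A", "move-1-2"},
        "unload-A": {"load-A", "move-1-2"},
        "move-1-2": {"load-A", "unload-A", "move-2-1"},
        "move-2-1": {"move-1-2"},
        "paint-B": set(),
    }

    def greedy_clique_partition(graph):
        remaining = sorted(graph)
        cliques = []
        while remaining:
            v = remaining.pop(0)
            clique = [v]
            for u in list(remaining):
                if all(u in graph[w] for w in clique):   # u conflicts with every current member
                    clique.append(u)
                    remaining.remove(u)
            cliques.append(clique)
        return cliques

    cliques = greedy_clique_partition(conflicts)
    bits = sum(ceil(log2(len(c) + 1)) for c in cliques)   # +1 reserves a "no action" code per clique
    print(cliques)
    print(f"{bits} bits per time step vs {len(conflicts)} with one variable per action")  # 4 vs 5 here
    ```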

  13. Low Complexity HEVC Encoder for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhaoqing Pan

    2015-12-01

    Full Text Available Visual sensor networks (VSNs can be widely applied in security surveillance, environmental monitoring, smart rooms, etc. However, with the increased number of camera nodes in VSNs, the volume of the visual information data increases significantly, which becomes a challenge for storage, processing and transmitting the visual data. The state-of-the-art video compression standard, high efficiency video coding (HEVC, can effectively compress the raw visual data, while the higher compression rate comes at the cost of heavy computational complexity. Hence, reducing the encoding complexity becomes vital for the HEVC encoder to be used in VSNs. In this paper, we propose a fast coding unit (CU depth decision method to reduce the encoding complexity of the HEVC encoder for VSNs. Firstly, the content property of the CU is analyzed. Then, an early CU depth decision method and a low complexity distortion calculation method are proposed for the CUs with homogenous content. Experimental results show that the proposed method achieves 71.91% on average encoding time savings for the HEVC encoder for VSNs.
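
    A common way to realize an early CU depth decision of the kind proposed is to stop splitting once a block looks homogeneous, for example when its sample variance falls below a threshold. The recursive sketch below illustrates that idea on a synthetic 64x64 block; the variance criterion, threshold and frame content are illustrative assumptions, not the paper's actual decision rule.

    ```python
    # Variance-based early termination of quadtree CU splitting (toy illustration).
    import numpy as np

    def decide_cu_depths(frame, x, y, size, depth, max_depth=3, var_threshold=40.0):
        block = frame[y:y + size, x:x + size].astype(float)
        if depth == max_depth or block.var() < var_threshold:   # homogeneous block: stop early
            return [(x, y, size, depth)]
        half, leaves = size // 2, []
        for dy in (0, half):
            for dx in (0, half):
                leaves += decide_cu_depths(frame, x + dx, y + dy, half, depth + 1,
                                           max_depth, var_threshold)
        return leaves

    frame = np.zeros((64, 64), dtype=np.uint8)
    frame[16:48, 16:48] = np.random.default_rng(0).integers(0, 255, (32, 32))  # textured patch
    print(len(decide_cu_depths(frame, 0, 0, 64, 0)), "coding units after early termination")
    ```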

  14. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on a detailed topographic map. The results of this paper are summarized as follows: 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  15. Fast and maliciously secure two-party computation using the GPU

    DEFF Research Database (Denmark)

    Frederiksen, Tore Kasper; Nielsen, Jesper Buus

    2013-01-01

    We describe, and implement, a maliciously secure protocol for two-party computation in a parallel computational model. Our protocol is based on Yao’s garbled circuit and an efficient OT extension. The implementation is done using CUDA and yields fast results for maliciously secure two-party compu......-party computation in a financially feasible and practical setting by using a consumer grade CPU and GPU. Our protocol further uses some novel constructions in order to combine garbled circuits and an OT extension in a parallel and maliciously secure setting.

  16. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms were proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms
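
    To illustrate the general permutation-plus-masking structure that such chaos-based schemes share, the sketch below permutes pixel positions with the discretized Arnold cat map and masks values with a logistic-map keystream, processing the image in independent blocks that could each be handed to a separate worker. The cat and logistic maps are well-known stand-ins chosen for brevity, not the discretized Kolmogorov flow map of the paper, and all key values are illustrative.

    ```python
    # Block-wise chaotic permutation + masking sketch (cat map and logistic map as stand-ins).
    import numpy as np

    def cat_map_permute(block):
        n = block.shape[0]                      # square block of side n
        out = np.empty_like(block)
        ys, xs = np.indices((n, n))
        out[(xs + ys) % n, (xs + 2 * ys) % n] = block[ys, xs]   # discretized cat map is a bijection
        return out

    def logistic_keystream(length, x0=0.3779, r=3.99):
        x, ks = x0, np.empty(length, dtype=np.uint8)
        for i in range(length):
            x = r * x * (1 - x)                 # chaotic logistic map iteration
            ks[i] = int(x * 256) % 256
        return ks

    def encrypt_block(block, rounds=3):
        for _ in range(rounds):
            block = cat_map_permute(block)      # confusion: scramble positions
        return block ^ logistic_keystream(block.size).reshape(block.shape)   # diffusion: mask values

    image = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
    blocks = [image[:32, :32], image[:32, 32:], image[32:, :32], image[32:, 32:]]
    cipher_blocks = [encrypt_block(b.copy()) for b in blocks]    # each block could go to its own worker
    print(cipher_blocks[0].shape, cipher_blocks[0].dtype)
    ```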

  17. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    International Nuclear Information System (INIS)

    Avila-Olivera, Jorge A.; Farina, Paolo; Garduno-Monroy, Victor H.

    2008-01-01

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the sprouting of the crackings that gave rise to the surface faults 'Oriente' and 'Poniente'. At the present time, the city is being affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional faulting system 'Taxco-San Miguel de Allende'. In order to study the SCFP in the city, the first step was to obtain a map of surface faults, by integrating in a GIS field survey and an urban city plan. The following step was to create a map of the current phreatic level decline in city with the information of deep wells and using the 'kriging' method in order to obtain a continuous surface. Finally the interferograms maps resulted of an InSAR analysis of 9 SAR images covering the time interval between July 12 of 2003 and May 27 of 2006 were integrated to a GIS. All the maps generated, show how the surface faults divide the city from North to South, in two zones that behave in a different way. The difference of the phreatic level decline between these two zones is 60 m; and the InSAR study revealed that the Western zone practically remains stable, while sinkings between the surface faults 'Oriente' and 'Universidad Pedagogica' are present, as well as in portions NE and SE of the city, all of these sinkings between 7 and 10 cm/year

  18. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    Science.gov (United States)

    Avila-Olivera, Jorge A.; Farina, Paolo; Garduño-Monroy, Victor H.

    2008-05-01

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the appearance of the cracks that gave rise to the surface faults "Oriente" and "Poniente". At present, the city is being affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional faulting system "Taxco-San Miguel de Allende". In order to study the SCFP in the city, the first step was to obtain a map of surface faults by integrating a field survey and an urban city plan in a GIS. The following step was to create a map of the current phreatic level decline in the city using deep-well information and the "kriging" method to obtain a continuous surface. Finally, the interferogram maps resulting from an InSAR analysis of 9 SAR images covering the time interval between July 12, 2003 and May 27, 2006 were integrated into the GIS. All the maps generated show how the surface faults divide the city from North to South into two zones that behave differently. The difference in phreatic level decline between these two zones is 60 m, and the InSAR study revealed that the Western zone remains practically stable, while sinking is present between the surface faults "Oriente" and "Universidad Pedagógica", as well as in the NE and SE portions of the city, all at rates between 7 and 10 cm/year.

  19. Influence of crystallised igneous intrusions on fault nucleation and reactivation during continental extension

    Science.gov (United States)

    Magee, Craig; McDermott, Kenneth G.; Stevenson, Carl T. E.; Jackson, Christopher A.-L.

    2014-05-01

    Continental rifting is commonly accommodated by the nucleation of normal faults, slip on pre-existing fault surfaces and/or magmatic intrusion. Because crystallised igneous intrusions are pervasive in many rift basins and are commonly more competent (i.e. higher shear strengths and Young's moduli) than the host rock, it is theoretically plausible that they locally intersect and modify the mechanical properties of pre-existing normal faults. We illustrate the influence that crystallised igneous intrusions may have on fault reactivation using a conceptual model and observations from field and subsurface datasets. Our results show that igneous rocks may initially resist failure, and promote the preferential reactivation of favourably-oriented, pre-existing faults that are not spatially-associated with solidified intrusions. Fault segments situated along strike from laterally restricted fault-intrusion intersections may similarly be reactivated. This spatial and temporal control on strain distribution may generate: (1) supra-intrusion folds in the hanging wall; (2) new dip-slip faults adjacent to the igneous body; or (3) sub-vertical, oblique-slip faults oriented parallel to the extension direction. Importantly, stress accumulation within igneous intrusions may eventually initiate failure and further localise strain. The results of our study have important implications for the structural evolution of sedimentary basins and the subsurface migration of hydrocarbons and mineral-bearing fluids.

  20. Intra-arc Seismicity: Geometry and Kinematic Constraints of Active Faulting along Northern Liquiñe-Ofqui and Andean Transverse Fault Systems [38º and 40ºS, Southern Andes

    Science.gov (United States)

    Sielfeld, G.; Lange, D.; Cembrano, J. M.

    2017-12-01

    Intra-arc crustal seismicity documents the schizosphere tectonic state along active magmatic arcs. At oblique-convergent margins, a significant portion of bulk transpressional deformation is accommodated in intra-arc regions, as a consequence of stress and strain partitioning. Simultaneously, crustal fluid migration mechanisms may be controlled by the geometry and kinematics of crustal high strain domains. In such domains shallow earthquakes have been associated with either margin-parallel strike-slip faults or volcano-tectonic activity. However, very little is known about the nature and kinematics of Southern Andes intra-arc crustal seismicity and its relation with crustal faults. Here we present results of a passive seismicity study based on 16 months of data collected from 33 seismometers deployed along the intra-arc region of Southern Andes between 38˚S and 40˚S. This region is characterized by a long-lived interplay among margin-parallel strike-slip faults (Liquiñe-Ofqui Fault System, LOFS), second order Andean-transverse-faults (ATF), volcanism and hydrothermal activity. Seismic signals recorded by our network document small magnitude (ML as low as 0.2) events; the picked P and 2,796 S phase arrival times have been located with NonLinLoc. First arrival polarities and amplitude ratios of well-constrained events were used for focal mechanism inversion. Local seismicity occurs at shallow levels down to depths of ca. 16 km, associated either with stratovolcanoes or with master, N10˚E, and subsidiary, NE to ENE, striking branches of the LOFS. Strike-slip focal mechanisms are consistent with the long-term kinematics documented by field structural-geology studies. Unexpected, well-defined NW-SE elongated clusters are also reported. In particular, a 72-hour-long, N60˚W-oriented seismicity swarm took place at Caburgua Lake area, describing a ca. 36x12x1 km³ faulting crustal volume. Results provide a unique snapshot of shallow crustal tectonics, contributing to the understanding of faulting processes

  1. The Evergreen basin and the role of the Silver Creek fault in the San Andreas fault system, San Francisco Bay region, California

    Science.gov (United States)

    Jachens, Robert C.; Wentworth, Carl M.; Graymer, Russell W.; Williams, Robert; Ponce, David A.; Mankinen, Edward A.; Stephenson, William J.; Langenheim, Victoria

    2017-01-01

    The Evergreen basin is a 40-km-long, 8-km-wide Cenozoic sedimentary basin that lies mostly concealed beneath the northeastern margin of the Santa Clara Valley near the south end of San Francisco Bay (California, USA). The basin is bounded on the northeast by the strike-slip Hayward fault and an approximately parallel subsurface fault that is structurally overlain by a set of west-verging reverse-oblique faults which form the present-day southeastward extension of the Hayward fault. It is bounded on the southwest by the Silver Creek fault, a largely dormant or abandoned fault that splays from the active southern Calaveras fault. We propose that the Evergreen basin formed as a strike-slip pull-apart basin in the right step from the Silver Creek fault to the Hayward fault during a time when the Silver Creek fault served as a segment of the main route by which slip was transferred from the central California San Andreas fault to the Hayward and other East Bay faults. The dimensions and shape of the Evergreen basin, together with palinspastic reconstructions of geologic and geophysical features surrounding it, suggest that during its lifetime, the Silver Creek fault transferred a significant portion of the ∼100 km of total offset accommodated by the Hayward fault, and of the 175 km of total San Andreas system offset thought to have been accommodated by the entire East Bay fault system. As shown previously, at ca. 1.5–2.5 Ma the Hayward-Calaveras connection changed from a right-step, releasing regime to a left-step, restraining regime, with the consequent effective abandonment of the Silver Creek fault. This reorganization was, perhaps, preceded by development of the previously proposed basin-bisecting Mount Misery fault, a fault that directly linked the southern end of the Hayward fault with the southern Calaveras fault during extinction of pull-apart activity. Historic seismicity indicates that slip below a depth of 5 km is mostly transferred from the Calaveras

  2. Development of Hydrologic Characterization Technology of Fault Zones

    International Nuclear Information System (INIS)

    Karasaki, Kenzi; Onishi, Tiemi; Wu, Yu-Shu

    2008-01-01

    Through an extensive literature survey we find that there is a very limited amount of work on fault zone hydrology, particularly in the field using borehole testing. The common elements of a fault include a core and damage zones. The core usually acts as a barrier to the flow across it, whereas the damage zone controls the flow either parallel to the strike or dip of a fault. In most cases the damage zone is the one that is controlling the flow in the fault zone and the surroundings. The permeability of the damage zone is in the range of two to three orders of magnitude higher than the protolith. The fault core can have permeability up to seven orders of magnitude lower than the damage zone. The fault types (normal, reverse, and strike-slip) by themselves do not appear to be a clear classifier of the hydrology of fault zones. However, there still remains a possibility that other additional geologic attributes and scaling relationships can be used to predict or bracket the range of hydrologic behavior of fault zones. AMT (Audio frequency Magneto Telluric) and seismic reflection techniques are often used to locate faults. Geochemical signatures and temperature distributions are often used to identify flow domains and/or directions. The ALSM (Airborne Laser Swath Mapping) or LIDAR (Light Detection and Ranging) method may prove to be a powerful tool for identifying lineaments in place of traditional photogrammetry. Nonetheless, not much work has been done to characterize the hydrologic properties of faults by directly testing them using pump tests. There are some uncertainties involved in analyzing pressure transients of pump tests: both low permeability and high permeability faults exhibit similar pressure responses. A physically based conceptual and numerical model is presented for simulating fluid and heat flow and solute transport through fractured fault zones using a multiple-continuum medium approach. Data from the Horonobe URL site are analyzed to demonstrate the

  3. Development of Hydrologic Characterization Technology of Fault Zones

    Energy Technology Data Exchange (ETDEWEB)

    Karasaki, Kenzi; Onishi, Tiemi; Wu, Yu-Shu

    2008-03-31

    Through an extensive literature survey we find that there is a very limited amount of work on fault zone hydrology, particularly in the field using borehole testing. The common elements of a fault include a core and damage zones. The core usually acts as a barrier to the flow across it, whereas the damage zone controls the flow either parallel to the strike or dip of a fault. In most cases the damage zone is the one that is controlling the flow in the fault zone and the surroundings. The permeability of the damage zone is in the range of two to three orders of magnitude higher than the protolith. The fault core can have permeability up to seven orders of magnitude lower than the damage zone. The fault types (normal, reverse, and strike-slip) by themselves do not appear to be a clear classifier of the hydrology of fault zones. However, there still remains a possibility that other additional geologic attributes and scaling relationships can be used to predict or bracket the range of hydrologic behavior of fault zones. AMT (Audio frequency Magneto Telluric) and seismic reflection techniques are often used to locate faults. Geochemical signatures and temperature distributions are often used to identify flow domains and/or directions. The ALSM (Airborne Laser Swath Mapping) or LIDAR (Light Detection and Ranging) method may prove to be a powerful tool for identifying lineaments in place of traditional photogrammetry. Nonetheless, not much work has been done to characterize the hydrologic properties of faults by directly testing them using pump tests. There are some uncertainties involved in analyzing pressure transients of pump tests: both low permeability and high permeability faults exhibit similar pressure responses. A physically based conceptual and numerical model is presented for simulating fluid and heat flow and solute transport through fractured fault zones using a multiple-continuum medium approach. Data from the Horonobe URL site are analyzed to demonstrate the

  4. Evaluating failure rate of fault-tolerant multistage interconnection networks using Weibull life distribution

    International Nuclear Information System (INIS)

    Bistouni, Fathollah; Jahanshahi, Mohsen

    2015-01-01

    Fault-tolerant multistage interconnection networks (MINs) play a vital role in the performance of multiprocessor systems, where reliability evaluation becomes one of the main concerns in analyzing these networks properly. In many cases, the primary objective in system reliability analysis is to compute a failure distribution of the entire system according to that of its components. However, since the problem is known to be NP-hard, in none of the previous efforts has a precise evaluation of the system failure rate been performed. Therefore, our goal is to investigate this parameter for different fault-tolerant MINs using the Weibull life distribution, one of the most commonly used distributions in reliability. In this paper, four important groups of fault-tolerant MINs are examined to find the best fault-tolerance techniques in terms of failure rate: (1) Extra-stage MINs, (2) Parallel MINs, (3) Rearrangeable non-blocking MINs, and (4) Replicated MINs. This paper comprehensively analyzes all perspectives of reliability (terminal, broadcast, and network reliability). Moreover, in this study, all reliability equations are calculated for different network sizes. - Highlights: • The failure rate of different MINs is analyzed by using the Weibull life distribution. • This article tries to find the best fault-tolerance technique in the field of MINs. • Complex series-parallel RBDs are used to determine the reliability of the MINs. • All aspects of the reliability (i.e. terminal, broadcast, and network) are analyzed. • All reliability equations are calculated for different sizes N×N.
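
    The series/parallel reliability-block-diagram arithmetic underlying such an analysis can be sketched as follows; the Weibull parameters and the 2x2 arrangement are illustrative assumptions, not values from the paper.

```python
# Sketch: component reliability from a Weibull life distribution, combined with the
# series/parallel rules used in reliability block diagrams (RBDs).
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta) for shape beta and scale eta."""
    return math.exp(-((t / eta) ** beta))

def series(rs):
    """All components must survive (e.g., stages along a single path)."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """At least one redundant component must survive (e.g., replicated planes)."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Example: identical switching elements with beta=1.5, eta=1e5 hours, evaluated at t=1e4 hours.
r = weibull_reliability(1e4, beta=1.5, eta=1e5)
print("single element:", round(r, 4))
print("4 in series:   ", round(series([r] * 4), 4))
print("2x2 redundant: ", round(parallel([series([r] * 2)] * 2), 4))
```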

  5. Secure PVM

    Energy Technology Data Exchange (ETDEWEB)

    Dunigan, T.H.; Venugopal, N.

    1996-09-01

    This research investigates techniques for providing privacy, authentication, and data integrity to PVM (Parallel Virtual Machine). PVM is extended to provide secure message passing with no changes to the user's PVM application, or, optionally, security can be provided on a message-by-message basis. Diffie-Hellman is used for key distribution of a single session key for n-party communication. Keyed MD5 is used for message authentication, and the user may select from various secret-key encryption algorithms for message privacy. The modifications to PVM are described, and the performance of secure PVM is evaluated.
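
    The two primitives named above can be sketched as follows; the prime, generator and message are toy values chosen here (real deployments use standardized large safe primes), and HMAC-MD5 stands in for the keyed-MD5 construction. This is not the PVM implementation itself.

```python
# Toy sketch: Diffie-Hellman key agreement followed by keyed-MD5 message authentication.
import hashlib, hmac, secrets

p = 2**127 - 1          # toy Mersenne prime; illustration only, not a production group
g = 3

a = secrets.randbelow(p - 2) + 1          # one party's secret exponent
b = secrets.randbelow(p - 2) + 1          # the other party's secret exponent
A, B = pow(g, a, p), pow(g, b, p)         # public values exchanged in the clear
shared = pow(B, a, p)
assert shared == pow(A, b, p)             # both sides derive the same shared secret

# Derive a session key and authenticate a message with a keyed MD5 (HMAC-MD5) tag.
session_key = hashlib.md5(shared.to_bytes((shared.bit_length() + 7) // 8, "big")).digest()
msg = b"pvm message payload"
tag = hmac.new(session_key, msg, hashlib.md5).hexdigest()
print(tag)
```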

  6. Stacking faults on (001) in transition-metal disilicides with the C11b structure

    International Nuclear Information System (INIS)

    Ito, K.; Nakamoto, T.; Inui, H.; Yamaguchi, M.

    1997-01-01

    Stacking faults on (001) in MoSi2 and WSi2 with the C11b structure have been characterized by transmission electron microscopy (TEM), using single crystals grown by the floating-zone method. Although WSi2 contains a high density of stacking faults, only a few faults are observed in MoSi2. For both crystals, (001) faults are characterized to be of the Frank-type, in which two successive (001) Si layers are removed from the lattice, giving rise to a displacement vector parallel to [001]. When the displacement vector of the faults is expressed in the form R = 1/n[001], however, the n values deviate slightly from the exact value of 3, because of dilatation of the lattice in the direction perpendicular to the fault, which is caused by the repulsive interaction between Mo (W) layers above and below the fault. Matching of experimental high-resolution TEM images with calculated ones indicates the n values to be 3.12 ± 0.10 and 3.34 ± 0.10 for MoSi2 and WSi2, respectively

  7. Ten kilometer vertical Moho offset and shallow velocity contrast along the Denali fault zone from double-difference tomography, receiver functions, and fault zone head waves

    Science.gov (United States)

    Allam, A. A.; Schulte-Pelkum, V.; Ben-Zion, Y.; Tape, C.; Ruppert, N.; Ross, Z. E.

    2017-11-01

    We examine the structure of the Denali fault system in the crust and upper mantle using double-difference tomography, P-wave receiver functions, and analysis (spatial distribution and moveout) of fault zone head waves. The three methods have complementary sensitivity; tomography is sensitive to 3D seismic velocity structure but smooths sharp boundaries, receiver functions are sensitive to (quasi) horizontal interfaces, and fault zone head waves are sensitive to (quasi) vertical interfaces. The results indicate that the Mohorovičić discontinuity is vertically offset by 10 to 15 km along the central 600 km of the Denali fault in the imaged region, with the northern side having shallower Moho depths around 30 km. An automated phase picker algorithm is used to identify 1400 events that generate fault zone head waves only at near-fault stations. At shorter hypocentral distances head waves are observed at stations on the northern side of the fault, while longer propagation distances and deeper events produce head waves on the southern side. These results suggest a reversal of the velocity contrast polarity with depth, which we confirm by computing average 1D velocity models separately north and south of the fault. Using teleseismic events with M ≥ 5.1, we obtain 31,400 P receiver functions and apply common-conversion-point stacking. The results are migrated to depth using the derived 3D tomography model. The imaged interfaces agree with the tomography model, showing a Moho offset along the central Denali fault and also the sub-parallel Hines Creek fault, a suture zone boundary 30 km to the north. To the east, this offset follows the Totschunda fault, which ruptured during the M7.9 2002 earthquake, rather than the Denali fault itself. The combined results suggest that the Denali fault zone separates two distinct crustal blocks, and that the Totschunda and Hines Creeks segments are important components of the fault and Cretaceous-aged suture zone structure.

  8. Quantum key distribution using basis encoding of Gaussian-modulated coherent states

    Science.gov (United States)

    Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua

    2018-04-01

    The continuous-variable quantum key distribution (CVQKD) has been demonstrated to be available in practical secure quantum cryptography. However, its performance is restricted strongly by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol by encoding the secret keys on the random choices of two measurement bases: the conjugate quadratures X and P. The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Subsequently, the proposed scheme exhibits the capability to tolerate much higher excess noise and enables us to reach a much longer secure transmission distance even at lower reconciliation efficiency. The proposal can work alternatively to strengthen significantly the performance of the known Gaussian-modulated CVQKD protocol and serve as a multiplier for practical secure quantum cryptography with continuous variables.

  9. Hardwired interlock system with fault latchability and annunciation panel for electron accelerators

    International Nuclear Information System (INIS)

    Mukesh Kumar; Roychoudhury, P.; Nimje, V.T.

    2011-01-01

    A hard-wired interlock system has been designed, developed, installed and tested to ensure a healthy status for the interlock signals coming from the various sub-systems of electron accelerators as digital inputs. Each electron accelerator has approximately ninety-six interlock signals. The hard-wired interlock system consists of a twelve-channel, 19-inch rack-mountable hard-wired interlock module of 4U height. Digital inputs are fed to the hard-wired interlock module as 24 V dc for logic 'TRUE' and 0 V for logic 'FALSE'. These signals are flow signals to ensure cooling of the various sub-systems, signals from the klystron modulator system in the RF Linac to ensure its healthy state to start, signals from the high voltage system of the DC accelerator, vacuum signals from the vacuum system to ensure proper vacuum in the electron accelerator, door interlock signals, air flow signals, and area search and secure signals. This hard-wired interlock system ensures safe start-up, fault annunciation and alarm, fault latchability, and fail-safe operation of the electron accelerators. The safe start-up feature ensures that the beam generation system can be switched ON only when cooling of all the electron accelerator sub-systems is confirmed, all the fault signals of the high voltage generation system have been attended to, proper vacuum has been achieved inside the beam transport system, all the doors are closed and the various areas have been searched and secured manually. The fault annunciation and alarm feature ensures that, if any fault occurs during start-up or operation of the electron accelerators, the corresponding fault signal window keeps flashing in red and an alarm sounds until the operator acknowledges the fault. Once acknowledged, the flashing and alarm stop, but the window remains displayed in red until the operator clears the fault. The fault latchability feature ensures that if a fault has occurred, the accelerator cannot be started again until the operator resets that interlock signal. The fail-safe feature ensures

  10. Design & Evaluation of a Protection Algorithm for a Wind Turbine Generator based on the fault-generated Symmetrical Components

    DEFF Research Database (Denmark)

    Zheng, T. Y.; Cha, Seung-Tae; Lee, B. E.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on the fault-generated symmetrical components is proposed in the paper. At stage 1, the relay uses the magnitude of the positive-sequence component in the fault current to distinguish faults on a parallel WTG connected to the same feeder, or on an adjacent feeder, from those on the connected feeder, on the collection bus, at an inter-tie or at a grid. For the former faults, the relay should remain stable and inoperative, whilst instantaneous or delayed tripping is required for the latter faults. At stage 2, the fault type is first evaluated using the relationships of the fault-generated symmetrical components. Then, the magnitude of the positive-sequence component in the fault current is used again to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using...
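
    The symmetrical-component computation such a relay relies on is the standard Fortescue transform; the sketch below applies it to example three-phase current phasors, with the relay's staging thresholds reduced to a comment since the paper's actual settings are not reproduced here.

```python
# Sketch of the Fortescue transform used to obtain fault-generated symmetrical components.
import cmath

def symmetrical_components(Ia, Ib, Ic):
    a = cmath.exp(2j * cmath.pi / 3)          # 120-degree rotation operator
    I0 = (Ia + Ib + Ic) / 3                   # zero sequence
    I1 = (Ia + a * Ib + a * a * Ic) / 3       # positive sequence
    I2 = (Ia + a * a * Ib + a * Ic) / 3       # negative sequence
    return I0, I1, I2

# Example: an unbalanced (phase-A-to-ground-like) set of current phasors.
Ia = cmath.rect(5.0, 0.0)
Ib = cmath.rect(1.0, -2 * cmath.pi / 3)
Ic = cmath.rect(1.0, 2 * cmath.pi / 3)
I0, I1, I2 = symmetrical_components(Ia, Ib, Ic)
print(f"|I1| = {abs(I1):.2f}  |I2| = {abs(I2):.2f}  |I0| = {abs(I0):.2f}")
# A relay of this type would compare |I1| against pick-up thresholds to decide whether
# to remain inoperative, trip instantaneously, or trip with a delay.
```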

  11. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...

  12. Frequency Based Fault Detection in Wind Turbines

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2014-01-01

    In order to obtain lower cost of energy for wind turbines, fault detection and accommodation is important. Expensive condition monitoring systems are often used to monitor the condition of rotating and vibrating system parts. One example is the gearbox in a wind turbine. This system is operated in parallel to the control system, using different computers and additional, often expensive, sensors. In this paper a simple filter based algorithm is proposed to detect changes in a resonance frequency in a system, exemplified with faults resulting in changes in the resonance frequency in the wind turbine gearbox. Only the generator speed measurement, which is available in even simple wind turbine control systems, is used as input. Consequently this proposed scheme does not need additional sensors and computers for monitoring the condition of the wind gearbox. The scheme is evaluated on a wide-spread wind...
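
    A minimal version of such a filter-based detector can be sketched as follows: track the dominant spectral peak of the generator speed signal in sliding windows and flag a shift away from the nominal resonance. The sampling rate, nominal frequency and tolerance below are illustrative assumptions, not values from the paper.

```python
# Sketch: detect a resonance-frequency shift from a generator speed signal.
import numpy as np

def dominant_frequency(x, fs):
    x = x - np.mean(x)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

def detect_resonance_shift(signal, fs, window_s=10.0, nominal_hz=6.0, tol_hz=0.5):
    n = int(window_s * fs)
    alarms = []
    for start in range(0, len(signal) - n + 1, n):
        f = dominant_frequency(signal[start:start + n], fs)
        alarms.append(abs(f - nominal_hz) > tol_hz)
    return alarms

# Synthetic generator-speed measurement: the resonance drifts from 6 Hz to 7 Hz mid-way.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
healthy = np.sin(2 * np.pi * 6.0 * t[: len(t) // 2])
faulty = np.sin(2 * np.pi * 7.0 * t[len(t) // 2:])
speed = np.concatenate([healthy, faulty]) + 0.1 * np.random.randn(len(t))
print(detect_resonance_shift(speed, fs))     # False for the early windows, True later
```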

  13. Trinary signed-digit arithmetic using an efficient encoding scheme

    Science.gov (United States)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

    The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
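
    For illustration, the sketch below converts integers into a trinary signed-digit (balanced-ternary) representation with digits in {-1, 0, 1}; it shows the number representation only and does not reproduce the paper's 5-combination coding table or its one-step optical arithmetic rules.

```python
# Sketch: encode an integer into trinary signed-digit (balanced-ternary) form.
def to_tsd(n):
    """Return TSD digits in {-1, 0, 1}, least significant first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # represent 2 as 3 - 1: digit -1 plus a carry into the next place
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
        # invariant: original value == sum(d * 3**i) over digits + n * 3**len(digits)
    return digits or [0]

def from_tsd(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))

for n in (0, 7, 25, 100):
    d = to_tsd(n)
    assert from_tsd(d) == n
    print(n, d)
```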

  14. Improving parallel imaging by jointly reconstructing multi-contrast data.

    Science.gov (United States)

    Bilgic, Berkin; Kim, Tae Hyung; Liao, Congyu; Manhard, Mary Kate; Wald, Lawrence L; Haldar, Justin P; Setsompop, Kawin

    2018-08-01

    To develop parallel imaging techniques that simultaneously exploit coil sensitivity encoding, image phase prior information, similarities across multiple images, and complementary k-space sampling for highly accelerated data acquisition. We introduce joint virtual coil (JVC)-generalized autocalibrating partially parallel acquisitions (GRAPPA) to jointly reconstruct data acquired with different contrast preparations, and show its application in 2D, 3D, and simultaneous multi-slice (SMS) acquisitions. We extend the joint parallel imaging concept to exploit limited support and smooth phase constraints through Joint (J-) LORAKS formulation. J-LORAKS allows joint parallel imaging from limited autocalibration signal region, as well as permitting partial Fourier sampling and calibrationless reconstruction. We demonstrate highly accelerated 2D balanced steady-state free precession with phase cycling, SMS multi-echo spin echo, 3D multi-echo magnetization-prepared rapid gradient echo, and multi-echo gradient recalled echo acquisitions in vivo. Compared to conventional GRAPPA, proposed joint acquisition/reconstruction techniques provide more than 2-fold reduction in reconstruction error. JVC-GRAPPA takes advantage of additional spatial encoding from phase information and image similarity, and employs different sampling patterns across acquisitions. J-LORAKS achieves a more parsimonious low-rank representation of local k-space by considering multiple images as additional coils. Both approaches provide dramatic improvement in artifact and noise mitigation over conventional single-contrast parallel imaging reconstruction. Magn Reson Med 80:619-632, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  15. Smart security and securing data through watermarking

    Science.gov (United States)

    Singh, Ritesh; Kumar, Lalit; Banik, Debraj; Sundar, S.

    2017-11-01

    The growth of image processing in embedded systems has made it possible to enhance security in various sectors. This has led to the development of various protective strategies needed by private and public sectors for cyber security purposes. We have therefore developed a method which uses digital watermarking and a locking mechanism for the protection of any closed premises. This paper describes a contemporary system based on user name, user id, password and an encryption technique, which can be placed in banks and protected offices to strengthen security. Burglary can be abated substantially by using such a proactive safety structure. In the proposed framework, we use watermarking in the spatial domain to encode and decode the image, and a PIR (Passive Infrared) sensor to detect the presence of a person in any closed area.
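
    A common spatial-domain embedding is least-significant-bit (LSB) substitution; the sketch below shows that idea on a random cover image. It is a generic illustration under that assumption, not the specific watermarking scheme or PIR-based locking mechanism of the paper.

```python
# Sketch: spatial-domain watermarking via least-significant-bit (LSB) embedding.
import numpy as np

def embed(cover, bits):
    """Write the watermark bits into the LSB of the first len(bits) pixels."""
    flat = cover.ravel().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(cover.shape)

def extract(stego, n_bits):
    return (stego.ravel()[:n_bits] & 1).tolist()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
watermark = [1, 0, 1, 1, 0, 0, 1, 0]          # e.g. the bits of a short user id
stego = embed(cover, watermark)
assert extract(stego, len(watermark)) == watermark
# Embedding changes each marked pixel by at most 1 grey level.
print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```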

  16. Nuclear Power Plants Fault Diagnosis Method Based on Data Fusion

    International Nuclear Information System (INIS)

    Xie Chunli; Liu Yongkuo; Xia Hong

    2009-01-01

    Data fusion is a multisource information processing technology suited to fault diagnosis of complex systems such as nuclear power plants. This paper applies the hierarchical structure of data fusion and divides nuclear power plant fault diagnosis into three levels. The data level adopts data mining methods to handle the data and reduce attributes. The feature level uses three parallel neural networks to process the reduced attributes, and the outputs of the three networks serve as the basic probability assignments of Dempster-Shafer (D-S) evidence theory. An improved D-S evidence theory synthesizes the outputs of the neural networks at the decision level, overcoming the limitation of traditional D-S evidence theory, which cannot handle conflicting information. The diagnosis method was tested using relevant data from the literature. The test results indicate that the data fusion diagnosis system can diagnose nuclear power plant faults accurately and that the method has application value. (authors)
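
    The decision-level combination step can be illustrated with the classical Dempster rule of combination; the fault hypotheses and masses below are invented for the example, and the paper's improved conflict-handling variant is not reproduced.

```python
# Sketch: combining two basic probability assignments (e.g., outputs of two
# of the parallel neural networks) with Dempster's rule of combination.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dict mapping frozenset of fault hypotheses -> mass."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB                       # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

leak, blockage = frozenset({"leak"}), frozenset({"blockage"})
either = leak | blockage
m_net1 = {leak: 0.6, blockage: 0.1, either: 0.3}
m_net2 = {leak: 0.5, blockage: 0.2, either: 0.3}
print(dempster_combine(m_net1, m_net2))               # combined mass concentrates on "leak"
```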

  17. 3D Strain Modelling of Tear Fault Analogues

    Science.gov (United States)

    Hindle, D.; Vietor, T.

    2005-12-01

    Tear faults can be described as vertical discontinuities, with near fault parallel displacements terminating on some sort of shallow detachment. As such, they are difficult to study in "cross section", i.e. 2 dimensions, as is often the case for fold-thrust systems. Hence, little attempt has been made to model the evolution of strain around tear faults and the processes of strain localisation in such structures, due to the necessity of describing these systems in 3 dimensions and the problems this poses for both numerical and analogue modelling. Field studies suggest that strain in such regions can be distributed across broad zones on minor tear systems, which are often not easily mappable. Such strain is probably assumed to be due to distributed strain and to displacement gradients which are themselves necessary for the initiation of the tear itself. We present a numerical study of the effects of a sharp, basal discontinuity parallel to the transport direction in a shortening wedge of material. The discontinuity is represented by two adjacent basal surfaces with strongly contrasting friction coefficients (0.5 and 0.05). The material is modelled using PFC3D distinct element software for simulating granular material, whose properties are chosen to simulate upper crustal, sedimentary rock. The model geometry is a rectangular bounding box, 2 km x 1 km, and 0.35-0.5 km deep, with a single, driving wall of constant velocity. We show the evolution of strain in the model in horizontal and vertical sections, and interpret strain localization as showing the spontaneous development of tear-fault-like features. The strain field in the model is asymmetrical, rotated towards the strong side of the model. Strain increments seem to oscillate in time, suggesting achievement of a steady state. We also note that our model cannot be treated as a critical wedge, since the 3rd dimension and the lateral variations of strength rule out this type of 2D approximation.

  18. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  19. Information transfer via implicit encoding with delay time modulation in a time-delay system

    Energy Technology Data Exchange (ETDEWEB)

    Kye, Won-Ho, E-mail: whkye@kipo.go.kr [Korean Intellectual Property Office, Government Complex Daejeon Building 4, 189, Cheongsa-ro, Seo-gu, Daejeon 302-701 (Korea, Republic of)

    2012-08-20

    A new encoding scheme for information transfer with modulated delay time in a time-delay system is proposed. In the scheme, the message is implicitly encoded into the modulated delay time. The information transfer rate as a function of encoding redundancy in various noise scales is presented and it is analyzed that the implicit encoding scheme (IES) has stronger resistance against channel noise than the explicit encoding scheme (EES). In addition, its advantages in terms of secure communication and feasible applications are discussed. -- Highlights: ► We propose new encoding scheme with delay time modulation. ► The message is implicitly encoded with modulated delay time. ► The proposed scheme shows stronger resistance against channel noise.

  20. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  1. Qademah Fault 3D Survey

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    Objective: Collect 3D seismic data at the Qademah Fault location for (1) 3D traveltime tomography, (2) 3D surface wave migration, (3) 3D phase velocity, and (4) possible reflection processing. Acquisition Date: 26 – 28 September 2014 Acquisition Team: Sherif, Kai, Mrinal, Bowen, Ahmed Acquisition Layout: We used 288 receivers arranged in 12 parallel lines, each line having 24 receivers. Inline offset is 5 m and crossline offset is 10 m. One shot was fired at each receiver location. We used a 40 kg weight drop as the seismic source, with 8 to 15 stacks at each shot location.
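
    The stated layout can be reproduced in a few lines of Python as a quick check of the geometry (local coordinates, with the line and station spacings described above):

```python
# Sketch: 12 parallel lines of 24 receivers, 5 m inline spacing, 10 m crossline spacing.
receivers = [
    (line, station, station * 5.0, line * 10.0)   # (line, station, x_m, y_m)
    for line in range(12)
    for station in range(24)
]
assert len(receivers) == 288
print(receivers[0], receivers[-1])   # (0, 0, 0.0, 0.0) ... (11, 23, 115.0, 110.0)
```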

  2. Geospatial Information Service System Based on GeoSOT Grid & Encoding

    Directory of Open Access Journals (Sweden)

    LI Shizhong

    2016-12-01

    Full Text Available With the rapid development of space and earth observation technology, it is important to establish a multi-source, multi-scale and unified cross-platform reference for global data. In practice, the production and maintenance of geospatial data are scattered across different units, and the standard of the data grid varies between departments and systems. All of this brings about a disunity of standards among different historical periods and organizations. For the geospatial information security library for national high resolution earth observation, there are demands for global display, associated retrieval, template applications and other integrated services for geospatial data. Based on the GeoSOT grid and encoding theory system, data subdivision and organization solutions for globally unified grid-encoding management of the geospatial information security library have been proposed, and system-level analyses, research and designs have been carried out. The experimental results show that the data organization and management method based on GeoSOT can significantly improve the overall efficiency of the geospatial information security service system.

  3. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential for the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of "not capable" reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coast Fault Province and the related association of the "not capable" conclusion

  4. Fault Diagnosis of Car Engine by Using a Novel GA-Based Extension Recognition Method

    Directory of Open Access Journals (Sweden)

    Meng-Hui Wang

    2014-01-01

    Full Text Available For passengers' security, recognizing hidden faults in car engines is the most important task for a maintenance engineer, so that the engines can be kept safe and the reliability of automobile systems improved. In this paper, we present a novel fault recognition method based on the genetic algorithm (GA) and extension theory, and apply this method to the fault recognition of a practical car engine. The proposed recognition method has been tested on the Nissan Cefiro 2.0 engine and has also been compared to other traditional classification methods. Experimental results show that the method is effective for recognizing hidden faults in car engines, and the proposed method can also be applied to other industrial apparatus.

  5. Transmission Level High Temperature Superconducting Fault Current Limiter

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Gary [SuperPower, Inc., Schenectady, NY (United States)

    2016-10-05

    The primary objective of this project was to demonstrate the feasibility and reliability of utilizing high-temperature superconducting (HTS) materials in a Transmission Level Superconducting Fault Current Limiter (SFCL) application. During the project, the type of high-temperature superconducting material used evolved from 1st generation (1G) BSCCO-2212 melt cast bulk high-temperature superconductors to 2nd generation (2G) YBCO-based high-temperature superconducting tape. The SFCL employed SuperPower's “Matrix” technology, that offers modular features to enable scale up to transmission voltage levels. The SFCL consists of individual modules that contain elements and parallel inductors that assist in carrying the current during the fault. A number of these modules are arranged in an m x n array to form the current-limiting matrix.

  6. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  7. A coverage and slicing dependencies analysis for seeking software security defects.

    Science.gov (United States)

    He, Hui; Zhang, Dongyan; Liu, Min; Zhang, Weizhe; Gao, Dongmin

    2014-01-01

    Software security defects have a serious impact on software quality and reliability. Security flaws in a software system are a major hidden danger to its operation. As the scale of software increases, its vulnerabilities become much more difficult to find. Once these vulnerabilities are exploited, great losses may follow. In this situation, the concept of Software Assurance has been put forward by experts, and automated fault localization techniques are a part of Software Assurance research. Current automated fault localization methods include coverage based fault localization (CBFL) and program slicing. Both methods have their own advantages and shortcomings in localization. In this paper, we put forward a new method, named the Reverse Data Dependence Analysis Model, which integrates the two methods by analyzing the program structure. On this basis, we propose a new automated fault localization method. This method not only retains full automation but also narrows the basic localization unit to a single statement, which makes localization more accurate. Several experiments show that our method is more effective. Furthermore, we analyzed the effectiveness of the existing methods on different faults.
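
    As a concrete illustration of the CBFL side of this combination, the sketch below ranks statements with the Tarantula suspiciousness metric computed from a small coverage matrix; Tarantula is a standard CBFL formula used here for illustration, and the proposed Reverse Data Dependence Analysis Model itself is not reproduced.

```python
# Sketch: coverage-based fault localization with the Tarantula suspiciousness metric.
def tarantula(coverage, outcomes):
    """coverage[t][s] = 1 if test t executes statement s; outcomes[t] is 'pass' or 'fail'."""
    total_fail = sum(1 for o in outcomes if o == "fail") or 1
    total_pass = sum(1 for o in outcomes if o == "pass") or 1
    n_stmts = len(coverage[0])
    scores = []
    for s in range(n_stmts):
        fail_s = sum(c[s] for c, o in zip(coverage, outcomes) if o == "fail")
        pass_s = sum(c[s] for c, o in zip(coverage, outcomes) if o == "pass")
        f, p = fail_s / total_fail, pass_s / total_pass
        scores.append(f / (f + p) if f + p > 0 else 0.0)
    return scores

coverage = [            # 3 tests x 4 statements
    [1, 1, 0, 1],
    [1, 0, 1, 0],       # the only failing test
    [1, 1, 0, 1],
]
outcomes = ["pass", "fail", "pass"]
print(tarantula(coverage, outcomes))   # statement 2, executed only by the failing test, scores 1.0
```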

  8. The geometry of pull-apart basins in the southern part of Sumatran strike-slip fault zone

    Science.gov (United States)

    Aribowo, Sonny

    2018-02-01

    Models of pull-apart basin geometry have been described by many previous studies in a variety of tectonic settings. The 2D geometry of Ranau Lake represents a pull-apart basin in the Sumatran Fault Zone. However, there are unclear geomorphic traces of two sub-parallel overlapping strike-slip faults at the boundary of the lake. Nonetheless, clear geomorphic traces that parallel the Kumering Segment of the Sumatran Fault are considered as inactive faults on the southern side of the lake. I demonstrate the angular characteristics of the Ranau Lake and Suoh complex pull-apart basins and compare them with pull-apart basin examples from published studies. I use a digital elevation model (DEM) image to sketch the shape of the depressions of Ranau Lake and Suoh Valley and measure the 2D geometry of the pull-apart basins. This study shows that Ranau Lake is not a pull-apart basin, and that the pull-apart basin is actually located on the eastern side of the lake. Since there is a clear connection between pull-apart basins and volcanic activity in Sumatra, I also predict that the unclear trace of the pull-apart basin near Ranau Lake may be covered by Ranau Caldera and Seminung volcanic products.

  9. Quantum Secure Direct Communication by Using Three-Dimensional Hyperentanglement

    International Nuclear Information System (INIS)

    Shi Jin; Gong Yanxiao; Xu Ping; Zhu Shining; Zhan Youbang

    2011-01-01

    We propose two schemes for realizing quantum secure direct communication (QSDC) by using a set of ordered two-photon three-dimensional hyperentangled states entangled in two degrees of freedom (DOFs) as quantum information channels. In the first scheme, the photons from Bob to Alice are transmitted only once. After ensuring the security of the quantum channels, Bob encodes the secret message on his photons. Then Alice performs single-photon two-DOF Bell-basis measurements on her photons. This scheme has better security than former QSDC protocols. In the second scheme, Bob transmits photons to Alice twice. After ensuring the security of the quantum channels, Bob encodes the secret message on his photons. Then Alice performs two-photon Bell-basis measurements on each DOF. The scheme has more information capacity than former QSDC protocols. (general)

  10. Fracture Modes and Identification of Fault Zones in Wenchuan Earthquake Fault Scientific Drilling Boreholes

    Science.gov (United States)

    Deng, C.; Pan, H.; Zhao, P.; Qin, R.; Peng, L.

    2017-12-01

    After the disaster of the Wenchuan earthquake on May 12th, 2008, scientists were eager to figure out the structure of the formation, the geodynamic processes of the faults and the mechanism of the earthquake in Wenchuan by drilling five holes into the Yingxiu-Beichuan fault zone and the Anxian-Guanxian fault zone. Fracture identification and in-situ stress determination can provide abundant information for formation evaluation and earthquake study. This study describes all the fracture modes in the five boreholes on the basis of cores and image logs, and summarizes the response characteristics of fractures in conventional logs. The results indicate that the WFSD boreholes encounter numerous fractures, including natural fractures and induced fractures, and that high dip-angle conductive fractures are the most common. The maximum horizontal stress trends along the boreholes are deduced as NWW-SEE according to the orientations of borehole breakouts and drilling-induced fractures, which is nearly parallel to the strikes of the younger natural fracture sets. Minor positive deviations of AC (acoustic log) and negative deviations of DEN (density log) demonstrate their responses to fractures, followed by CNL (neutron log), resistivity logs and GR (gamma ray log) at different intensities. Besides, considering that reliable methods for identifying fracture zones, such as seismic, core recovery and image logs, are often hampered by high cost and limited applicability, this study proposes a method using conventional logs, which are low-cost and available even in old wells. We employ wavelet decomposition to extract the high-frequency information of conventional logs and reconstruct a new log that enhances fracture responses and eliminates non-fracture influences. Results reveal that the new log shows obvious deviations in fault zones, which confirms the potential of conventional logs for fracture zone identification.
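
    The described wavelet step can be sketched as follows: decompose a conventional log, zero the low-frequency approximation, and reconstruct a high-frequency log that accentuates sharp fracture-zone responses. The sketch assumes the PyWavelets package, and the wavelet, decomposition level and synthetic log are illustrative choices, not the paper's.

```python
# Sketch: build a high-frequency "fracture response" log by wavelet decomposition.
import numpy as np
import pywt

def high_frequency_log(log, wavelet="db4", level=3):
    coeffs = pywt.wavedec(log, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])       # drop the smooth background trend
    return pywt.waverec(coeffs, wavelet)[: len(log)]

# Synthetic acoustic log: a smooth compaction trend plus a sharp fracture-zone anomaly.
depth = np.arange(0, 500)
ac = 200 + 0.05 * depth + np.where((depth > 240) & (depth < 260), 15.0, 0.0)
hf = high_frequency_log(ac)
print(int(np.argmax(np.abs(hf))))              # peak index falls near the 240-260 anomaly
```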

  11. Structural Mapping Along the Central San Andreas Fault-zone Using Airborne Electromagnetics

    Science.gov (United States)

    Zamudio, K. D.; Bedrosian, P.; Ball, L. B.

    2017-12-01

    Investigations of active fault zones typically focus on either surface expressions or the associated seismogenic zones. However, the largely aseismic upper kilometer can hold significant insight into fault-zone architecture, strain partitioning, and fault-zone permeability. Geophysical imaging of the first kilometer provides a link between surface fault mapping and seismically-defined fault zones and is particularly important in geologically complex regions with limited surface exposure. Additionally, near surface imaging can provide insight into the impact of faulting on the hydrogeology of the critical zone. Airborne electromagnetic (AEM) methods offer a unique opportunity to collect a spatially-large, detailed dataset in a matter of days, and are used to constrain subsurface resistivity to depths of 500 meters or more. We present initial results from an AEM survey flown over a 60 kilometer long segment of the central San Andreas Fault (SAF). The survey is centered near Parkfield, California, the site of the SAFOD drillhole, which marks the transition between a creeping fault segment to the north and a locked zone to the south. Cross sections with a depth of investigation up to approximately 500 meters highlight the complex Tertiary and Mesozoic geology that is dismembered by the SAF system. Numerous fault-parallel structures are imaged across a more than 10 kilometer wide zone centered on the surface trace. Many of these features can be related to faults and folds within Plio-Miocene sedimentary rocks found on both sides of the fault. Northeast of the fault, rocks of the Mesozoic Franciscan and Great Valley complexes are extremely heterogeneous, with highly resistive volcanic rocks within a more conductive background. The upper 300 meters of a prominent fault-zone conductor, previously imaged to 1-3 kilometers depth by magnetotellurics, is restricted to a 20 kilometer long segment of the fault, but is up to 4 kilometers wide in places. Elevated fault

  12. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    Science.gov (United States)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information that might be in the form of a digital image. Based on past research, in order to increase the security level of chaos-based and DNA-based encryption algorithms, an encryption algorithm using a logistic function and DNA encoding was proposed. The digital image encryption algorithm using the logistic function and DNA encoding scrambles the pixel values into DNA bases and processes them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm is used as the random number generator needed in the DNA complement and XOR operations. The test results show that the PSNR values of cipher images are 7.98-7.99 bits, the entropy values are close to 8, the histograms of cipher images are uniformly distributed and the correlation coefficients of cipher images are near 0. Thus, the cipher image can be decrypted perfectly and the encryption algorithm has good resistance to entropy attack and statistical attack.
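
    The two ingredients named above, a logistic-map keystream and DNA-rule encoding, can be sketched as follows with one fixed DNA rule and a base-wise XOR; the key handling, DNA addition/complement steps and parameter choices of the actual algorithm are not reproduced.

```python
# Sketch: logistic-map keystream plus DNA encoding with a base-wise XOR (one fixed rule).
ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DEC = {v: k for k, v in ENC.items()}

def byte_to_dna(b):
    return [ENC[(b >> s) & 0b11] for s in (6, 4, 2, 0)]   # four bases per 8-bit pixel

def dna_to_byte(bases):
    out = 0
    for base in bases:
        out = (out << 2) | DEC[base]
    return out

def dna_xor(x, y):                  # XOR defined on the underlying 2-bit codes
    return ENC[DEC[x] ^ DEC[y]]

def logistic_keystream(x0, n, r=3.99):
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) & 0xFF)
    return ks

def encrypt(pixels, x0):
    ks = logistic_keystream(x0, len(pixels))
    out = []
    for p, k in zip(pixels, ks):
        bases = [dna_xor(a, b) for a, b in zip(byte_to_dna(p), byte_to_dna(k))]
        out.append(dna_to_byte(bases))
    return out

pixels = [12, 200, 77, 255]
cipher = encrypt(pixels, 0.7071)
assert encrypt(cipher, 0.7071) == pixels     # an XOR-based scheme is its own inverse
print(cipher)
```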

  13. Geology and structure of the North Boqueron Bay-Punta Montalva Fault System

    Science.gov (United States)

    Roig Silva, Coral Marie

    The North Boqueron Bay-Punta Montalva Fault Zone is an active fault system that cuts across the Lajas Valley in southwestern Puerto Rico. The fault zone has been recognized and mapped based upon detailed analysis of geophysical data, satellite images and field mapping. The fault zone consists of a series of Cretaceous bedrock faults that reactivated and deformed Miocene limestone and Quaternary alluvial fan sediments. The fault zone is seismically active (ML < 5.0) with numerous locally felt earthquakes. Focal mechanism solutions and structural field data suggest strain partitioning with predominantly east-west left-lateral displacements and small normal faults oriented mostly toward the northeast. Evidence for recent displacement consists of fractures and small normal faults, oriented mostly northeast, found in intermittent streams that cut through the Quaternary alluvial fan deposits along the southern margin of the Lajas Valley. Areas of preferred erosion within the alluvial fan trend toward the west-northwest, parallel to the on-land projection of the North Boqueron Bay Fault. Beyond the faulted alluvial fan and southeast of the Lajas Valley, the North Boqueron Bay Fault joins with the Punta Montalva Fault. The Punta Montalva Fault is defined by a strong topographic WNW lineament along which stream channels are displaced left-laterally by 200 meters and Miocene strata are steeply tilted to the south. Along the western end of the fault zone in northern Boqueron Bay, the older strata are tilted only 3° south and are covered by flat-lying Holocene sediments. Focal mechanism solutions along the western end suggest NW-SE shortening, which is inconsistent with left-lateral strain partitioning along the fault zone. The limited deformation of older strata and inconsistent strain partitioning may be explained by a westerly propagation of the fault system from the southwest end. The limited geomorphic structural expression along the North Boqueron Bay Fault segment

  14. Implementation of superconducting fault current limiter for flexible operation in the power substation

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chong Suk, E-mail: chong_suk@korea.ac.kr [School of Electrical Engineering, Korea University, Anam dong, Seonbukgu, Seoul 136-713 (Korea, Republic of); Lee, Hansang [School of Railway and Electrical Engineering, Kyungil University, Hayang-eup, Gyeongsan-si, Gyeongsangbuk-do 712-701 (Korea, Republic of); Cho, Yoon-sung [Department of Electric and Energy Engineering, Catholic University of Daegu, Hayang-eup, Gyeongsan-si, Gyeongsangbuk-do 712-702 (Korea, Republic of); Suh, Jaewan [School of Electrical Engineering, Korea University, Anam dong, Seonbukgu, Seoul 136-713 (Korea, Republic of); Jang, Gilsoo, E-mail: gjang@korea.ac.kr [School of Electrical Engineering, Korea University, Anam dong, Seonbukgu, Seoul 136-713 (Korea, Republic of)

    2014-09-15

    Highlights: • The power load concentrated in load centers results in high levels of fault current. • This paper introduces a fault current reduction scheme using SFCLs in substations. • The SFCL is connected in parallel to the bus tie between the two busbars. • The fault current mitigation using SFCLs is verified through PSS/e simulations. - Abstract: The concentration of large-scale power loads in metropolitan areas has resulted in high fault current levels during a fault, thereby requiring the substation to operate in the double busbar configuration mode. However, the double busbar configuration mode results in deterioration of power system reliability and unbalanced power flow in the adjacent transmission lines, which may result in issues such as overloading of lines. This paper proposes the implementation of a superconducting fault current limiter (SFCL) installed between the two substation busbars for a more efficient and flexible operation of the substation, enabling both single and double busbar configurations depending on the system conditions, guaranteeing power system reliability as well as fault current limitation. Case studies were performed to evaluate the effectiveness of the SFCL installation, and results are compared for the cases where the substation operates in single and double busbar mode, with and without the installation of the SFCL for fault current mitigation.

  15. The effect of gradational velocities and anisotropy on fault-zone trapped waves

    Science.gov (United States)

    Gulley, A. K.; Eccles, J. D.; Kaipio, J. P.; Malin, P. E.

    2017-08-01

    Synthetic fault-zone trapped wave (FZTW) dispersion curves and amplitude responses for FL (Love) and FR (Rayleigh) type phases are analysed in transversely isotropic 1-D elastic models. We explore the effects of velocity gradients, anisotropy, source location and mechanism. These experiments suggest: (i) A smooth exponentially decaying velocity model produces a significantly different dispersion curve to that of a three-layer model, with the main difference being that Airy phases are not produced. (ii) The FZTW dispersion and amplitude information of a waveguide with transverse-isotropy depends mostly on the Shear wave velocities in the direction parallel with the fault, particularly if the fault zone to country-rock velocity contrast is small. In this low velocity contrast situation, fully isotropic approximations to a transversely isotropic velocity model can be made. (iii) Fault-aligned fractures and/or bedding in the fault zone that cause transverse-isotropy enhance the amplitude and wave-train length of the FR type FZTW. (iv) Moving the source and/or receiver away from the fault zone removes the higher frequencies first, similar to attenuation. (v) In most physically realistic cases, the radial component of the FR type FZTW is significantly smaller in amplitude than the transverse.

  16. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto-calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
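
    The joint-sparsity shrinkage step at the core of such reconstructions can be written very compactly. The NumPy sketch below is a minimal illustration of group soft-thresholding across coil channels; it assumes the wavelet coefficients are already available and is not the authors' implementation.

```python
import numpy as np

def joint_soft_threshold(coeffs, lam):
    """Group (joint-sparsity) soft-thresholding across receive channels.

    coeffs : complex array, shape (n_channels, n_coefficients); wavelet
             coefficients of each channel image, assumed precomputed.
    lam    : threshold level (proportional to the regularization weight).
    """
    # Joint magnitude of each coefficient across all channels
    mag = np.sqrt(np.sum(np.abs(coeffs) ** 2, axis=0, keepdims=True))
    # Shrink the joint magnitude toward zero; keep phase and channel weighting
    shrink = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return coeffs * shrink

# Toy usage: 8 channels, 1024 coefficients, mostly small with a few large entries
rng = np.random.default_rng(0)
c = rng.normal(size=(8, 1024)) + 1j * rng.normal(size=(8, 1024))
c[:, :16] *= 50.0                 # a handful of jointly significant coefficients
print(np.count_nonzero(np.abs(joint_soft_threshold(c, lam=10.0)).sum(axis=0)))
```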

  17. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
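
    The control flow described in this patent abstract can be paraphrased as a simple retry loop. The sketch below is a hypothetical software rendering only; the snapshot, restore, run and error-check callables are placeholders standing in for the cache and control-logic hardware.

```python
# Behavioral sketch only: the patented mechanism is hardware control logic
# around a cache; here the callables are placeholders for those mechanisms.
def run_with_local_rollback(work_items, interval, run_chunk, snapshot,
                            restore, error_occurred, unrecoverable):
    i = 0
    while i < len(work_items):
        chunk = work_items[i:i + interval]   # one local rollback interval
        state = snapshot()                   # checkpoint state before the interval
        run_chunk(chunk)
        if error_occurred() and not unrecoverable():
            restore(state)                   # roll back and re-run the same interval
            continue
        i += interval                        # commit and advance to the next interval

# Tiny demo with stub callables: a transient error during the second interval
# causes that interval to be rolled back and re-executed.
log = []
flags = iter([False, True, False, False, False])   # error on the 2nd check only
run_with_local_rollback(
    work_items=list(range(8)), interval=2,
    run_chunk=lambda chunk: log.append(list(chunk)),
    snapshot=lambda: None, restore=lambda state: None,
    error_occurred=lambda: next(flags), unrecoverable=lambda: False)
print(log)   # [[0, 1], [2, 3], [2, 3], [4, 5], [6, 7]]
```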

  18. RESEARCH ARTICLE Ne2 encodes protein(s) and the altered ...

    Indian Academy of Sciences (India)

    friendly method for increasing sustainable global food security. .... This qualitative difference suggests that Ne2 could encode one or two or three of ... things, common wheat must have at least three types of chloroplast-genomes (A, D, and B).

  19. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  20. Final Report: Migration Mechanisms for Large-scale Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Jason Nieh

    2009-10-30

    Process migration is the ability to transfer a process from one machine to another. It is a useful facility in distributed computing environments, especially as computing devices become more pervasive and Internet access becomes more ubiquitous. The potential benefits of process migration, among others, are fault resilience by migrating processes off of faulty hosts, data access locality by migrating processes closer to the data, better system response time by migrating processes closer to users, dynamic load balancing by migrating processes to less loaded hosts, and improved service availability and administration by migrating processes before host maintenance so that applications can continue to run with minimal downtime. Although process migration provides substantial potential benefits and many approaches have been considered, achieving transparent process migration functionality has been difficult in practice. To address this problem, our work has designed, implemented, and evaluated new and powerful transparent process checkpoint-restart and migration mechanisms for desktop, server, and parallel applications that operate across heterogeneous cluster and mobile computing environments. A key aspect of this work has been to introduce lightweight operating system virtualization to provide processes with private, virtual namespaces that decouple and isolate processes from dependencies on the host operating system instance. This decoupling enables processes to be transparently checkpointed and migrated without modifying, recompiling, or relinking applications or the operating system. Building on this lightweight operating system virtualization approach, we have developed novel technologies that enable (1) coordinated, consistent checkpoint-restart and migration of multiple processes, (2) fast checkpointing of process and file system state to enable restart of multiple parallel execution environments and time travel, (3) process migration across heterogeneous

  1. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
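
    The graph-theoretic vocabulary used here (path sets, reachability) is easy to make concrete. The following self-contained Python sketch builds a toy failure-oriented directed graph and enumerates reachability and path sets; the example graph is invented, and the AND/OR gate logic of real Fault Graph codes is deliberately omitted.

```python
# Toy fault graph (failure-oriented directed graph). An edge u -> v means
# "failure of u can propagate to v". AND/OR logic connectives are omitted;
# this sketch only illustrates path sets and reachability.
graph = {
    "power":  ["pump_A", "pump_B"],
    "pump_A": ["header"],
    "pump_B": ["header"],
    "header": ["system"],
    "system": [],
}

def reachable(graph, start):
    """Set of all nodes reachable from `start`; a visited set makes cycles safe."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def path_sets(graph, start, goal, path=()):
    """Enumerate simple paths (path sets) from `start` to `goal`."""
    path = path + (start,)
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:              # avoid revisiting nodes on this path
            yield from path_sets(graph, nxt, goal, path)

print(reachable(graph, "power"))                   # every node the failure can reach
print(list(path_sets(graph, "power", "system")))   # the two pump paths to the top event
```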

  2. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; therefore, understanding the origin of clays in fault rocks and their distribution is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that the formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  3. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Panda, Dhabaleswar Kumar [The Ohio State University; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of
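
    To illustrate the publish-subscribe pattern that the FTB specification is built around, here is a minimal in-process sketch in Python; the class name, event names and method signatures are invented for this example and do not reproduce the actual FTB API.

```python
from collections import defaultdict

class FaultBackplane:
    """Tiny in-process publish-subscribe bus for fault events. Illustrative only:
    the real FTB wire protocol, event namespaces and API are not reproduced."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        """Register a callback for a named fault event."""
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        """Deliver a fault event to every component that subscribed to it."""
        for handler in self._subscribers[event_name]:
            handler(event_name, payload)

# Toy usage: a job scheduler reacts to a fault event raised by a runtime library.
bus = FaultBackplane()
bus.subscribe("node.memory.uncorrectable",
              lambda name, info: print(f"scheduler: draining node {info['node']}"))
bus.publish("node.memory.uncorrectable", {"node": "n0042"})
```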

  4. An Enhanced Erasure Code-Based Security Mechanism for Cloud Storage

    Directory of Open Access Journals (Sweden)

    Wenfeng Wang

    2014-01-01

    Full Text Available Cloud computing offers a wide range of benefits, such as high performance, rapid elasticity, on-demand self-service, and low cost. However, data security continues to be a significant impediment to the promotion and popularization of cloud computing. To address the problem of data leakage caused by unreliable service providers and external cyber attacks, an enhanced erasure code-based security mechanism is proposed and elaborated in terms of four aspects: data encoding, data transmission, data placement, and data reconstruction, which together ensure data security throughout the entire path into cloud storage. Based on this mechanism, we implement a secure cloud storage system (SCSS). The key design issues, including data division, construction of the generator matrix, data encoding, fragment naming, and data decoding, are also described in detail. Finally, we analyze data availability and security and evaluate performance. Experimental results and analysis demonstrate that SCSS achieves high availability, strong security, and excellent performance.
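
    For readers unfamiliar with erasure-coded storage, the sketch below shows the simplest possible instance: striping data into k fragments plus a single XOR parity fragment, so that any one lost fragment can be rebuilt. The paper's mechanism uses a more general generator-matrix construction, which this toy example does not reproduce.

```python
import functools

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal fragments (zero-padded) plus one XOR parity fragment."""
    frag_len = -(-len(data) // k)                  # ceiling division
    padded = data.ljust(k * frag_len, b"\0")
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = functools.reduce(xor_bytes, frags)
    return frags + [parity]

def reconstruct(frags):
    """Rebuild the single fragment marked as None by XOR-ing the survivors."""
    missing = frags.index(None)
    survivors = [f for f in frags if f is not None]
    frags[missing] = functools.reduce(xor_bytes, survivors)
    return frags

fragments = encode(b"secret payload heading to cloud storage", k=4)
fragments[2] = None                                # simulate a lost/unavailable fragment
restored = reconstruct(fragments)
print(b"".join(restored[:4]).rstrip(b"\0"))        # original data without the parity
```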

  5. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab...

  6. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  7. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    Science.gov (United States)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine-resilient, fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. This document contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements and analytical models.

  8. Towards a Game Theoretic View of Secure Computation

    DEFF Research Database (Denmark)

    Asharov, Gilad; Canetti, Ran; Hazay, Carmit

    2011-01-01

    We demonstrate how Game Theoretic concepts and formalism can be used to capture cryptographic notions of security. In the restricted but indicative case of two-party protocols in the face of malicious fail-stop faults, we first show how the traditional notions of secrecy and correctness of protocols can be captured as properties of Nash equilibria in games for rational players. Next, we concentrate on fairness. Here we demonstrate a Game Theoretic notion and two different cryptographic notions that turn out to all be equivalent. In addition, we provide a simulation based notion that implies...

  9. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems.

    Science.gov (United States)

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes take place often in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT) that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash table based indexing and searching scheme to satisfy those purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
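
    The kind of hash-table indexing that makes such log-data checking fast can be illustrated with a toy example. In the Python sketch below, a dictionary index over observed signal transitions serves as the nominal model and unseen transitions are flagged as anomalies; the record format and the model construction are invented for illustration, not PLAT's actual scheme.

```python
from collections import defaultdict

# Toy log records: (timestamp, signal_name, new_value). Invented format.
training_log = [
    (0.0, "clamp_closed", 1), (0.4, "drill_on", 1),
    (1.9, "drill_on", 0),     (2.1, "clamp_closed", 0),
]

def transition_pairs(log):
    """Yield consecutive (signal, value) -> (signal, value) transitions."""
    events = [(sig, val) for _, sig, val in log]
    return zip(events, events[1:])

def build_nominal_index(log):
    """Hash-table 'nominal model': every transition seen during normal operation."""
    index = defaultdict(int)
    for prev, curr in transition_pairs(log):
        index[(prev, curr)] += 1
    return index

def find_anomalies(log, index):
    """O(1) lookup per transition: report transitions never seen in the nominal model."""
    return [(prev, curr) for prev, curr in transition_pairs(log)
            if (prev, curr) not in index]

nominal = build_nominal_index(training_log)
test_log = [(0.0, "clamp_closed", 1), (0.3, "drill_on", 1), (0.5, "clamp_closed", 0)]
print(find_anomalies(test_log, nominal))   # clamp opens while the drill is still on
```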

  10. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Arup Ghosh

    2016-01-01

    Full Text Available Operational faults and behavioural anomalies associated with PLC control processes take place often in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash table based indexing and searching scheme to satisfy those purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.

  11. A 3D resistivity model derived from the transient electromagnetic data observed on the Araba fault, Jordan

    Science.gov (United States)

    Rödder, A.; Tezkan, B.

    2013-01-01

    72 in-loop transient electromagnetic soundings were carried out on two 2 km long profiles perpendicular to, and two 1 km and two 500 m long profiles parallel to, the strike direction of the Araba fault in Jordan, which is the southern part of the Dead Sea transform fault marking the boundary between the African and Arabian continental plates. The distance between the stations was on average 50 m. The late-time apparent resistivities derived from the induced voltages show clear differences between the stations located on the eastern and on the western side of the Araba fault. The fault appears as a boundary between the resistive western part (ca. 100 Ωm) and the conductive eastern part (ca. 10 Ωm) of the survey area. On profiles parallel to the strike, the late-time apparent resistivities were almost constant, both in their time dependence and in their lateral variation between stations, indicating a 2D resistivity structure of the investigated area. After processing, the data were interpreted by conventional 1D Occam and Marquardt inversion. A study using 2D synthetic model data showed, however, that 1D inversions of stations close to the fault resulted in fictitious layers in the subsurface, thus producing large interpretation errors. Therefore, the data were interpreted by 2D forward resistivity modeling, which was then extended to a 3D resistivity model. This 3D model satisfactorily explains the time dependence of the observed transients at nearly all stations.

  12. Modular techniques for dynamic fault-tree analysis

    Science.gov (United States)

    Patterson-Hine, F. A.; Dugan, Joanne B.

    1992-01-01

    It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.

  13. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults...... is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation....

  14. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault or cause of beam shutdown while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required

  15. Weak fault detection and health degradation monitoring using customized standard multiwavelets

    Science.gov (United States)

    Yuan, Jing; Wang, Yu; Peng, Yizhen; Wei, Chenjun

    2017-09-01

    Because their symptoms are subtle and contaminated by a large amount of background noise, weak faults are challenging to detect in advance and to monitor predictively for machinery security assurance. Multiwavelets can act as adaptive non-stationary signal processing tools and are potentially viable for weak fault diagnosis. However, signal-based multiwavelets suffer from problems such as imperfect properties (in particular, missing the crucial orthogonality), decomposition distortion that prevents the results from reflecting the relationships between faults and their signatures, single-objective optimization, and unsuitability for fault prognostics. Thus, customized standard multiwavelets are proposed for weak fault detection and health degradation monitoring, and especially for quantitative identification of weak fault signatures. First, flexible standard multiwavelets are designed using a construction method derived from scalar wavelets, securing the properties desired for accurate detection of weak faults and avoiding the distortion issue in quantitative feature identification. Second, a multi-objective optimization combining three dimensionless indicators (the normalized energy entropy, the normalized singular entropy and the kurtosis index) is introduced as the evaluation criterion; it helps select the best candidate basis functions for weak faults without being influenced by variable working conditions. Third, an ensemble health indicator, fusing the kurtosis index, impulse index and clearance index of the original signal together with the normalized energy entropy and normalized singular entropy obtained from the customized standard multiwavelets, is constructed using the Mahalanobis distance to continuously monitor the health condition and track performance degradation. Finally, three experimental case studies are implemented to demonstrate the feasibility and effectiveness of the proposed method. The results show that the proposed method can quantitatively identify the fault signature of a slight rub on
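
    A hedged sketch of the final fusion step (several condition indicators combined into one health index via the Mahalanobis distance) is given below; the specific features, their computation and the baseline data are assumptions for illustration and stand in for the multiwavelet-derived quantities used in the paper.

```python
import numpy as np
from scipy.stats import kurtosis

def feature_vector(signal):
    """Illustrative condition indicators (not the paper's exact multiwavelet features)."""
    abs_sig = np.abs(signal)
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([
        kurtosis(signal, fisher=False),                     # kurtosis index
        np.max(abs_sig) / np.mean(abs_sig),                 # impulse index
        np.max(abs_sig) / np.mean(np.sqrt(abs_sig)) ** 2,   # clearance index
        rms,                                                # stand-in for entropy features
    ])

def health_indicator(signal, baseline_features):
    """Mahalanobis distance of the current feature vector from a healthy baseline."""
    mu = baseline_features.mean(axis=0)
    cov = np.cov(baseline_features, rowvar=False)
    diff = feature_vector(signal) - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Synthetic demo: the "faulty" signal contains sparse impulses, so its distance grows.
rng = np.random.default_rng(0)
baseline = np.array([feature_vector(rng.normal(size=4096)) for _ in range(50)])
healthy = rng.normal(size=4096)
faulty = rng.normal(size=4096) + (rng.random(4096) < 0.01) * rng.normal(8, 1, 4096)
print(health_indicator(healthy, baseline), health_indicator(faulty, baseline))
```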

  16. Static stress changes associated with normal faulting earthquakes in South Balkan area

    Science.gov (United States)

    Papadimitriou, E.; Karakostas, V.; Tranos, M.; Ranguelov, B.; Gospodinov, D.

    2007-10-01

    Activation of major faults in Bulgaria and northern Greece presents significant seismic hazard because of their proximity to populated centers. The long recurrence intervals, of the order of several hundred years as suggested by previous investigations, imply that the twentieth century activation along the southern boundary of the sub-Balkan graben system, is probably associated with stress transfer among neighbouring faults or fault segments. Fault interaction is investigated through elastic stress transfer among strong main shocks ( M ≥ 6.0), and in three cases their foreshocks, which ruptured distinct or adjacent normal fault segments. We compute stress perturbations caused by earthquake dislocations in a homogeneous half-space. The stress change calculations were performed for faults of strike, dip, and rake appropriate to the strong events. We explore the interaction between normal faults in the study area by resolving changes of Coulomb failure function ( ΔCFF) since 1904 and hence the evolution of the stress field in the area during the last 100 years. Coulomb stress changes were calculated assuming that earthquakes can be modeled as static dislocations in an elastic half-space, and taking into account both the coseismic slip in strong earthquakes and the slow tectonic stress buildup associated with major fault segments. We evaluate if these stress changes brought a given strong earthquake closer to, or sent it farther from, failure. Our modeling results show that the generation of each strong event enhanced the Coulomb stress on along-strike neighbors and reduced the stress on parallel normal faults. We extend the stress calculations up to present and provide an assessment for future seismic hazard by identifying possible sites of impending strong earthquakes.
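
    The quantity being resolved here is conventionally written ΔCFF = Δτ + μ′Δσn, i.e. the slip-parallel shear stress change plus an effective friction coefficient times the normal stress change (unclamping taken as positive). The helper below simply evaluates that definition with illustrative numbers; it does not perform the elastic dislocation modeling itself.

```python
def delta_cff(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure function change: dCFF = d_shear + mu_eff * d_normal.

    d_shear  : shear stress change resolved in the slip direction [MPa]
    d_normal : normal stress change on the receiver fault, unclamping positive [MPa]
    mu_eff   : effective friction coefficient (0.4 is a commonly assumed value)
    """
    return d_shear + mu_eff * d_normal

# Illustrative values only (real stress changes come from a dislocation model):
print(delta_cff(d_shear=0.12, d_normal=-0.05))   # 0.10 MPa -> loaded toward failure
```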

  17. The 2016-2017 central Italy coseismic surface ruptures and their meaning with respect to foreseen active fault systems segmentation

    Science.gov (United States)

    De Martini, P. M.; Pucci, S.; Villani, F.; Civico, R.; Del Rio, L.; Cinti, F. R.; Pantosti, D.

    2017-12-01

    In 2016-2017 a series of moderate to large normal faulting earthquakes struck central Italy producing severe damage in many towns including Amatrice, Norcia and Visso and resulting in 299 casualties and >20,000 homeless. The complex seismic sequence depicts a multiple activation of the Mt. Vettore-Mt. Bove (VBFS) and the Laga Mts. fault systems, which were considered in literature as independent segments characterizing a recent seismic gap in the region comprised between two modern seismic sequences: the 1997-1998 Colfiorito and the 2009 L'Aquila. We mapped in detail the coseismic surface ruptures following three mainshocks (Mw 6.0 on 24th August, Mw 5.9 and Mw 6.5 on 26th and 30th October, 2016, respectively). Primary surface ruptures were observed and recorded for a total length of 5.2 km, ≅10 km and ≅25 km, respectively, along closely-spaced, parallel or subparallel, overlapping or step-like synthetic and antithetic fault splays of the activated fault systems, in some cases rupturing repeatedly the same location. Some coseismic ruptures were mapped also along the Norcia Fault System, paralleling the VBFS about 10 km westward. We recorded geometric and kinematic characteristics of the normal faulting ruptures with an unprecedented detail thanks to almost 11,000 oblique photographs taken from helicopter flights soon after the mainshocks, verified and integrated with field data (more than 7000 measurements). We analyze the along-strike coseismic slip and slip vectors distribution to be observed in the context of the geomorphic expression of the disrupted slopes and their depositional and erosive processes. Moreover, we constructed 1:10.000 scale geologic cross-sections based on updated maps, and we reconstructed the net offset distribution of the activated fault system to be compared with the morphologic throws and to test a cause-effect relationship between faulting and first-order landforms. We provide a reconstruction of the 2016 coseismic rupture pattern as

  18. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  19. Security analysis of optical encryption

    OpenAIRE

    Frauel, Yann; Castro, Albertina; Naughton, Thomas J.; Javidi, Bahram

    2005-01-01

    This paper analyzes the security of amplitude encoding for double random phase encryption. We describe several types of attack. The system is found to be resistant to brute-force attacks but vulnerable to chosen and known plaintext attacks.

  20. Security analysis of optical encryption

    Science.gov (United States)

    Frauel, Yann; Castro, Albertina; Naughton, Thomas J.; Javidi, Bahram

    2005-10-01

    This paper analyzes the security of amplitude encoding for double random phase encryption. We describe several types of attack. The system is found to be resistant to brute-force attacks but vulnerable to chosen and known plaintext attacks.

  1. Fault tolerant system based on IDDQ testing

    Science.gov (United States)

    Guibane, Badi; Hamdi, Belgacem; Mtibaa, Abdellatif; Bensalem, Brahim

    2018-06-01

    Offline testing is essential to ensure good manufacturing quality. However, for permanent or transient faults that occur during the use of the integrated circuit in an application, an online integrated test is needed as well. This procedure should ensure the detection and possibly the correction or the masking of these faults. This requirement of self-correction is sometimes necessary, especially in critical applications that require high security such as automotive, space or biomedical applications. We propose a fault-tolerant design for analogue and mixed-signal complementary metal-oxide-semiconductor (CMOS) circuits based on quiescent supply current (IDDQ) testing. A defect can cause an increase in current consumption. The IDDQ testing technique is based on the measurement of the power supply current to distinguish between functional and failed circuits. The technique has been an effective testing method for detecting physical defects such as gate-oxide shorts, floating gates (opens) and bridging defects in CMOS integrated circuits. An architecture called BICS (Built-In Current Sensor) is used for monitoring the supply current (IDDQ) of the connected integrated circuit. If the measured current is not within the normal range, a defect is signalled and the system switches connection from the defective to a functional integrated circuit. The fault-tolerant technique is composed essentially of a double-mirror built-in current sensor, allowing the detection of abnormal current consumption, and blocks allowing the connection to redundant circuits if a defect occurs. SPICE simulations are performed to validate the proposed design.
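
    A purely behavioral sketch of the decision logic (measure IDDQ, compare against a normal window, hand over to the redundant circuit on a violation) might look like the following; the thresholds, the sensor read-out and the switching call are placeholders, since the real mechanism is an analogue BICS circuit, not software.

```python
# Behavioral sketch only: the real mechanism is an analogue built-in current
# sensor plus switching transistors, not software. Thresholds are placeholders.
I_MIN, I_MAX = 0.2e-6, 5.0e-6        # acceptable quiescent-current window [A]

def monitor_iddq(read_iddq, switch_to_spare, n_samples=100):
    """Read IDDQ repeatedly; on an out-of-window sample, flag a defect and
    hand over to the redundant circuit."""
    for _ in range(n_samples):
        i_ddq = read_iddq()
        if not (I_MIN <= i_ddq <= I_MAX):
            switch_to_spare()
            return "defect detected, spare circuit connected"
    return "no defect detected"

# Toy usage with stubbed callbacks:
samples = iter([1.0e-6, 1.1e-6, 40e-6])          # last sample models a bridging defect
print(monitor_iddq(lambda: next(samples), lambda: None, n_samples=3))
```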

  2. Unified compression and encryption algorithm for fast and secure network communications

    International Nuclear Information System (INIS)

    Rizvi, S.M.J.; Hussain, M.; Qaiser, N.

    2005-01-01

    Compression and encryption of data are two vital requirements for the fast and secure transmission of data in network-based communications. In this paper, an algorithm based on adaptive Huffman encoding is presented for unified compression and encryption of Unicode-encoded textual data. The Huffman encoding weakness that the same tree is needed for decoding is exploited in the presented algorithm as an extra layer of security; the tree is updated whenever the frequency change exceeds a specified threshold level. The results show that the compression is comparable to the popular zip format and, in addition, the data gain an extra layer of encryption that makes them more secure. Thus, the unified algorithm presented here can be used for network communications between different branches of banks, e-Government programs, and national database and registration centers, where data transmission requires both compression and encryption. (author)
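
    To make the idea of treating the Huffman code table itself as a shared secret concrete, here is a minimal static-Huffman sketch in Python. The paper's algorithm is adaptive (the tree evolves and is refreshed when frequency changes exceed a threshold) and targets Unicode text, which this simplified, assumption-laden example does not reproduce.

```python
import heapq
from collections import Counter

def build_code_table(text):
    """Static Huffman code table. In the unified scheme sketched above, the
    (adaptive) code table itself acts as the shared secret; this static version
    only illustrates the encoding side."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate single-symbol input
        heap[0][1][1] = "0"
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:                   # prefix '0' to the lighter subtree
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:                   # prefix '1' to the heavier subtree
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

def encode(text, table):
    return "".join(table[ch] for ch in text)

def decode(bits, table):
    inverse = {code: sym for sym, code in table.items()}
    out, buf = [], ""
    for bit in bits:                          # prefix-free codes: greedy match works
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

message = "this message stands in for bank transaction data"
table = build_code_table(message)             # would be shared out-of-band as the key
cipher_bits = encode(message, table)
assert decode(cipher_bits, table) == message
```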

  3. Applying a Cerebellar Model Articulation Controller Neural Network to a Photovoltaic Power Generation System Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Kuei-Hsiang Chao

    2013-01-01

    Full Text Available This study employed a cerebellar model articulation controller (CMAC) neural network to conduct fault diagnoses on photovoltaic power generation systems. We composed a module array using 9 series and 2 parallel connections of SHARP NT-R5E3E 175 W photovoltaic modules. In addition, we used data output under various fault conditions as the training samples for the CMAC and used this model to conduct the module array fault diagnosis after completing the training. The results of the training process and simulations indicate that the method proposed in this study requires fewer training iterations than other methods. In addition to significantly increasing the accuracy rate of the fault diagnosis, this model features a short training duration because the training process only tunes the weights of the excited memory addresses. Therefore, the fault diagnosis is rapid, and the detection tolerance of the diagnosis system is enhanced.

  4. Kinematics of the 2015 San Ramon, California earthquake swarm: Implications for fault zone structure and driving mechanisms

    Science.gov (United States)

    Xue, Lian; Bürgmann, Roland; Shelly, David R.; Johnson, Christopher W.; Taira, Taka'aki

    2018-05-01

    Earthquake swarms represent a sudden increase in seismicity that may indicate a heterogeneous fault-zone, the involvement of crustal fluids and/or slow fault slip. Swarms sometimes precede major earthquake ruptures. An earthquake swarm occurred in October 2015 near San Ramon, California in an extensional right step-over region between the northern Calaveras Fault and the Concord-Mt. Diablo fault zone, which has hosted ten major swarms since 1970. The 2015 San Ramon swarm is examined here from 11 October through 18 November using template matching analysis. The relocated seismicity catalog contains ∼4000 events with magnitudes between - 0.2 parallel, southwest striking and northwest dipping fault segments of km-scale dimension and thickness of up to 200 m. The segments contain coexisting populations of different focal-mechanisms, suggesting a complex fault zone structure with several sets of en échelon fault orientations. The migration of events along the three planar structures indicates a complex fluid and faulting interaction processes. We searched for correlations between seismic activity and tidal stresses and found some suggestive features, but nothing that we can be confident is statistically significant.

  5. Tolerating correlated failures in Massively Parallel Stream Processing Engines

    DEFF Research Database (Denmark)

    Su, L.; Zhou, Y.

    2016-01-01

    Fault-tolerance techniques for stream processing engines can be categorized into passive and active approaches. A typical passive approach periodically checkpoints a processing task's runtime states and can recover a failed task by restoring its runtime state using its latest checkpoint. On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE

  6. A Secure Test Technique for Pipelined Advanced Encryption Standard

    Science.gov (United States)

    Shi, Youhua; Togawa, Nozomu; Yanagisawa, Masao; Ohtsuki, Tatsuo

    In this paper, we present a Design-for-Secure-Test (DFST) technique for pipelined AES that guarantees both security and test quality during testing. Unlike previous works, the proposed method keeps all secrets inside the chip while providing high test quality and fault diagnosis capability. Furthermore, the proposed DFST technique significantly reduces test application time, test data volume, and test generation effort as additional benefits.

  7. Geomorphic and Structural Evidence for Rolling Hinge Style Deformation in the Footwall of an Active Low Angle Normal Fault, Mai'iu Fault, Woodlark Rift, SE Papua New Guinea

    Science.gov (United States)

    Mizera, M.; Little, T.; Norton, K. P.; Webber, S.; Ellis, S. M.; Oesterle, J.

    2016-12-01

    While shown to operate in oceanic crust, rolling hinge style deformation remains a debated process in metamorphic core complexes (MCCs) in the continents. The model predicts that unloading and isostatic uplift during slip causes a progressive back-tilting in the upper crust of a normal fault that is more steeply dipping at depth. The Mai'iu Fault in the Woodlark Rift, SE Papua New Guinea, is one of the best-exposed and fastest slipping (probably >7 mm/yr) active low-angle normal faults (LANFs) on Earth. We analysed structural field data from this fault's exhumed slip surface and footwall, together with geomorphic data interpreted from aerial photographs and GeoSAR-derived digital elevation models (gridded at 5-30 m spacing), to evaluate deformational processes affecting the rapidly exhuming, domal-shaped detachment fault. The exhumed fault surface emerges from the ground at the rangefront near sea level with a northward dip of 21°. Up-dip, it is well-preserved, smooth and corrugated, with some fault remnants extending at least 29 km in the slip direction. The surface flattens over the crest of the dome, beyond where it dips S at up to 15°. Windgaps perched on the crestal main divide of the dome indicate both up-dip tectonic advection and progressive back-tilting of the exhuming fault surface. We infer that slip on a serial array of m-to-km scale up-to-the-north, steeply S-dipping (~75°) antithetic-sense normal faults accommodated some of the exhumation-related, inelastic bending of the footwall. These geomorphically well expressed faults strike parallel to the main Mai'iu fault at 110.9±5°, have a mean cross-strike spacing of 1520 m, and slip with a consistent up-to-the-north sense of throw ranging from <5 m to 120 m. Apparently the Mai'iu Fault was able to continue slipping despite having to negotiate this added fault-roughness. We interpret the antithetic faulting to result from bending stresses, and to provide the first clear examples of rolling hinge

  8. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with an unknown actuator fault is investigated. The actuator fault is assumed to have no traditional affine dependence on the system state variables and control input. A useful property of the basis functions of the radial basis function neural network (NN), which is used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with a fault alarm is proposed. Compared with results in the literature, the scheme minimizes the time delay between fault occurrence and accommodation (the delay due to fault diagnosis) and reduces its adverse effect on system performance. In addition, the FTC scheme combines the advantages of passive fault-tolerant control with the properties of traditional active fault-tolerant control. Furthermore, the scheme requires no additional fault detection and isolation model, which is necessary in traditional active fault-tolerant control schemes. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.

  9. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest.

    Science.gov (United States)

    Ma, Suliang; Chen, Mingxuan; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-04-16

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods.
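
    A generic version of the "features in, fault class out" pipeline described above can be sketched with scikit-learn. The synthetic feature matrix below stands in for the wavelet-packet energy-rate vectors and the six fault classes; none of the numbers come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for WTFER feature vectors: 6 fault classes x 50 samples x 16 bands.
n_classes, n_per_class, n_bands = 6, 50, 16
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Feature-space optimization step: rank bands by importance and keep the top ones.
top_bands = np.argsort(clf.feature_importances_)[::-1][:8]
clf_opt = RandomForestClassifier(n_estimators=200, random_state=0)
clf_opt.fit(X_train[:, top_bands], y_train)
print("accuracy (top-8 bands):", clf_opt.score(X_test[:, top_bands], y_test))
```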

  10. Delineating active faults by using integrated geophysical data at northeastern part of Cairo, Egypt

    Directory of Open Access Journals (Sweden)

    Sultan Awad Sultan Araffa

    2012-06-01

    Full Text Available Geophysical techniques such as gravity, magnetics and seismology are perfect tools for detecting subsurface structures of local, regional as well as global scales. The study of earthquake records can be used to differentiate active and non-active fault elements. In the current study, more than 2200 land magnetic stations were measured using two proton magnetometers. The data were corrected for diurnal variations and then reduced by the IGRF. The corrected data were interpreted by different techniques after filtering the data to separate shallow sources (the basaltic sheet) from deep sources (the basement complex). Both Euler deconvolution and 3-D magnetic modeling were carried out. The results of our interpretation indicate that the depth to the upper surface of the basaltic sheet ranges from less than 10 m to 600 m, the depth to the lower surface ranges from 60 m to 750 m, while the thickness of the basaltic sheet varies from less than 10 m to 450 m. Moreover, gravity measurements were conducted at the 2200 stations using a CG-3 gravimeter. The measured values were corrected to construct a Bouguer anomaly map. The least squares technique was then applied for regional-residual separation. The third order of least squares was found to be the most suitable to separate the residual anomalies from the regional one. The resultant third-order residual gravity map is used to delineate structural fault systems of different characteristic trends: a NW–SE trend parallel to that of the Gulf of Suez, a NE–SW trend parallel to the Gulf of Aqaba, and an E–W trend parallel to that of the Mediterranean Sea. Taking the seismological records into consideration, it is found that most of the 24 earthquake events recorded in the study area are located on fault elements. This gives an indication that the delineated fault elements are active.

  11. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    Science.gov (United States)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

    The trinary signed-digit (TSD) number system is of interest for ultra fast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.

  12. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation

  13. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  14. Magnetometric and gravimetric surveys in fault detection over Acambay System

    Science.gov (United States)

    García-Serrano, A.; Sanchez-Gonzalez, J.; Cifuentes-Nava, G.

    2013-05-01

    In commemoration of the centennial of the Acambay intraplate earthquake of November 19th, 1912, we carried out gravimetric and magnetometric surveys to define the structure of the faults associated with this event. The study area is located approximately 11 km south of Acambay, in the Acambay-Tixmadeje fault system, where we performed two magnetometric surveys, the first consisting of 17 lines with a spacing of 35 m between lines and 5 m between stations, and the second with a total of 12 lines with the same spacing, both oriented NW. In addition to these, we performed gravimetric profiles located in the central part of each magnetometric survey, with a spacing of 25 m between stations, in order to correlate the results of both techniques; the lengths of these profiles were 600 m and 550 m, respectively. This work describes the data processing, including directional derivatives, analytical signal and inversion, by means of which we obtain magnetic variations and anomaly traits highly correlated with those faults. It is of great importance to characterize these faults given the large population growth in the area and the houses settled directly on them, which poses a high risk to the population; considering that these are active faults, earthquakes associated with them cannot be discarded, so the authorities and the public need relevant information on this problem.

  15. A Novel Audio Cryptosystem Using Chaotic Maps and DNA Encoding

    Directory of Open Access Journals (Sweden)

    S. J. Sheela

    2017-01-01

    Full Text Available Chaotic maps have good potential in security applications due to their inherent characteristics relevant to cryptography. This paper introduces a new audio cryptosystem based on chaotic maps, hybrid chaotic shift transform (HCST, and deoxyribonucleic acid (DNA encoding rules. The scheme uses chaotic maps such as two-dimensional modified Henon map (2D-MHM and standard map. The 2D-MHM which has sophisticated chaotic behavior for an extensive range of control parameters is used to perform HCST. DNA encoding technology is used as an auxiliary tool which enhances the security of the cryptosystem. The performance of the algorithm is evaluated for various speech signals using different encryption/decryption quality metrics. The simulation and comparison results show that the algorithm can achieve good encryption results and is able to resist several cryptographic attacks. The various types of analysis revealed that the algorithm is suitable for narrow band radio communication and real-time speech encryption applications.
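
    To illustrate how a chaotic map can drive a DNA-style encoding, the snippet below iterates a standard (not modified) 2D Hénon map, thresholds the orbit into a keystream and maps XOR-ed bit pairs to DNA bases under one fixed rule; the parameters, thresholding and single encoding rule are assumptions for illustration and do not reproduce the paper's HCST construction.

```python
# Illustrative only: standard 2D Henon map driving a DNA-style bit encoding.
# The paper uses a modified Henon map, a hybrid chaotic shift transform and
# several DNA rules; none of that is reproduced here.

DNA_RULE = {"00": "A", "01": "C", "10": "G", "11": "T"}   # one fixed bit-pair -> base map

def henon_bits(n_bits, a=1.4, b=0.3, x=0.0, y=0.0, burn_in=1000):
    """Generate a pseudo-random bit string by thresholding the Henon orbit."""
    bits = []
    for i in range(burn_in + n_bits):
        x, y = 1.0 - a * x * x + y, b * x      # classic Henon iteration
        if i >= burn_in:
            bits.append("1" if x > 0.0 else "0")
    return "".join(bits)

def dna_encode(data: bytes, key_bits: str) -> str:
    """XOR the plaintext bits with the chaotic keystream, then map to DNA bases."""
    plain_bits = "".join(f"{byte:08b}" for byte in data)
    mixed = "".join("1" if p != k else "0" for p, k in zip(plain_bits, key_bits))
    return "".join(DNA_RULE[mixed[i:i + 2]] for i in range(0, len(mixed), 2))

sample = b"audio frame"
keystream = henon_bits(8 * len(sample))
print(dna_encode(sample, keystream))
```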

  16. An optimized encoding method for secure key distribution by swapping quantum entanglement and its extension

    International Nuclear Information System (INIS)

    Gao Gan

    2015-01-01

    Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with a symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob publicly announce the initial Bell state or the measurement result are not needed for discovering keys, and that Song’s encoding methods are not optimal. Here, an optimized encoding method is given that improves the efficiencies of Song’s schemes by a factor of 7/3. Interestingly, this optimized encoding method can be extended to the key distribution scheme composed of generalized Bell states. (paper)

  17. Domain decomposition method for dynamic faulting under slip-dependent friction

    International Nuclear Information System (INIS)

    Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie

    2004-01-01

    The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we have obtained an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. Finite element form of the variational inequality is solved by a Schwarz method assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another one with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments)

  18. Fault diagnosis and fault-tolerant control based on adaptive control approach

    CERN Document Server

    Shen, Qikun; Shi, Peng

    2017-01-01

    This book provides recent theoretical developments in and practical applications of fault diagnosis and fault tolerant control for complex dynamical systems, including uncertain systems, linear and nonlinear systems. Combining adaptive control technique with other control methodologies, it investigates the problems of fault diagnosis and fault tolerant control for uncertain dynamic systems with or without time delay. As such, the book provides readers a solid understanding of fault diagnosis and fault tolerant control based on adaptive control technology. Given its depth and breadth, it is well suited for undergraduate and graduate courses on linear system theory, nonlinear system theory, fault diagnosis and fault tolerant control techniques. Further, it can be used as a reference source for academic research on fault diagnosis and fault tolerant control, and for postgraduates in the field of control theory and engineering. .

  19. A summary of the active fault investigation in the extension sea area of the Kikugawa fault and the Nishiyama fault, N-S direction faults in southwest Japan

    Science.gov (United States)

    Abe, S.

    2010-12-01

    In this study, we carried out two sets of active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the sea areas of the extensions of the Kikugawa fault and the Nishiyama fault. Based on those results, we want to clarify the following matters about both active faults: (1) fault continuity between land and sea; (2) the length of the active fault; (3) the division into segments; (4) activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to recognize the detailed structure of the shallow strata. Furthermore, sampling by vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of lateral faulting, such as flower structures and the dispersion of the active fault, were recognized. In addition, from analysis of the ages of the strata, it was recognized that the Holocene sediment is extremely thin on the continental shelf in this sea area. This investigation confirmed that the Kikugawa fault extends farther offshore than indicated by existing research. In addition, the width of the active fault zone appears to widen offshore while dispersing. At present, we think that the Kikugawa fault can be divided into several segments based on the distribution of these segments. For the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographic research on Ooshima, the Nishiyama fault and the active fault off Ooshima are thought to form a continuous structure. As for the active fault off Ooshima, the uplifted side changes, and its direction changes as well. Therefore, we

  20. Structural Evolution of Transform Fault Zones in Thick Oceanic Crust of Iceland

    Science.gov (United States)

    Karson, J. A.; Brandsdottir, B.; Horst, A. J.; Farrell, J.

    2017-12-01

    Spreading centers in Iceland are offset from the regional trend of the Mid-Atlantic Ridge by the Tjörnes Fracture Zone (TFZ) in the north and the South Iceland Seismic Zone (SISZ) in the south. Rift propagation away from the center of the Iceland hotspot has resulted in migration of these transform faults to the N and S, respectively. As they migrate, new transform faults develop in older crust between offset spreading centers. Active transform faults, and abandoned transform structures left in their wakes, show features that reflect different amounts (and durations) of slip that can be viewed as a series of snapshots of different stages of transform fault evolution in thick, oceanic crust. This crust has a highly anisotropic, spreading fabric with pervasive zones of weakness created by spreading-related normal faults, fissures and dike margins oriented parallel to the spreading centers where they formed. These structures have a strong influence on the mechanical properties of the crust. By integrating available data, we suggest a series of stages of transform development: 1) Formation of an oblique rift (or leaky transform) with magmatic centers, linked by bookshelf fault zones (antithetic strike-slip faults at a high angle to the spreading direction) (Grimsey Fault Zone, youngest part of the TFZ); 2) broad zone of conjugate faulting (tens of km) (Hreppar Block N of the SISZ); 3) narrower (~20 km) zone of bookshelf faulting aligned with the spreading direction (SISZ); 4) mature, narrow (~1 km) through-going transform fault zone bounded by deformation (bookshelf faulting and block rotations) distributed over 10 km to either side (Húsavík-Flatey Fault Zone in the TFZ). With progressive slip, the transform zone becomes progressively narrower and more closely aligned with the spreading direction. The transform and non-transform (beyond spreading centers) domains may be truncated by renewed propagation and separated by subsequent spreading. This perspective

  1. Three-dimensional vectorial multifocal arrays created by pseudo-period encoding

    Science.gov (United States)

    Zeng, Tingting; Chang, Chenliang; Chen, Zhaozhong; Wang, Hui-Tian; Ding, Jianping

    2018-06-01

    Multifocal arrays have been attracting considerable attention recently owing to their potential applications in parallel optical tweezers, parallel single-molecule orientation determination, parallel recording and multifocal multiphoton microscopy. However, the generation of vectorial multifocal arrays with a tailorable structure and polarization state remains a great challenge, and reports on multifocal arrays have hitherto been restricted either to scalar focal spots without polarization versatility or to regular arrays with fixed spacing. In this work, we propose a specific pseudo-period encoding technique to create three-dimensional (3D) vectorial multifocal arrays with the ability to manipulate the position, polarization state and intensity of each focal spot. We experimentally validated the flexibility of our approach in the generation of 3D vectorial multiple spots with polarization multiplicity and position tunability.

  2. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
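
    The record does not give the distance calculation itself; for synchronized master/remote clocks the classical double-ended travelling-wave formula is a reasonable stand-in, so the sketch below uses it. The wave speed and the example times are assumptions, not values from the described unit.

        # Generic double-ended (two-terminal) travelling-wave fault location:
        # the fault surge reaches the two synchronized units at different times,
        # and the arrival-time difference fixes the fault position on the line.
        def fault_distance_km(line_length_km, t_master_s, t_remote_s,
                              wave_speed_km_s=2.9e5):   # ~0.97 c on overhead lines (assumption)
            """Distance from the master unit to the fault, in kilometres."""
            return 0.5 * (line_length_km - wave_speed_km_s * (t_remote_s - t_master_s))

        # Example: 100 km line, surge seen 60 microseconds earlier at the master unit
        print(fault_distance_km(100.0, t_master_s=0.0, t_remote_s=60e-6))   # ~41.3 km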

  3. Application of Anisotropy of Magnetic Susceptibility to large-scale fault kinematics: an evaluation

    Science.gov (United States)

    Casas, Antonio M.; Roman-Berdiel, Teresa; Marcén, Marcos; Oliva-Urcia, Belen; Soto, Ruth; Garcia-Lasanta, Cristina; Calvin, Pablo; Pocovi, Andres; Gil-Imaz, Andres; Pueyo-Anchuela, Oscar; Izquierdo-Llavall, Esther; Vernet, Eva; Santolaria, Pablo; Osacar, Cinta; Santanach, Pere; Corrado, Sveva; Invernizzi, Chiara; Aldega, Luca; Caricchi, Chiara; Villalain, Juan Jose

    2017-04-01

    Major discontinuities in the Earth's crust are expressed by faults that often cut across its whole thickness, favoring, for example, the emplacement of magmas of mantle origin. These long-lived faults are common in intra-plate environments, show multi-episodic activity that spans hundreds of millions of years, and constitute first-order controls on plate evolution, favoring basin formation and inversion, rotations and the accommodation of deformation in large segments of plates. Since the post-Paleozoic evolution of these large-scale faults has taken place (and can only be observed) at shallow crustal levels, the accurate determination of fault kinematics is hampered by scarcely developed fault rocks, lack of classical structural indicators and the brittle deformation accompanying fault zones. These drawbacks are also found when thick clayey or evaporite levels, with or without diapiric movements, are the main detachment levels that facilitate large displacements in the upper crust. Anisotropy of Magnetic Susceptibility (AMS) provides a useful tool for the analysis of fault zones lacking fully developed kinematic indicators. However, its meaning in terms of deformational fabrics must be carefully checked by means of outcrop and thin section analysis in order to establish the relationship between the orientation of magnetic ellipsoid axes and the transport directions, as well as the representativity of scalar parameters regarding deformation mechanisms. Timing of faulting, P-T conditions and magnetic mineralogy are also major constraints for the interpretation of magnetic fabrics and therefore, separating ferro- and para-magnetic fabric components may be necessary in complex cases. AMS results indicate that the magnetic lineation can be parallel (when projected onto the shear plane) or perpendicular (i.e. parallel to the intersection lineation) to the transport direction depending mainly on the degree of shear deformation. Changes between the two end-members can

  4. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    Science.gov (United States)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder will be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better color subsampling patterns such as YUV 4:2:2 or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking
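
    As a generic illustration of how the left/top macroblock dependencies of the deblocking filter are usually broken on many-core hardware (this is the textbook wavefront schedule, not the HyperX-specific mapping described above), macroblocks on the same anti-diagonal can be filtered concurrently:

        # Wavefront scheduling sketch: macroblock (x, y) depends on its left and top
        # neighbours, so every anti-diagonal x + y = d can be processed in parallel.
        # Grid size, worker count and the filter stub are illustrative assumptions.
        from concurrent.futures import ThreadPoolExecutor

        def deblock_macroblock(x, y):
            return (x, y)          # placeholder for the actual H.264 edge filtering

        def wavefront_deblock(mb_cols, mb_rows, workers=8):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                for d in range(mb_cols + mb_rows - 1):             # one anti-diagonal at a time
                    wave = [(x, d - x) for x in range(mb_cols) if 0 <= d - x < mb_rows]
                    list(pool.map(lambda xy: deblock_macroblock(*xy), wave))

        wavefront_deblock(mb_cols=120, mb_rows=68)   # e.g. a 1920x1088 frame in 16x16 macroblocks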

  5. Width and dip of the southern San Andreas Fault at Salt Creek from modeling of geophysical data

    Science.gov (United States)

    Langenheim, Victoria; Athens, Noah D.; Scheirer, Daniel S.; Fuis, Gary S.; Rymer, Michael J.; Goldman, Mark R.; Reynolds, Robert E.

    2014-01-01

    We investigate the geometry and width of the southernmost stretch of the San Andreas Fault zone using new gravity and magnetic data along line 7 of the Salton Seismic Imaging Project. In the Salt Creek area of Durmid Hill, the San Andreas Fault coincides with a complex magnetic signature, with high-amplitude, short-wavelength magnetic anomalies superposed on a broader magnetic anomaly that is at least 5 km wide centered 2–3 km northeast of the fault. Marine magnetic data show that high-frequency magnetic anomalies extend more than 1 km west of the mapped trace of the San Andreas Fault. Modeling of magnetic data is consistent with a moderate to steep (> 50 degrees) northeast dip of the San Andreas Fault, but also suggests that the sedimentary sequence is folded west of the fault, causing the short wavelength of the anomalies west of the fault. Gravity anomalies are consistent with the previously modeled seismic velocity structure across the San Andreas Fault. Modeling of gravity data indicates a steep dip for the San Andreas Fault, but does not resolve unequivocally the direction of dip. Gravity data define a deeper basin, bounded by the Powerline and Hot Springs Faults, than imaged by the seismic experiment. This basin extends southeast of Line 7 for nearly 20 km, with linear margins parallel to the San Andreas Fault. These data suggest that the San Andreas Fault zone is wider than indicated by its mapped surface trace.

  6. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  7. Protection of data carriers using secure optical codes

    Science.gov (United States)

    Peters, John A.; Schilling, Andreas; Staub, René; Tompkin, Wayne R.

    2006-02-01

    Smartcard technologies, combined with biometric-enabled access control systems, are required for many high-security government ID card programs. However, recent field trials with some of the most secure biometric systems have indicated that smartcards are still vulnerable to well equipped and highly motivated counterfeiters. In this paper, we present the Kinegram Secure Memory Technology which not only provides a first-level visual verification procedure, but also reinforces the existing chip-based security measures. This security concept involves the use of securely-coded data (stored in an optically variable device) which communicates with the encoded hashed information stored in the chip memory via a smartcard reader device.
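
    The record describes securely-coded optical data being checked against hashed information in the chip; the sketch below shows only that generic hash-linkage idea. It is not the proprietary Kinegram protocol, and the payload format and digest handling are assumptions.

        # Generic cross-check of data read from the optical feature against a digest
        # stored in chip memory; constant-time comparison avoids timing side channels.
        import hashlib, hmac

        def verify_card(ovd_payload: bytes, chip_stored_digest: bytes) -> bool:
            digest = hashlib.sha256(ovd_payload).digest()
            return hmac.compare_digest(digest, chip_stored_digest)

        payload = b"document-number=X123456|holder=J.DOE"   # hypothetical OVD content
        stored = hashlib.sha256(payload).digest()            # written at personalization
        print(verify_card(payload, stored))                  # True for a genuine card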

  8. Unpacking the cognitive map: the parallel map theory of hippocampal function.

    Science.gov (United States)

    Jacobs, Lucia F; Schenk, Françoise

    2003-04-01

    In the parallel map theory, the hippocampus encodes space with 2 mapping systems. The bearing map is constructed primarily in the dentate gyrus from directional cues such as stimulus gradients. The sketch map is constructed within the hippocampus proper from positional cues. The integrated map emerges when data from the bearing and sketch maps are combined. Because the component maps work in parallel, the impairment of one can reveal residual learning by the other. Such parallel function may explain paradoxes of spatial learning, such as learning after partial hippocampal lesions, taxonomic and sex differences in spatial learning, and the function of hippocampal neurogenesis. By integrating evidence from physiology to phylogeny, the parallel map theory offers a unified explanation for hippocampal function.

  9. The Cottage Grove fault system (Illinois Basin): Late Paleozoic transpression along a Precambrian crustal boundary

    Science.gov (United States)

    Duchek, A.B.; McBride, J.H.; Nelson, W.J.; Leetaru, H.E.

    2004-01-01

    The Cottage Grove fault system in southern Illinois has long been interpreted as an intracratonic dextral strike-slip fault system. We investigated its structural geometry and kinematics in detail using (1) outcrop data, (2) extensive exposures in underground coal mines, (3) abundant borehole data, and (4) a network of industry seismic reflection profiles, including data reprocessed by us. Structural contour mapping delineates distinct monoclines, broad anticlines, and synclines that express Paleozoic-age deformation associated with strike slip along the fault system. As shown on seismic reflection profiles, prominent near-vertical faults that cut the entire Paleozoic section and basement-cover contact branch upward into outward-splaying, high-angle reverse faults. The master fault, sinuous along strike, is characterized along its length by an elongate anticline, ~3 km wide, that parallels the southern side of the master fault. These features signify that the overall kinematic regime was transpressional. Due to the absence of suitable piercing points, the amount of slip cannot be measured, but is constrained at less than 300 m near the ground surface. The Cottage Grove fault system apparently follows a Precambrian terrane boundary, as suggested by magnetic intensity data, the distribution of ultramafic igneous intrusions, and patterns of earthquake activity. The fault system was primarily active during the Alleghanian orogeny of Late Pennsylvanian and Early Permian time, when ultramafic igneous magma intruded along en echelon tensional fractures. © 2004 Geological Society of America.

  10. The transtensional offshore portion of the northern San Andreas fault: Fault zone geometry, late Pleistocene to Holocene sediment deposition, shallow deformation patterns, and asymmetric basin growth

    Science.gov (United States)

    Beeson, Jeffrey W.; Johnson, Samuel Y.; Goldfinger, Chris

    2017-01-01

    We mapped an ~120 km offshore portion of the northern San Andreas fault (SAF) between Point Arena and Point Delgada using closely spaced seismic reflection profiles (1605 km), high-resolution multibeam bathymetry (~1600 km2), and marine magnetic data. This new data set documents SAF location and continuity, associated tectonic geomorphology, shallow stratigraphy, and deformation. Variable deformation patterns in the generally narrow (∼1 km wide) fault zone are largely associated with fault trend and with transtensional and transpressional fault bends. We divide this unique transtensional portion of the offshore SAF into six sections along and adjacent to the SAF based on fault trend, deformation styles, seismic stratigraphy, and seafloor bathymetry. In the southern region of the study area, the SAF includes a 10-km-long zone characterized by two active parallel fault strands. Slip transfer and long-term straightening of the fault trace in this zone are likely leading to transfer of a slice of the Pacific plate to the North American plate. The SAF in the northern region of the survey area passes through two sharp fault bends (∼9°, right stepping, and ∼8°, left stepping), resulting in both an asymmetric lazy Z–shape sedimentary basin (Noyo basin) and an uplifted rocky shoal (Tolo Bank). Seismic stratigraphic sequences and unconformities within the Noyo basin correlate with the previous 4 major Quaternary sea-level lowstands and record basin tilting of ∼0.6°/100 k.y. Migration of the basin depocenter indicates a lateral slip rate on the SAF of 10–19 mm/yr for the past 350 k.y. Data collected west of the SAF on the south flank of Cape Mendocino are inconsistent with the presence of an offshore fault strand that connects the SAF with the Mendocino Triple Junction. Instead, we suggest that the SAF previously mapped onshore at Point Delgada continues onshore northward and transitions to the King Range thrust.

  11. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  12. Fault-weighted quantification method of fault detection coverage through fault mode and effect analysis in digital I&C systems

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jaehyun; Lee, Seung Jun, E-mail: sjlee420@unist.ac.kr; Jung, Wondea

    2017-05-15

    Highlights: • We developed a fault-weighted quantification method of fault detection coverage. • The method has been applied to a specific digital reactor protection system. • The estimated unavailability of the module differed by a factor of about 20 from the traditional method. • Several experimental tests will be effectively prioritized using this method. - Abstract: One of the most outstanding features of a digital I&C system is the use of fault-tolerant techniques. With an awareness of the importance of quantifying the fault detection coverage of fault-tolerant techniques, several studies based on the fault injection method were developed and employed to quantify fault detection coverage. In the fault injection method, each injected fault has a different importance because the frequency of realization of every injected fault is different. However, there have been no previous studies addressing the importance and weighting factor of each injected fault. In this work, a new method for allocating a weight to each injected fault using failure mode and effect analysis data is proposed. For application, the fault-weighted quantification method has also been applied to a specific digital reactor protection system to quantify the fault detection coverage. One of the major findings of the application was that the unavailability of a specific module in digital I&C systems may be estimated to be about 20 times smaller than the real value when a traditional method is used. The other finding was that the method can also classify the importance of the experimental cases. Therefore, this method is expected not only to suggest an accurate quantification procedure for fault-detection coverage by weighting the injected faults, but also to contribute to effective fault injection experiments by sorting the importance of the failure categories.
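
    The record does not spell out the weighting formula, but the idea of weighting each injected fault by its failure-mode frequency suggests a coverage estimate of the following form; this is a sketch of that idea, not the authors' exact equations, and the numbers are invented for illustration.

        # Fault-weighted detection coverage: each injected fault carries a weight
        # taken from its FMEA occurrence frequency, and coverage is the weighted
        # fraction of detected faults.
        def weighted_coverage(faults):
            """faults: list of (fmea_frequency_per_hour, detected_bool) pairs."""
            total = sum(w for w, _ in faults)
            caught = sum(w for w, detected in faults if detected)
            return caught / total

        injected = [
            (1e-6, True),    # frequent failure mode, caught by the fault-tolerant technique
            (1e-7, True),
            (5e-8, False),   # rare failure mode that escapes detection
        ]
        print(weighted_coverage(injected))   # ~0.96, versus 2/3 with unweighted counting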

  13. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  14. Transparency in stereopsis: parallel encoding of overlapping depth planes.

    Science.gov (United States)

    Reeves, Adam; Lynch, David

    2017-08-01

    We report that after extensive training, expert adults can accurately report the number, up to six, of transparent overlapping depth planes portrayed by brief (400 ms or 200 ms) random-element stereoscopic displays, and can well discriminate six from seven planes. Naïve subjects did poorly above three planes. Displays contained seven rows of 12 randomly located ×'s or +'s; jittering the disparities and number in each row to remove spurious cues had little effect on accuracy. Removing the central 3° of the 10° display to eliminate foveal vision hardly reduced the number of reportable planes. Experts could report how many of six planes contained +'s when the remainder contained ×'s, and most learned to report up to six planes in reverse contrast (left eye white +'s; right eye black +'s). Long-term training allowed some experts to reach eight depth planes. Results suggest that adult stereoscopic vision can learn to distinguish the outputs of six or more statistically independent, contrast-insensitive, narrowly tuned, asymmetric disparity channels in parallel.

  15. Architecture of a low-angle normal fault zone, southern Basin and Range (SE California)

    Science.gov (United States)

    Goyette, J. A.; John, B. E.; Campbell-Stone, E.; Stunitz, H.; Heilbronner, R.; Pec, M.

    2009-12-01

    Exposures of the denuded Cenozoic detachment fault system in the southern Sacramento Mountains (SE California) delimit the architecture of a regional low-angle normal fault, and highlight the evolution of these enigmatic faults. The fault was initiated ~23 Ma in quartzo-feldspathic basement gneiss and granitoids at a low-angle (2km, and amplitudes up to 100m. These corrugations are continuous along their hinges for up to 3.6 km. Damage zone fracture intensity varies both laterally, and perpendicular to the fault plane (over an area of 25km2), decreasing with depth in the footwall, and varies as a function of lithology and proximity to corrugation walls. Deformation is concentrated into narrow damage zones (100m) are found in areas where low-fracture intensity horses are corralled by sub-horizontal zones of cataclasite (up to 8m) and thick zones of epidote (up to 20cm) and silica-rich alteration (up to 1m). Sub-vertical shear and extension fractures, and sub-horizontal shear fractures/zones dominate the NE side of the core complex. In all cases, sub-vertical fractures verge into or are truncated by low-angle fractures that dominate the top of the damage zone. These low-angle fractures have an antithetic dip to the detachment fault plane. Some sub-vertical fractures become curviplanar close to the fault, where they are folded into parallelism with the sub-horizontal fault surface in the direction of transport. These field data, corroborated by ongoing microstructural analyses, indicate fault activity at a low angle accommodated by a variety of deformation mechanisms dependent on lithology, timing, fluid flow, and fault morphology.

  16. Characterization of individual stacking faults in a wurtzite GaAs nanowire by nanobeam X-ray diffraction.

    Science.gov (United States)

    Davtyan, Arman; Lehmann, Sebastian; Kriegner, Dominik; Zamani, Reza R; Dick, Kimberly A; Bahrami, Danial; Al-Hassan, Ali; Leake, Steven J; Pietsch, Ullrich; Holý, Václav

    2017-09-01

    Coherent X-ray diffraction was used to measure the type, quantity and the relative distances between stacking faults along the growth direction of two individual wurtzite GaAs nanowires grown by metalorganic vapour epitaxy. The presented approach is based on the general property of the Patterson function, which is the autocorrelation of the electron density as well as the Fourier transformation of the diffracted intensity distribution of an object. Partial Patterson functions were extracted from the diffracted intensity measured along the [000\\bar{1}] direction in the vicinity of the wurtzite 00\\bar{1}\\bar{5} Bragg peak. The maxima of the Patterson function encode both the distances between the fault planes and the type of the fault planes with the sensitivity of a single atomic bilayer. The positions of the fault planes are deduced from the positions and shapes of the maxima of the Patterson function and they are in excellent agreement with the positions found with transmission electron microscopy of the same nanowire.
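
    The key property used above is that the Patterson function is both the autocorrelation of the scattering density and the Fourier transform of the measured intensity; a one-dimensional toy calculation along the growth axis (with an invented density profile, not the measured nanowire data) makes that explicit:

        # 1D toy Patterson calculation: peaks of FT(|F(q)|^2) sit at the pairwise
        # spacings of the "fault planes" placed in the model density profile.
        import numpy as np

        density = np.zeros(256)
        density[[40, 70, 120]] = 1.0                   # model fault planes along the axis

        F = np.fft.fft(density)                        # scattered amplitude (kinematic picture)
        intensity = np.abs(F) ** 2                     # what the diffraction experiment measures
        patterson = np.real(np.fft.ifft(intensity))    # Patterson = FT of the intensity

        lags = np.argsort(patterson[1:129])[-3:] + 1   # strongest non-origin maxima
        print(sorted(lags))                            # [30, 50, 80]: the pairwise plane spacings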

  17. Layered Fault Rocks Below the West Salton Detachment Fault (WSDF), CA Record Multiple Seismogenic? Slip Events and Transfer of Material to a Fault Core

    Science.gov (United States)

    Axen, G. J.; Luther, A. L.; Selverstone, J.; Mozley, P.

    2011-12-01

    Unique layered cataclasites (LCs) occur locally along footwall splays, S of the ~N-dipping, top-E WSDF. They are well exposed in a NW-plunging antiform that folds the LCs and their upper and lower bounding faults. Layers range from very fine-grained granular shear zones 1-2 mm thick and cm's to m's long, to medium- to coarse-grained isotropic granular cataclasite with floating clasts up to 4-5 cm diameter in layers up to ~30 cm thick and 3 to >10 m long. The top, N-flank contact is ~5 m structurally below the main WSDF. Maximum thickness of the LCs is ~5 m on the S flank of the antiform, where the upper 10-50 cm of LCs are composed of relatively planar layers that are subparallel to the upper fault, which locally displays ultracataclasite. Deeper layers are folded into open to isoclinal folds and are faulted. Most shear-sense indicators show N-side-to-E or -SE slip, and include: (1) aligned biotite flakes and mm-scale shear bands that locally define a weak foliation dipping ~ESE, (2) sharp to granular shears, many of which merge up or down into fine-grained layers and, in the base of the overlying granodiorite, (3) primary reidel shears and (4) folded pegmatite dikes. Biotite is unaltered and feldspars are weakly to strongly altered to clays and zeolites. Zeolites also grew in pores between clasts. XRF analyses suggest minimal chemical alteration. The upper fault is sharp and relatively planar, carries granular to foliated cataclasitic granodiorite that grades up over ~2-4 m into punky, microcracked but plutonic-textured rock with much of the feldspar alteration seen in LC clasts. Some upper-plate reidels bend into parallelism with the top fault and bound newly formed LC layers. The basal fault truncates contorted layers and lacks evidence of layers being added there. We infer that the deeper, contorted layers are older and that the LC package grew upward by transfer of cataclasized slices from the overlying granodiorite while folding was ongoing. Particle

  18. Precise Relative Location of San Andreas Fault Tremors Near Cholame, CA, Using Seismometer Clusters: Slip on the Deep Extension of the Fault?

    Science.gov (United States)

    Shelly, D. R.; Ellsworth, W. L.; Ryberg, T.; Haberland, C.; Fuis, G.; Murphy, J.; Nadeau, R.; Bürgmann, R.

    2008-12-01

    Non-volcanic tremor, similar in character to that generated at some subduction zones, was recently identified beneath the strike-slip San Andreas Fault (SAF) in central California (Nadeau and Dolenc, 2005). Using a matched filter method, we closely examine a 24-hour period of active SAF tremor and show that, like tremor in the Nankai Trough subduction zone, this tremor is composed of repeated similar events. We take advantage of this similarity to locate detected similar events relative to several chosen events. While low signal-to-noise makes location challenging, we compensate for this by estimating event-pair differential times at 'clusters' of nearby temporary and permanent stations rather than at single stations. We find that the relative locations consistently form a near-linear structure in map view, striking parallel to the surface trace of the SAF. Therefore, we suggest that at least a portion of the tremor occurs on the deep extension of the fault, similar to the situation for subduction zone tremor. Also notable is the small depth range (a few hundred meters or less) of many of the located tremors, a feature possibly analogous to earthquake streaks observed on the shallower portion of the fault. The close alignment of the tremor with the SAF slip orientation suggests a shear slip mechanism, as has been argued for subduction tremor. At times, we observe a clear migration of the tremor source along the fault, at rates of 15-40 km/hr.
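
    The matched-filter detection step described above can be sketched generically as sliding a known tremor template along the continuous record and keeping windows whose normalized correlation exceeds a threshold; the relocation with station clusters is not shown, and the synthetic data and threshold are assumptions.

        # Generic matched-filter detection of repeated similar events.
        import numpy as np

        def matched_filter(trace, template, threshold=0.8):
            m = len(template)
            t = (template - template.mean()) / template.std()
            detections = []
            for i in range(len(trace) - m + 1):
                w = trace[i:i + m]
                if w.std() == 0:
                    continue
                cc = np.dot((w - w.mean()) / w.std(), t) / m   # Pearson correlation
                if cc >= threshold:
                    detections.append((i, cc))
            return detections

        rng = np.random.default_rng(0)
        template = rng.standard_normal(200)
        trace = rng.standard_normal(5000)
        trace[1200:1400] += 3 * template                # bury a repeat of the template
        print(matched_filter(trace, template))          # detection at sample 1200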

  19. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  20. Parallel efficient rate control methods for JPEG 2000

    Science.gov (United States)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split in code blocks, and subsequently, optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them have been defined bearing in mind a parallel implementation. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. In order to do that, the design of our GPU-based codec is extended, allowing stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out, and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
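
    For readers unfamiliar with PCRD-Opt, the sketch below shows the generic post-compression truncation it refers to: each code block offers candidate truncation points with cumulative rate and distortion, a Lagrange multiplier trades the two off, and the multiplier is bisected until the byte budget is met. This is the textbook formulation, not the GPU-specific estimators compared in the paper, and the candidate lists are invented.

        # Generic PCRD-Opt truncation-point selection by bisection on the multiplier.
        def pick_points(blocks, lam):
            chosen, total_rate = [], 0
            for candidates in blocks:                  # candidates: [(bytes, distortion), ...]
                best = min(candidates, key=lambda rd: rd[1] + lam * rd[0])
                chosen.append(best)
                total_rate += best[0]
            return chosen, total_rate

        def pcrd_opt(blocks, budget_bytes, iters=50):
            lo, hi = 0.0, 1e9                          # bracket for the Lagrange multiplier
            for _ in range(iters):
                lam = 0.5 * (lo + hi)
                _, rate = pick_points(blocks, lam)
                if rate > budget_bytes:
                    lo = lam                           # over budget: penalize rate more
                else:
                    hi = lam
            return pick_points(blocks, hi)

        blocks = [[(100, 900.0), (400, 300.0), (900, 50.0)],
                  [(80, 500.0), (300, 120.0), (700, 20.0)]]
        print(pcrd_opt(blocks, budget_bytes=800))      # keeps the two mid-quality points (700 bytes)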

  1. Parallel protein secondary structure prediction based on neural networks.

    Science.gov (United States)

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers for protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. A hydrophobicity matrix, an orthogonal matrix, BLOSUM62 and PSSM (position-specific scoring matrix) are tested separately as encoding schemes for the DBNN. The experimental results contribute to the design of new encoding schemes. The new binary classifier for Helix versus not Helix (~H) for the DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to other leading prediction methods. The good test results for the binary classifiers open a new approach for protein structure prediction with neural networks. Because training the neural networks is time consuming, Pthreads and OpenMP are employed to parallelize the DBNN on a hyperthreading-enabled Intel architecture. The speedup for 16 Pthreads is 4.9 and the speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that of other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology for the Intel architecture is efficient for parallel biological algorithms.

  2. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  3. Optimizing SIEM Throughput on the Cloud Using Parallelization.

    Science.gov (United States)

    Alam, Masoom; Ihsan, Asif; Khan, Muazzam A; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, Muhammad Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time for identifying security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks could be identified quickly and necessary response could be initiated. This paper evaluates the performance of a security framework OSTROM built on the Esper complex event processing (CEP) engine under a parallel and non-parallel computational framework. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of events coming in rather than the queries being processed. The architecture where 1/4th of the total events are submitted to each instance and all the queries are processed by all the units shows best results in terms of throughput, memory and CPU usage.

  4. Optimizing SIEM Throughput on the Cloud Using Parallelization.

    Directory of Open Access Journals (Sweden)

    Masoom Alam

    Full Text Available Processing large amounts of data in real time for identifying security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks could be identified quickly and necessary response could be initiated. This paper evaluates the performance of a security framework OSTROM built on the Esper complex event processing (CEP) engine under a parallel and non-parallel computational framework. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of events coming in rather than the queries being processed. The architecture where 1/4th of the total events are submitted to each instance and all the queries are processed by all the units shows best results in terms of throughput, memory and CPU usage.
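
    The best-performing architecture reported above (a quarter of the event stream to each engine instance, with every instance running the full query set) can be sketched with standard Python multiprocessing standing in for Esper; the toy events and queries are illustrative assumptions.

        # Split the event stream evenly across N instances; each instance runs all queries.
        from multiprocessing import Pool

        QUERIES = [
            ("failed_logins", lambda e: e["type"] == "auth" and not e["success"]),
            ("large_transfers", lambda e: e["type"] == "netflow" and e["bytes"] > 1_000_000),
        ]

        def run_all_queries(event_batch):
            hits = []
            for name, predicate in QUERIES:
                hits += [(name, e) for e in event_batch if predicate(e)]
            return hits

        def process_stream(events, instances=4):
            shards = [events[i::instances] for i in range(instances)]   # 1/N of the events each
            with Pool(instances) as pool:
                results = pool.map(run_all_queries, shards)
            return [hit for shard in results for hit in shard]

        if __name__ == "__main__":
            events = [{"type": "auth", "success": i % 7 != 0, "bytes": 0} for i in range(1000)]
            print(len(process_stream(events)))          # count of failed-login matches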

  5. Glacially induced faulting along the NW segment of the Sorgenfrei-Tornquist Zone, northern Denmark: Implications for neotectonics and Lateglacial fault-bound basin formation

    Science.gov (United States)

    Brandes, Christian; Steffen, Holger; Sandersen, Peter B. E.; Wu, Patrick; Winsemann, Jutta

    2018-06-01

    The Sorgenfrei-Tornquist Zone (STZ) is the northwestern segment of the Tornquist Zone and extends from Bornholm across the Baltic Sea and northern Denmark into the North Sea. It represents a major lithospheric structure with a significant increase in lithosphere thickness from south to north. A series of meter-scale normal faults and soft-sediment deformation structures (SSDS) are developed in Lateglacial marine and lacustrine sediments, which are exposed along the Lønstrup Klint cliff at the North Sea coast of northern Denmark. These deformed deposits occur in the local Nørre Lyngby basin that forms part of the STZ. Most of the SSDS are postdepositional, implying major tectonic activity between the Allerød and Younger Dryas (∼14 ka to 12 ka). The occurrence of some syn- and metadepositional SSDS point to an onset of tectonic activity at around 14.5 ka. The formation of normal faults is probably the effect of neotectonic movements along the Børglum fault, which represents the northern boundary fault of the STZ in the study area. The narrow and elongated Nørre Lyngby basin can be interpreted as a strike-slip basin that developed due to right-lateral movements at the Børglum fault. As indicated by the SSDS, these movements were most likely accompanied by earthquake(s). Based on the association of SSDS these earthquake(s) had magnitudes of at least Ms ≥ 4.2 or even up to magnitude ∼ 7 as indicated by a fault with 3 m displacement. The outcrop data are supported by a topographic analysis of the terrain that points to a strong impact from the fault activity on the topography, characterized by a highly regular erosional pattern, the evolution of fault-parallel sag ponds and a potential fault scarp with a height of 1-2 m. With finite-element simulations, we test the impact of Late Pleistocene (Weichselian) glaciation-induced Coulomb stress change on the reactivation potential of the Børglum fault. The numerical simulations of deglaciation-related lithospheric

  6. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan

    2017-05-31

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  7. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan; Hanafy, Sherif; Guo, Bowen; Kosmicki, Maximillian Sunflower

    2017-01-01

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  8. CRISP. Fault detection, analysis and diagnostics in high-DG distribution systems

    International Nuclear Information System (INIS)

    Fontela, M.; Bacha, S.; Hadsjaid, N.; Andrieu, C.; Raison, B.; Penkov, D.

    2004-04-01

    The fault, in the electrotechnical meaning, is defined in the document. Most faults on overhead lines are non-permanent faults, which obliges the network operator to maintain the existing techniques for clearing these faults as fast as possible. When a permanent fault occurs, the operator has to detect it and limit the risks as soon as possible. Different axes are followed: limitation of the fault current, clearing of the faulted feeder, and locating the fault by test and try under possible fault conditions. So fault detection, fault clearing and fault localization are important functions of an EPS (electric power system) that allow secure and safe operation of the system. These functions may be improved by means of a better use of ICT components in the future, conveniently sharing the intelligence needed near the distributed devices with a defined centralized intelligence. This improvement becomes necessary in distribution EPS with a high introduction of DR (distributed resources). The transmission and sub-transmission protection systems are already installed in order to manage power flow in all directions, so the DR issue is less critical for this part of the power system in terms of fault clearing and diagnosis. Nevertheless, the massive introduction of RES imposes other constraints on the transmission system, namely the bottlenecks caused by large and rapidly installed local production such as wind power plants. Dealing with the distribution power system and facing a permanent fault, two main actions must be achieved: identify the faulted elementary EPS area quickly, and allow the field crew to locate and repair the fault as soon as possible. The introduction of DR in distribution EPS involves some changes in fault location methods or equipment. The different existing neutral grounding systems make it difficult to achieve a general method relevant for any distribution EPS in Europe. Some solutions are studied in the CRISP project in order to improve the

  9. Fault Activity Aware Service Delivery in Wireless Sensor Networks for Smart Cities

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2017-01-01

    Full Text Available Wireless sensor networks (WSNs) are increasingly used in smart cities which involve multiple city services having quality of service (QoS) requirements. When misbehaving devices exist, the performance of current delivery protocols degrades significantly. Nonetheless, the majority of existing schemes either ignore the faulty behaviors’ variability and time-variance in city environments or focus on homogeneous traffic for traditional data services (simple text messages) rather than city services (health care units, traffic monitors, and video surveillance). We consider the problem of fault-aware multiservice delivery, in which the network performs secure routing and rate control in terms of fault activity dynamic metric. To this end, we first design a distributed framework to estimate the fault activity information based on the effects of nondeterministic faulty behaviors and to incorporate these estimates into the service delivery. Then we present a fault activity geographic opportunistic routing (FAGOR) algorithm addressing a wide range of misbehaviors. We develop a leaky-hop model and design a fault activity rate-control algorithm for heterogeneous traffic to allocate resources, while guaranteeing utility fairness among multiple city services. Finally, we demonstrate the significant performance of our scheme in routing performance, effective utility, and utility fairness in the presence of misbehaving sensors through extensive simulations.

  10. Secure quantum private information retrieval using phase-encoded queries

    Energy Technology Data Exchange (ETDEWEB)

    Olejnik, Lukasz [CERN, 1211 Geneva 23, Switzerland and Poznan Supercomputing and Networking Center, Noskowskiego 12/14, PL-61-704 Poznan (Poland)]

    2011-08-15

    We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.

  11. Secure quantum private information retrieval using phase-encoded queries

    International Nuclear Information System (INIS)

    Olejnik, Lukasz

    2011-01-01

    We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.

  12. Architecture of thrust faults with alongstrike variations in fault-plane dip: anatomy of the Lusatian Fault, Bohemian Massif

    Czech Academy of Sciences Publication Activity Database

    Coubal, Miroslav; Adamovič, Jiří; Málek, Jiří; Prouza, V.

    2014-01-01

    Vol. 59, No. 3 (2014), pp. 183-208, ISSN 1802-6222. Institutional support: RVO:67985831; RVO:67985891. Keywords: fault architecture * fault plane geometry * drag structures * thrust fault * sandstone * Lusatian Fault. Subject RIV: DB - Geology; Mineralogy. Impact factor: 1.405, year: 2014

  13. Secure method for biometric-based recognition with integrated cryptographic functions.

    Science.gov (United States)

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the ratio for certification in biometric systems needs not to achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  14. Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions

    Directory of Open Access Journals (Sweden)

    Shin-Yan Chiou

    2013-01-01

    Full Text Available Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the certification ratio in biometric systems need not reach 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  15. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    Science.gov (United States)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike the current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme in each vehicle reduces the communication and computation load by using only the vehicle's own information. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
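
    A minimal numerical sketch of two ingredients mentioned above, assuming an arbitrary toy follower model: solving a discrete-time algebraic Riccati equation for a state-feedback gain and generating a residual from a Luenberger-style observer for sensor-fault detection. This is not the paper's full FTCOR design, which additionally embeds an internal model of the exosystem.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # Toy discrete-time follower dynamics x+ = A x + B u, y = C x + f_sensor.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.005], [0.1]])
        C = np.array([[1.0, 0.0]])
        Q, R = np.eye(2), np.array([[1.0]])

        # LQR-type state-feedback gain from the DARE.
        P = solve_discrete_are(A, B, Q, R)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

        # Observer gain from the dual DARE (Kalman-like, for brevity).
        Ro = np.array([[1.0]])
        Po = solve_discrete_are(A.T, C.T, Q, Ro)
        L = np.linalg.solve(Ro + C @ Po @ C.T, C @ Po @ A.T).T

        def step(x, xhat, fault=0.0):
            u = -K @ xhat
            y = C @ x + fault                 # additive sensor fault
            x_next = A @ x + B @ u
            xhat_next = A @ xhat + B @ u + L @ (y - C @ xhat)
            residual = y - C @ xhat           # grows when a sensor fault is present
            return x_next, xhat_next, float(residual[0])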

  16. Robust Mpc for Actuator–Fault Tolerance Using Set–Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    Full Text Available In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, which is built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). In the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties with relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive by using invariant sets, while fault isolation (FI) is active by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of inputs.
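
    The passive detection idea can be illustrated with a simple interval (box) stand-in for the invariant residual set, assuming a toy scalar plant with an open-loop predictor; the zonotope/tube machinery of the paper is far more general.

        import numpy as np

        # Toy scalar plant x+ = a x + b u + w, measurement y = x + v,
        # with bounded disturbance |w| <= w_max and noise |v| <= v_max.
        a, b = 0.9, 0.5
        w_max, v_max = 0.05, 0.02

        # With an open-loop predictor xhat+ = a xhat + b u, the estimation error
        # obeys e+ = a e + w, so |e| <= w_max / (1 - |a|) in steady state and the
        # healthy residual r = y - xhat satisfies |r| <= w_max / (1 - |a|) + v_max.
        r_bound = w_max / (1.0 - abs(a)) + v_max

        def passive_fault_detection(residuals):
            """Flag samples whose residual leaves the healthy (invariant) interval."""
            return [abs(r) > r_bound for r in residuals]

        print(passive_fault_detection([0.1, -0.3, 0.9]))   # only the last sample is flagged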

  17. Determining on-fault magnitude distributions for a connected, multi-fault system

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2017-12-01

    A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip-rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0-7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0-6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults.
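
    As a toy illustration of the binary-integer-programming idea (assuming the PuLP package with its bundled CBC solver), the sketch below assigns a handful of synthetic earthquakes to two faults so that each fault's summed moment-rate proxy stays within explicit minimum/maximum bounds while total misfit to the target is minimized; the magnitude-moment conversion, rupture-length constraint, and fault connectivity of the actual study are omitted, and all numbers are illustrative.

        import pulp

        moments = {"eq1": 3.0, "eq2": 1.5, "eq3": 2.0, "eq4": 0.5}   # synthetic moment-rate proxies
        target  = {"faultA": 4.0, "faultB": 3.0}                      # target slip-rate proxies
        lo, hi  = 0.8, 1.2                                            # +/-20% explicit bounds

        prob = pulp.LpProblem("on_fault_distribution", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("assign", (moments, target), cat="Binary")
        d = pulp.LpVariable.dicts("misfit", target, lowBound=0)

        # Each synthetic earthquake is placed on exactly one fault.
        for e in moments:
            prob += pulp.lpSum(x[e][f] for f in target) == 1

        for f in target:
            load = pulp.lpSum(moments[e] * x[e][f] for e in moments)
            prob += load >= lo * target[f]          # minimum slip-rate constraint
            prob += load <= hi * target[f]          # maximum slip-rate constraint
            prob += d[f] >= load - target[f]        # |misfit| via two inequalities
            prob += d[f] >= target[f] - load

        prob += pulp.lpSum(d[f] for f in target)    # minimize total slip-rate misfit
        prob.solve(pulp.PULP_CBC_CMD(msg=False))

        for e in moments:
            for f in target:
                if pulp.value(x[e][f]) > 0.5:
                    print(e, "->", f)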

  18. The 2014 Mw6.9 Gokceada and 2017 Mw6.3 Lesvos Earthquakes in the Northern Aegean Sea: The Transition from Right-Lateral Strike-Slip Faulting on the North Anatolian Fault to Extension in the Central Aegean

    Science.gov (United States)

    Cetin, S.; Konca, A. O.; Dogan, U.; Floyd, M.; Karabulut, H.; Ergintav, S.; Ganas, A.; Paradisis, D.; King, R. W.; Reilinger, R. E.

    2017-12-01

    The 2014 Mw6.9 Gokceada (strike-slip) and 2017 Mw6.3 Lesvos (normal) earthquakes occurred on two of the set of faults that accommodate the transition from right-lateral strike-slip faulting on the North Anatolian Fault (NAF) to normal faulting along the Gulf of Corinth. The Gokceada earthquake was a purely strike-slip event on the western extension of the NAF where it enters the northern Aegean Sea. The Lesvos earthquake, located roughly 200 km south of Gokceada, occurred on a WNW-ESE-striking normal fault. Both earthquakes respond to the same regional stress field, as indicated by their sub-parallel seismic tension axes and far-field coseismic GPS displacements. Interpretation of GPS-derived velocities, active faults, crustal seismicity, and earthquake focal mechanisms in the northern Aegean indicates that this pattern of complementary faulting, involving WNW-ESE-striking normal faults (e.g. Lesvos earthquake) and SW-NE-striking strike-slip faults (e.g. Gokceada earthquake), persists across the full extent of the northern Aegean Sea. These two "families" of faults, together with some systems of conjugate left-lateral strike-slip faults, complement one another and culminate in the purely extensional rift structures that form the large Gulfs of Evvia and Corinth. In addition to being consistent with seismic and geodetic observations, these fault geometries explain the increasing velocity of the southern Aegean and Peloponnese regions towards the Hellenic subduction zone. Alignment of geodetic extension and seismic tension axes with motion of the southern Aegean towards the Hellenic subduction zone suggests a direct association of Aegean extension with subduction, possibly by trench retreat, as has been suggested by prior investigators.

  19. Energy-efficient fault tolerance in multiprocessor real-time systems

    Science.gov (United States)

    Guo, Yifeng

    The recent progress in multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4-8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in the near future. For such systems, in addition to the general temporal requirements common to all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with an arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is
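
    A back-of-the-envelope sketch of the energy/reliability trade-off addressed above, under common textbook assumptions (dynamic power proportional to the cube of the normalized frequency, transient-fault rate growing exponentially as voltage/frequency are scaled down); the constants and models are illustrative and are not taken from the dissertation.

        def energy(cycles, f, p_static=0.1, c_eff=1.0):
            """Energy to execute `cycles` at normalized frequency f in (0, 1]."""
            time = cycles / f
            return (p_static + c_eff * f ** 3) * time

        def transient_fault_rate(f, lam0=1e-6, d=2.0):
            """Illustrative RAPM-style model: scaling frequency/voltage down
            raises the transient-fault rate exponentially."""
            return lam0 * 10 ** (d * (1.0 - f))

        cycles = 1e6
        for f in (1.0, 0.8, 0.6):
            print(f"f={f:.1f}  energy={energy(cycles, f):.2e}  "
                  f"fault rate={transient_fault_rate(f):.2e}")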

  20. Dynamic characteristics of dual-rotor system with coupling faults of misalignment and rub-impact

    Directory of Open Access Journals (Sweden)

    Huang Zhiwei

    2017-01-01

    Full Text Available To address rotor-system fault problems in which local rubbing is caused by mass eccentricity, a dynamic model for the dual-rotor system with coupling faults of misalignment and rub-impact was established. The dynamic behaviours of this system were investigated by using a numerical integration method, as the parallel and angular misalignment varied. Various nonlinear phenomena comprising periodic, three-periodic and quasi-periodic motions are observed. The results reveal that the process of the rotor rub-impact is extremely complex and has some frequencies with large amplitude, especially at the 1/3X component. Meanwhile, quasi-periodic regions exhibit different configurations of attractors and the phenomenon of beat vibration.
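
    A minimal sketch of the kind of numerical integration used in such studies, assuming a single Jeffcott-type rotor with mass eccentricity and a classical radial rub model (contact stiffness plus Coulomb friction once the clearance is exceeded); the actual dual-rotor, misaligned model of the paper is considerably richer, and all parameter values below are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        m, c, k = 1.0, 5.0, 2.5e5          # mass, damping, shaft stiffness
        e, omega = 1e-4, 400.0             # eccentricity [m], spin speed [rad/s]
        delta, kc, mu = 2e-4, 1.0e6, 0.1   # clearance, contact stiffness, friction

        def rhs(t, s):
            x, y, vx, vy = s
            r = np.hypot(x, y)
            fx = fy = 0.0
            if r > delta:                                   # rub-impact force
                fn = kc * (r - delta) / r
                fx = -fn * (x - mu * y)
                fy = -fn * (mu * x + y)
            ax = (m * e * omega**2 * np.cos(omega * t) - c * vx - k * x + fx) / m
            ay = (m * e * omega**2 * np.sin(omega * t) - c * vy - k * y + fy) / m
            return [vx, vy, ax, ay]

        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-4)
        # Spectral analysis of sol.y[0] would reveal sub-harmonics such as the 1/3X component.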

  1. Modelling of Surface Fault Structures Based on Ground Magnetic Survey

    Science.gov (United States)

    Michels, A.; McEnroe, S. A.

    2017-12-01

    The island of Leka contains the exposure of the Leka Ophiolite Complex (LOC), which comprises mantle and crustal rocks and provides a rare opportunity to study the magnetic properties and response of these formations. The LOC is comprised of five rock units: (1) harzburgite that is strongly deformed, shifting into an increasingly olivine-rich dunite; (2) ultramafic cumulates with layers of olivine, chromite, clinopyroxene and orthopyroxene. These cumulates are overlain by (3) metagabbros, which are cut by (4) metabasaltic dykes and (5) pillow lavas (Furnes et al. 1988). Over the course of three field seasons, a detailed ground-magnetic survey was made over the island covering all units of the LOC, and samples were collected from 109 sites for magnetic measurements. NRM, susceptibility, density and hysteresis properties were measured. In total, 66% of the samples have a Q value > 1, suggesting that the magnetic anomalies should include both induced and remanent components in the model. This ophiolite originated from a suprasubduction zone near the coast of Laurentia (497±2 Ma), was obducted onto Laurentia (≈460 Ma) and then transferred to Baltica during the Caledonide Orogeny (≈430 Ma). The LOC was faulted, deformed and serpentinized during these events. The gabbro and ultramafic rocks are separated by a normal fault. The dominant magnetic anomaly that crosses the island correlates with this normal fault. There is a series of smaller-scale faults that are parallel to this, and some correspond to local highs that can be highlighted by a tilt derivative of the magnetic data. These fault boundaries, which are well delineated by the distinct magnetic anomalies in both ground and aeromagnetic survey data, are likely caused by an increased degree of serpentinization of the ultramafic rocks in the fault areas.
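
    Two quantities mentioned above can be computed directly: the Koenigsberger ratio Q (remanent over induced magnetization) for the samples, and the tilt derivative of a gridded total-field anomaly used to highlight fault-parallel highs. The sketch below assumes SI units, an illustrative geomagnetic field of about 52,000 nT, and a regularly gridded anomaly; the FFT-based vertical derivative is the standard way to form the tilt derivative.

        import numpy as np

        MU0 = 4e-7 * np.pi

        def koenigsberger_q(nrm_am, susceptibility, b_field_t=52_000e-9):
            """Q = NRM / (k * H), with H = B / mu0 (SI units)."""
            return nrm_am / (susceptibility * b_field_t / MU0)

        def tilt_derivative(grid, dx, dy):
            """Tilt angle = arctan(dT/dz / sqrt((dT/dx)^2 + (dT/dy)^2)),
            with the vertical derivative taken in the wavenumber domain."""
            dTdy, dTdx = np.gradient(grid, dy, dx)
            ky = np.fft.fftfreq(grid.shape[0], d=dy) * 2 * np.pi
            kx = np.fft.fftfreq(grid.shape[1], d=dx) * 2 * np.pi
            kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
            dTdz = np.real(np.fft.ifft2(np.fft.fft2(grid) * kk))
            return np.arctan2(dTdz, np.hypot(dTdx, dTdy))

        print(koenigsberger_q(nrm_am=2.0, susceptibility=0.05))   # Q near 1: remanence matters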

  2. Tomographic evidence for enhanced fracturing and permeability within the relatively aseismic Nemaha Fault Zone, Oklahoma

    Science.gov (United States)

    Stevens, N. T.; Keranen, K. M.; Lambert, C.

    2017-12-01

    Recent earthquakes in north central Oklahoma are dominantly hosted on unmapped basement faults away from and outside of the largest regional structure, the Nemaha Fault Zone (NFZ) [Lambert, 2016]. The NFZ itself remains largely aseismic, despite the presence of disposal wells and numerous faults. Here we present results from double-difference tomography using TomoDD [Zhang and Thurber, 2003] for the NFZ and the surrounding region, utilizing a seismic catalog of over 10,000 local events acquired by 144 seismic stations deployed between 2013 and 2017. Inversion results for shallow crustal depth, beneath the 2-3 km sedimentary cover, show compressional wavespeeds (Vp) of >6 km/sec and shear wavespeeds (Vs) >4 km/sec outside the NFZ, consistent with crystalline rock. Along the western margin of the NFZ, both Vp and Vs are reduced, and Vp/Vs gradients parallel the trend of major faults, suggesting enhanced fault density and potentially enhanced fluid pressure within the study region. Enhanced fracture density within the NFZ, and associated permeability enhancement, could reduce the effect of regional fluid pressurization from injection wells, contributing to the relative aseismicity of the NFZ.

  3. Fault-Tolerant Approach for Modular Multilevel Converters under Submodule Faults

    DEFF Research Database (Denmark)

    Deng, Fujin; Tian, Yanjun; Zhu, Rongwu

    2016-01-01

    The modular multilevel converter (MMC) is attractive for medium- or high-power applications because of the advantages of its high modularity, availability, and high power quality. The fault-tolerant operation is one of the important issues for the MMC. This paper proposed a fault-tolerant approach...... for the MMC under submodule (SM) faults. The characteristic of the MMC with arms containing different number of healthy SMs under faults is analyzed. Based on the characteristic, the proposed approach can effectively keep the MMC operation as normal under SM faults. It can effectively improve the MMC...

  4. An encoding device and a method of encoding

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to an encoding device, such as an optical position encoder, for encoding input from an object, and a method for encoding input from an object, for determining a position of an object that interferes with light of the device. The encoding device comprises a light source...... in the area in the space and may interfere with the light, which interference may be encoded into a position or activation....

  5. Simultaneous Fault Detection and Sensor Selection for Condition Monitoring of Wind Turbines

    Directory of Open Access Journals (Sweden)

    Wenna Zhang

    2016-04-01

    Full Text Available Data collected from the supervisory control and data acquisition (SCADA) system are used widely in wind farms to obtain operation and performance information about wind turbines. The paper presents a three-way model by means of parallel factor analysis (PARAFAC) for wind turbine fault detection and sensor selection, and evaluates the method with SCADA data obtained from an operational farm. The main characteristic of this new approach is that it can be used to simultaneously explore measurement sample profiles and sensor profiles to avoid discarding potentially relevant information for feature extraction. With the K-means clustering method, the measurement data indicating normal, fault and alarm conditions of the wind turbines can be identified, and the sensor array can be optimised for effective condition monitoring.
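
    A minimal sketch of the pipeline described, assuming the tensorly and scikit-learn packages and a synthetic (samples x sensors x time) tensor; the rank, cluster count, and sensor-ranking rule are illustrative choices, not the paper's.

        import numpy as np
        from tensorly.decomposition import parafac
        from sklearn.cluster import KMeans

        # Synthetic SCADA-like tensor: 200 samples x 12 sensors x 50 time lags.
        rng = np.random.default_rng(0)
        tensor = rng.normal(size=(200, 12, 50))

        # Three-way PARAFAC decomposition; factors[0] profiles the samples,
        # factors[1] profiles the sensors (useful for sensor selection).
        weights, factors = parafac(tensor, rank=3, n_iter_max=200)
        sample_scores, sensor_loadings = factors[0], factors[1]

        # Cluster the sample profiles into normal / alarm / fault conditions.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sample_scores)

        # Rank sensors by the magnitude of their loadings on the extracted components.
        sensor_rank = np.argsort(-np.linalg.norm(sensor_loadings, axis=1))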

  6. Tacholess order-tracking approach for wind turbine gearbox fault detection

    Institute of Scientific and Technical Information of China (English)

    Yi WANG; Yong XIE; Guanghua XU; Sicong ZHANG; Chenggang HOU

    2017-01-01

    Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.

  7. Tacholess order-tracking approach for wind turbine gearbox fault detection

    Science.gov (United States)

    Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang

    2017-09-01

    Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
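
    The three steps described (instantaneous phase extraction, equi-angular resampling, order spectrum) can be sketched with standard signal-processing tools. The sketch below assumes the vibration signal contains a dominant shaft-related component within a known frequency band from which the phase is estimated via the analytic signal; the function name and parameters are illustrative, not the paper's implementation.

        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        def order_spectrum(vib, fs, band, samples_per_rev=64):
            """Tacholess order tracking: band-pass around the shaft component,
            extract the instantaneous phase, resample at equal angle increments,
            and return the order spectrum."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            ref = filtfilt(b, a, vib)                       # shaft-related component
            phase = np.unwrap(np.angle(hilbert(ref)))       # instantaneous phase [rad]
            # Resample the raw signal at uniform angle increments.
            total_revs = (phase[-1] - phase[0]) / (2 * np.pi)
            angles = np.linspace(phase[0], phase[-1], int(total_revs * samples_per_rev))
            resampled = np.interp(angles, phase, vib)
            spectrum = np.abs(np.fft.rfft(resampled)) / len(resampled)
            orders = np.fft.rfftfreq(len(resampled), d=1.0 / samples_per_rev)
            return orders, spectrum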

  8. Peripheral Faulting of Eden Patera: Potential Evidence in Support of a New Volcanic Construct on Mars

    Science.gov (United States)

    Harlow, J.

    2016-12-01

    Arabia Terra's (AT) pock-marked topography in the expansive upland region of Mars' northern hemisphere has been assumed to be the result of impact crater bombardment. However, examination of several craters by researchers revealed morphologies inconsistent with neighboring craters of similar size and age. These 'craters' share features with terrestrial super-eruption calderas, and are considered a new volcanic construct on Mars called 'plains-style' caldera complexes. Eden Patera (EP), located on the northern boundary of AT, is a reference type for these calderas. EP lacks well-preserved impact crater morphologies, including a decreasing depth to diameter ratio. Conversely, Eden shares geomorphological attributes with terrestrial caldera complexes such as Valles Caldera (New Mexico): arcuate caldera walls, concentric fracturing/faulting, flat-topped benches, irregular geometric circumferences, etc. This study focuses on peripheral fractures surrounding EP to provide further evidence of calderas within the AT region. Scaled balloon experiments mimicking terrestrial caldera analogs have showcased fracturing/faulting patterns and relationships of caldera systems. These experiments show: 1) radial fracturing (perpendicular to caldera rim) upon inflation, 2) concentric faulting (parallel to sub-parallel to caldera rim) during evacuation, and 3) intersecting radial and concentric peripheral faulting from resurgence. Utilizing Mars Reconnaissance Orbiter Context Camera (CTX) imagery, peripheral fracturing is analyzed using GIS to study variations in peripheral fracture geometries relative to the caldera rim. Visually, concentric fractures dominate within 20 km, radial fractures prevail between 20 and 50 km, followed by gradation into randomly oriented and highly angular intersections in the fretted terrain region. Rose diagrams of orientation relative to north expose uniformly oriented mean regional stresses, but do not illuminate localized caldera stresses. Further

  9. Observations of strain accumulation across the san andreas fault near palmdale, california, with a two-color geodimeter.

    Science.gov (United States)

    Langbein, J O; Linker, M F; McGarr, A; Slater, L E

    1982-12-17

    Two-color laser ranging measurements during a 15-month period over a geodetic network spanning the San Andreas fault near Palmdale, California, indicate that the crust expands and contracts aseismically in episodes as short as 2 weeks. Shear strain parallel to the fault has accumulated monotonically since November 1980, but at a variable rate. Improvements in measurement precision and temporal resolution over those of previous geodetic studies near Palmdale have resulted in the definition of a time history of crustal deformation that is much more complex than formerly realized.

  10. Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination

    Energy Technology Data Exchange (ETDEWEB)

    Kumbhare, Alok [Univ. of Southern California, Los Angeles, CA (United States); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Simmhan, Yogesh [Indian Inst. of Technology (IIT), Bangalore (India); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-06-29

    The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) extend this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, the runtime variations in data characteristics such as data-rates and key-distribution cause resource overload, which in turn leads to fluctuations in the Quality of Service (QoS); and second, the stateful reducers, whose state depends on the complete tuple history, necessitate efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity along with locality-aware data and state replication to provide efficient load-balancing with low-overhead fault-tolerance and parallel fault-recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to 2.8× improvement in peak throughput compared to the Apache Storm SPS, and a low recovery latency of 700-1500 ms from multiple failures.
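
    The elasticity mechanism mentioned above, consistent hashing, can be sketched in a few lines: a key's reducer is the first virtual node clockwise on a hash ring, so adding or removing a worker only remaps the keys in the affected arc. The class below is a generic illustration (the virtual-node count and hash function are arbitrary choices), not the architecture proposed in the paper.

        import bisect, hashlib

        class ConsistentHashRing:
            def __init__(self, workers, vnodes=64):
                self.ring = []                       # sorted list of (hash, worker)
                for w in workers:
                    for v in range(vnodes):
                        self.ring.append((self._h(f"{w}#{v}"), w))
                self.ring.sort()
                self._keys = [h for h, _ in self.ring]

            @staticmethod
            def _h(s):
                return int(hashlib.md5(s.encode()).hexdigest(), 16)

            def owner(self, key):
                """First virtual node clockwise from the key's hash."""
                i = bisect.bisect(self._keys, self._h(key)) % len(self.ring)
                return self.ring[i][1]

        ring = ConsistentHashRing(["worker-1", "worker-2", "worker-3"])
        print(ring.owner("sensor-42"))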

  11. PMU-Aided Voltage Security Assessment for a Wind Power Plant: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, H.; Zhang, Y. C.; Zhang, J. J.; Muljadi, E.

    2015-04-08

    Because wind power penetration levels in electric power systems are continuously increasing, voltage stability is a critical issue for maintaining power system security and operation. The traditional methods to analyze voltage stability can be classified into two categories: dynamic and steady-state. Dynamic analysis relies on time-domain simulations of faults at different locations; however, this method needs to exhaust faults at all locations to find the security region for voltage at a single bus. With the widely located phasor measurement units (PMUs), the Thevenin equivalent matrix can be calculated by the voltage and current information collected by the PMUs. This paper proposes a method based on a Thevenin equivalent matrix to identify system locations that will have the greatest impact on the voltage at the wind power plant’s point of interconnection. The number of dynamic voltage stability analysis runs is greatly reduced by using the proposed method. The numerical results demonstrate the feasibility, effectiveness, and robustness of the proposed approach for voltage security assessment for a wind power plant.
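
    A minimal sketch of how a Thevenin equivalent can be fitted from PMU voltage/current phasors by complex least squares, assuming measurements at a single bus over a window in which the external system is quasi-constant; the matrix formulation described above generalizes this idea across buses, and all numbers below are synthetic.

        import numpy as np

        def thevenin_from_pmu(v_phasors, i_phasors):
            """Fit V_k = E - Z * I_k in the complex least-squares sense,
            returning the Thevenin voltage E and impedance Z seen from the bus."""
            A = np.column_stack([np.ones_like(i_phasors), -i_phasors])
            sol, *_ = np.linalg.lstsq(A, v_phasors, rcond=None)
            E, Z = sol
            return E, Z

        # Synthetic window of phasor measurements (per-unit).
        rng = np.random.default_rng(1)
        I = rng.uniform(0.5, 1.5, 20) * np.exp(1j * rng.uniform(-0.3, 0.3, 20))
        E_true, Z_true = 1.02 + 0j, 0.01 + 0.08j
        V = E_true - Z_true * I + 1e-4 * (rng.normal(size=20) + 1j * rng.normal(size=20))

        print(thevenin_from_pmu(V, I))   # close to (1.02+0j, 0.01+0.08j)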

  12. Modeling of flow in faulted and fractured media

    Energy Technology Data Exchange (ETDEWEB)

    Oeian, Erlend

    2004-03-01

    The work on this thesis has been done as part of a collaborative and inter disciplinary effort to improve the understanding of oil recovery mechanisms in fractured reservoirs. This project has been organized as a Strategic University Program (SUP) at the University of Bergen, Norway. The complex geometries of fractured reservoirs combined with flow of several fluid phases lead to difficult mathematical and numerical problems. In an effort to try to decrease the gap between the geological description and numerical modeling capabilities, new techniques are required. Thus, the main objective has been to improve the ATHENA flow simulator and utilize it within a fault modeling context. Specifically, an implicit treatment of the advection dominated mass transport equations within a domain decomposition based local grid refinement framework has been implemented. Since large computational tasks may arise, the implicit formulation has also been included in a parallel version of the code. Within the current limits of the simulator, appropriate up scaling techniques has also been considered. Part I of this thesis includes background material covering the basic geology of fractured porous media, the mathematical model behind the in-house flow simulator ATHENA and the additions implemented to approach simulation of flow through fractured and faulted porous media. In Part II, a set of research papers stemming from Part I is presented. A brief outline of the thesis follows below. In Chapt. 1 important aspects of the geological description and physical parameters of fractured and faulted porous media is presented. Based on this the scope of this thesis is specified having numerical issues and consequences in mind. Then, in Chapt. 2, the mathematical model and discretizations in the flow simulator is given followed by the derivation of the implicit mass transport formulation. In order to be fairly self-contained, most of the papers in Part II also includes the mathematical model

  13. Modeling of flow in faulted and fractured media

    Energy Technology Data Exchange (ETDEWEB)

    Oeian, Erlend

    2004-03-01

    The work on this thesis has been done as part of a collaborative and inter disciplinary effort to improve the understanding of oil recovery mechanisms in fractured reservoirs. This project has been organized as a Strategic University Program (SUP) at the University of Bergen, Norway. The complex geometries of fractured reservoirs combined with flow of several fluid phases lead to difficult mathematical and numerical problems. In an effort to try to decrease the gap between the geological description and numerical modeling capabilities, new techniques are required. Thus, the main objective has been to improve the ATHENA flow simulator and utilize it within a fault modeling context. Specifically, an implicit treatment of the advection dominated mass transport equations within a domain decomposition based local grid refinement framework has been implemented. Since large computational tasks may arise, the implicit formulation has also been included in a parallel version of the code. Within the current limits of the simulator, appropriate up scaling techniques has also been considered. Part I of this thesis includes background material covering the basic geology of fractured porous media, the mathematical model behind the in-house flow simulator ATHENA and the additions implemented to approach simulation of flow through fractured and faulted porous media. In Part II, a set of research papers stemming from Part I is presented. A brief outline of the thesis follows below. In Chapt. 1 important aspects of the geological description and physical parameters of fractured and faulted porous media is presented. Based on this the scope of this thesis is specified having numerical issues and consequences in mind. Then, in Chapt. 2, the mathematical model and discretizations in the flow simulator is given followed by the derivation of the implicit mass transport formulation. In order to be fairly self-contained, most of the papers in Part II also includes the mathematical model

  14. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.; Michigan Univ., Ann Arbor, MI

    1991-01-01

    In order to use efficiently the new features of supercomputers, production codes, usually written 10 -20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedup in the execution times was obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize the parallel/vector architecture of a multiprocessor IBM 3090. (author)

  15. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.

    1991-01-01

    In order to efficiently use new features of supercomputers, production codes, usually written 10 - 20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedups in the execution times were obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize parallel/vector architecture of a multiprocessor IBM 3090. (author)

  16. Rectifier Fault Diagnosis and Fault Tolerance of a Doubly Fed Brushless Starter Generator

    Directory of Open Access Journals (Sweden)

    Liwei Shi

    2015-01-01

    Full Text Available This paper presents a rectifier fault diagnosis method with wavelet packet analysis to improve the reliability of the fault-tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and fault-tolerant principle of the highly reliable DFBLSG are given, and the common faults of the rectifier are analyzed. The wavelet-packet-transform-based fault detection/identification algorithm is introduced in detail. Fault-tolerant performance and output-voltage experiments were done to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, the energy-eigenvalue tolerance was introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault-tolerant four-phase DFBLSG can detect the usual open-circuit fault and operate in the fault-tolerant mode if there is a fault. The results indicate that the fault analysis techniques in this paper are accurate and effective.
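
    The 5-level wavelet-packet energy features described can be computed with the PyWavelets package (assumed here); the wavelet choice, normalisation, and tolerance value are illustrative, not the paper's.

        import numpy as np
        import pywt

        def band_energies(signal, wavelet="db4", level=5):
            """Decompose the rectifier voltage signal into 2**level frequency bands
            and return the normalised energy eigenvalue of each band."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            nodes = wp.get_level(level, order="freq")
            energies = np.array([np.sum(np.square(n.data)) for n in nodes])
            return energies / energies.sum()

        def is_faulty(signal, healthy_profile, tol=0.15):
            """Flag a fault when the band-energy profile drifts beyond a tolerance."""
            return np.abs(band_energies(signal) - healthy_profile).sum() > tol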

  17. Faster quantum chemistry simulation on fault-tolerant quantum computers

    International Nuclear Information System (INIS)

    Cody Jones, N; McMahon, Peter L; Yamamoto, Yoshihisa; Whitfield, James D; Yung, Man-Hong; Aspuru-Guzik, Alán; Van Meter, Rodney

    2012-01-01

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. We propose methods which substantially improve the performance of a particular form of simulation, ab initio quantum chemistry, on fault-tolerant quantum computers; these methods generalize readily to other quantum simulation problems. Quantum teleportation plays a key role in these improvements and is used extensively as a computing resource. To improve execution time, we examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay–Kitaev algorithm (Dawson and Nielsen 2006 Quantum Inform. Comput. 6 81). For a given approximation error ϵ, arbitrary single-qubit gates can be produced fault-tolerantly and using a restricted set of gates in time which is O(log(1/ϵ)) or O(log log(1/ϵ)); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride.

  18. Short-circuit testing of monofilar Bi-2212 coils connected in series and in parallel

    International Nuclear Information System (INIS)

    Polasek, A; Dias, R; Serra, E T; Filho, O O; Niedu, D

    2010-01-01

    Superconducting Fault Current Limiters (SCFCL's) are one of the most promising technologies for fault current limitation. In the present work, resistive SCFCL components based on Bi-2212 monofilar coils are subjected to short-circuit testing. These SCFCL components can be easily connected in series and/or in parallel by using joints and clamps. This allows a considerable flexibility to developing larger SCFCL devices, since the configuration and size of the whole device can be easily adapted to the operational conditions. The single components presented critical current (Ic) values of 240-260 A, at 77 K. Short-circuits during 40-120 ms were applied. A single component can withstand a voltage drop of 126-252 V (0.3-0.6 V/cm). Components connected in series withstand higher voltage levels, whereas parallel connection allows higher rated currents during normal operation, but the limited current is also higher. Prospective currents as high as 10-40 kA (peak value) were limited to 3-9 kA (peak value) in the first half cycle.

  19. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  20. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin; Dutta, Rishabh; Jonsson, Sigurjon

    2015-01-01

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  1. Fault displacement along the Naruto-South fault, the Median Tectonic Line active fault system in the eastern part of Shikoku, southwestern Japan

    OpenAIRE

    高田, 圭太; 中田, 高; 後藤, 秀昭; 岡田, 篤正; 原口, 強; 松木, 宏彰

    1998-01-01

    The Naruto-South fault is situated about 1,000 m south of the Naruto fault, part of the Median Tectonic Line active fault system in the eastern part of Shikoku. We investigated the fault topography and subsurface geology of this fault by interpreting large-scale aerial photographs, collecting borehole data, and conducting a Geo-Slicer survey. The results obtained are as follows: 1) The Naruto-South fault runs across the Yoshino River deltaic plain for at least 2.5 km with a fault scarplet. The Naruto-South fault is o...

  2. Dissolved Gas Analysis Principle-Based Intelligent Approaches to Fault Diagnosis and Decision Making for Large Oil-Immersed Power Transformers: A Survey

    Directory of Open Access Journals (Sweden)

    Lefeng Cheng

    2018-04-01

    Full Text Available Compared with conventional methods of fault diagnosis for power transformers, which suffer from defects such as imperfect encoding and overly rigid encoding boundaries, this paper systematically discusses various intelligent approaches applied in fault diagnosis and decision making for large oil-immersed power transformers based on dissolved gas analysis (DGA), including expert systems (EPS), artificial neural networks (ANN), fuzzy theory, rough sets theory (RST), grey system theory (GST), swarm intelligence (SI) algorithms, data mining technology, machine learning (ML), and other intelligent diagnosis tools, and summarizes existing problems and solutions. From this survey, it is found that a single intelligent approach for fault diagnosis can only reflect the operating status of the transformer in one particular aspect, leading to shortcomings of varying degrees that cannot be resolved effectively. Combined with the current research status in this field, the problems that must be addressed in DGA-based transformer fault diagnosis are identified, and the prospects for future development trends and research directions are outlined. This contribution presents a detailed and systematic survey of various intelligent approaches to fault diagnosis and decision making for the power transformer, in which their merits and demerits are thoroughly investigated and their improvement schemes and future development trends are proposed. Moreover, this paper concludes that a variety of intelligent algorithms should be combined for mutual complementation to form a hybrid fault diagnosis network, so as to avoid these algorithms falling into a local optimum. In addition, it is necessary to improve the detection instruments so as to acquire reasonable characteristic gas data samples. The research summary, empirical generalization and analysis of predicament in this paper provide some thoughts and suggestions for the research of complex power grids in the new environment, as
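
    As a concrete illustration of the ratio-style encoding these intelligent methods build on, the sketch below computes the three gas ratios used in IEC-60599-style diagnosis and applies deliberately simplified, illustrative thresholds; it is not the standard's decision table and not a substitute for the hybrid methods advocated above.

        def dga_ratios(ppm):
            """ppm: dict with H2, CH4, C2H2, C2H4, C2H6 concentrations."""
            return {
                "C2H2/C2H4": ppm["C2H2"] / ppm["C2H4"],
                "CH4/H2":    ppm["CH4"] / ppm["H2"],
                "C2H4/C2H6": ppm["C2H4"] / ppm["C2H6"],
            }

        def rough_diagnosis(r):
            """Deliberately coarse, illustrative rules (not the IEC table)."""
            if r["C2H2/C2H4"] > 1.0:
                return "discharge-type fault suspected"
            if r["C2H4/C2H6"] > 1.0:
                return "thermal fault suspected"
            if r["CH4/H2"] < 0.1:
                return "partial discharge suspected"
            return "no single-ratio indication"

        sample = {"H2": 120, "CH4": 30, "C2H2": 2, "C2H4": 60, "C2H6": 20}
        print(rough_diagnosis(dga_ratios(sample)))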

  3. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x²-y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
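
    The core effect, quadratic phase spreading signal energy across k-space, can be seen in a one-dimensional toy example; this is only an illustration of the phase-scrambling principle (with an arbitrary phase strength), not the SENSE/GRAPPA reconstruction described in the paper.

        import numpy as np

        n = 256
        x = np.zeros(n)
        x[96:160] = 1.0                          # a compact 1-D "object"

        alpha = 0.02                             # quadratic-phase strength (illustrative)
        idx = np.arange(n) - n // 2
        scrambled = x * np.exp(1j * alpha * idx**2)

        k_plain = np.fft.fftshift(np.fft.fft(x))
        k_scrambled = np.fft.fftshift(np.fft.fft(scrambled))

        def central_energy_fraction(k, width=32):
            lo, hi = n // 2 - width // 2, n // 2 + width // 2
            return np.sum(np.abs(k[lo:hi]) ** 2) / np.sum(np.abs(k) ** 2)

        # Quadratic phase spreads the energy: far less of it sits in the k-space centre.
        print(central_energy_fraction(k_plain), central_energy_fraction(k_scrambled))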

  4. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

    Full Text Available The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. By considering the fact that incipient faults lie in the low-frequency domain, the fault estimation of such faults is proposed for discrete-time multiagent systems based on a finite-frequency technique. Moreover, using the decomposition design, an equivalent conclusion is given. Simulation results of a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  5. Stafford fault system: 120 million year fault movement history of northern Virginia

    Science.gov (United States)

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  6. Feasibility study of a real-time operating system for a multichannel MPEG-4 encoder

    Science.gov (United States)

    Lehtoranta, Olli; Hamalainen, Timo D.

    2005-03-01

    The feasibility of the DSP/BIOS real-time operating system for a multi-channel MPEG-4 encoder is studied. The performance of two MPEG-4 encoder implementations, with and without the operating system, is compared in terms of encoding frame rate and memory requirements. The effects of task-switching frequency and the number of parallel video channels on the encoding frame rate are measured. The research is carried out on a 200 MHz TMS320C6201 fixed-point DSP using the QCIF (176x144 pixels) video format. Compared to a traditional DSP implementation without an operating system, inclusion of DSP/BIOS reduces total system throughput by only 1 QCIF frame/s. The operating system has a 6 KB data memory overhead and a program memory requirement of 15.7 KB. Hence, the overhead is considered low enough for resource-critical mobile video applications.

  7. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  8. Continental deformation accommodated by non-rigid passive bookshelf faulting: An example from the Cenozoic tectonic development of northern Tibet

    Science.gov (United States)

    Zuza, Andrew V.; Yin, An

    2016-05-01

    Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the > 1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each have a maximum slip in the central fault segment exceeding 10s to ~ 100 km but a much smaller slip magnitude (~bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including the: (1) clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau, (2) alternating fault-parallel extension and shortening in the off-fault regions, and (3) eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and off-fault strain that arises at the fault ends. An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east-west direction. Thus, any single north-south cross section and its kinematic reconstruction through the region do not properly quantify the complex deformational processes of plateau formation.

  9. Seismicity rate surge on faults after shut-in: poroelastic response to fluid injection

    Science.gov (United States)

    Chang, K. W.; Yoon, H.; Martinez, M. J.

    2017-12-01

    Subsurface energy activities such as geological CO2 storage and wastewater injection require injecting large amounts of fluid into the subsurface, which will alter the states of pore pressure and stress in the storage formation. One of the main issues for injection-induced seismicity is the post shut-in increases in the seismicity rate, often observed in the fluid-injection operation sites. The rate surge can be driven by the following mechanisms: (1) pore-pressure propagation into distant faults after shut-in and (2) poroelastic stressing caused by well operations, depending on fault geometry, hydraulic and mechanical properties of the formation, and injection history. We simulate the aerial view of the target reservoir intersected by strike-slip faults, in which injection-induced pressure buildup encounters the faults directly. We examine the poroelastic response of the faults to fluid injection and perform a series of sensitivity tests considering: (1) permeability of the fault zone, (2) locations and the number of faults with respect to the injection point, and (3) well operations with varying the injection rate. Our analysis of the Coulomb stress change suggests that the sealing fault confines pressure diffusion which stabilizes or weakens the nearby conductive fault depending on the injection location. We perform the sensitivity test by changing injection scenarios (time-dependent rates), while keeping the total amount of injected fluids. Sensitivity analysis shows that gradual reduction of the injection rate minimizes the Coulomb stress change and the least seismicity rates are predicted. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
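
    The quantity analysed above has a compact form; a minimal sketch, assuming the common convention that unclamping (tensile) normal-stress changes and pore-pressure increases are positive and destabilizing, and that the friction coefficient is an illustrative 0.6:

        def coulomb_stress_change(d_tau, d_sigma_n, d_pressure, mu=0.6):
            """Change in the Coulomb failure function on a fault plane.

            d_tau      : change in shear stress in the slip direction [MPa]
            d_sigma_n  : change in normal stress, tension positive [MPa]
            d_pressure : change in pore pressure [MPa]
            mu         : friction coefficient (0.6 is a common assumption)
            """
            return d_tau + mu * (d_sigma_n + d_pressure)

        # Example: a shut-in pressure front reaching a distant fault.
        print(coulomb_stress_change(d_tau=0.02, d_sigma_n=-0.01, d_pressure=0.15))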

  10. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    Science.gov (United States)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  11. Newport-Inglewood-Carlsbad-Coronado Bank Fault System Nearshore Southern California: Testing models for Quaternary deformation

    Science.gov (United States)

    Bennett, J. T.; Sorlien, C. C.; Cormier, M.; Bauer, R. L.

    2011-12-01

    The San Andreas fault system is distributed across hundreds of kilometers in southern California. This transform system includes offshore faults along the shelf, slope and basin- comprising part of the Inner California Continental Borderland. Previously, offshore faults have been interpreted as being discontinuous and striking parallel to the coast between Long Beach and San Diego. Our recent work, based on several thousand kilometers of deep-penetration industry multi-channel seismic reflection data (MCS) as well as high resolution U.S. Geological Survey MCS, indicates that many of the offshore faults are more geometrically continuous than previously reported. Stratigraphic interpretations of MCS profiles included the ca. 1.8 Ma Top Lower Pico, which was correlated from wells located offshore Long Beach (Sorlien et. al. 2010). Based on this age constraint, four younger (Late) Quaternary unconformities are interpreted through the slope and basin. The right-lateral Newport-Inglewood fault continues offshore near Newport Beach. We map a single fault for 25 kilometers that continues to the southeast along the base of the slope. There, the Newport-Inglewood fault splits into the San Mateo-Carlsbad fault, which is mapped for 55 kilometers along the base of the slope to a sharp bend. This bend is the northern end of a right step-over of 10 kilometers to the Descanso fault and about 17 km to the Coronado Bank fault. We map these faults for 50 kilometers as they continue over the Mexican border. Both the San Mateo - Carlsbad with the Newport-Inglewood fault and the Coronado Bank with the Descanso fault are paired faults that form flower structures (positive and negative, respectively) in cross section. Preliminary kinematic models indicate ~1km of right-lateral slip since ~1.8 Ma at the north end of the step-over. We are modeling the slip on the southern segment to test our hypothesis for a kinematically continuous right-lateral fault system. We are correlating four

  12. Paleoseismic study of the Cathedral Rapids fault in the northern Alaska Range near Tok, Alaska

    Science.gov (United States)

    Koehler, R. D.; Farrell, R.; Carver, G. A.

    2010-12-01

    The Cathedral Rapids fault extends ~40 km between the Tok and Robertson River valleys and is the easternmost fault in a series of active south-dipping imbricate thrust faults which bound the northern flank of the Alaska Range. Collectively, these faults accommodate a component of convergence transferred north of the Denali fault and related to the westward (counterclockwise) rotation of the Wrangell Block driven by relative Pacific/North American plate motion along the eastern Aleutian subduction zone and Fairweather fault system. To the west, the system has been defined as the Northern Foothills Fold and Thrust Belt (NFFTB), a 50-km-wide zone of east-west trending thrust faults that displace Quaternary deposits and have accommodated ~3 mm/yr of shortening since latest Pliocene time (Bemis, 2004). Over the last several years, the eastward extension of the NFFTB between Delta Junction and the Canadian border has been studied by the Alaska Division of Geological & Geophysical Surveys to better characterize faults that may affect engineering design of the proposed Alaska-Canada natural gas pipeline and other infrastructure. We summarize herein reconnaissance field observations along the western part of the Cathedral Rapids fault. The western part of the Cathedral Rapids fault extends 21 km from Sheep Creek to Moon Lake and is characterized by three roughly parallel sinuous traces that offset glacial deposits of the Illinoian to early Wisconsinan Delta glaciations and the late Wisconsinan Donnelly glaciation, as well as, Holocene alluvial deposits. The northern trace of the fault is characterized by an oversteepened, beveled, ~2.5-m-high scarp that obliquely cuts a Holocene alluvial fan and projects into the rangefront. Previous paleoseismic studies along the eastern part of the Cathedral Rapids fault and Dot “T” Johnson fault indicate multiple latest Pleistocene and Holocene earthquakes associated with anticlinal folding and thrust faulting (Carver et al., 2010

  13. Real-time fault diagnosis and fault-tolerant control

    OpenAIRE

    Gao, Zhiwei; Ding, Steven X.; Cecati, Carlo

    2015-01-01

    This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the mo...

  14. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    Science.gov (United States)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  15. Paleoseismic evidence for late Holocene tectonic deformation along the Saddle mountain fault zone, Southeastern Olympic Peninsula, Washington

    Science.gov (United States)

    Barnett, Elizabeth; Sherrod, Brian; Hughes, Jonathan F.; Kelsey, Harvey M.; Czajkowski, Jessica L.; Walsh, Timothy J.; Contreras, Trevor A.; Schermer, Elizabeth R.; Carson, Robert J.

    2015-01-01

    Trench and wetland coring studies show that northeast‐striking strands of the Saddle Mountain fault zone ruptured the ground about 1000 years ago, generating prominent scarps. Three conspicuous subparallel fault scarps can be traced for 15 km on Light Detection and Ranging (LiDAR) imagery, traversing the foothills of the southeast Olympic Mountains: the Saddle Mountain east fault, the Saddle Mountain west fault, and the newly identified Sund Creek fault. Uplift of the Saddle Mountain east fault scarp impounded stream flow, forming Price Lake and submerging an existing forest, thereby leaving drowned stumps still rooted in place. Stratigraphy mapped in two trenches, one across the Saddle Mountain east fault and the other across the Sund Creek fault, records one and two earthquakes, respectively, as faulting juxtaposed Miocene‐age bedrock against glacial and postglacial deposits. Although the stratigraphy demonstrates that reverse motion generated the scarps, slip indicators measured on fault surfaces suggest a component of left‐lateral slip. From trench exposures, we estimate the postglacial slip rate to be 0.2 mm/yr and between 0.7 and 3.2 mm/yr during the past 3000 years. Integrating radiocarbon data from this study with earlier Saddle Mountain fault studies into an OxCal Bayesian statistical chronology model constrains the most recent paleoearthquake age of rupture across all three Saddle Mountain faults to 1170–970 calibrated years (cal B.P.), which overlaps with the nearby Mw 7.5 1050–1020 cal B.P. Seattle fault earthquake. An earlier earthquake recorded in the Sund Creek trench exposure dates to around 3500 cal B.P. The geometry of the Saddle Mountain faults and their rupture nearly synchronous with nearby faults 1000 years ago suggest that the Saddle Mountain fault zone forms a western boundary fault along which the fore‐arc blocks migrate northward in response to margin‐parallel shortening across the Puget Lowland.

  16. Nuclear Power Plant Cyber Security Discrete Dynamic Event Tree Analysis (LDRD 17-0958) FY17 Report

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Timothy A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, R. A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jankovsky, Zachary Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Instrumentation and control of nuclear power is transforming from analog to modern digital assets. These control systems perform key safety and security functions. This transformation is occurring in new plant designs as well as in the existing fleet of plants as the operation of those plants is extended to 60 years. This transformation introduces new and unknown issues involving both digital asset induced safety issues and security issues. Traditional nuclear power risk assessment tools and cyber security assessment methods have not been modified or developed to address the unique nature of cyber failure modes and of cyber security threat vulnerabilities. This Lab-Directed Research and Development project has developed a dynamic cyber-risk informed tool to facilitate the analysis of unique cyber failure modes and the time sequencing of cyber faults, both malicious and non-malicious, and to impose those cyber exploits and cyber faults onto a nuclear power plant accident sequence simulator code to assess how cyber exploits and cyber faults could interact with a plant's digital instrumentation and control (DI&C) system and defeat or circumvent a plant's cyber security controls. This was achieved by coupling an existing Sandia National Laboratories nuclear accident dynamic simulator code with a cyber emulytics code to demonstrate real-time simulation of cyber exploits and their impact on automatic DI&C responses. Studying such potential time-sequenced cyber-attacks and their risks (i.e., the associated impact and the associated degree of difficulty to achieve the attack vector) on accident management establishes a technical risk informed framework for developing effective cyber security controls for nuclear power.

  17. Development on multifunctional phased-array fault inspection technology. Aiming at integrity on internals in nuclear power plant reactors

    International Nuclear Information System (INIS)

    Komura, Ichiro; Hirasawa, Taiji; Nagai, Satoshi; Naruse, Katsuhiko

    2002-01-01

    Nuclear power plants play an important role in Japanese energy policy and are required to achieve higher safety and reliability than other plants, and non-destructive inspection occupies an important position as a means of providing the information needed to judge their integrity. In response to the recent rationalization of plant operation and the increasing number of aged plants, the requirements placed on non-destructive inspection technology, and its positioning, are also changing. As a result, not only has the concept of allowable fault sizes been adopted, but inspection of reactor internals not covered by conventional regulation is now required to provide reliable detection and higher-precision size evaluation (sizing) of faults relative to the sizes determined for each internal component, for use in integrity evaluation. To meet such higher-level requirements for fault detection and sizing, and the requirement for efficient inspection, the phased-array ultrasonic fault inspection method is one of the methods with high potential. This paper introduces the principles and characteristics of the phased-array ultrasonic fault inspection method, and various fault inspection methods and functions developed mainly for the inspection of reactor internals. (G.K.)

  18. Typing and compositionality for security protocols: A generalization to the geometric fragment

    DEFF Research Database (Denmark)

    Almousa, Omar; Mödersheim, Sebastian Alexander; Modesti, Paolo

    2015-01-01

    We integrate, and improve upon, prior relative soundness results of two kinds. The first kind are typing results showing that any security protocol that fulfils a number of sufficient conditions has an attack if it has a well-typed attack. The second kind considers the parallel composition of protocols, showing that when running two protocols in parallel allows for an attack, then at least one of the protocols has an attack in isolation. The most important generalization over previous work is the support for all security properties of the geometric fragment.

  19. Nests of red wood ants (Formica rufa-group) are positively associated with tectonic faults: a double-blind test.

    Science.gov (United States)

    Del Toro, Israel; Berberich, Gabriele M; Ribbons, Relena R; Berberich, Martin B; Sanders, Nathan J; Ellison, Aaron M

    2017-01-01

    Ecological studies often are subjected to unintentional biases, suggesting that improved research designs for hypothesis testing should be used. Double-blind ecological studies are rare but necessary to minimize sampling biases and omission errors, and improve the reliability of research. We used a double-blind design to evaluate associations between nests of red wood ants (Formica rufa, RWA) and the distribution of tectonic faults. We randomly sampled two regions in western Denmark to map the spatial distribution of RWA nests. We then calculated nest proximity to the nearest active tectonic faults. Red wood ant nests were eight times more likely to be found within 60 m of known tectonic faults than were random points in the same region but without nests. This pattern paralleled the directionality of the fault system, with NNE-SSW faults having the strongest associations with RWA nests. The nest locations were collected without knowledge of the spatial distribution of active faults; thus, we are confident that the results are neither biased nor artefactual. This example highlights the benefits of double-blind designs in reducing sampling biases, testing controversial hypotheses, and increasing the reliability of the conclusions of research.

  20. Adaptation of superconducting fault current limiter to high-speed reclosing

    International Nuclear Information System (INIS)

    Koyama, T.; Yanabu, S.

    2009-01-01

    Using a high-temperature superconductor, we constructed and tested a model superconducting fault current limiter (SFCL). The superconductor can fail in some cases because of excessive heat generation, so it is desirable to interrupt the current flowing through the superconductor early. We therefore proposed an SFCL based on an electromagnetic repulsion switch, composed of a superconductor, a vacuum interrupter and a by-pass coil, which has a simple structure. With this equipment, the duration of current flow in the superconductor can easily be limited to less than 0.5 cycle, while the fault current is also limited by the large reactance of the parallel coil. Electric power systems impose a high-speed reclosing duty after a fault current is interrupted: the back-up breaker is re-closed within 350 ms, so the electromagnetic repulsion switch must return to its former state and the superconductor must recover to the superconducting state before high-speed reclosing. We therefore proposed an SFCL whose electromagnetic repulsion switch incorporates our new reclosing function, and studied the recovery time of the superconductor, which must return to the superconducting state within 350 ms. In this paper, the recovery time characteristics of the superconducting wire were investigated, and the superconductor was combined with the electromagnetic repulsion switch for performance testing. As a result, high-speed reclosing within 350 ms was proven to be possible.

  1. SECURE VISUAL SECRET SHARING BASED ON DISCRETE WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    S. Jyothi Lekshmi

    2015-08-01

    Full Text Available Visual Cryptography Scheme (VCS) is an encryption method for encoding secret written materials. The method converts the secret written material into an image and then encodes this secret image into n shadow images called shares. For the recreation of the original secret, all or some selected subsets of the shares are needed; individual shares are of no use on their own. The secret image can be recovered simply by selecting some subset of these n shares, making transparencies of them and stacking them on top of each other. Nowadays, data security plays an important role, and since the shares can be altered by an attacker, providing security for the shares is important. This paper proposes a method of adding security to cryptographic shares. The method uses the two-dimensional discrete wavelet transform to hide visual secret shares; the hidden shares are then distributed among participants through the internet, and all hidden shares are extracted to reconstruct the secret.
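
    As a rough illustration of the hiding step described above, the sketch below embeds a binary share into the HH detail sub-band of a cover image using a one-level 2-D Haar DWT and recovers it by comparison with the original cover. This is a minimal, non-blind sketch under assumed parameters (the embedding gain, wavelet and image sizes are illustrative, and NumPy plus PyWavelets are assumed), not the authors' exact scheme.

```python
# Minimal sketch (not the paper's exact scheme): hide a binary share image
# inside the HH detail sub-band of a cover image using a 2-D Haar DWT.
import numpy as np
import pywt

def embed_share(cover: np.ndarray, share: np.ndarray, gain: float = 8.0) -> np.ndarray:
    """Embed a binary share (0/1) into the HH coefficients of the cover image."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    HH_marked = HH + gain * (2.0 * share - 1.0)      # +gain for 1-bits, -gain for 0-bits
    return pywt.idwt2((LL, (LH, HL, HH_marked)), "haar")

def extract_share(stego: np.ndarray, cover: np.ndarray) -> np.ndarray:
    """Recover the share by comparing HH coefficients of the stego and original cover."""
    _, (_, _, HH_s) = pywt.dwt2(stego.astype(float), "haar")
    _, (_, _, HH_c) = pywt.dwt2(cover.astype(float), "haar")
    return (HH_s - HH_c > 0).astype(np.uint8)

# Usage: one DWT level, so the share is half the cover's size in each dimension.
cover = np.random.randint(0, 256, (64, 64))
share = np.random.randint(0, 2, (32, 32))
stego = embed_share(cover, share)
assert np.array_equal(extract_share(stego, cover), share)
```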

  2. A Thermal Technique of Fault Nucleation, Growth, and Slip

    Science.gov (United States)

    Garagash, D.; Germanovich, L. N.; Murdoch, L. C.; Martel, S. J.; Reches, Z.; Elsworth, D.; Onstott, T. C.

    2009-12-01

    Fractures and fluids influence virtually all mechanical processes in the crust, but many aspects of these processes remain poorly understood largely because of a lack of controlled field experiments at appropriate scale. We have developed an in-situ experimental approach to create carefully controlled faults at a scale of ~10 meters using thermal techniques to modify in situ stresses to the point where the rock fails in shear. This approach extends experiments on fault nucleation and growth to length scales 2-3 orders of magnitude greater than are currently possible in the laboratory. The experiments could be done at depths where the modified in situ stresses are sufficient to drive faulting, obviating the need for unrealistically large loading frames. Such experiments require access to large rock volumes in the deep subsurface in a controlled setting. The Deep Underground Science and Engineering Laboratory (DUSEL), which is a research facility planned to occupy the workings of the former Homestake gold mine in the northern Black Hills, South Dakota, presents an opportunity for accessing locations with vertical stresses as large as 60 MPa (down to 2400 m depth), which is sufficient to create faults. One of the most promising methods for manipulating stresses to create faults that we have evaluated involves drilling two parallel planar arrays of boreholes and circulating cold fluid (e.g., liquid nitrogen) to chill the region in the vicinity of the boreholes. Cooling a relatively small region around each borehole causes the rock to contract, reducing the normal compressive stress throughout a much larger region between the arrays of boreholes. This scheme was evaluated using both scaling analysis and a finite element code. Our results show that if the boreholes are spaced by ~1 m, in several days to weeks, the normal compressive stress can be reduced by 10 MPa or more, and it is even possible to create net tension between the borehole arrays. According to the Mohr
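
    For orientation, the size of the stress reduction achievable by cooling can be roughed out with the standard thermoelastic relation for laterally constrained rock, Δσ ≈ EαΔT/(1−ν). The sketch below uses assumed granite-like properties (placeholders, not values reported in the abstract) and lands in the ~10 MPa range quoted above.

```python
# Back-of-the-envelope thermoelastic stress change for constrained cooling.
# Material values are assumed granite-like placeholders, not from the study.
E = 50e9        # Young's modulus, Pa
nu = 0.25       # Poisson's ratio
alpha = 8e-6    # linear thermal expansion coefficient, 1/K
dT = -20.0      # temperature change, K (cooling)

d_sigma = E * alpha * dT / (1.0 - nu)              # change in horizontal normal stress, Pa
print(f"stress change ≈ {d_sigma / 1e6:.1f} MPa")  # ≈ -10.7 MPa (less compression)
```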

  3. The Non-Regularity of Earthquake Recurrence in California: Lessons From Long Paleoseismic Records in Simple vs Complex Fault Regions (Invited)

    Science.gov (United States)

    Rockwell, T. K.

    2010-12-01

    A long paleoseismic record at Hog Lake on the central San Jacinto fault (SJF) in southern California documents evidence for 18 surface ruptures in the past 3.8-4 ka. This yields a long-term recurrence interval of about 210 years, consistent with its slip rate of ~16 mm/yr and field observations of 3-4 m of displacement per event. However, during the past 3800 years, the fault has switched from a quasi-periodic mode of earthquake production, during which the recurrence interval is similar to the long-term average, to clustered behavior with the inter-event periods as short as a few decades. There are also some periods as long as 450 years during which there were no surface ruptures, and these periods are commonly followed by one to several closely-timed ruptures. The coefficient of variation (CV) for the timing of these earthquakes is about 0.6 for the past 4000 years (17 intervals). Similar behavior has been observed on the San Andreas Fault (SAF) south of the Transverse Ranges where clusters of earthquakes have been followed by periods of lower seismic production, and the CV is as high as 0.7 for some portions of the fault. In contrast, the central North Anatolian Fault (NAF) in Turkey, which ruptured in 1944, appears to have produced ruptures with similar displacement at fairly regular intervals for the past 1600 years. With a CV of 0.16 for timing, and close to 0.1 for displacement, the 1944 rupture segment near Gerede appears to have been both periodic and characteristic. The SJF and SAF are part of a broad plate boundary system with multiple parallel strands with significant slip rates. Additional faults lay to the east (Eastern California shear zone) and west (faults of the LA basin and southern California Borderland), which makes the southern SAF system a complex and broad plate boundary zone. In comparison, the 1944 rupture section of the NAF is simple, straight and highly localized, which contrasts with the complex system of parallel faults in southern
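
    The coefficient of variation quoted for these records is simply the standard deviation of the inter-event times divided by their mean (near 0 for periodic behavior, near 1 for Poisson-like behavior). A minimal sketch with made-up event ages, not the Hog Lake data:

```python
import numpy as np

# Hypothetical paleoearthquake ages in years BP (illustrative, not the Hog Lake record).
event_ages = np.array([3800, 3350, 3290, 2900, 2450, 2400, 1950, 1500, 1050, 600, 150])

intervals = -np.diff(event_ages)                 # inter-event times, years
cv = intervals.std(ddof=1) / intervals.mean()    # coefficient of variation
print(f"mean recurrence ≈ {intervals.mean():.0f} yr, CV ≈ {cv:.2f}")
```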

  4. Simulation of fault-bend fold by incompressible Newtonian fluid; Hiasshukusei Newton ryutai ni yoru danso oremagari shukyoku kozo no simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tamagawa, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan); Tsukui, R [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-10-22

    Incompressible Newtonian fluid simulation is applied experimentally to faults typical of compressional and extensional settings. A fault-bend fold structure above a flat-ramp-flat fault in the compressional field and a fold structure above a normal fault in the extensional field are studied, and the results are compared with those obtained by the balanced cross-section method. The calculations indicate that, with the ramp angle set at 30°, the velocity gradient corresponds to the stress and that stress concentration takes place at the ramp section of the fault. This solution is an approximation and does not necessarily guarantee the conservation of area; however, when the ramp angle is varied from 10° to 40°, area is found to be conserved, though only approximately. The shape of the fold formed by a flat-ramp-flat fault is found to lie between that of the anomalous-mode layer-parallel shear typical of a balanced cross section and the fold formed by vertical shear. 7 refs., 7 figs.

  5. Security and policy driven computing

    CERN Document Server

    Liu, Lei

    2010-01-01

    Security and Policy Driven Computing covers recent advances in security, storage, parallelization, and computing as well as applications. The author incorporates a wealth of analysis, including studies on intrusion detection and key management, computer storage policy, and transactional management.The book first describes multiple variables and index structure derivation for high dimensional data distribution and applies numeric methods to proposed search methods. It also focuses on discovering relations, logic, and knowledge for policy management. To manage performance, the text discusses con

  6. Achieving privacy-preserving big data aggregation with fault tolerance in smart grid

    Directory of Open Access Journals (Sweden)

    Zhitao Guan

    2017-11-01

    Full Text Available In a smart grid, a huge amount of data is collected for various applications, such as load monitoring and demand response. These data are used for analyzing the power state and formulating the optimal dispatching strategy. However, these big energy data in terms of volume, velocity and variety raise concern over consumers’ privacy. For instance, in order to optimize energy utilization and support demand response, numerous smart meters are installed at a consumer's home to collect energy consumption data at a fine granularity, but these fine-grained data may contain information on the appliances and thus the consumer's behaviors at home. In this paper, we propose a privacy-preserving data aggregation scheme based on secret sharing with fault tolerance in a smart grid, which ensures that the control center obtains the integrated data without compromising privacy. Meanwhile, we also consider fault tolerance and resistance to differential attack during the data aggregation. Finally, we perform a security analysis and performance evaluation of our scheme in comparison with the other similar schemes. The analysis shows that our scheme can meet the security requirement, and it also shows better performance than other popular methods.
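
    To make the secret-sharing idea concrete, the sketch below shows plain additive secret sharing of meter readings: each share on its own is uniformly random, and only the aggregate is recoverable. It is illustrative only and omits the paper's fault-tolerance and differential-attack countermeasures; the modulus, number of servers and readings are assumptions.

```python
# Minimal additive secret sharing for privacy-preserving aggregation
# (illustrative only -- not the paper's exact construction).
import secrets

MOD = 2**61 - 1          # public modulus, large enough to hold the plaintext sum

def split(reading: int, n_shares: int) -> list[int]:
    """Split a reading into n_shares values that sum to it modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_shares - 1)]
    shares.append((reading - sum(shares)) % MOD)
    return shares

# Three meters, each sending one share to each of three aggregation servers.
readings = [512, 734, 301]                                   # hypothetical Wh readings
per_meter_shares = [split(r, 3) for r in readings]

# Each server sums the shares it received; the control center adds the partial sums.
server_sums = [sum(s[i] for s in per_meter_shares) % MOD for i in range(3)]
total = sum(server_sums) % MOD
assert total == sum(readings)        # aggregate recovered, individual readings hidden
```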

  7. Distance-Ranked Fault Identification of Reconfigurable Hardware Bitstreams via Functional Input

    Directory of Open Access Journals (Sweden)

    Naveed Imran

    2014-01-01

    Full Text Available Distance-Ranked Fault Identification (DRFI) is a dynamic reconfiguration technique which employs runtime inputs to conduct online functional testing of fielded FPGA logic and interconnect resources without test vectors. At design time, a diverse set of functionally identical bitstream configurations are created which utilize alternate hardware resources in the FPGA fabric. An ordering is imposed on the configuration pool as updated by the PageRank indexing precedence. Configurations which utilize permanently damaged resources, and hence manifest discrepant outputs, receive a lower rank and are thus less preferred for instantiation on the FPGA. Results indicate accurate identification of fault-free configurations in a pool of pregenerated bitstreams with a low number of reconfigurations and input evaluations. For MCNC benchmark circuits, the observed reduction in input evaluations is up to 75% when comparing the DRFI technique to unguided evaluation. The DRFI diagnosis method is seen to isolate all 14 healthy configurations from a pool of 100 pregenerated configurations, thereby offering 100% isolation accuracy provided the fault-free configurations exist in the design pool. When a complete recovery is not feasible, graceful degradation may be realized, as demonstrated by the PSNR improvement of images processed in a video encoder case study.
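
    A highly simplified sketch of the ranking idea follows: configurations whose outputs disagree with the majority on runtime inputs accumulate discrepancies and sink in the preference order, so fault-free configurations drift to the top. DRFI itself uses a PageRank-style index update rather than this plain counting, and the configuration IDs and outputs below are hypothetical.

```python
# Simplified preference ranking of redundant bitstream configurations
# (illustrative only; DRFI itself uses a PageRank-style index update).
from collections import Counter

def rank_configs(outputs_per_input: list[dict[str, int]]) -> list[str]:
    """outputs_per_input: for each runtime input, a map config_id -> observed output."""
    discrepancies: Counter[str] = Counter()
    for outputs in outputs_per_input:
        majority = Counter(outputs.values()).most_common(1)[0][0]
        for cfg, out in outputs.items():
            if out != majority:
                discrepancies[cfg] += 1          # disagreeing configs lose preference
    all_cfgs = set().union(*outputs_per_input)
    # Fewest discrepancies first = most preferred for instantiation (ties broken by name).
    return sorted(all_cfgs, key=lambda c: (discrepancies[c], c))

# Hypothetical observations: configuration "C" uses a damaged resource and mis-computes.
obs = [{"A": 7, "B": 7, "C": 9}, {"A": 3, "B": 3, "C": 3}, {"A": 5, "B": 5, "C": 1}]
print(rank_configs(obs))                         # ['A', 'B', 'C']
```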

  8. Transpressional deformation style and AMS fabrics adjacent to the southernmost segment of the San Andreas fault, Durmid Hill, CA

    Science.gov (United States)

    French, M.; Wojtal, S. F.; Housen, B.

    2006-12-01

    In the Salton Trough, the trace of the San Andreas Fault (SAF) ends where it intersects the NNW-trending Brawley seismic zone at Durmid Hill (DH). The topographic relief of DH is a product of faulting and folding of Pleistocene Borrego Formation strata (Babcock, 1974). Burgmann's (1991) detailed mapping and analysis of the western part of DH showed that the folds and faults accommodate transpression. Key to Burgmann's work was the recognition that the ~2m thick Bishop Ash, a prominent marker horizon, has been elongated parallel to the hinges of folds and boudinaged. We are mapping in detail the eastern portion of DH, nearer to the trace of the SAF. Folds in the eastern part of DH are tighter and thrust faulting is more prominent, consistent with greater shortening magnitude oblique to the SAF. Boudinage of the ash layer again indicates elongation parallel to fold hinges and subparallel to the SAF. The Bishop Ash locally is limbs in eastern DH, suggesting that significant continuous deformation accompanied the development of map-scale features. We measured anisotropy of magnetic susceptibility (AMS) fabrics in the Bishop Ash in order to assess continuous deformation in the Ash at DH. Because the Bishop Ash at DH is altered, consisting mainly of silica glass and clay minerals, samples from DH have significantly lower magnetic susceptibilities than Bishop Ash samples from elsewhere in the Salton Trough. With such low susceptibilities, there is significant scatter in the orientation of magnetic foliation and lineation in our samples. Still, in some Bishop samples within 1 km of the SAF, magnetic foliation is consistent with fold-related flattening. Magnetic lineation in these samples is consistently sub-parallel to fold hinges, parallel to the elongation direction inferred from boudinage. Even close to the trace of the SAF, this correlation breaks down in map-scale zones where fold hinge lines change attitude, fold shapes change, and the distribution and orientations

  9. Novel scheme for enhancement of fault ride-through capability of doubly fed induction generator based wind farms

    International Nuclear Information System (INIS)

    Vinothkumar, K.; Selvan, M.P.

    2011-01-01

    Research highlights: → Proposed Fault ride-through (FRT) scheme for DFIG is aimed at energy conservation. → The input mechanical energy is stored during fault and utilized at fault clearance. → Enhanced Rotor speed stability of DFIG. → Reduced Reactive power requirement and rapid voltage recovery at fault clearance. → Improved post fault performance of DFIG at fault clearance. -- Abstract: Enhancement of fault ride-through (FRT) capability and subsequent improvement of rotor speed stability of wind farms equipped with doubly fed induction generator (DFIG) is the objective of this paper. The objective is achieved by employing a novel FRT scheme with suitable control strategy. The proposed FRT scheme, which is connected between the rotor circuit and dc link capacitor in parallel with Rotor Side Converter, consists of an uncontrolled rectifier, two sets of IGBT switches, a diode and an inductor. In this scheme, the input mechanical energy of the wind turbine during grid fault is stored and utilized at the moment of fault clearance, instead of being dissipated in the resistors of the crowbar circuit as in the existing FRT schemes. Consequently, torque balance between the electrical and mechanical quantities is achieved and hence the rotor speed deviation and electromagnetic torque fluctuations are reduced. This results in reduced reactive power requirement and rapid reestablishment of terminal voltage on fault clearance. Furthermore, the stored electromagnetic energy in the inductor is transferred into the dc link capacitor on fault clearance and hence the grid side converter is relieved from charging the dc link capacitor, which is very crucial at this moment, and this converter can be utilized to its full capacity for rapid restoration of terminal voltage and normal operation of DFIG. Extensive simulation study carried out employing PSCAD/EMTDC software vividly demonstrates the potential capabilities of the proposed scheme in enhancing the performance of

  10. Novel scheme for enhancement of fault ride-through capability of doubly fed induction generator based wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Vinothkumar, K. [Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirappalli, Tamilnadu 620015 (India); Selvan, M.P., E-mail: selvanmp@nitt.ed [Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirappalli, Tamilnadu 620015 (India)

    2011-07-15

    Research highlights: → Proposed Fault ride-through (FRT) scheme for DFIG is aimed at energy conservation. → The input mechanical energy is stored during fault and utilized at fault clearance. → Enhanced Rotor speed stability of DFIG. → Reduced Reactive power requirement and rapid voltage recovery at fault clearance. → Improved post fault performance of DFIG at fault clearance. -- Abstract: Enhancement of fault ride-through (FRT) capability and subsequent improvement of rotor speed stability of wind farms equipped with doubly fed induction generator (DFIG) is the objective of this paper. The objective is achieved by employing a novel FRT scheme with suitable control strategy. The proposed FRT scheme, which is connected between the rotor circuit and dc link capacitor in parallel with Rotor Side Converter, consists of an uncontrolled rectifier, two sets of IGBT switches, a diode and an inductor. In this scheme, the input mechanical energy of the wind turbine during grid fault is stored and utilized at the moment of fault clearance, instead of being dissipated in the resistors of the crowbar circuit as in the existing FRT schemes. Consequently, torque balance between the electrical and mechanical quantities is achieved and hence the rotor speed deviation and electromagnetic torque fluctuations are reduced. This results in reduced reactive power requirement and rapid reestablishment of terminal voltage on fault clearance. Furthermore, the stored electromagnetic energy in the inductor is transferred into the dc link capacitor on fault clearance and hence the grid side converter is relieved from charging the dc link capacitor, which is very crucial at this moment, and this converter can be utilized to its full capacity for rapid restoration of terminal voltage and normal operation of DFIG. Extensive simulation study carried out employing PSCAD/EMTDC software vividly demonstrates the potential capabilities of the proposed scheme in

  11. Geothermal and seismic evidence for a southeastern continuation of the three pagodas fault zone into the Gulf of Thailand

    Directory of Open Access Journals (Sweden)

    Prinya Putthapiban

    2012-09-01

    Full Text Available Aerial photographic maps and Landsat image interpretations suggest the major fault segments of the Three Pagodas Fault (TPF) Zone and Sri Swat Fault (SSF) Zone are oriented parallel or sub-parallel in the same NW-SE directions. The Kwae Noi River runs along the TPF in the south, whereas the Kwae Yai River runs along the SSF in the north. The southeastern continuation of both faults is obscured by thick Cenozoic sediments; hence, surface lineaments cannot be traced with confidence. However, based on some interpretations of the airborne magnetic survey data, the traces of these faults are inferred to run through the western part of Bangkok and the northern end of the Gulf of Thailand. Paleo-earthquakes and the presence of hot springs along the fault zones indicate that they are tectonically active. The changes in both physical and chemical properties of the water from Hin Dart Hot Spring and of the surface water from a shallow well at Ban Khao Lao during the Great Sumatra–Andaman Earthquake on 26 December 2004 clearly indicated that the southeastern continuation of the TPF reaches at least as far south as Pak Tho District, Ratburi. Our new evidence of the alignment of high heat flow in the upper part of the Gulf of Thailand verified that the TPF also extends into the Gulf via Samut Songkhram Province. Studies of the seismic data from two survey lines along the western part of the upper Gulf of Thailand acquired by Britoil Plc. in 1986, namely Line A, which is approximately 60 km long, starting from Bang Khen, passing through Bang Khae and ending in Samut Songkhram, and Line B, which is approximately 30 km long, starting from Samut Sakon and ending in Samut Songkhram, suggest that all the faults or fractures along these seismic profiles are covered by sediments approximately 230 m thick, which indicates that the fault underneath these seismic lines is quite old and may not be active. The absence of any sign or trace of the TPF path to the west suggested that there

  12. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, which is an essential part of safety design, operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulated quantitative data into qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design as well as to support recovery and shutdown operation and disaster management.
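
    As an illustration of the trend-analysis step (quantitative data mapped onto qualitative states that a fault semantic network can reason over), the sketch below classifies a signal's slope and looks the result up in a toy rule table; the variable names and rules are hypothetical and not taken from the G-Plant case study.

```python
# Illustrative sketch only: map a quantitative trend to a qualitative symbol and
# consult a toy fault-semantic-network style rule table.
import numpy as np

def qualitative_trend(samples: np.ndarray, dt: float, eps: float = 1e-3) -> str:
    slope = np.polyfit(np.arange(len(samples)) * dt, samples, 1)[0]
    if slope > eps:
        return "increasing"
    if slope < -eps:
        return "decreasing"
    return "steady"

# Toy FSN fragment: (variable, qualitative state) -> plausible fault hypotheses.
fsn_rules = {
    ("tank_level", "decreasing"): ["outlet valve stuck open", "leak"],
    ("tank_level", "increasing"): ["inlet controller fault"],
}

level = np.array([2.00, 1.97, 1.93, 1.90, 1.86])      # simulated level readings, m
state = qualitative_trend(level, dt=1.0)
print(state, "->", fsn_rules.get(("tank_level", state), ["no hypothesis"]))
```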

  13. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.

  14. A Study on the Dependable and Secure Relaying Scheme under High Resistance Earth Faults on HV, EHV Line

    Energy Technology Data Exchange (ETDEWEB)

    Kim, I.D.; Han, K.N. [Korea Electric Power Research Institute, Taejeon (Korea, Republic of)

    1997-12-31

    This report contains the following items for the purpose of investigating and analyzing the characteristics of high impedance ground faults. - Identification of the causes and characteristics of HIF - Modeling of the power system - Testing of protective relays using RTDS (Real Time Digital Simulator) - Staged ground fault tests - Development of a new algorithm to detect HIF - Protective coordination schemes between different types of relays - HIF monitoring and relaying scheme and H/W prototyping. (author). 22 refs., 28 figs., 21 tabs.

  15. Study on seismic hazard assessment of large active fault systems. Evolution of fault systems and associated geomorphic structures: fault model test and field survey

    International Nuclear Information System (INIS)

    Ueta, Keichi; Inoue, Daiei; Miyakoshi, Katsuyoshi; Miyagawa, Kimio; Miura, Daisuke

    2003-01-01

    Sandbox experiments and field surveys were performed to investigate fault system evolution and fault-related deformation of the ground surface, the Quaternary deposits and rocks. The summary of the results is shown below. 1) In the case of strike-slip faulting, the basic fault sequence runs from early en echelon faults and pressure ridges to a linear trough. The fault systems associated with the 2000 western Tottori earthquake show the en echelon pattern that characterizes the early stage of wrench tectonics; therefore, no through-going surface faulting was found above the rupture as defined by the main shock and aftershocks. 2) Both low-angle and high-angle reverse faults commonly migrate basinward with time. With increasing normal fault displacement in the bedrock, a normal fault develops within the range after a reverse fault has formed along the range front. 3) The horizontal distance of the surface rupture from the bedrock fault, normalized by the height of the Quaternary deposits, agrees well with that of the model tests. 4) An upward-widening damage zone, where secondary fractures develop, forms on the hanging-wall side of a high-angle reverse fault at the Kamioka mine. (author)

  16. Dynamics of intraoceanic subduction initiation : 1. Oceanic detachment fault inversion and the formation of supra-subduction zone ophiolites

    NARCIS (Netherlands)

    Maffione, Marco; Thieulot, Cedric; van Hinsbergen, Douwe J.J.; Morris, Antony; Plümper, Oliver; Spakman, Wim

    Subduction initiation is a critical link in the plate tectonic cycle. Intraoceanic subduction zones can form along transform faults and fracture zones, but how subduction nucleates parallel to mid-ocean ridges, as in e.g., the Neotethys Ocean during the Jurassic, remains a matter of debate. In

  17. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and I investigate its properties through numerical simulations. From the numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body and that the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was thus shown that which eigenvector of the gravity gradient tensor should be used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponds to conventional fault dip estimations from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
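
    The eigenvector computation itself is short once the profile tensor is assembled. The sketch below builds a made-up 2-D gravity gradient tensor (x horizontal, z down), takes its eigen-decomposition, and converts the maximum-eigenvalue eigenvector into a dip angle following the normal-fault convention described above; the tensor values are illustrative, not from the Kurehayama Fault analysis.

```python
# Dip estimate from the eigenvectors of a 2-D gravity gradient tensor on a profile.
# The tensor values are made up for illustration.
import numpy as np

gamma = np.array([[ 12.0, -35.0],      # [[G_xx, G_xz],
                  [-35.0, -12.0]])     #  [G_zx, G_zz]]  (Eotvos units)

vals, vecs = np.linalg.eigh(gamma)     # symmetric tensor: real eigenvalues, orthonormal vectors
v_max = vecs[:, np.argmax(vals)]       # points toward the high-density body (normal-fault case)
v_min = vecs[:, np.argmin(vals)]       # points toward the low-density body (reverse-fault case)

dip_from_max = np.degrees(np.arctan2(abs(v_max[1]), abs(v_max[0])))
print(f"dip estimate (normal-fault convention): {dip_from_max:.1f} deg")
```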

  18. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be completely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to have a statistically valid dataset we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.), in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether and at what stage the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  19. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is also analyzed. Based on this, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful for meeting the requirements of practical short-circuit calculations and for the construction of relay protection systems for power grids with DFIG penetration.
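
    In the synchronous dq frame the negative-sequence component appears as a ripple at twice the grid frequency, so a notch centred at 2f (100 Hz on a 50 Hz grid) leaves the positive-sequence (DC) part. A minimal offline sketch with SciPy follows; the filter parameters and test signal are assumptions rather than the paper's design, and a real controller would use an online filter rather than zero-phase filtfilt.

```python
# Extract the positive-sequence (DC) part of a dq-frame current that carries a
# 2x-grid-frequency ripple under an asymmetrical fault. Illustrative parameters only.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs, f_grid = 10_000.0, 50.0                    # sampling and grid frequencies, Hz
t = np.arange(0.0, 0.2, 1.0 / fs)

i_d = 1.0 + 0.4 * np.cos(2 * np.pi * 2 * f_grid * t + 0.3)   # DC + 100 Hz ripple

b, a = iirnotch(w0=2 * f_grid, Q=30.0, fs=fs)  # notch centred at 100 Hz
i_d_pos = filtfilt(b, a, i_d)                  # zero-phase filtering keeps the DC level

print(round(float(i_d_pos[len(i_d_pos) // 2]), 3))   # ≈ 1.0: positive-sequence d-axis value
```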

  20. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    DEFF Research Database (Denmark)

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI1 acquisition for efficient encoding of multiple chemical shifts within a large FOV with high...

  1. Geomechanical Modeling for Improved CO2 Storage Security

    Science.gov (United States)

    Rutqvist, J.; Rinaldi, A. P.; Cappa, F.; Jeanne, P.; Mazzoldi, A.; Urpi, L.; Vilarrasa, V.; Guglielmi, Y.

    2017-12-01

    This presentation summarizes recent modeling studies on geomechanical aspects related to Geologic Carbon Sequestration (GCS), including modeling potential fault reactivation, seismicity and CO2 leakage. The model simulations demonstrate that the potential for fault reactivation and the resulting seismic magnitude, as well as the potential for creating a leakage path through overburden sealing layers (caprock), depend on a number of parameters such as fault orientation, stress field, and rock properties. The model simulations further demonstrate that seismic events large enough to be felt by humans require brittle fault properties as well as continuous fault permeability allowing the pressure to be distributed over a large fault patch that is ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. Site-specific model simulations of the In Salah CO2 storage site showed that deep fractured zone responses and associated seismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. It is suggested that coupled geomechanical modeling be used to guide site selection, to assist in identifying locations most prone to unwanted and damaging geomechanical changes, and to evaluate the potential consequences of such unwanted geomechanical changes. Geomechanical modeling can be used to better estimate the maximum sustainable injection rate or reservoir pressure and thereby provide for improved CO2 storage security. Whether damaging geomechanical changes could actually occur very much depends on the local stress field and local reservoir properties such as the presence of ductile rock and faults (which can aseismically accommodate for the stress and strain induced by

  2. Improved security detection strategy in quantum secure direct communication protocol based on four-particle Green-Horne-Zeilinger state

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jian; Nie, Jin-Rui; Li, Rui-Fan [Beijing Univ. of Posts and Telecommunications, Beijing (China). School of Computer; Jing, Bo [Beijing Univ. of Posts and Telecommunications, Beijing (China). School of Computer; Beijing Institute of Applied Meteorology, Beijing (China). Dept. of Computer Science

    2012-06-15

    To enhance the efficiency of eavesdropping detection in the quantum secure direct communication protocol, an improved quantum secure direct communication protocol based on a four-particle Greenberger-Horne-Zeilinger (GHZ) state is presented. In the protocol, the four-particle GHZ state is used to detect eavesdroppers, and quantum dense coding is used to encode the message. In the security analysis, the method of entropy theory is introduced, and two detection strategies are compared quantitatively by using the constraint between the information that the eavesdroppers can obtain and the interference that has been introduced. If the eavesdropper wants to obtain all the information, the detection rate of the quantum secure direct communication using an Einstein-Podolsky-Rosen (EPR) pair block will be 50% and the detection rate of the presented protocol will be 87%. Finally, the security of the proposed protocol is discussed. The analysis results indicate that the proposed protocol is more secure than the others. (orig.)

  3. Cryptanalysis on a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Guo Wei; Wang Xiaoming; He Dake; Cao Yang

    2009-01-01

    This Letter analyzes the security of a novel parallel keyed hash function based on chaotic maps, proposed by Xiao et al. to improve efficiency in a parallel computing environment. We first show how to devise forgery attacks on Xiao's scheme using differential cryptanalysis and give experimental results for two kinds of forgery attacks. Furthermore, we discuss the problem of weak keys in the scheme and demonstrate how to utilize weak keys to construct collisions.

  4. CRISP. Simulation tool for fault detection and diagnostics in high-DG power networks

    International Nuclear Information System (INIS)

    Fontela, M.; Andrieu, C.; Raison, B.

    2004-08-01

    This document gives a description of a tool proposed for fault detection and diagnostics. The main principles of the fault localization functions are described and detailed for a given MV network that will be used for the ICT experiment in Grenoble (experiment 3B). The aim of the tool is to create a technical, simple and realistic context for testing ICT dedicated to an electrical application. The tool gives the expected input and output contents of the various distributed ICT components when a fault occurs in a given MV network. The requirements for the ICT components are thus given in terms of the expected data collected, analysed and transmitted. Several examples are given in order to illustrate the inputs/outputs in the case of different faults. The tool includes a topology description, which is a key aspect to be developed further for managing the distribution network. Updating the topology in real time will become necessary for fault diagnosis and protection, but also for various possible added applications (local market balance and local electrical power quality, for instance). The tool gives a context and a simple view of the ICT components' behaviour, assuming an ideal response and transmission from them. The real characteristics and possible limitations of the ICT (information latency, congestion, security) will be established during the experiments from the same context described in the HTFD tool

  5. Composite faults in the Swiss Alps formed by the interplay of tectonics, gravitation and postglacial rebound: an integrated field and modelling study

    International Nuclear Information System (INIS)

    Ustaszewski, M. E.; Pfiffner, A.; Hampel, A.

    2008-01-01

    Along the flanks of several valleys in the Swiss Alps, well-preserved fault scarps occur between 1900 and 2400 m altitude, which reveal uplift of the valley-side block relative to the mountain-side block. The height of these uphill-facing scarps varies between 0.5 m and more than 10 m along strike of the fault traces, which usually trend parallel to the valley axes. The formation of the scarps is generally attributed either to tectonic movements or gravitational slope instabilities. Here we combine field data and numerical experiments to show that the scarps may be of composite origin, i.e. that tectonic and gravitational processes as well as postglacial differential uplift may have contributed to their formation. Tectonic displacement may occur as the fault scarps run parallel to older tectonic faults. The tectonic component seems, however, to be minor as the studied valleys lack seismic activity. A large gravitational component, which is feasible owing to the steep dip of the schistosity and lithologic boundaries in the studied valleys, is indicated by the uneven morphology of the scarps, which is typical of slope movements. Postglacial differential uplift of the valley floor with respect to the summits provides a third feasible mechanism for scarp formation, as the scarps are postglacial in age and occur on the flanks of valleys that were filled with ice during the last glacial maximum. Finite-element experiments show that postglacial unloading and rebound can initiate slip on steeply dipping pre-existing weak zones and explain part of the observed scarp height. From our field and modelling results we conclude that the formation of uphill-facing scarps is primarily promoted by a steeply dipping schistosity striking parallel to the valley axes and, in addition, by mechanically weaker rocks in the valley with respect to the summits. Our findings imply that the identification of surface expressions related to active faults can be hindered by similar morphologic

  6. Nuclear security: Then and now

    International Nuclear Information System (INIS)

    Weinstein, A.A.

    1992-01-01

    The evolution of computerized security systems at nuclear power plants has been driven by both the enhancements in computer technology and the changes in regulatory requirements over time. Technical advancements have simplified the essential nature of these systems in both real-time and data processing operations. Regulatory developments have caused a similar trend in simplification. This article addresses the computer and data acquisition portions of a security system and not the access control hardware, intrusion detection sensors, or surveillance equipment, other than to indicate how functional improvements in these areas have been achieved as systems have developed. The state of technology today includes the availability of fault-tolerant computers, the practice of networking multiple computers, and the standardization of real-time data network communications. These factors make two things possible in a plant security system. One is distributed processing, with rapid alarm annunciation (less than 1 second), essentially immediate response to access requests (less than 1 second), and an expeditious and comprehensive reporting capability. The other is permitting different plant operations (security, radiation protection, operator tours) to achieve economies by sharing the same network while using independent computers and avoiding operational conflicts

  7. An Efficient and Secure Arbitrary N-Party Quantum Key Agreement Protocol Using Bell States

    Science.gov (United States)

    Liu, Wen-Jie; Xu, Yong; Yang, Ching-Nung; Gao, Pei-Pei; Yu, Wen-Bin

    2018-01-01

    Two quantum key agreement protocols using Bell states and Bell measurement were recently proposed by Shukla et al. (Quantum Inf. Process. 13(11), 2391-2405, 2014). However, Zhu et al. pointed out that there are some security flaws and proposed an improved version (Quantum Inf. Process. 14(11), 4245-4254, 2015). In this study, we will show Zhu et al.'s improvement still exists some security problems, and its efficiency is not high enough. For solving these problems, we utilize four Pauli operations { I, Z, X, Y} to encode two bits instead of the original two operations { I, X} to encode one bit, and then propose an efficient and secure arbitrary N-party quantum key agreement protocol. In the protocol, the channel checking with decoy single photons is introduced to avoid the eavesdropper's flip attack, and a post-measurement mechanism is used to prevent against the collusion attack. The security analysis shows the present protocol can guarantee the correctness, security, privacy and fairness of quantum key agreement.
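
    The dense-coding step referenced above (one of four Pauli operations applied to one half of a Bell pair encodes two classical bits, recovered by a Bell-basis measurement) can be checked with a few lines of linear algebra. The state-vector sketch below is illustrative only and omits the protocol's decoy-photon channel checking and post-measurement mechanism.

```python
# Dense coding with the four Pauli operations {I, Z, X, Y} on one qubit of |Phi+>.
# Pure numpy state-vector illustration (basis order |00>, |01>, |10>, |11>).
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z                                   # equals the standard Pauli Y

pauli_for_bits = {(0, 0): I, (0, 1): Z, (1, 0): X, (1, 1): Y}

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |Phi+> Bell state

# Bell basis as columns: |Phi+>, |Phi->, |Psi+>, |Psi->
bell = np.column_stack([[1, 0, 0, 1], [1, 0, 0, -1],
                        [0, 1, 1, 0], [0, 1, -1, 0]]).astype(complex) / np.sqrt(2)

for bits, P in pauli_for_bits.items():
    encoded = np.kron(P, I) @ phi_plus           # sender acts on her qubit only
    probs = np.abs(bell.conj().T @ encoded) ** 2 # Bell-basis measurement probabilities
    print(bits, "->", ["Phi+", "Phi-", "Psi+", "Psi-"][int(np.argmax(probs))])
```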

  8. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and the weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.
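
    The weighted pseudo-inverse allocation mentioned above has the closed form u = W^-1 B^T (B W^-1 B^T)^-1 tau, which minimizes u^T W u subject to B u = tau. The sketch below applies it to a hypothetical four-thruster layout and de-weights one degraded thruster; the geometry, weights and commanded forces are assumptions, not the paper's model.

```python
# Control allocation by weighted pseudo-inverse over redundant thrusters
# (hypothetical thruster geometry and health weights).
import numpy as np

# Thruster configuration matrix B (rows: surge force, sway force, yaw moment).
B = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.4, -0.4, 0.6, -0.6]])

tau = np.array([20.0, 5.0, 2.0])          # commanded generalized forces/moment

def allocate(B, tau, health):
    """u = W^-1 B^T (B W^-1 B^T)^-1 tau, with W penalizing unhealthy thrusters
    (W^-1 = diag(health): healthy = 1.0, degraded -> small)."""
    W_inv = np.diag(health)
    return W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, tau)

u_nominal = allocate(B, tau, health=np.array([1.0, 1.0, 1.0, 1.0]))
u_faulty  = allocate(B, tau, health=np.array([1.0, 0.2, 1.0, 1.0]))  # thruster 2 degraded

print(np.round(u_faulty, 2), np.allclose(B @ u_faulty, tau))   # tau still achieved
```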

  9. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These ...

  10. Decision Optimization for Power Grid Operating Conditions with High- and Low-Voltage Parallel Loops

    Directory of Open Access Journals (Sweden)

    Dong Yang

    2017-05-01

    Full Text Available With the development of higher-voltage power grids, high- and low-voltage parallel loops are emerging, which lead to energy losses and even threaten the security and stability of power systems. The multi-infeed high-voltage direct current (HVDC) configurations widely appearing in AC/DC interconnected power systems make this situation even worse. Aimed at energy saving and system security, a decision optimization method for power grid operating conditions with high- and low-voltage parallel loops is proposed in this paper. Firstly, considering hub substation distribution and power grid structure, parallel loop opening schemes are generated with the GN (Girvan-Newman) algorithm. Then, candidate opening schemes are preliminarily selected from all these generated schemes based on a filtering index. Finally, with the influence on power system security, stability and operation economy taken into consideration, an evaluation model for candidate opening schemes is established based on the analytic hierarchy process (AHP), and a fuzzy evaluation algorithm is used to find the optimal scheme. Simulation results of a New England 39-bus system and an actual power system validate the effectiveness and superiority of the proposed method.
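
    A minimal sketch of the AHP weighting step referred to above, with an illustrative 3×3 pairwise comparison matrix (the criteria names and judgment values are assumptions, not taken from the paper): the weights are the normalized principal eigenvector, and the consistency ratio checks the judgments.

```python
import numpy as np

# Illustrative pairwise comparison of three criteria: security, stability, economy.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)     # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
CR = CI / RI                             # consistency ratio; CR < 0.1 is acceptable

print("weights:", np.round(w, 3), " CR:", round(CR, 3))
# The overall score of a candidate opening scheme would then be the weighted sum
# of its (fuzzy) evaluation scores on each criterion.
```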

  11. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

  12. Single-Shot MR Spectroscopic Imaging with Partial Parallel Imaging

    Science.gov (United States)

    Posse, Stefan; Otazo, Ricardo; Tsai, Shang-Yueh; Yoshimoto, Akio Ernesto; Lin, Fa-Hsuan

    2010-01-01

    An MR spectroscopic imaging (MRSI) pulse sequence based on Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI) is introduced that measures 2-dimensional metabolite maps in a single excitation. Echo-planar spatial-spectral encoding was combined with interleaved phase encoding and parallel imaging using SENSE to reconstruct absorption mode spectra. The symmetrical k-space trajectory compensates phase errors due to convolution of spatial and spectral encoding. Single-shot MRSI at short TE was evaluated in phantoms and in vivo on a 3 T whole body scanner equipped with a 12-channel array coil. Four-step interleaved phase encoding and 4-fold SENSE acceleration were used to encode a 16×16 spatial matrix with 390 Hz spectral width. Comparison with conventional PEPSI and PEPSI with 4-fold SENSE acceleration demonstrated comparable sensitivity per unit time when taking into account g-factor related noise increases and differences in sampling efficiency. LCModel fitting enabled quantification of Inositol, Choline, Creatine and NAA in vivo with concentration values in the ranges measured with conventional PEPSI and SENSE-accelerated PEPSI. Cramer-Rao lower bounds were comparable to those obtained with conventional SENSE-accelerated PEPSI at the same voxel size and measurement time. This single-shot MRSI method is therefore suitable for applications that require high temporal resolution to monitor temporal dynamics or to reduce sensitivity to tissue movement. PMID:19097245
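
    For readers unfamiliar with the SENSE step used here, the following is a hedged, synthetic-data sketch of the core unfolding operation (not the PEPSI reconstruction code): R aliased pixel values are separated by least squares using the coil sensitivities, and the g-factor quantifies the associated noise amplification.

```python
import numpy as np

rng = np.random.default_rng(0)
R, n_coils = 4, 12                     # acceleration factor, number of coils

# Complex coil sensitivities of the R pixels that alias onto one location,
# and the true pixel values (both synthetic).
S = rng.normal(size=(n_coils, R)) + 1j * rng.normal(size=(n_coils, R))
rho_true = rng.normal(size=R) + 1j * rng.normal(size=R)

a = S @ rho_true                       # aliased signal seen by each coil (noiseless)

# SENSE unfolding: least-squares solution rho = (S^H S)^-1 S^H a
# (with a noise covariance Psi one would weight by Psi^-1).
rho_hat, *_ = np.linalg.lstsq(S, a, rcond=None)

# g-factor per pixel: sqrt([ (S^H S)^-1 ]_pp [ S^H S ]_pp ), the noise amplification.
SHS = S.conj().T @ S
g = np.sqrt(np.abs(np.diag(np.linalg.inv(SHS)) * np.diag(SHS)))

assert np.allclose(rho_hat, rho_true)
print("g-factors:", np.round(g, 2))
```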

  13. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  14. Data-driven design of fault diagnosis and fault-tolerant control systems

    CERN Document Server

    Ding, Steven X

    2014-01-01

    Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

  15. Controlled and secure direct communication using GHZ state and teleportation

    International Nuclear Information System (INIS)

    Gao, T.

    2004-01-01

    A theoretical scheme for controlled and secure direct communication is proposed. The communication is based on the GHZ state and controlled quantum teleportation. After ensuring the security of the quantum channel (a set of qubits in the GHZ state), Alice encodes the secret message directly on a sequence of particle states in the GHZ state and transmits them to Bob, supervised by Charlie, using controlled quantum teleportation. Bob can read out the encoded messages directly by measurement on his qubits. In this scheme, controlled quantum teleportation transmits Alice's message without revealing any information to a potential eavesdropper. Because the qubits carrying the secret messages are not transmitted between Alice and Bob over the public channel, the scheme is completely secure for controlled and direct secret communication if a perfect quantum channel is used. The feature of this scheme is that the communication between the two sides depends on the agreement of a third side. (orig.)

  16. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for a multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.

  17. Place field assembly distribution encodes preferred locations.

    Directory of Open Access Journals (Sweden)

    Omar Mamad

    2017-09-01

    Full Text Available The hippocampus is the main locus of episodic memory formation and the neurons there encode the spatial map of the environment. Hippocampal place cells represent location, but their role in the learning of preferential location remains unclear. The hippocampus may encode locations independently from the stimuli and events that are associated with these locations. We have discovered a unique population code for the experience-dependent value of the context. The degree of reward-driven navigation preference highly correlates with the spatial distribution of the place fields recorded in the CA1 region of the hippocampus. We show place field clustering towards rewarded locations. Optogenetic manipulation of the ventral tegmental area demonstrates that the experience-dependent place field assembly distribution is directed by tegmental dopaminergic activity. The ability of the place cells to remap parallels the acquisition of reward context. Our findings present key evidence that the hippocampal neurons are not merely mapping the static environment but also store the concurrent context reward value, enabling episodic memory for past experience to support future adaptive behavior.

  18. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controllers) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moments, and the result of fault identification is used to find the solution of the control allocation problem. The approach addresses continuous fault identification for the UV. Results from the experiment are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  19. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2009-12-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controllers) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moments, and the result of fault identification is used to find the solution of the control allocation problem. The approach addresses continuous fault identification for the UV. Results from the experiment are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  20. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  1. Pulse-Like Rupture Induced by Three-Dimensional Fault Zone Flower Structures

    KAUST Repository

    Pelties, Christian

    2014-07-04

    © 2014, Springer Basel. Faults are often embedded in low-velocity fault zones (LVFZ) caused by material damage. Previous 2D dynamic rupture simulations (Huang and Ampuero, 2011; Huang et al., 2014) showed that if the wave velocity contrast between the LVFZ and the country rock is strong enough, ruptures can behave as pulses, i.e. with local slip duration (rise time) much shorter than whole rupture duration. Local slip arrest (healing) is generated by waves reflected from the LVFZ–country rock interface. This effect is robust against a wide range of fault zone widths, absence of frictional healing, variation of initial stress conditions, attenuation, and off-fault plasticity. These numerical studies covered two-dimensional problems with fault-parallel fault zone structures. Here, we extend previous work to 3D and geometries that are more typical of natural fault zones, including complexities such as flower structures with depth-dependent velocity and thickness, and limited fault zone depth extent. This investigation requires high resolution and flexible mesh generation, which are enabled here by the high-order accurate arbitrary high-order derivatives discontinuous Galerkin method with an unstructured tetrahedral element discretization (Pelties et al., 2012). We show that the healing mechanism induced by waves reflected in the LVFZ also operates efficiently in such three-dimensional fault zone structures and that, in addition, a new healing mechanism is induced by unloading waves generated when the rupture reaches the surface. The first mechanism leads to very short rise time controlled by the LVFZ width to wave speed ratio. The second mechanism leads to generally longer, depth-increasing rise times, is also conditioned by the existence of an LVFZ, and persists at some depth below the bottom of the LVFZ. Our simulations show that the generation of slip pulses by these two mechanisms is robust to the depth extent of the LVFZ and to the position of the hypocenter.

  2. Multiparty Quantum Secret Sharing of Secure Direct Communication Using Teleportation

    International Nuclear Information System (INIS)

    Wang Jian; Zhang Quan; Tang Chaojing

    2007-01-01

    We present an (n,n) threshold quantum secret sharing scheme of secure direct communication using the Greenberger-Horne-Zeilinger state and teleportation. After ensuring the security of the quantum channel, the sender encodes the secret message directly on a sequence of particle states and transmits it to the receivers by teleportation. The receivers can recover the secret message by combining their measurement results with the sender's result. If a perfect quantum channel is used, our scheme is completely secure because the transmitting particle sequence does not carry the secret message. We also show our scheme is secure for a noisy quantum channel.

  3. Multi-type Tectonic Responses to Plate Motion Changes of Mega-Offset Transform Faults at the Pacific-Antarctic Ridge

    Science.gov (United States)

    Zhang, F.; Lin, J.; Yang, H.; Zhou, Z.

    2017-12-01

    Magmatic and tectonic responses of a mid-ocean ridge system to plate motion changes can provide important constraints on the mechanisms of ridge-transform interaction and lithospheric properties. Here we present new analysis of multi-type responses of the mega-offset transform faults at the Pacific-Antarctic Ridge (PAR) system to plate motion changes in the last 12 Ma. Detailed analysis of the Heezen, Tharp, and Udintsev transform faults showed that the extensional stresses induced by plate motion changes could have been released through a combination of magmatic and tectonic processes: (1) For a number of ridge segments with abundant magma supply, plate motion changes might have caused the lateral transport of magma along the ridge axis and into the abutting transform valley, forming curved "hook" ridges at the ridge-transform intersection. (2) Plate motion changes might also have caused vertical deformation on steeply-dipping transtensional faults that were developed along the Heezen, Tharp, and Udintsev transform faults. (3) Distinct zones of intensive tectonic deformation, resembling belts of "rift zones", were found to be sub-parallel to the investigated transform faults. These rift-like deformation zones were hypothesized to have developed when the stresses required to drive the vertical deformation on the steeply-dipping transtensional faults along the transform faults become excessive, and thus deformation on off-transform "rift zones" became favored. (4) However, to explain the observed large offsets on the steeply-dipping transtensional faults, the transform faults must be relatively weak with a low apparent friction coefficient compared to the adjacent lithospheric plates.

  4. Method for making an improved magnetic encoding device

    Science.gov (United States)

    Fox, Richard J.

    1981-01-01

    A magnetic encoding device and method for making the same are provided for use as magnetic storage media in identification control applications which give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot-welded to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications.

  5. A Design Method for Fault Reconfiguration and Fault-Tolerant Control of a Servo Motor

    Directory of Open Access Journals (Sweden)

    Jing He

    2013-01-01

    Full Text Available A design scheme that integrates fault reconfiguration and fault-tolerant position control is proposed for a nonlinear servo system with friction. Analysis of the non-linear friction torque and fault in the system is used to guide the design of a sliding mode position controller. A sliding mode observer is designed to achieve fault reconfiguration based on the equivalence principle. Thus, active fault-tolerant position control of the system can be realized. A real-time simulation experiment is performed on a hardware-in-the-loop simulation platform. The results show that the system reconfigures well for both incipient and abrupt faults. Under the fault-tolerant control mechanism, the output signal for the system position can rapidly track given values without being influenced by faults.

  6. Active Fault Isolation in MIMO Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation. Here, a group of faults is isolated from other possible faults in the system. The group-wise fault isolation is based directly on the input/output signals applied for the fault detection. It is guaranteed that the fault group includes the fault that had occurred in the system. The second step is individual fault isolation in the fault group. Both types of isolation are obtained by applying dedicated...

  7. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang L Y; Wang J G; HU, J

    2017-01-01

    For the fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter, the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes based on variational mode decomposition. Then, the fault features of each mode are formed into a high-dimensional feature vector set based on permutation entropy. Second, the ELM output function is expressed by the inner product of the Gauss kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
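
    A short, hedged sketch of the permutation entropy feature mentioned above (a standard ordinal-pattern implementation, not the authors' code; the embedding order and delay are illustrative choices):

```python
import numpy as np
from collections import Counter
from math import factorial, log

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Ordinal-pattern (permutation) entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (order - 1) * delay
    counts = Counter()
    for i in range(n_windows):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1      # ordinal pattern of this window
    p = np.array(list(counts.values()), dtype=float) / n_windows
    H = -np.sum(p * np.log(p))
    return H / log(factorial(order)) if normalize else H

# A regular signal has low permutation entropy; broadband noise is close to 1.
t = np.linspace(0.0, 1.0, 2000)
print(round(permutation_entropy(np.sin(2 * np.pi * 50 * t)), 3))
print(round(permutation_entropy(np.random.default_rng(1).normal(size=2000)), 3))
```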

  8. Dynamic rupture simulation of the 2016 Mw 7.8 Kaikoura (New Zealand) earthquake: Is spontaneous multi-fault rupture expected?

    Science.gov (United States)

    Ando, R.; Kaneko, Y.

    2017-12-01

    The coseismic rupture of the 2016 Kaikoura earthquake propagated over a distance of 150 km along the NE-SW striking fault system in the northern South Island of New Zealand. The analysis of InSAR, GPS and field observations (Hamling et al., 2017) revealed that most of the rupture occurred along previously mapped active faults, involving more than seven major fault segments. These fault segments, mostly dipping to the northwest, are distributed in a quite complex manner, manifested by fault branching and step-over structures. Back-projection rupture imaging shows that the rupture appears to jump between three sub-parallel fault segments in sequence from south to north (Kaiser et al., 2017). The rupture seems to have terminated on the Needles fault in Cook Strait. One of the main questions is whether this multi-fault rupture can be naturally explained on a physical basis. In order to understand the conditions responsible for the complex rupture process, we conduct fully dynamic rupture simulations that account for 3-D non-planar fault geometry embedded in an elastic half-space. The fault geometry is constrained by previous InSAR observations and geological inferences. The regional stress field is constrained by the result of stress tensor inversion based on focal mechanisms (Balfour et al., 2005). The fault is governed by a relatively simple, slip-weakening friction law. For simplicity, the frictional parameters are uniformly distributed as there is no direct estimate of them except for a shallow portion of the Kekerengu fault (Kaneko et al., 2017). Our simulations show that the rupture can indeed propagate through the complex fault system once it is nucleated at the southernmost segment. The simulated slip distribution is quite heterogeneous, reflecting the nature of non-planar fault geometry, fault branching and step-over structures. We find that optimally oriented faults exhibit larger slip, which is consistent with the slip model of Hamling et al

  9. Single-shot imaging with higher-dimensional encoding using magnetic field monitoring and concomitant field correction.

    Science.gov (United States)

    Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2015-03-01

    PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.

  10. Diffractive generalized phase contrast for adaptive phase imaging and optical security

    DEFF Research Database (Denmark)

    Palima, Darwin; Glückstad, Jesper

    2012-01-01

    We analyze the properties of Generalized Phase Contrast (GPC) when the input phase modulation is implemented using diffractive gratings. In GPC applications for patterned illumination, the use of a dynamic diffractive optical element for encoding the GPC input phase allows for on-the-fly optimization... security applications and can be used to create phase-based information channels for enhanced information security.

  11. Guaranteed Cost Fault-Tolerant Control for Networked Control Systems with Sensor Faults

    Directory of Open Access Journals (Sweden)

    Qixin Zhu

    2015-01-01

    Full Text Available Due to the large scale and complicated structure of networked control systems, time-varying sensor faults can inevitably occur when the system works in a poor environment. A guaranteed cost fault-tolerant controller for networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control systems with sensor faults are modeled as a discrete-time system with uncertain parameters, and the model is related to the boundary values of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is verified to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.
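
    As a hedged sketch of the kind of LMI feasibility problem that underlies such designs (a plain discrete-time quadratic stability check, not the paper's guaranteed cost LMI; the closed-loop matrix is illustrative and the cvxpy library is assumed):

```python
import numpy as np
import cvxpy as cp

A_cl = np.array([[0.8, 0.3],
                 [-0.2, 0.7]])           # illustrative closed-loop system matrix
n = A_cl.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A_cl.T @ P @ A_cl - P << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)           # 'optimal' means a Lyapunov matrix P > 0 exists,
print(np.round(P.value, 3))  # i.e. the closed loop is (quadratically) stable
```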

  12. Testing Pixel Translation Digital Elevation Models to Reconstruct Slip Histories: An Example from the Agua Blanca Fault, Baja California, Mexico

    Science.gov (United States)

    Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.

    2012-12-01

    approximately equal to that to the east. The ABF has varying kinematics along strike due to changes in trend of the fault with respect to the nearly east-trending displacement vector of the Ensenada Block to the north of the fault relative to a stable Baja Microplate to the south. These kinematics include nearly pure strike slip in the central portion of the ABF where the fault trends nearly E-W, and minor components of normal dip-slip motion on the NABF and eastern sections of the fault where the trends become more northerly. A pixel translation vector parallel to the trend of the ABF in the central segment (290 deg, 10.5 km) produces kinematics consistent with those described above. The block between the NABF and STF has a pixel translation vector parallel to the STF (291 deg, 3.5 km). We find these vectors are consistent with the kinematic variability of the fault system and realign several major drainages and ridges across the fault. This suggests these features formed prior to faulting, and they yield preferred values of offset: 10.5 km on the ABF, 7 km on the NABF and 3.5 km on the STF. This model is consistent with the kinematic model proposed by Hamilton (1971) in which the ABF is a transform fault, linking extensional regions of Valle San Felipe and the Continental Borderlands.

  13. Thermal-hydraulic modeling of deaerator and fault detection and diagnosis of measurement sensor

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Park, Jae Chang; Kim, Jung Taek; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2003-05-01

    It is important to note that an effective means to assure the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop model-based fault detection and diagnosis (FDD) algorithms for the deaerator and evaluate the performance of the developed algorithms. The scope of the work can be classified into two categories. The first is a state-space model-based FDD algorithm using an adaptive estimator (AE). The second is an input-output model-based FDD algorithm using an ART neural network. Extensive computer simulations for the real data obtained from Younggwang 3 and 4 FSAR are carried out to evaluate the performance in terms of speed and accuracy.
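
    A minimal, hedged sketch of the state-space model-based FDD idea (a generic observer residual check with an illustrative two-state model and an assumed gain, not the project's AE algorithm):

```python
import numpy as np

A = np.array([[0.95, 0.05], [0.0, 0.90]])   # illustrative two-state plant model
C = np.array([[1.0, 0.0]])                   # single measured output
L = np.array([[0.4], [0.1]])                 # observer gain (assumed precomputed)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5])                     # plant state
x_hat = x.copy()                             # observer state (initialized at x)
threshold = 0.2
residuals = []

for k in range(100):
    y = C @ x + 0.01 * rng.normal()          # measurement with small noise
    if k >= 60:
        y = y + 0.5                          # additive sensor bias fault at k = 60
    r = y - C @ x_hat                        # output residual
    residuals.append(abs(r.item()))
    x_hat = A @ x_hat + L @ r                # observer update
    x = A @ x                                # plant update

detected_at = next(k for k, r in enumerate(residuals) if r > threshold)
print("sensor fault flagged at step", detected_at)   # expected: near k = 60
```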

  14. Exploring Hardware-Based Primitives to Enhance Parallel Security Monitoring in a Novel Computing Architecture

    National Research Council Canada - National Science Library

    Mott, Stephen

    2007-01-01

    This research explores how hardware-based primitives can be implemented to perform security-related monitoring in real-time, offer better security, and increase performance compared to software-based approaches...

  15. Parallel iterative decoding of transform domain Wyner-Ziv video using cross bitplane correlation

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    In recent years, Transform Domain Wyner-Ziv (TDWZ) video coding has been proposed as an efficient Distributed Video Coding (DVC) solution, which fully or partly exploits the source statistics at the decoder to reduce the computational burden at the encoder. In this paper, a parallel iterative LDPC decoding scheme is proposed to improve the coding efficiency of TDWZ video codecs. The proposed parallel iterative LDPC decoding scheme is able to utilize cross bitplane correlation during decoding, by iteratively refining the soft-input, updating a modeled noise distribution and thereafter enhancing...

  16. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    Science.gov (United States)

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  17. Vipava fault (Slovenia)

    Directory of Open Access Journals (Sweden)

    Ladislav Placer

    2008-06-01

    Full Text Available During mapping of the already accomplished Razdrto – Senožeče section of motorway and geologic surveying of construction operations of the trunk road between Razdrto and Vipava in the northwestern part of the External Dinarides on the southwestern slope of Mt. Nanos, called Rebrnice, a steep NW-SE striking fault was recognized, situated between the Predjama and the Raša faults. The fault was named the Vipava fault after the town of Vipava. An analysis of subrecent gravitational slips at Rebrnice indicates that they were probably associated with the activity of this fault. Unpublished results of a repeated levelling line along the regional road passing across the Vipava fault zone suggest its possible present activity. It would be meaningful to verify this by appropriate geodetic measurements, and to study the actual gravitational slips at Rebrnice. The association between tectonics and gravitational slips in this and in similar extreme cases in the areas of the Alps and Dinarides points to the need for complex studies of geologic processes.

  18. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun

    1994-02-01

    In this work, the Fuzzy Signed Digraph (FSD) method which has been researched for the fault diagnosis of industrial process plant systems is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious faults elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose the multi-faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independency of fault pattern, direct use of sensor values, and transparency of the fault propagation to the operators.

  19. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1994-01-01

    In this work, the Fuzzy Signed Digraph (FSD) method which has been researched for the fault diagnosis of industrial process plant systems is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious faults elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose the multi-faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independency of fault pattern, direct use of sensor values, and transparency of the fault propagation to the operators. (Author)

  20. Typing and Compositionality for Security Protocols: A Generalization to the Geometric Fragment (Extended Version)

    DEFF Research Database (Denmark)

    Almousa, Omar; Mödersheim, Sebastian Alexander; Modesti, Paolo

    We integrate, and improve upon, prior relative soundness results of two kinds. The first kind are typing results showing that if any security protocol that fulfils a number of sufficient conditions has an attack then it has a well-typed attack. The second kind considers the parallel composition of protocols, showing that when running two protocols in parallel allows for an attack, then at least one of the protocols has an attack in isolation. The most important generalization over previous work is the support for all security properties of the geometric fragment.

  1. Sensorless Control of Late-Stage Offshore DFIG-WT with FSTP Converters by Using EKF to Ride through Hybrid Faults

    Directory of Open Access Journals (Sweden)

    Wei Li

    2017-11-01

    Full Text Available A hybrid fault scenario in a late-stage offshore doubly-fed induction generator (DFIG)-based wind turbine (DFIG-WT) with a converter open-circuit fault and a position sensor failure is investigated in this paper. An extended Kalman filter (EKF)-based sensorless control strategy is utilized to eliminate the encoder. Based on a detailed analysis of the seventh-order dynamic state-space model of the DFIG, along with the input voltage signals and measured current signals, the EKF algorithm for the DFIG is designed to estimate the rotor speed and position. In addition, a bridge arm open circuit in the back-to-back (BTB) power converter of the DFIG is taken as a commonly encountered fault due to the fragility of semiconductor switches. Four-switch three-phase (FSTP) topology-based fault-tolerant converters are employed for post-fault operation, considering the minimization of switching losses and the reduction of circuit complexity. Moreover, a simplified space vector pulse width modulation (SVPWM) technique is proposed to reduce the computational burden, and a voltage balancing scheme is put forward to increase the DC-bus voltage utilization rate. Simulation studies are carried out in MATLAB/Simulink 2017a (MathWorks, Natick, MA, USA) to demonstrate the validity of the proposed hybrid fault-tolerant strategy for the DFIG-WT, with wind speed fluctuation, measurement noises and grid voltage sag taken into consideration.
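
    A hedged sketch of the EKF machinery referred to above (a generic predict/update cycle with a toy angle/speed model standing in for the seventh-order DFIG model; all parameter values are illustrative):

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter predict/update cycle."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    H = H_jac(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy model: estimate [theta, omega] from noisy measurements of theta only.
dt = 1e-3
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
h = lambda x: np.array([x[0]])
F_jac = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
H_jac = lambda x: np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[1e-3]])

rng = np.random.default_rng(0)
x_est, P = np.zeros(2), np.diag([1.0, 1e4])    # large initial uncertainty on omega
theta_true, omega_true = 0.0, 100.0
for _ in range(500):
    theta_true += dt * omega_true
    z = np.array([theta_true + rng.normal(scale=0.03)])
    x_est, P = ekf_step(x_est, P, z, f, h, F_jac, H_jac, Q, R)

print("estimated speed:", round(float(x_est[1]), 1), "rad/s (true 100.0)")
```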

  2. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models of discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  3. Optimal design of superconducting fault detector for superconductor triggered fault current limiters

    International Nuclear Information System (INIS)

    Yim, S.-W.; Kim, H.-R.; Hyun, O.-B.; Sim, J.; Park, K.B.; Lee, B.W.

    2008-01-01

    We have designed and tested a superconducting fault detector (SFD) for a 22.9 kV superconductor triggered fault current limiter (STFCL) using Au/YBCO thin films. The SFD is to detect a fault and commutate the current from the primary path to the secondary path of the STFCL. First, quench characteristics of the Au/YBCO thin films were investigated for various faults having different fault durations. The rated voltage of the Au/YBCO thin films was determined from the results, considering the stability of the Au/YBCO elements. Second, the recovery time to superconductivity after quench was measured in each fault case. In addition, the dependence of the recovery characteristics on the number and dimensions of Au/YBCO elements was investigated. Based on the results, an SFD was designed, fabricated and tested. The SFD successfully detected a fault current and carried out the line commutation. Its recovery time was confirmed to be less than 0.5 s, satisfying the reclosing scheme in the Korea Electric Power Corporation (KEPCO)'s power grid.

  4. Off-fault tip splay networks: a genetic and generic property of faults indicative of their long-term propagation, and a major component of off-fault damage

    Science.gov (United States)

    Perrin, C.; Manighetti, I.; Gaudemer, Y.

    2015-12-01

    Faults grow over the long term by accumulating displacement and lengthening, i.e., propagating laterally. We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimeters to thousands of kilometers and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of parent fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (~30 and ~10 % of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). Tip splays more commonly develop on one side only of the parent fault. We infer that tip splay networks are a genetic and generic property of faults indicative of their long-term propagation. We suggest that they represent the most recent damage off the parent fault, formed during the most recent phase of fault lengthening. The scaling relation between parent fault length and width of the tip splay network implies that damage zones enlarge as parent fault length increases. Elastic properties of host rocks might thus be modified at large distances away from a fault, up to 10% of its length. During an earthquake, a significant fraction of coseismic slip and stress is dissipated into the permanent damage zone that surrounds the causative fault. We infer that coseismic dissipation might occur away from a rupture zone as far as a distance of 10% of the length of its causative fault. Coseismic deformations and stress transfers might thus be significant in broad regions about principal rupture traces. This work has been published in Comptes Rendus Geoscience under doi:10.1016/j.crte.2015.05.002 (http://www.sciencedirect.com/science/article/pii/S1631071315000528).

  5. Passive and partially active fault tolerance for massively parallel stream processing engines

    DEFF Research Database (Denmark)

    Su, Li; Zhou, Yongluan

    2018-01-01

    . On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE...... also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness...

  6. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    Science.gov (United States)

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Secure quantum key distribution using squeezed states

    International Nuclear Information System (INIS)

    Gottesman, Daniel; Preskill, John

    2001-01-01

    We prove the security of a quantum key distribution scheme based on transmission of squeezed quantum states of a harmonic oscillator. Our proof employs quantum error-correcting codes that encode a finite-dimensional quantum system in the infinite-dimensional Hilbert space of an oscillator, and protect against errors that shift the canonical variables p and q. If the noise in the quantum channel is weak, squeezing signal states by 2.51 dB (a squeeze factor e^r = 1.34) is sufficient in principle to ensure the security of a protocol that is suitably enhanced by classical error correction and privacy amplification. Secure key distribution can be achieved over distances comparable to the attenuation length of the quantum channel.
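
    As a hedged arithmetic check of the numbers quoted above (assuming the common convention that squeezing in decibels is 10 log10(e^{2r})):

```python
import math

e_r = 1.34                                         # squeeze factor e^r quoted above
r = math.log(e_r)
squeezing_db = 10 * math.log10(math.exp(2 * r))    # = (20 / ln 10) * r ≈ 8.686 * r
print(round(squeezing_db, 2))                      # ≈ 2.5 dB, matching the ~2.51 dB figure
```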

  8. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    Full Text Available Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones which were collected by long-term detailed geodetic survey studies with application of field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. 5 to 7·10^-5 per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account in defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. It is evidenced by instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones that deformations 'paradoxically' deviate from the inherited movements of past geological periods. In terms of recent geodynamics, the 'paradoxes' of high and low strain velocities are related to a reliable empirical fact: the presence of extremely high local velocities of deformation in fault zones (about 10^-5 per year and above), which take place against the background of slow regional deformations whose velocities are lower by 2 to 3 orders of magnitude. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the earth tidal deformations per year. A 'fault

  9. How Secure Is Your Radiology Department? Mapping Digital Radiology Adoption and Security Worldwide.

    Science.gov (United States)

    Stites, Mark; Pianykh, Oleg S

    2016-04-01

    Despite the long history of digital radiology, one of its most critical aspects--information security--still remains extremely underdeveloped and poorly standardized. To study the current state of radiology security, we explored the worldwide security of medical image archives. Using the DICOM data-transmitting standard, we implemented a highly parallel application to scan the entire World Wide Web of networked computers and devices, locating open and unprotected radiology servers. We used only legal and radiology-compliant tools. Our security-probing application initiated a standard DICOM handshake to remote computer or device addresses, and then assessed their security posture on the basis of handshake replies. The scan discovered a total of 2774 unprotected radiology or DICOM servers worldwide. Of those, 719 were fully open to patient data communications. Geolocation was used to analyze and rank our findings according to country utilization. As a result, we built maps and world ranking of clinical security, suggesting that even the most radiology-advanced countries have hospitals with serious security gaps. Despite more than two decades of active development and implementation, our radiology data still remains insecure. The results provided should be applied to raise awareness and begin an earnest dialogue toward elimination of the problem. The application we designed and the novel scanning approach we developed can be used to identify security breaches and to eliminate them before they are compromised.
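
    A hedged sketch of the kind of DICOM handshake probe described above (assuming the pynetdicom library; parameters are illustrative, and such probes should only be run against systems you are authorized to test):

```python
from pynetdicom import AE

VERIFICATION_SOP_CLASS = "1.2.840.10008.1.1"   # DICOM Verification (C-ECHO) SOP class

def probe(host: str, port: int = 104) -> str:
    """Attempt a DICOM association and C-ECHO; report how the server responds."""
    ae = AE(ae_title="SECURITY_PROBE")
    ae.add_requested_context(VERIFICATION_SOP_CLASS)
    ae.acse_timeout = ae.network_timeout = 3    # keep the probe quick
    assoc = ae.associate(host, port)
    if not assoc.is_established:
        return "no DICOM association (port closed or access restricted)"
    status = assoc.send_c_echo()
    assoc.release()
    if status and status.Status == 0x0000:
        return "open: association and C-ECHO accepted without authentication"
    return "association accepted but C-ECHO rejected"

# Example (hypothetical address):
# print(probe("203.0.113.10"))
```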

  10. Stresses in faulted tunnel models by photoelasticity and adaptive finite element

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Y.

    1995-01-01

    Research efforts in this area continue to investigate the development of a proper technique to analyze the stresses in the Ghost Dance fault and the effect of the fault on the stability of drifts in the proposed repository. Results from two parallel techniques are being compared to each other - Photoelastic models and Finite Element (FE) models. The Photoelastic plexiglass model (88.89 mm thick and 256.1 mm long and wide) has two adjacent square openings (57.95 mm long and wide) and a central round opening (57.95 mm diameter) placed at a clear distance approximately equal to its diameter from the square openings. The vertical loading on top of the model is 2269 N (500 lb.). Saw cuts (0.5388 mm wide), representing a fault, are being propagated from the tunnels outward, with stress measurements taken at predefined locations as the saw cuts increase in length. The FE model duplicates the Photoelastic model exactly. The adaptive mesh generation method is used to refine the FE grid at every step of the analysis. This nonlinear interactive computational technique uses various percent tolerance errors in the convergence of stress values as a measure for ending the iterative process.

  11. Active tectonic deformation of the western Indian plate boundary: A case study from the Chaman Fault System

    Science.gov (United States)

    Crupa, Wanda E.; Khan, Shuhab D.; Huang, Jingqiu; Khan, Abdul S.; Kasi, Aimal

    2017-10-01

    Collision of the Eurasian and Indian plates has resulted in two spatially offset subduction zones, the Makran subduction zone to the south and the Himalayan convergent margin to the north. These zones are linked by a system of left-lateral strike-slip faults known as the Chaman Fault System, ∼1200 km long, which spans western Pakistan. Although this is one of the greatest strike-slip faults, temporal and spatial variations in displacement have not been adequately defined along this fault system. This study conducted geomorphic and geodetic investigations along the Chaman Fault in a search for evidence of spatial variations in motion. Four study areas were selected over the span of the Chaman Fault: (1) the Tarnak-Rud area over the Tarnak-Rud valley, (2) the Spinatizha area over the Spinatizha Mountain Range, (3) the Nushki area over the Nushki basin, and (4) the Kharan area over the northern tip of the Central Makran Mountains. Remote sensing data allowed for in-depth mapping of different components and faults within the Kohjak group. Wind and water gap pairs along with offset rivers were identified using high-resolution imagery and digital elevation models to show displacement for the four study areas. The mountain-front-sinuosity ratio, valley height-to-width ratio, and stream-length-gradient index were calculated and used to determine the relative tectonic activity of each area. These geomorphic indices suggest that the Kharan area is the most active and the Tarnak-Rud area is the least active. GPS data were processed into a stable Indian plate reference frame and analyzed. Fault-parallel velocity versus fault-normal distance yielded a ∼8-10 mm/yr displacement rate along the Chaman Fault just north of the Spinatizha area. InSAR data were also integrated to assess displacement rates along the fault system. Geodetic data support that ultra-slow earthquakes similar to those that strike along other major strike-slip faults, such as the San Andreas Fault System, are

  12. Earthquake geology of the Bulnay Fault (Mongolia)

    Science.gov (United States)

    Rizza, Magali; Ritz, Jean-Franciois; Prentice, Carol S.; Vassallo, Ricardo; Braucher, Regis; Larroque, Christophe; Arzhannikova, A.; Arzhanikov, S.; Mahan, Shannon; Massault, M.; Michelot, J-L.; Todbileg, M.

    2015-01-01

    The Bulnay earthquake of July 23, 1905 (Mw 8.3-8.5), in north-central Mongolia, is one of the world's largest recorded intracontinental earthquakes and one of four great earthquakes that occurred in the region during the 20th century. The 375-km-long surface rupture of the left-lateral, strike-slip, N095°E trending Bulnay Fault associated with this earthquake is remarkable for its pronounced expression across the landscape and for the size of features produced by previous earthquakes. Our field observations suggest that in many areas the width and geometry of the rupture zone are the result of repeated earthquakes; however, in those areas where it is possible to determine that the geomorphic features are the result of the 1905 surface rupture alone, the size of the features produced by this single earthquake is singular in comparison to most other historical strike-slip surface ruptures worldwide. Along the 80 km stretch between 97.18°E and 98.33°E, the fault zone is several meters wide and the mean left-lateral 1905 offset is 8.9 ± 0.6 m, with two measured cumulative offsets that are twice the 1905 slip. These observations suggest that the displacement produced during the penultimate event was similar to the 1905 slip. Morphotectonic analyses carried out at three sites along the eastern part of the Bulnay fault allow us to estimate a mean horizontal slip rate of 3.1 ± 1.7 mm/yr over the Late Pleistocene-Holocene period. In parallel, paleoseismological investigations show evidence for two earthquakes prior to the 1905 event, with recurrence intervals of ~2700-4000 years.

  13. Active Fault-Tolerant Control for Wind Turbine with Simultaneous Actuator and Sensor Faults

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    Full Text Available The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Within the framework of fault-tolerant control, developing an FTC design method for wind turbines such that they can tolerate simultaneous pitch actuator and pitch sensor faults having bounded first time derivatives is a challenge. The paper's key contribution is a descriptor sliding mode method in which an auxiliary descriptor state vector, composed of the system state vector, the actuator fault vector, and the sensor fault vector, is introduced to establish a novel augmented descriptor system; the system state can then be estimated and the faults reconstructed by designing a descriptor sliding mode observer. Stability conditions for the estimation error dynamics are established and solved by LMI optimization to determine the design parameters. With this estimation and a fault-tolerant controller, the system's stability can be maintained. The effectiveness of the design strategy is verified by implementing the controller in the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

  14. The San Andreas Fault and a Strike-slip Fault on Europa

    Science.gov (United States)

    1998-01-01

    The mosaic on the right of the south polar region of Jupiter's moon Europa shows the northern 290 kilometers (180 miles) of a strike-slip fault named Astypalaea Linea. The entire fault is about 810 kilometers (500 miles) long, the size of the California portion of the San Andreas fault on Earth, which runs from the California-Mexico border north to the San Francisco Bay. The left mosaic shows the portion of the San Andreas fault near California's San Francisco Bay that has been scaled to the same size and resolution as the Europa image. Each covers an area approximately 170 by 193 kilometers (105 by 120 miles). The red line marks the once active central crack of the Europan fault (right) and the line of the San Andreas fault (left). A strike-slip fault is one in which two crustal blocks move horizontally past one another, similar to two opposing lanes of traffic. The overall motion along the Europan fault seems to have followed a continuous narrow crack along the entire length of the feature, with a path resembling steps on a staircase crossing zones which have been pulled apart. The images show that about 50 kilometers (30 miles) of displacement have taken place along the fault. Opposite sides of the fault can be reconstructed like a puzzle, matching the shape of the sides as well as older individual cracks and ridges that had been broken by its movements. Bends in the Europan fault have allowed the surface to be pulled apart. This pulling apart along the fault's bends created openings through which warmer, softer ice from below Europa's brittle ice shell surface, or frozen water from a possible subsurface ocean, could reach the surface. This upwelling of material formed large areas of new ice within the boundaries of the original fault. A similar pulling-apart phenomenon can be observed in the geological trough surrounding California's Salton Sea, and in Death Valley and the Dead Sea. In those cases, the pulled apart regions can include upwelled materials, but may

  15. MAGMA: A Liquid Software Approach to Fault Tolerance, Computer Network Security, and Survivable Networking

    Science.gov (United States)

    2001-12-01

    and Lieutenant Namik Kaplan, Turkish Navy. Maj Tiefert's thesis, "Modeling Control Channel Dynamics of SAAM using NS Network Simulation", helped lay... [DEC99] Deconinck, Dr. ir. Geert, Fault Tolerant Systems, ESAT / Division ACCA, Katholieke Universiteit Leuven, October 1999. [FRE00] Freed... Systems", Addison-Wesley, 1989. [KAP99] Kaplan, Namik, "Prototyping of an Active and Lightweight Router," March 1999 [KAT99] Kati, Effraim

  16. PEDANT: Parallel Texts in Göteborg

    Directory of Open Access Journals (Sweden)

    Daniel Ridings

    2012-09-01

    Full Text Available

    The article presents the status of the PEDANT project with parallel corpora at the Language Bank at Göteborg University. The solutions for access to the corpus data are presented. Access is provided by way of the internet and standard applications and SGML-aware programming tools. The SGML format for encoding translation pairs is also outlined. The methods allow working with everything from plain text to texts densely encoded with linguistic information.

     

    In this article a description is given of the status of the PEDANT project with parallel corpora at the Language Bank at Göteborg University. Solutions for obtaining access to the corpus data are indicated. Access is provided by means of the Internet and standard applications and SGML-aware programming tools. The SGML format for the encoding of translation pairs is outlined as well. These methods allow working with anything from plain text to texts that are densely annotated with linguistic information.

     

  17. ESR dating of the fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2005-01-01

    We carried out ESR dating of fault rocks collected near the nuclear reactor. The Upcheon fault zone is exposed close to the Ulzin nuclear reactor. The space-time pattern of fault activity on the Upcheon fault deduced from ESR dating of fault gouge can be summarised as follows: this fault zone was reactivated between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 2 Ma, 1.5 Ma and 1 Ma ago. After those movements, the Upcheon fault was reactivated between Cretaceous sandstone and the fault breccia zone about 800 ka ago. This fault zone was reactivated again between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 650 ka and after 125 ka ago. These data suggest that the long-term (200-500 k.y.) cyclic fault activity of the Upcheon fault zone continued into the Pleistocene. In the Ulzin area, ESR dates from the NW and EW trend faults range from 800 ka to 600 ka; NE and EW trend faults were reactivated between about 200 ka and 300 ka ago. On the other hand, ESR dates of the NS trend fault are about 400 ka and 50 ka. Results of this research suggest that fault activity near the Ulzin nuclear reactor continued into the Pleistocene. One ESR date near the Youngkwang nuclear reactor is 200 ka

  18. Quaternary Slip History for the Agua Blanca Fault, northern Baja California, Mexico

    Science.gov (United States)

    Gold, P. O.; Behr, W. M.; Rockwell, T. K.; Fletcher, J. M.

    2017-12-01

    The Agua Blanca Fault (ABF) is the primary structure accommodating San Andreas-related right-lateral slip across the Peninsular Ranges of northern Baja California. Activity on this fault influences offshore faults that parallel the Pacific coast from Ensenada to Los Angeles and is a potential threat to communities in northern Mexico and southern California. We present a detailed Quaternary slip history for the ABF, including new quantitative constraints on geologic slip rates, slip-per-event, the timing of the most recent earthquake, and the earthquake recurrence interval. Cosmogenic 10Be exposure dating of clasts from offset fluvial geomorphic surfaces at 2 sites located along the western, and most active, section of the ABF yields preliminary slip rate estimates of 2-4 mm/yr and 3 mm/yr since 20 ka and 2 ka, respectively. Fault zone geomorphology preserved at the younger site provides evidence for right-lateral surface displacements measuring 2.5 m in the past two ruptures. Luminescence dating of an offset alluvial fan at a third site is in progress, but is expected to yield a slip rate relevant to the past 10 kyr. Adjacent to this third site, we excavated 2 paleoseismic trenches across a sag pond formed by a right step in the fault. Preliminary radiocarbon dates indicate that the 4 surface ruptures identified in the trenches occurred in the past 6 kyr, although additional dating should clarify earthquake timing and the mid-Holocene to present earthquake recurrence interval, as well as the likely date of the most recent earthquake. Our new slip rate estimates are somewhat lower than, but comparable within error to, previous geologic estimates based on soil morphology and geodetic estimates from GPS, but the new record of surface ruptures exposed in the trenches is the most complete and comprehensively dated earthquake history yet determined for this fault. Together with new and existing mapping of tectonically generated geomorphology along the ABF, our constraints

  19. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the earth crust occur within a large range of scales, from microscale over mesoscopic to large basin scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but rather forms a combination with continuous near-field deformation in the wall rock, a phenomenon that is generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as for prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. On an outcrop scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that for this normal fault a listric shape can be excluded, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large scale normal fault

  20. Development of Hydrologic Characterization Technology of Fault Zones: Phase I, 2nd Report

    International Nuclear Information System (INIS)

    Karasaki, Kenzi; Onishi, Tiemi; Black, Bill; Biraud, Sebastien

    2009-01-01

    This is the year-end report of the 2nd year of the NUMO-LBNL collaborative project: Development of Hydrologic Characterization Technology of Fault Zones under the NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in Appendix 3. A literature survey of published information on the relationship between geologic and hydrologic characteristics of faults was conducted. The survey concluded that it may be possible to classify faults by indicators based on various geometric and geologic attributes that may indirectly relate to the hydrologic properties of faults. Analysis of existing information on the Wildcat Fault and its surrounding geology was performed. The Wildcat Fault is thought to be a strike-slip fault with a thrust component that runs along the eastern boundary of the Lawrence Berkeley National Laboratory. It is believed to be part of the Hayward Fault system but is considered inactive. Three trenches were excavated at carefully selected locations, mainly based on information from past investigative work inside the LBNL property. At least one fault was encountered in all three trenches. Detailed trench mapping was conducted by CRIEPI (Central Research Institute for Electric Power Industries) and LBNL scientists. Some intriguing and puzzling discoveries were made that may contradict previously published work. Predictions are made regarding the hydrologic properties of the Wildcat Fault based on the analysis of fault structure. Preliminary conceptual models of the Wildcat Fault were proposed. The Wildcat Fault appears to have multiple splays, and some low-angled faults may be part of a flower structure. In parallel, surface geophysical investigations were conducted using electrical resistivity survey and seismic reflection profiling along three lines on the north and south of the LBNL site. Because of the steep terrain, it was difficult to find optimum locations for survey lines as it is desirable for them to be as

  1. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  2. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    Science.gov (United States)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    Power transformers are important equipment in power plants and substations, and as the link between power transmission and distribution they form an important hub of the power system. Their performance directly affects the quality, reliability and stability of the power system. This paper first summarizes power transformer faults into five parts according to fault type, then divides power transformer faults into three stages along the time dimension, and uses DGA routine analysis and infrared diagnostic criteria to establish the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a power transformer fault tree is constructed by stepwise refinement from the general to the specific.
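    As a minimal illustration of the stepwise-refinement idea, the sketch below evaluates a small, hypothetical transformer fault tree built from AND/OR gates over independent basic events; the event names and probabilities are invented for illustration and do not come from the paper.

```python
# Minimal fault-tree evaluation: each gate combines independent basic-event
# probabilities. OR gate: 1 - prod(1 - p_i); AND gate: prod(p_i).
def p_or(*ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_and(*ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical basic events (annual probabilities, illustrative only)
p_winding_insulation = 0.010
p_oil_degradation    = 0.020
p_bushing_flashover  = 0.005
p_cooling_fan_fail   = 0.030
p_cooling_pump_fail  = 0.025

# Intermediate events
p_cooling_loss = p_and(p_cooling_fan_fail, p_cooling_pump_fail)  # redundant cooling
p_dielectric   = p_or(p_winding_insulation, p_oil_degradation, p_bushing_flashover)

# Top event: transformer fault
p_top = p_or(p_dielectric, p_cooling_loss)
print(f"P(top event) = {p_top:.4f}")
```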

  3. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    Directory of Open Access Journals (Sweden)

    Mohammad S. Alam

    2012-10-01

    Full Text Available In this paper, a double-random phase-encoding technique-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level.
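    For readers unfamiliar with double-random phase encoding (DRPE), the following numpy sketch shows the core encrypt/decrypt transform on a 2-D array in the classical Fourier-domain formulation; the array size and random seeds are arbitrary, and the paper's text-to-array packing and host-image superimposition steps are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
data = rng.integers(0, 256, size=(N, N)).astype(float)   # stand-in for the packed text array

# Two statistically independent random phase masks (input plane and Fourier plane)
phi1 = np.exp(2j * np.pi * rng.random((N, N)))
phi2 = np.exp(2j * np.pi * rng.random((N, N)))

# Encryption: mask in the input plane, transform, mask in the Fourier plane, transform back
encrypted = np.fft.ifft2(np.fft.fft2(data * phi1) * phi2)

# Decryption with the correct keys: undo the Fourier-plane mask, then the input-plane mask
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)) * np.conj(phi1)

print(np.allclose(decrypted.real, data))   # True: the data are recovered
```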

  4. Kinematic Analysis of Fault-Slip Data in the Central Range of Papua, Indonesia

    Directory of Open Access Journals (Sweden)

    Benyamin Sapiie

    2016-01-01

    Full Text Available DOI: 10.17014/ijog.3.1.1-16 Most of the Cenozoic tectonic evolution in New Guinea is a result of obliquely convergent motion that led to an arc-continent collision between the Australian and Pacific Plates. The Gunung Bijih (Ertsberg) Mining District (GBMD) is located in the Central Range of Papua, in the western half of the island of New Guinea. This study presents the results of detailed structural mapping concentrated on analyzing fault-slip data along a 15-km traverse of the Heavy Equipment Access Trail (HEAT) and the Grasberg mine access road, providing new information concerning the deformation in the GBMD and the Cenozoic structural evolution of the Central Range. Structural analysis indicates that two distinct stages of deformation have occurred since ~12 Ma. The first stage generated a series of en-echelon NW-trending (π-fold axis = 300°) folds and a few reverse faults. The second stage resulted in significant left-lateral strike-slip faulting sub-parallel to the regional strike of upturned bedding. Kinematic analysis reveals that the areas between the major strike-slip faults form structural domains that are remarkably uniform in character. The change in deformation styles from contractional to a strike-slip offset is explained as a result of a change in the relative plate motion between the Pacific and Australian Plates at ~4 Ma. From ~4 - 2 Ma, transform motion along an ~270° trend caused a left-lateral strike-slip offset, and reactivated portions of pre-existing reverse faults. This action had a profound effect on magma emplacement and hydrothermal activity.

  5. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  6. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    Directory of Open Access Journals (Sweden)

    Benjamin C Hitz

    Full Text Available The Encyclopedia of DNA Elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general-purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package.
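    As an illustration of the kind of programmatic metadata access such a system provides, the sketch below queries the public ENCODE portal's search endpoint for a few experiment records in JSON. The specific query parameters and returned field names are examples only and may differ between portal releases; consult the portal documentation for the authoritative API.

```python
import requests

# Query the public ENCODE portal for a handful of experiment records (JSON).
# Endpoint and parameters are illustrative, not an exhaustive description of the API.
url = "https://www.encodeproject.org/search/"
params = {
    "type": "Experiment",
    "assay_title": "ChIP-seq",   # example filter
    "limit": 5,
    "format": "json",
}
resp = requests.get(url, params=params, headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()

for exp in resp.json().get("@graph", []):
    print(exp.get("accession"), exp.get("assay_title"), exp.get("biosample_summary"))
```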

  7. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    Science.gov (United States)

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are one of the most critical components of the satellite attitude control system, therefore correct diagnosis of their faults is quintessential for efficient operation of these spacecraft. The known faults in any of the subsystems are often diagnosed by supervised learning algorithms, however, this method fails to work correctly when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one of the unsupervised algorithms, although it has its own limitations; however in this paper a novel method has been proposed for conditioning of KFCM method (C-KFCM) so that it can be effectively used for fault diagnosis of both known and unknown faults as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centers from the data of known faults, in this way discrete number of fault classes are determined at the start. Similarity parameters are derived and determined for each of the fault data point. Thereafter depending on the similarity threshold each data point is issued with a class label. The high similarity points fall into one of the 'known-fault' classes while the low similarity points are labeled as 'unknown-faults'. Simulation results show that as compared to the supervised algorithm such as neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose the faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
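    The core of the conditioning step, labelling a new observation either as one of the known fault classes or as "unknown", can be sketched with a Gaussian (RBF) kernel similarity to pre-computed class centers. The kernel width, threshold, class names and data below are placeholders; the actual C-KFCM procedure includes the fuzzy clustering machinery omitted here.

```python
import numpy as np

def rbf_similarity(x, center, sigma=1.0):
    """Gaussian kernel similarity between a sample and a class center."""
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * sigma ** 2))

def label_sample(x, class_centers, sim_threshold=0.5, sigma=1.0):
    """Assign x to the most similar known-fault class, or to 'unknown-fault'
    if even the best similarity falls below the threshold."""
    sims = {name: rbf_similarity(x, c, sigma) for name, c in class_centers.items()}
    best_class = max(sims, key=sims.get)
    return best_class if sims[best_class] >= sim_threshold else "unknown-fault"

# Toy class centers standing in for centers derived from historical known-fault data
centers = {"bearing-friction": np.array([1.0, 0.2]),
           "bus-voltage-drop": np.array([-0.5, 1.1])}

print(label_sample(np.array([0.9, 0.3]), centers))    # close to a known class
print(label_sample(np.array([5.0, -4.0]), centers))   # far from all centers -> unknown-fault
```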

  8. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    Science.gov (United States)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the methods used to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then the texture feature, Hu moments feature, and a modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults and then make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing the fault feature information. Finally, the different feature vectors are taken as input for SVM training, and the thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management; it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
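    The classification stage described above (feature vectors → PCA dimensionality reduction → SVM) maps naturally onto a scikit-learn pipeline. The sketch below uses randomly generated feature vectors as stand-ins for the texture/Hu-moment/entropy features, so the class names and numbers carry no physical meaning.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 200 servers x 64 features (texture + Hu moments + modified entropy),
# with labels 0 = normal, 1 = overload, 2 = cooling fault (illustrative classes).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale, reduce dimensionality with PCA, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```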

  9. Assessment of the geodynamical setting around the main active faults at Aswan area, Egypt

    Science.gov (United States)

    Ali, Radwan; Hosny, Ahmed; Kotb, Ahmed; Khalil, Ahmed; Azza, Abed; Rayan, Ali

    2013-04-01

    The proper evaluation of crustal deformations in the Aswan region, especially around the main active faults, is crucial due to the existence of one major artificial structure: the Aswan High Dam. This construction created one of the major artificial lakes: Lake Nasser. The Aswan area is considered an active seismic area in Egypt, since many recent and historical felt earthquakes have occurred there, such as the impressive earthquake of November 14, 1981 on the Kalabsha fault with a local magnitude ML=5.7. Lately, on 26 December 2011, a moderate earthquake with a local magnitude Ml=4.1 occurred in the Kalabsha area too. The main target of this study is to evaluate the active geological structures that can potentially affect the Aswan High Dam and that are being monitored in detail. To implement this objective, two different geophysical tools (magnetic, seismic) in addition to the Global Positioning System (GPS) have been utilized. A detailed land magnetic survey was carried out for the total component of the geomagnetic field using two proton magnetometers. The obtained magnetic results reveal that there are three major parallel faults {F1 (Kalabsha), F2 (Seiyal) and F3} affecting the area. The most dominant magnetic trend strikes along those faults in the WNW-ESE direction. The seismicity and fault plane solutions of the 26 December 2011 earthquake and its two aftershocks have been investigated. The source mechanisms of those events delineate two nodal planes. The ENE-WSW to E-W trend is consistent with the direction of the Kalabsha fault and its extension towards the east for the events located over it. The NNW-SSE to N-S trend is consistent with the N-S fault trend. The movement along the ENE-WSW plane is right lateral, but it is left lateral along the NNW-SSE plane. Based on the estimated relative motions using GPS, dextral strike-slip motion at the Kalabsha and Seiyal fault systems is clearly identified by a change in the velocity gradient between south and north stations

  10. Parallel dispatch: a new paradigm of electrical power system dispatch

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jun Jason; Wang, Fei-Yue; Wang, Qiang; Hao, Dazhi; Yang, Xiaojing; Gao, David Wenzhong; Zhao, Xiangyang; Zhang, Yingchen

    2018-01-01

    Modern power systems are evolving into sociotechnical systems with massive complexity, whose real-time operation and dispatch go beyond human capability. Thus, the need for developing and applying new intelligent power system dispatch tools is of great practical significance. In this paper, we introduce the overall business model of power system dispatch, the top-level design approach of an intelligent dispatch system, and the parallel intelligent technology with its dispatch applications. We expect that a new dispatch paradigm, namely parallel dispatch, can be established by incorporating various intelligent technologies, especially the parallel intelligent technology, to enable secure operation of complex power grids, extend system operators' capabilities, suggest optimal dispatch strategies, and provide decision-making recommendations according to power system operational goals.

  11. A dual-functional medium voltage level DVR to limit downstream fault currents

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Li, Yun Wei; Vilathgamuwa, D. Mahinda

    2007-01-01

    The dynamic voltage restorer (DVR) is a modern custom power device used in power distribution networks to protect consumers from sudden sags (and swells) in grid voltage. Implemented at medium voltage level, the DVR can be used to protect a group of medium voltage or low voltage consumers. However ... on the other parallel feeders connected to the PCC. Furthermore, if not controlled properly, the DVR might also contribute to this PCC voltage sag in the process of compensating the missing voltage, thus further worsening the fault situation. To limit the flow of large line currents, and therefore restore the PCC ... situations. Controlling the DVR as a virtual inductor would also ensure zero real power absorption during the DVR compensation and thus minimize the stress in the dc link. Finally, the proposed fault current limiting algorithm has been tested in Matlab/Simulink simulation and experimentally on a medium ...

  12. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    Science.gov (United States)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging; only capable of resolving features on a decametre scale horizontally and metres scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow for smaller faults to be lost in a resolution gap which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay where PFS were first identified and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high resolution 1m bathymetric imaging in combination with high resolution seismic imaging. Through use of these datasets surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12m and the largest at 612m in length. The displacements over these faults established from both bathymetric and seismic imaging ranges from 30cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector where it exists approximately 1km below the seafloor. These have typical PFS attributes with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicates that the smaller faults exist within typical PFS structure but are

  13. Influence of fault steps on rupture termination of strike-slip earthquake faults

    Science.gov (United States)

    Li, Zhengfang; Zhou, Bengang

    2018-03-01

    A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps and the seismic rupture, and a poor correlation between the step number and seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. A dilational fault step is easier to rupture through than a compressional fault step. The smallest width of the fault step for rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.

  14. Analysis on fault current limiting and recovery characteristics of a flux-lock type SFCL with an isolated transformer

    International Nuclear Information System (INIS)

    Ko, Seckcheol; Lim, Sung-Hun; Han, Tae-Hee

    2013-01-01

    Highlights: ► A countermeasure to reduce the power burden of the HTSC element constituting the flux-lock type SFCL was studied. ► The power burden of the HTSC element could be decreased by using the isolated transformer. ► The SFCL designed with the additive polarity winding was confirmed to cause less power burden on the HTSC element. -- Abstract: The flux-lock type superconducting fault current limiter (SFCL) can quickly limit the fault current shortly after a short circuit occurs and recover the superconducting state after the fault is removed. However, the superconducting element comprising the flux-lock type SFCL can be destroyed when a high fault current passes through the SFCL. Therefore, a countermeasure to control the fault current and protect the superconducting element is required. In this paper, the flux-lock type SFCL with an isolated transformer, which consists of two parallel connected coils on an iron core and an isolated transformer connected in series with one of the two coils, is proposed, and a short-circuit experimental device to analyze the fault current limiting and recovery characteristics of the flux-lock type SFCL with the isolated transformer was constructed. Through the short-circuit tests, the flux-lock type SFCL with the isolated transformer was confirmed to perform more effective fault current limiting and recovery operation compared to the flux-lock type SFCL without the isolated transformer, from the viewpoint of the quench occurrence and the recovery time of the SFCL

  15. Single-shot magnetic resonance spectroscopic imaging with partial parallel imaging.

    Science.gov (United States)

    Posse, Stefan; Otazo, Ricardo; Tsai, Shang-Yueh; Yoshimoto, Akio Ernesto; Lin, Fa-Hsuan

    2009-03-01

    A magnetic resonance spectroscopic imaging (MRSI) pulse sequence based on proton-echo-planar-spectroscopic-imaging (PEPSI) is introduced that measures two-dimensional metabolite maps in a single excitation. Echo-planar spatial-spectral encoding was combined with interleaved phase encoding and parallel imaging using SENSE to reconstruct absorption mode spectra. The symmetrical k-space trajectory compensates phase errors due to convolution of spatial and spectral encoding. Single-shot MRSI at short TE was evaluated in phantoms and in vivo on a 3-T whole-body scanner equipped with a 12-channel array coil. Four-step interleaved phase encoding and fourfold SENSE acceleration were used to encode a 16 x 16 spatial matrix with a 390-Hz spectral width. Comparison with conventional PEPSI and PEPSI with fourfold SENSE acceleration demonstrated comparable sensitivity per unit time when taking into account g-factor-related noise increases and differences in sampling efficiency. LCModel fitting enabled quantification of inositol, choline, creatine, and N-acetyl-aspartate (NAA) in vivo with concentration values in the ranges measured with conventional PEPSI and SENSE-accelerated PEPSI. Cramer-Rao lower bounds were comparable to those obtained with conventional SENSE-accelerated PEPSI at the same voxel size and measurement time. This single-shot MRSI method is therefore suitable for applications that require high temporal resolution to monitor temporal dynamics or to reduce sensitivity to tissue movement.

  16. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
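    A common way to implement the adaptive threshold mentioned above, when little is known about normal operating conditions, is to derive it from a robust estimate of the baseline residual spread. The sketch below uses the median absolute deviation, with the multiplier k as a tunable assumption rather than a value from the paper, and synthetic residuals in place of real sensor-chain data.

```python
import numpy as np

def adaptive_threshold(baseline_residuals, k=3.0):
    """Threshold = median + k * robust standard deviation (1.4826 * MAD)."""
    med = np.median(baseline_residuals)
    mad = np.median(np.abs(baseline_residuals - med))
    return med + k * 1.4826 * mad

def detect_fault(residuals, threshold):
    """Flag samples whose residual magnitude exceeds the adaptive threshold."""
    return np.flatnonzero(residuals > threshold)

rng = np.random.default_rng(2)
baseline = np.abs(rng.normal(0.0, 1.0, 500))        # residuals from healthy operation
test = np.abs(rng.normal(0.0, 1.0, 500))
test[200:210] += 8.0                                 # injected fault signature

thr = adaptive_threshold(baseline)
print("flagged sample indices:", detect_fault(test, thr))
```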

  17. Study of fault diagnosis software design for complex system based on fault tree

    International Nuclear Information System (INIS)

    Yuan Run; Li Yazhou; Wang Jianye; Hu Liqin; Wang Jiaqun; Wu Yican

    2012-01-01

    Complex systems always have high-level reliability and safety requirements, and so does their diagnosis. As a great deal of fault tree models have been acquired during the design and operation phases, a fault diagnosis method which combines fault tree analysis with knowledge-based technology has been proposed. A prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)

  18. Designs of Optoelectronic Trinary Signed-Digit Multiplication by use of Joint Spatial Encodings and Optical Correlation

    Science.gov (United States)

    Cherri, Abdallah K.

    1999-02-01

    Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multiplications by use of cascaded optical correlators. The proposed multiplications perform carry-free generation of the multiplication partial products of two words in constant time. Also, three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings allow one to reduce the SS computation rules involved in optical multiplication. In addition, the proposed joint spatial encodings increase the space bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reduction of the numbers of pixels in the joint spatial encodings for the input TSD operands as well as reduction of the number of pixels used in the proposed matched spatial filters for the optical multipliers.
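    As background for the digit representation, the sketch below shows a balanced-ternary encoding (radix 3, digit set {-1, 0, 1}), offered only as a plausible stand-in for the TSD number representation assumed here; the paper's carry-free symbolic-substitution adders and joint spatial encodings themselves are not reproduced.

```python
def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary digits (least significant first),
    digit set {-1, 0, 1}, radix 3. Stand-in for a TSD-style representation."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # represent 2 as (-1) plus a carry into the next digit
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_digits(digits, radix=3):
    """Evaluate a least-significant-first digit list."""
    return sum(d * radix**i for i, d in enumerate(digits))

for value in (0, 5, -7, 42):
    d = to_balanced_ternary(value)
    assert from_digits(d) == value
    print(value, d)
```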

  19. An integrated approach to validation of safeguards and security program performance

    International Nuclear Information System (INIS)

    Altman, W.D.; Hunt, J.S.; Hockert, J.W.

    1988-01-01

    Department of Energy (DOE) requirements for safeguards and security programs are becoming increasingly performance oriented. Master Safeguards and Security Agreements specify performance levels for systems protecting DOE security interests. In order to measure and validate security system performance, Lawrence Livermore National Laboratory (LLNL) has developed cost-effective validation tools and a comprehensive validation approach that synthesizes information gained from different activities, such as force-on-force exercises, limited-scope performance tests, equipment testing, vulnerability analyses, and computer modeling, into an overall assessment of the performance of the protection system. The analytic approach employs logic diagrams adapted from the fault and event trees used in probabilistic risk assessment. The synthesis of the results from the various validation activities is accomplished using a method developed by LLNL, based upon Bayes' theorem
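    Since the synthesis method is described as being based on Bayes' theorem, a toy numerical example may help: combining independent pieces of validation evidence (for example, a force-on-force exercise and a limited-scope performance test) into an updated belief that the protection system meets its performance level. All numbers are invented for illustration and are not taken from the LLNL method.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after observing one piece of evidence."""
    numer = p_evidence_given_h * prior
    denom = numer + p_evidence_given_not_h * (1.0 - prior)
    return numer / denom

# H: "the protection system meets its required performance level"
belief = 0.70                     # illustrative prior, e.g. from a vulnerability analysis

# Evidence 1: the system defeated the adversary in a force-on-force exercise
belief = bayes_update(belief, p_evidence_given_h=0.90, p_evidence_given_not_h=0.40)

# Evidence 2: the system passed a limited-scope performance test
belief = bayes_update(belief, p_evidence_given_h=0.95, p_evidence_given_not_h=0.50)

print(f"posterior belief that performance requirements are met: {belief:.2f}")
```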

  20. Modeling of periodic great earthquakes on the San Andreas fault: Effects of nonlinear crustal rheology

    Science.gov (United States)

    Reches, Ze'ev; Schubert, Gerald; Anderson, Charles

    1994-01-01

    We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models are for antiplane strain in a 25-km-thick crustal layer having a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle that lasts 160 years, a constant plate velocity v_p/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models which are 10-100 times less viscous than a quartzite-like rheology. Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude
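    The rheology described above can be summarized by a power-law creep relation in which strain rate depends on stress to a power n and exponentially on inverse absolute temperature. The sketch below evaluates the corresponding effective viscosity for illustrative "quartzite-like" parameters chosen as assumptions, not the values used in the paper's finite element model.

```python
import numpy as np

R = 8.314            # J/(mol K), gas constant

def effective_viscosity(tau, T, A, n, Q):
    """Power-law creep: strain_rate = A * tau**n * exp(-Q / (R*T));
    effective viscosity eta_eff = tau / (2 * strain_rate).
    Parameter values below are illustrative only, not the paper's."""
    strain_rate = A * tau**n * np.exp(-Q / (R * T))
    return tau / (2.0 * strain_rate)

# Illustrative parameters: A in MPa^-n s^-1, Q in J/mol, tau in MPa
A, n, Q = 1.0e-4, 3.0, 2.0e5
for T in (600.0, 700.0, 800.0):                 # temperature in kelvin
    eta = effective_viscosity(tau=50.0, T=T, A=A, n=n, Q=Q)
    print(f"T = {T:.0f} K  ->  eta_eff ~ {eta:.2e} MPa s")
```

    Because n > 1, the effective viscosity also decreases with increasing shear stress, which is what produces the stress-dependent transient behaviour discussed in the abstract.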

  1. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    Science.gov (United States)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

    Authigenic illite can form synkinematically during slip events along brittle faults. In addition, it can also crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, is indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest fractions in the Middle Jurassic. At site one, high-resolution thin section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro-domains were physically separated by

  2. MCNP load balancing and fault tolerance with PVM

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1995-01-01

    Version 4A of the Monte Carlo neutron, photon, and electron transport code MCNP, developed by LANL (Los Alamos National Laboratory), supports distributed-memory multiprocessing through the software package PVM (Parallel Virtual Machine, version 3.1.4). Using PVM for interprocessor communication, MCNP can simultaneously execute a single problem on a cluster of UNIX-based workstations. This capability provided system efficiencies that exceeded 80% on dedicated workstation clusters; however, on heterogeneous or multiuser systems, the performance was limited by the slowest processor (i.e., equal work was assigned to each processor). The next public release of MCNP will provide multiprocessing enhancements that include load balancing and fault tolerance, which are shown to dramatically increase multiuser system efficiency and reliability
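    The load-balancing idea, letting faster processors take more work instead of assigning equal shares up front, can be illustrated with a generic self-scheduling master/worker sketch. This is only a conceptual analogue in Python (estimating pi by Monte Carlo sampling), not MCNP's actual PVM implementation.

```python
import random
from multiprocessing import Pool

def run_chunk(n_histories):
    """Stand-in for a batch of Monte Carlo histories: estimate pi by dart throwing."""
    hits = sum(random.random()**2 + random.random()**2 <= 1.0 for _ in range(n_histories))
    return hits, n_histories

if __name__ == "__main__":
    total, chunk = 1_000_000, 10_000
    chunks = [chunk] * (total // chunk)
    # imap_unordered hands out small chunks on demand, so fast workers simply take
    # more chunks -- the essence of load balancing on heterogeneous processors.
    with Pool(processes=4) as pool:
        results = list(pool.imap_unordered(run_chunk, chunks))
    hits = sum(h for h, _ in results)
    n = sum(m for _, m in results)
    print("pi estimate:", 4.0 * hits / n)
```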

  3. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    Science.gov (United States)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults. How these young faults evolved and how they affect strain partitioning and fault slip rates are important questions for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  4. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2003-02-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene

  5. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)]

    2003-02-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.

  6. Fault-tolerant architecture: Evaluation methodology

    International Nuclear Information System (INIS)

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems

  7. HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    Science.gov (United States)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) for the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy; whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation to more orthodox viewpoints like the criticality concept. The discussion is flanked by numerical simulations of a

  8. Analysis of Retransmission Policies for Parallel Data Transmission

    Directory of Open Access Journals (Sweden)

    I. A. Halepoto

    2018-06-01

    Full Text Available Stream control transmission protocol (SCTP) is a transport layer protocol which is efficient, reliable, and connection-oriented as compared to transmission control protocol (TCP) and user datagram protocol (UDP). Additionally, SCTP has more innovative features such as multihoming, multistreaming and unordered delivery. With multihoming, SCTP establishes multiple paths between a sender and receiver. However, it only uses the primary path for data transmission and the secondary path (or paths) for fault tolerance. The concurrent multipath transfer extension of SCTP (CMT-SCTP) allows a sender to transmit data in parallel over multiple paths, which increases the overall transmission throughput. Parallel data transmission is beneficial for higher data rates. Parallel transmission, or connection, is also useful in services such as video streaming where, if one connection is degraded by errors, transmission continues on alternate links. With parallel transmission, out-of-order packet arrival at the receiver is very common. The receiver has to wait until the missing data packets arrive, causing performance degradation while using CMT-SCTP. In order to reduce the transmission delay at the receiver, CMT-SCTP uses intelligent retransmission policies to immediately retransmit the missing packets. The retransmission policies used by CMT-SCTP are RTX-SSTHRESH, RTX-LOSSRATE and RTX-CWND. The main objective of this paper is the performance analysis of the retransmission policies. This paper evaluates RTX-SSTHRESH, RTX-LOSSRATE and RTX-CWND. Simulations are performed on the Network Simulator 2. In the simulations with various scenarios and parameters, it is observed that RTX-LOSSRATE is a suitable policy.
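    To make the three policies concrete, the sketch below chooses a retransmission destination from a set of hypothetical path states according to RTX-SSTHRESH (largest slow-start threshold), RTX-LOSSRATE (lowest observed loss rate), or RTX-CWND (largest congestion window). The path metrics are invented, and this is a policy illustration only, not an implementation of CMT-SCTP itself.

```python
# Hypothetical per-path state for a multihomed CMT-SCTP association
paths = [
    {"name": "path-A", "cwnd": 12, "ssthresh": 32, "loss_rate": 0.020},
    {"name": "path-B", "cwnd": 20, "ssthresh": 16, "loss_rate": 0.005},
    {"name": "path-C", "cwnd": 8,  "ssthresh": 24, "loss_rate": 0.050},
]

def pick_retransmission_path(paths, policy):
    """Select the destination for a retransmission under the given policy."""
    if policy == "RTX-SSTHRESH":
        return max(paths, key=lambda p: p["ssthresh"])
    if policy == "RTX-CWND":
        return max(paths, key=lambda p: p["cwnd"])
    if policy == "RTX-LOSSRATE":
        return min(paths, key=lambda p: p["loss_rate"])
    raise ValueError(f"unknown policy: {policy}")

for policy in ("RTX-SSTHRESH", "RTX-LOSSRATE", "RTX-CWND"):
    print(policy, "->", pick_retransmission_path(paths, policy)["name"])
```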

  9. Hardware security and trust design and deployment of integrated circuits in a threatened environment

    CERN Document Server

    Chaves, Ricardo; Natale, Giorgio; Regazzoni, Francesco

    2017-01-01

    This book provides a comprehensive introduction to hardware security, from specification to implementation. Applications discussed include embedded systems ranging from small RFID tags to satellites orbiting the earth. The authors describe a design and synthesis flow, which will transform a given circuit into a secure design incorporating counter-measures against fault attacks. In order to address the conflict between testability and security, the authors describe innovative design-for-testability (DFT) computer-aided design (CAD) tools that support security challenges, engineered for compliance with existing, commercial tools. Secure protocols are discussed, which protect access to necessary test infrastructures and enable the design of secure access controllers. Covers all aspects of hardware security including design, manufacturing, testing, reliability, validation and utilization; Describes new methods and algorithms for the identification/detection of hardware trojans; Defines new architectures capable o...

  10. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
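    The current-limiting, non-linear behaviour referred to above follows from the single-diode model of a PV module, in which the output current cannot much exceed the photocurrent even into a short circuit. The sketch below solves the implicit single-diode equation for illustrative module parameters; all values are assumptions for illustration, not measurements from the thesis benchmark system.

```python
import numpy as np
from scipy.optimize import brentq

def module_current(V, Iph=8.0, I0=1e-9, Rs=0.3, Rsh=300.0, n=1.3, Ns=60, T=298.15):
    """Solve the implicit single-diode equation for module current at terminal voltage V.
    Illustrative parameters for a generic 60-cell module."""
    Vt = 1.380649e-23 * T / 1.602176634e-19      # thermal voltage kT/q
    def f(I):
        return Iph - I0 * (np.exp((V + I * Rs) / (n * Ns * Vt)) - 1.0) \
                   - (V + I * Rs) / Rsh - I
    return brentq(f, -2.0, Iph + 2.0)

# Even at V = 0 (a dead short at the terminals) the current stays near the
# photocurrent Iph, which is why series fuses may never see a "large" fault current.
for V in (0.0, 15.0, 30.0, 36.0):
    print(f"V = {V:5.1f} V  ->  I = {module_current(V):6.3f} A")
```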

  11. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    Science.gov (United States)

    Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  12. Incipient fault detection and power system protection for spaceborne systems

    Science.gov (United States)

    Russell, B. Don; Hackler, Irene M.

    1987-01-01

    A program was initiated to study the feasibility of using advanced terrestrial power system protection techniques for spacecraft power systems. It was designed to enhance and automate spacecraft power distribution systems in the areas of safety, reliability and maintenance. The proposed power management/distribution system is described as well as security assessment and control, incipient and low current fault detection, and the proposed spaceborne protection system. It is noted that the intelligent remote power controller permits the implementation of digital relaying algorithms with both adaptive and programmable characteristics.
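
    The kind of programmable digital relaying algorithm mentioned above can be sketched as a simple RMS overcurrent check with an adaptive pickup threshold. The routine below is an illustrative sketch only; the window length, pickup factor, and sample values are hypothetical and do not come from the described spaceborne protection system.

      # Minimal sketch of a programmable digital overcurrent relay routine of the
      # kind an intelligent remote power controller might host. Thresholds, sample
      # values, and the RMS window length are hypothetical.
      import math

      def rms(samples):
          return math.sqrt(sum(s * s for s in samples) / len(samples))

      def overcurrent_trip(current_samples, pickup_amps, window=16):
          """Return the sample index at which the relay would trip, or None."""
          for i in range(window, len(current_samples)):
              if rms(current_samples[i - window:i]) > pickup_amps:
                  return i
          return None

      # Adaptive behavior: the pickup is reprogrammed for the present load level.
      load_current = 4.0                   # nominal load current (A)
      pickup = 1.5 * load_current          # adaptive pickup, 150% of load

      normal = [load_current * math.sin(2 * math.pi * 60 * n / 960) for n in range(64)]
      faulted = normal + [10 * load_current * math.sin(2 * math.pi * 60 * n / 960)
                          for n in range(64)]
      print("trip index:", overcurrent_trip(faulted, pickup))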

  13. Fluid-driven normal faulting earthquake sequences in the Taiwan orogen

    Science.gov (United States)

    Wang, Ling-hua; Rau, Ruey-Juin; Lee, En-Jui

    2017-04-01

    Seismicity in the Central Range of Taiwan shows normal faulting mechanisms with T-axes directed NE, subparallel to the strike of the mountain belt. We analyze earthquake sequences that occurred during 2012-2015 in the Nanshan area of northern Taiwan, which indicate swarm behavior and migration characteristics. We select events larger than magnitude 2.0 from the Central Weather Bureau catalog and use the double-difference relocation program hypoDD with waveform cross-correlation in the Nanshan area. We obtained a final count of 1406 (95%) relocated earthquakes. Moreover, we computed focal mechanisms using the USGS program HASH from P-wave first motions and S/P ratio picks, and 114 fault plane solutions with M 3.0-5.87 were determined. To test for fluid diffusion, we model seismicity using the equation of Shapiro et al. (1997) by fitting the earthquake diffusion rate D during the migration period. According to the relocation results, seismicity in the Taiwan orogenic belt shows a mostly N25E orientation, parallel to the mountain belt and in the same direction as the tension axis. In addition, another seismic fracture depicted by the seismicity is rotated 35 degrees counterclockwise toward the NW direction. Nearly all focal mechanisms are of normal fault type. In the Nanshan area, events show a N10W distribution with a focal depth range of 5-12 km and illustrate a fault plane dipping about 45-60 degrees to the SW. Three months before the M 5.87 mainshock, which occurred in March 2013, some foreshock events occurred in the shallow part of the mainshock fault plane. Half a year following the mainshock, earthquakes migrated to the north and south, respectively, with processes matching the diffusion model at a rate of 0.2-0.6 m2/s. This migration pattern and diffusion rate offer evidence of a 'fluid-driven' process in the fault zone. We also find upward migration of earthquakes in the mainshock source region. These phenomena are likely caused by the opening of the permeable conduit due to the M 5
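
    The Shapiro et al. (1997) approach referenced above treats fluid-triggered seismicity as bounded by a parabolic triggering front, r(t) = sqrt(4*pi*D*t), where D is the hydraulic diffusivity. The sketch below computes that front and estimates D from synthetic (time, distance) event pairs; the event values are invented for illustration and are not the Nanshan data.

      # Sketch of the Shapiro et al. (1997) "triggering front" used to test for
      # fluid-driven seismicity: events triggered by pore-pressure diffusion
      # should lie below r(t) = sqrt(4*pi*D*t). Event data below are synthetic.
      import math

      def triggering_front(t_seconds, D):
          """Envelope distance (m) reached by a pressure front after t seconds."""
          return math.sqrt(4.0 * math.pi * D * t_seconds)

      def fit_diffusivity(events):
          """Crude estimate of D: smallest D whose front envelops all (t, r) events."""
          return max(r * r / (4.0 * math.pi * t) for t, r in events)

      day = 86400.0
      # Synthetic (time since mainshock [s], migration distance [m]) pairs.
      events = [(10 * day, 1800.0), (40 * day, 3600.0), (90 * day, 5500.0)]
      D = fit_diffusivity(events)
      print(f"estimated hydraulic diffusivity: {D:.2f} m^2/s")
      for t, r in events:
          print(f"t = {t/day:5.1f} d  front = {triggering_front(t, D):6.0f} m  event at {r:6.0f} m")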

  14. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    Science.gov (United States)

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  15. Alpine Fault, New Zealand, SRTM Shaded Relief and Colored Height

    Science.gov (United States)

    2005-01-01

    The Alpine fault runs parallel to, and just inland of, much of the west coast of New Zealand's South Island. This view was created from the near-global digital elevation model produced by the Shuttle Radar Topography Mission (SRTM) and is almost 500 kilometers (just over 300 miles) wide. Northwest is toward the top. The fault is extremely distinct in the topographic pattern, nearly slicing this scene in half lengthwise. In a regional context, the Alpine fault is part of a system of faults that connects a west dipping subduction zone to the northeast with an east dipping subduction zone to the southwest, both of which occur along the juncture of the Indo-Australian and Pacific tectonic plates. Thus, the fault itself constitutes the major surface manifestation of the plate boundary here. Offsets of streams and ridges evident in the field, and in this view of SRTM data, indicate right-lateral fault motion. But convergence also occurs across the fault, and this causes the continued uplift of the Southern Alps, New Zealand's largest mountain range, along the southeast side of the fault. Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast (image top to bottom) direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations. Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data
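
    The two visualization steps described above (directional slope shading combined with a color-coded height ramp) can be sketched as follows; the synthetic DEM, the slope-to-brightness mapping, and the color stops are assumptions for illustration, not the actual SRTM processing chain.

      # Sketch of the two-step visualization described above: a shade image from
      # the topographic slope in the top-to-bottom (NW-SE) image direction,
      # blended with a green-to-white color ramp keyed to elevation.
      import numpy as np

      def shade_and_color(dem, pixel_size_m=90.0):
          # Slope along image rows: NW-facing slopes come out bright,
          # SE-facing slopes dark, as in the described layout.
          dz = np.gradient(dem, pixel_size_m, axis=0)
          shade = 0.5 - 0.5 * np.tanh(dz * 5.0)        # map slope to 0..1 brightness

          # Color ramp: green (low) -> yellow -> tan -> white (high).
          stops = np.array([0.0, 0.4, 0.7, 1.0])
          ramp = np.array([[0.2, 0.6, 0.2],            # green
                           [0.9, 0.9, 0.3],            # yellow
                           [0.8, 0.7, 0.5],            # tan
                           [1.0, 1.0, 1.0]])           # white
          h = (dem - dem.min()) / (np.ptp(dem) + 1e-9)
          rgb = np.stack([np.interp(h, stops, ramp[:, c]) for c in range(3)], axis=-1)

          return rgb * shade[..., None]                # modulate color by shading

      # Synthetic ridge-and-valley DEM standing in for SRTM elevations.
      y, x = np.mgrid[0:200, 0:300]
      dem = 1500.0 * np.exp(-((x - 150) / 80.0) ** 2) + 50.0 * np.sin(y / 7.0)
      image = shade_and_color(dem)
      print(image.shape, image.min(), image.max())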

  16. Misbehaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    Science.gov (United States)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

    Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly variable in space - a complicating factor for comparisons

  17. Numerical analysis of the stability of HTS power cable under fault current considering the gaps in the cable

    International Nuclear Information System (INIS)

    Fang, J.; Li, H.F.; Zhu, J.H.; Zhou, Z.N.; Li, Y.X.; Shen, Z.; Dong, D.L.; Yu, T.; Li, Z.M.; Qiu, M.

    2013-01-01

    Highlights: •The equivalent circuit equations and the heat balance equations were established. •The current distributions of the HTS cable under fault current were obtained. •The temperature curves of conductor layers under fault current were obtained. •The effect of the gap liquid nitrogen on the thermal characteristics was studied. -- Abstract: During the operation of a high temperature superconducting power cable in a real grid, the power cable can inevitably be impacted by large fault currents. The study of current distribution and thermal characteristics in the cable under fault current is the foundation for analyzing its stability. To analyze the operating conditions of a 110 kV/3 kA class superconducting cable under a fault current of 25 kA rms for 3 s, the equivalent circuit equations and heat balance equations were established. The current distribution curves and the temperature distribution curves were obtained. The liquid nitrogen present in the gaps of the HTS cable was taken into consideration, and its influence on the thermal characteristics was investigated. The analysis results can be used to estimate the security and stability of the superconducting cable.

  18. Numerical analysis of the stability of HTS power cable under fault current considering the gaps in the cable

    Energy Technology Data Exchange (ETDEWEB)

    Fang, J., E-mail: fangseer@sina.com [School of Electrical Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li, H.F. [School of Electrical Engineering, Beijing Jiaotong University, Beijing 100044 (China); Zhu, J.H.; Zhou, Z.N. [China Electric Power Research Institute, Beijing 100192 (China); Li, Y.X.; Shen, Z.; Dong, D.L.; Yu, T. [School of Electrical Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li, Z.M.; Qiu, M. [China Electric Power Research Institute, Beijing 100192 (China)

    2013-11-15

    Highlights: •The equivalent circuit equations and the heat balance equations were established. •The current distributions of the HTS cable under fault current were obtained. •The temperature curves of conductor layers under fault current were obtained. •The effect of the gap liquid nitrogen on the thermal characteristics was studied. -- Abstract: During the operation of a high temperature superconducting power cable in a real grid, the power cable can inevitably be impacted by large fault currents. The study of current distribution and thermal characteristics in the cable under fault current is the foundation for analyzing its stability. To analyze the operating conditions of a 110 kV/3 kA class superconducting cable under a fault current of 25 kA rms for 3 s, the equivalent circuit equations and heat balance equations were established. The current distribution curves and the temperature distribution curves were obtained. The liquid nitrogen present in the gaps of the HTS cable was taken into consideration, and its influence on the thermal characteristics was investigated. The analysis results can be used to estimate the security and stability of the superconducting cable.
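
    A highly simplified, lumped-parameter sketch of the coupled equivalent-circuit and heat-balance idea used in these two records: a fault current divides between parallel conductor layers in inverse proportion to their impedances, and each layer's temperature follows a simple heat balance against the liquid-nitrogen bath. All parameters are hypothetical, and the resistive (quench) transition of the superconductor is ignored, so this illustrates the solution structure rather than the published model.

      # Highly simplified coupled circuit/thermal time-stepping sketch for a
      # cable carrying a 3 s fault current. All parameters are hypothetical.
      import math

      dt = 0.01                 # time step (s)
      t_fault = 3.0             # fault duration (s)
      i_fault_rms = 25e3        # fault current (A rms)
      T_bath = 77.0             # liquid nitrogen temperature (K)

      layers = [  # per layer: resistance (ohm), inductance (H), heat capacity (J/K), cooling (W/K)
          {"name": "HTS layer",     "R": 2e-4, "L": 1.0e-6, "C": 800.0,  "hA": 40.0, "T": 77.0},
          {"name": "copper former", "R": 1e-3, "L": 1.2e-6, "C": 1500.0, "hA": 60.0, "T": 77.0},
      ]

      t = 0.0
      while t < t_fault:
          # Equivalent circuit: parallel branches share current inversely to impedance.
          omega = 2 * math.pi * 50.0
          admittances = [1.0 / math.hypot(l["R"], omega * l["L"]) for l in layers]
          total_y = sum(admittances)
          for l, y in zip(layers, admittances):
              i_layer = i_fault_rms * y / total_y
              q_joule = i_layer ** 2 * l["R"]                   # Joule heating (W)
              q_cool = l["hA"] * (l["T"] - T_bath)              # cooling to LN2 (W)
              l["T"] += dt * (q_joule - q_cool) / l["C"]        # heat balance
          t += dt

      for l in layers:
          print(f"{l['name']}: {l['T']:.1f} K after {t_fault:.0f} s fault")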

  19. NMR-MPar: A Fault-Tolerance Approach for Multi-Core and Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Vanessa Vargas

    2018-03-01

    Full Text Available Multi-core and many-core processors are a promising solution to achieve high performance while maintaining lower power consumption. However, the degree of miniaturization makes them more sensitive to soft errors. To improve system reliability, this work proposes a fault-tolerance approach based on redundancy and partitioning principles called N-Modular Redundancy and M-Partitions (NMR-MPar). By combining both principles, this approach allows multi-/many-core processors to perform critical functions in mixed-criticality systems. Benefiting from the capabilities of these devices, NMR-MPar creates different partitions that perform independent functions. For critical functions, it is proposed that N partitions with the same configuration participate in an N-modular redundancy system. In order to validate the approach, a case study is implemented on the KALRAY Multi-Purpose Processing Array (MPPA-256) many-core processor running two parallel benchmark applications. The traveling salesman problem and matrix multiplication applications were selected to test different device resources. The effectiveness of NMR-MPar is assessed by software-implemented fault injection. For evaluation purposes, it is considered that the system is intended to be used in avionics. Results show an improvement in application reliability of two orders of magnitude when implementing NMR-MPar on the system. Finally, this work opens the possibility of using massive parallelism for dependable applications in embedded systems.
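
    The N-modular-redundancy side of such a scheme reduces to running the same critical function on N partitions and voting on the results. The sketch below shows a majority voter with one replica corrupted by a simulated bit flip; the function, the fault-injection mechanism, and the replica count are illustrative assumptions and do not model the KALRAY MPPA-256 case study.

      # Minimal sketch of the N-modular-redundancy voting used by NMR-MPar-style
      # schemes: N partitions compute the same critical function independently
      # and a voter accepts the majority result. Results below are simulated.
      from collections import Counter

      def critical_function(x):
          return x * x + 1          # stand-in for the replicated critical workload

      def run_partition(x, bitflip=None):
          result = critical_function(x)
          if bitflip is not None:   # simulate a soft error corrupting one replica
              result ^= 1 << bitflip
          return result

      def nmr_vote(results):
          """Majority vote over N redundant results; raise if no majority exists."""
          value, count = Counter(results).most_common(1)[0]
          if count <= len(results) // 2:
              raise RuntimeError("no majority - uncorrectable error")
          return value

      N = 3
      results = [run_partition(7), run_partition(7, bitflip=4), run_partition(7)]
      print("replica results:", results)
      print("voted output:   ", nmr_vote(results))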

  20. Security Vulnerability Profiles of Mission Critical Software: Empirical Analysis of Security Related Bug Reports

    Science.gov (United States)

    Goseva-Popstojanova, Katerina; Tyo, Jacob

    2017-01-01

    While some prior research work exists on characteristics of software faults (i.e., bugs) and failures, very little work has been published on the analysis of software application vulnerabilities. This paper aims to contribute towards filling that gap by presenting an empirical investigation of application vulnerabilities. The results are based on data extracted from issue tracking systems of two NASA missions. These data were organized in three datasets: Ground mission IVV issues, Flight mission IVV issues, and Flight mission Developers issues. In each dataset, we identified security related software bugs and classified them into specific vulnerability classes. Then, we created the security vulnerability profiles, i.e., determined where and when the security vulnerabilities were introduced and what the dominant vulnerability classes were. Our main findings include: (1) In the IVV issues datasets, the majority of vulnerabilities were code related and were introduced in the Implementation phase. (2) For all datasets, around 90% of the vulnerabilities were located in two to four subsystems. (3) Out of 21 primary classes, five dominated: Exception Management, Memory Access, Other, Risky Values, and Unused Entities. Together, they contributed from 80% to 90% of the vulnerabilities in each dataset.
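
    As an illustration of the kind of triage such a study involves, the sketch below tags an issue report with vulnerability classes by keyword matching. The keyword lists and the sample issue text are hypothetical and are not the classification rules used in the paper.

      # Illustrative sketch of keyword-based triage that tags issue reports with
      # the vulnerability classes named above. Keywords and sample text are
      # hypothetical, not the study's actual classification scheme.
      VULN_CLASSES = {
          "Exception Management": ["unhandled exception", "uncaught", "error not checked"],
          "Memory Access":        ["buffer overflow", "null pointer", "out of bounds"],
          "Risky Values":         ["integer overflow", "unvalidated input", "off-by-one"],
          "Unused Entities":      ["dead code", "unused variable", "unreachable"],
      }

      def classify_issue(report_text):
          """Return the vulnerability classes whose keywords appear in the report."""
          text = report_text.lower()
          return [cls for cls, keywords in VULN_CLASSES.items()
                  if any(k in text for k in keywords)] or ["Other"]

      issue = "Telemetry parser crashes on out of bounds index; uncaught IndexError."
      print(classify_issue(issue))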