WorldWideScience

Sample records for fault location algorithm

  1. Modern optimization algorithms for fault location estimation in power systems

    Directory of Open Access Journals (Sweden)

    A. Sanad Ahmed

    2017-10-01

Full Text Available This paper presents a fault location estimation approach for two-terminal transmission lines using the Teaching Learning Based Optimization (TLBO) technique and the Harmony Search (HS) technique. Previous methods such as Genetic Algorithm (GA), Artificial Bee Colony (ABC), Artificial Neural Networks (ANN) and Cause & Effect (C&E) are also discussed, along with the advantages and disadvantages of each. Initial data for the proposed techniques are the post-fault measured voltages and currents from both ends, along with the line parameters. This paper deals with several types of faults: L-L-L, L-L-L-G, L-L-G and L-G. The model was simulated in Simulink, with the initial inputs passed from Simulink to MATLAB, where the objective function specifies the fault location with very high accuracy and precision within a very short time. Future work on the Differential Learning TLBO (DLTLBO) is discussed as well.
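As a rough illustration of how a TLBO-style search can drive a fault-location objective, here is a minimal, self-contained sketch (not the authors' code): the objective below is a hypothetical stand-in for the measurement-mismatch function that would be built from post-fault voltages and currents, with the true fault placed at 0.37 p.u. of the line length.

```python
import random

def tlbo_minimize(f, lo, hi, pop_size=20, iters=100, seed=1):
    """Minimal Teaching-Learning-Based Optimization for a 1-D objective."""
    rng = random.Random(seed)
    pop = [lo + (hi - lo) * rng.random() for _ in range(pop_size)]
    for _ in range(iters):
        mean = sum(pop) / pop_size
        teacher = min(pop, key=f)
        # Teacher phase: move learners toward the teacher, away from the mean.
        for i, x in enumerate(pop):
            tf = rng.choice((1, 2))  # teaching factor
            cand = min(max(x + rng.random() * (teacher - tf * mean), lo), hi)
            if f(cand) < f(x):
                pop[i] = cand
        # Learner phase: each learner moves relative to a random peer.
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            step = rng.random() * (x - pop[j] if f(x) < f(pop[j]) else pop[j] - x)
            cand = min(max(x + step, lo), hi)
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

# Toy objective: the mismatch is minimized at the true per-unit fault distance 0.37.
best = tlbo_minimize(lambda d: (d - 0.37) ** 2, 0.0, 1.0)
print(round(best, 3))
```

In the paper's setting the objective would compare measured and computed two-end quantities rather than this quadratic toy.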

  2. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Science.gov (United States)

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research. PMID:24701191
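The two-end synchronized algorithms surveyed here typically rest on the observation that both terminals see the same fault-point voltage, which yields a closed-form per-unit fault distance d = (V_S - V_R + Z·I_R) / (Z·(I_S + I_R)). A minimal sketch with invented phasor and impedance values (not taken from the paper):

```python
# Synthetic phasors consistent with a fault at d_true = 0.62 p.u. of the line.
Z = complex(8.0, 80.0)          # total series impedance of the line (ohms), assumed
d_true = 0.62
V_f = complex(35e3, 5e3)        # voltage at the fault point
I_s = complex(900.0, -400.0)    # sending-end current phasor
I_r = complex(650.0, -300.0)    # receiving-end current phasor
V_s = V_f + d_true * Z * I_s            # V_S - d*Z*I_S = V_F
V_r = V_f + (1 - d_true) * Z * I_r      # V_R - (1-d)*Z*I_R = V_F

# Eliminate V_F between the two equations and solve for d.
d_est = ((V_s - V_r + Z * I_r) / (Z * (I_s + I_r))).real
print(round(d_est, 3))  # recovers 0.62
```

The same elimination underlies most of the two-end methods discussed; unsynchronized variants must additionally estimate the synchronization angle.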

  3. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    A. H. Al-Mohammed

    2014-01-01

Full Text Available This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research.

  4. A new and accurate fault location algorithm for combined transmission lines using Adaptive Network-Based Fuzzy Inference System

    Energy Technology Data Exchange (ETDEWEB)

    Sadeh, Javad; Afradi, Hamid [Electrical Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad, P.O. Box: 91775-1111, Mashhad (Iran)

    2009-11-15

This paper presents a new and accurate algorithm for locating faults in a combined overhead transmission line with an underground power cable using an Adaptive Network-Based Fuzzy Inference System (ANFIS). The proposed method uses 10 ANFIS networks and consists of 3 stages: fault type classification, faulty section detection and exact fault location. In the first part, an ANFIS is used to determine the fault type, applying four inputs, i.e., the fundamental components of the three phase currents and the zero sequence current. Another ANFIS network is used to detect the faulty section, i.e., whether the fault is on the overhead line or on the underground cable. The other eight ANFIS networks are utilized to pinpoint the faults (two for each fault type). Four inputs, i.e., the dc component of the current, the fundamental-frequency components of the voltage and current, and the angle between them, are used to train the neuro-fuzzy inference systems in order to accurately locate the faults on each part of the combined line. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances. Simulation results confirm that the proposed method can be used as an efficient means for accurate fault location on combined transmission lines. (author)

  5. Comparing Different Fault Identification Algorithms in Distributed Power System

    Science.gov (United States)

    Alkaabi, Salim

A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increased, distributed power generation was introduced to the power system. Faults may occur in the power system at any time and in different locations. These faults cause huge damage to the system, as they might lead to full failure of the power system. Using distributed generation in the power system makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with different amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms have been tested and compared while different amounts of power are injected from distributed generators. The algorithms were tested on the IEEE 34 node test feeder using MATLAB, and the results were compared to find when these algorithms might fail and how reliable these methods are.

  6. Fault location algorithms for optical networks

    OpenAIRE

    Mas Machuca, Carmen; Thiran, Patrick

    2005-01-01

    Today, there is no doubt that optical networks are the solution to the explosion of Internet traffic that two decades ago we only dreamed about. They offer high capacity with the use of Wavelength Division Multiplexing (WDM) techniques among others. However, this increase of available capacity can be betrayed by the high quantity of information that can be lost when a failure occurs because not only one, but several channels will then be interrupted. Efficient fault detection and location mec...

  7. A Hybrid Algorithm for Fault Locating in Looped Microgrids

    DEFF Research Database (Denmark)

    Beheshtaein, Siavash; Savaghebi, Mehdi; Quintero, Juan Carlos Vasquez

    2016-01-01

Protection is the last obstacle to realizing the idea of the microgrid. Some of the main challenges in microgrid protection include topology changes of the microgrid, weak-infeed faults, bidirectional power flow effects, blinding of the protection, sympathetic tripping, high impedance faults, and low voltage ride through (LVRT). Besides these challenges, it is desired to eliminate the relays for distribution lines and locate faults based on distributed generations' (DGs) voltage or current. On the other hand, increasing the number of DGs and lines would result in a high computation burden and degradation...

  8. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

Full Text Available This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for fault location in non-intrusive single-line-to-ground fault (SLGF) detection of low voltage distribution systems. The input features of the AI algorithms are extracted using a statistical moment transformation to reduce the dimensions of the power signature inputs measured by non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGF using the Electromagnetic Transient Program (EMTP) in a test system. To enhance identification accuracy, these features are normalized and then given to the AI algorithms presented and evaluated in this paper. Different AI techniques are then compared to determine which identification algorithms are suitable for diagnosing the SLGF for various power signatures in a NIFM system. The simulation results show that the proposed method is effective and can identify fault locations using non-intrusive monitoring techniques for low voltage distribution systems.
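The statistical moment transformation described above can be illustrated with a minimal sketch. The sampled waveform and the choice of the first four standardized moments as features are assumptions for illustration; the paper's exact feature set may differ.

```python
import math

def moment_features(x):
    """First four statistical moments of a sampled signal: mean, variance,
    skewness, and (non-excess) kurtosis."""
    n = len(x)
    mean = sum(v for v in x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2)
    return mean, var, skew, kurt

# One cycle of a sinusoidal current with a small dc offset (hypothetical signature).
sig = [2.0 + 10.0 * math.sin(2 * math.pi * i / 64) for i in range(64)]
m, v, s, k = moment_features(sig)
print(round(m, 3), round(v, 3), round(k, 3))
```

Four numbers per signature replace the raw 64-sample window, which is the dimension reduction the abstract refers to.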

  9. Single-phased Fault Location on Transmission Lines Using Unsynchronized Voltages

    Directory of Open Access Journals (Sweden)

    ISTRATE, M.

    2009-10-01

Full Text Available Increased accuracy in fault detection and location simplifies maintenance, which motivates the development of new methods for precise fault location estimation. In the field literature, many methods for fault location using voltage and current measurements at one or both terminals of power grids' lines are presented. The double-end synchronized data algorithms are very precise, but the current transformers can limit the accuracy of these estimations. The paper presents an algorithm to estimate the location of single-phased faults which uses only voltage measurements at both terminals of the transmission line, eliminating the error due to current transformers and without imposing the restriction of perfect data synchronization. In such conditions, the algorithm can be used with the existing equipment of most power grids, the installation of phasor measurement units with GPS-synchronized timers not being compulsory. Only the positive sequence of line parameters and sources is used, thus eliminating the uncertainty in zero sequence parameter estimation. The algorithm is tested using the results of EMTP-ATP simulations, after validation of the ATP models on the basis of results recorded in a real power grid.

  10. Application of ant colony optimization in NPP classification fault location

    International Nuclear Information System (INIS)

    Xie Chunli; Liu Yongkuo; Xia Hong

    2009-01-01

Nuclear Power Plant is a highly complex structural system with high safety requirements. Fault location is particularly important for enhancing its safety. Ant Colony Optimization is a new type of optimization algorithm, which is used in this paper for fault location and classification in nuclear power plants. Taking the main coolant system of the first loop as the study object and using VB6.0 programming technology, the NPP fault location system is designed and tested against related data in the literature. Test results show that ant colony optimization can be used for accurate fault classification and location in nuclear power plants. (authors)

  11. Fault location in underground cables using ANFIS nets and discrete wavelet transform

    Directory of Open Access Journals (Sweden)

    Shimaa Barakat

    2014-12-01

Full Text Available This paper presents an accurate algorithm for locating faults in a medium voltage underground power cable using a combination of an Adaptive Network-Based Fuzzy Inference System (ANFIS) and the discrete wavelet transform (DWT). The proposed method uses five ANFIS networks and consists of 2 stages: fault type classification and exact fault location. In the first part, an ANFIS is used to determine the fault type, applying four inputs, i.e., the maximum detail energy of the three phase and zero sequence currents. The other four ANFIS networks are utilized to pinpoint the faults (one for each fault type). The same four inputs are used to train the neuro-fuzzy inference systems in order to accurately locate the faults on the cable. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances.

  12. Fault Identification Algorithm Based on Zone-Division Wide Area Protection System

    Directory of Open Access Journals (Sweden)

    Xiaojun Liu

    2014-04-01

Full Text Available As the power grid becomes larger and more complicated, wide-area protection systems in practical engineering applications are more and more restricted by the communication level. Based on the concept of the limitedness of wide-area protection systems, the grid with its complex structure is divided into ordered zones in this paper, and fault identification and protection action are executed in each divided zone to reduce the pressure on the communication system. Within a protection zone, a new wide-area protection algorithm based on the positive sequence fault component directional comparison principle is proposed. Special associated intelligent electronic device (IED) zones which contain buses and transmission lines are created according to the installation locations of the IEDs. When a fault occurs, with the help of fault information collected and shared from associated zones and the fault discrimination principle defined in this paper, the IEDs can identify the fault location and remove the fault according to the predetermined action strategy. The algorithm is not affected by load changes or transition resistance and also has good adaptability in open-phase operation of the power system. It can be used as a main protection, and it can also serve the back-up protection function. The results of case studies show that the division method of the wide-area protection system and the proposed algorithm are effective.
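The positive-sequence fault components used by the directional comparison principle come from the standard symmetrical-component transform. A minimal sketch of that extraction step only, with invented phasor values (not the paper's algorithm):

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # rotation operator a = 1 at 120 degrees

def positive_sequence(ph_a, ph_b, ph_c):
    """Positive-sequence component of a three-phase phasor set:
    I1 = (Ia + a*Ib + a^2*Ic) / 3."""
    return (ph_a + A * ph_b + A * A * ph_c) / 3

# A balanced three-phase set has a positive-sequence component equal to phase a.
Ia = complex(100.0, 0.0)
Ib = Ia * A * A   # lags phase a by 120 degrees
Ic = Ia * A       # leads phase a by 120 degrees
I1 = positive_sequence(Ia, Ib, Ic)
print(round(abs(I1), 1))
```

In the paper's scheme, each IED would compare the phase angle of such positive-sequence fault components (post-fault minus pre-fault phasors) to decide fault direction.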

  13. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines.

    Science.gov (United States)

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

Many schemes are reported for shunt fault location estimation, but fault location estimation of series or open conductor faults has not been dealt with so far. The existing numerical relays only detect the open conductor (series) fault and give an indication of the faulty phase(s), but they are unable to locate the series fault. The repair crew needs to patrol the complete line to find the location of a series fault. In this paper, fuzzy based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series-shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals of both ends of the line have been used as input to the proposed fuzzy based fault location scheme. The percentage error in locating series faults is within 1%, and within 5% for shunt faults, for all tested fault cases. The percentage of error in location estimation is validated using the Chi square test at both the 1% and 5% levels of significance.

  14. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi

    2018-02-12

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performances of the inspected PV system is designed. Residuals, which are the difference between the measured and estimated output data, are used as a fault indicator. Indeed, residuals are used as the input for the Multivariate CUmulative SUM (MCUSUM) algorithm to detect potential faults. We evaluated the proposed method by using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
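The CUSUM idea behind this detector can be sketched simply. The paper uses a multivariate CUSUM (MCUSUM); the toy version below is univariate, with invented allowance and threshold parameters, purely to show how a sustained shift in the model residuals accumulates into an alarm.

```python
def cusum(residuals, k=0.5, h=5.0):
    """One-sided CUSUM over a residual sequence.

    k is the allowance (drift tolerated without accumulating),
    h is the decision threshold. Returns the index of the first
    alarm, or -1 if no alarm is raised."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)   # accumulate drift above the allowance
        if s > h:                 # alarm once the cumulative sum crosses h
            return i
    return -1

# Residuals near zero in normal operation, then a sustained shift (fault).
res = [0.1, -0.2, 0.05, 0.0, -0.1] + [2.0] * 10
print(cusum(res))
```

Because the sum must build up past h, isolated noisy samples do not trip the detector, while a persistent model mismatch does; that is the property the abstract relies on.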

  15. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  16. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

Full Text Available Accurate fault location in an Electric Power Distribution System (EPDS) is important for maintaining system reliability. Several methods have been proposed in the past. However, these methods are either inefficient or depend on the fault type (fault classification), because they require an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based Fault Location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line recognized in the literature.

  17. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Science.gov (United States)

    Kodali, Anuradha

In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or can be temporarily unavailable at any time for a given requirement of demand points. 
These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a

  18. A method for detection and location of high resistance earth faults

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S; Lehtonen, M [VTT Energy, Espoo (Finland); Antila, E [ABB Transmit Oy (Finland)

    1998-08-01

In the first part of this presentation, the theory of earth faults in unearthed and compensated power systems is briefly presented. The main factors affecting high resistance fault detection are outlined, and common practices for earth fault protection in present systems are summarized. The algorithms of the new method for high resistance fault detection and location are then presented. These are based on the change of neutral voltage and zero sequence currents, measured at the high voltage / medium voltage substation and also at the distribution line locations. The performance of the method is analyzed and the possible error sources discussed, among them, for instance, switching actions, thunderstorms and heavy snowfall. The feasibility of the method is then verified by an analysis based both on simulated data, derived using an EMTP-ATP simulator, and on real system data recorded during field tests at three substations. For the error source analysis, some real case data recorded during natural power system events are also used.

  19. Fault Classification and Location in Transmission Lines Using Traveling Waves Modal Components and Continuous Wavelet Transform (CWT)

    Directory of Open Access Journals (Sweden)

    Farhad Namdari

    2016-06-01

Full Text Available Accurate fault classification and localization are the basis of protection for transmission systems. This paper presents a new method for classifying and locating faults using travelling waves and modal analysis. In the proposed method, the characteristics of different faults are investigated using the Clarke transformation and the initial current travelling wave; then, appropriate indices are introduced to identify the different types of faults. The continuous wavelet transform (CWT) is employed to extract information from the current and voltage travelling waves. The fault location and classification algorithm is designed according to the wavelet transform coefficients of the current and voltage modal components. The performance of the proposed method is tested for different fault conditions (different fault distances, different fault resistances, and different fault inception angles) using PSCAD and MATLAB, with satisfactory results.
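The Clarke transformation mentioned above maps instantaneous phase quantities into modal (alpha, beta, zero) components; travelling-wave methods then analyze the aerial modes. A minimal sketch of the standard transform (not the paper's code; the sample values are invented):

```python
import math

def clarke(a, b, c):
    """Clarke (alpha-beta-zero) modal components of three phase values."""
    alpha = (2 * a - b - c) / 3
    beta = (b - c) / math.sqrt(3)
    zero = (a + b + c) / 3
    return alpha, beta, zero

# For a pure zero-sequence set (a = b = c), both aerial modes vanish,
# which is how the transform separates ground-mode content.
al, be, ze = clarke(5.0, 5.0, 5.0)
print(al, be, ze)
```

The fault-type indices in the paper are then built from how the initial travelling wave distributes energy across these modal channels.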

  20. k-means algorithm and mixture distributions for locating faults in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Mora-Florez, J. [The Technological University of Pereira, La Julita, Ciudad Universitaria, Pereira, Risaralda (Colombia); Cormane-Angarita, J.; Ordonez-Plata, G. [The Industrial University of Santander (Colombia)

    2009-05-15

Enhancement of power distribution system reliability requires a considerable investment in studies and equipment; however, not all utilities can afford the time and money involved. Therefore, any strategy that improves reliability should be reflected directly in the reduction of the interruption duration and frequency indexes (SAIDI and SAIFI). In this paper, an alternative solution to the problem of power service continuity associated with fault location is presented. A methodology of a statistical nature based on finite mixtures is proposed. A statistical model is obtained from the magnitude of the voltage sag registered during a fault event, along with the network parameters and topology. The objective is to offer an economical, easy-to-implement alternative for developing strategies to improve reliability by reducing restoration times in power distribution systems. In an application example on a power distribution system, the faulted zones were identified with low error rates. (author)
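Although the paper builds a finite-mixture model, the k-means step in its title can be sketched simply: cluster the recorded sag magnitudes so that each cluster corresponds to a candidate faulted zone. A toy 1-D version with invented data (not the authors' implementation):

```python
def kmeans_1d(xs, k, iters=50):
    """Plain 1-D k-means (Lloyd's algorithm); returns sorted cluster centers."""
    # Seed centers with evenly spaced sorted samples.
    centers = sorted(xs)[:: max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            # Assign each sample to its nearest center.
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        # Recompute each center as the mean of its assigned samples.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Sag magnitudes (p.u.) recorded for faults in two different network zones.
sags = [0.20, 0.22, 0.25, 0.21, 0.70, 0.72, 0.68, 0.71]
print([round(c, 2) for c in kmeans_1d(sags, 2)])
```

A new fault's sag magnitude would then be matched to the nearest center to identify the likely faulted zone; the paper's mixture model does this probabilistically instead.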

  1. Smart intimation and location of faults in distribution system

    Science.gov (United States)

    Hari Krishna, K.; Srinivasa Rao, B.

    2018-04-01

Location of faults in the distribution system is one of the most complicated problems that we are facing today. Identifying the fault location and severity within a short time is required to provide a continuous power supply, but fault identification and conveying that information to the operator are the biggest challenges in the distribution network, under a variety of conditions. This paper proposes a fault location method for the distribution system based on an Arduino Nano and a GSM module with a flame sensor. The main idea is to locate the fault in the distribution transformer by sensing the arc coming from the fuse element. Well-operated transmission and distribution systems play a key role in an uninterrupted power supply; whenever a fault occurs in the distribution system, the time taken to locate and eliminate it has to be reduced. The proposed design was achieved with a flame sensor and a GSM module. Under faulty conditions, the system automatically sends an alert message to the operator in the distribution system about the abnormal conditions near the transformer, the site code, and its exact location for possible power restoration.

  2. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Science.gov (United States)

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of the events and have better accuracy of localization compared with other algorithms. PMID:22163972

  3. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Directory of Open Access Journals (Sweden)

    Jian Wan

    2011-06-01

Full Text Available This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of the events and have better accuracy of localization compared with other algorithms.
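The trust-index update at the heart of TISNAP can be sketched roughly as follows. The function name, the fixed step `delta`, and the [0, 1] clamping are assumptions for illustration, not taken from the paper: nodes whose binary report matches the fused decision gain trust, and nodes that disagree lose it.

```python
def update_trust(trust, reports, truth, delta=0.1):
    """Raise the trust index of nodes whose binary report matched the
    decided ground truth for the cycle, lower it otherwise (clamped to [0, 1])."""
    out = {}
    for node, said in reports.items():
        t = trust[node] + (delta if said == truth else -delta)
        out[node] = min(1.0, max(0.0, t))
    return out

trust = {"n1": 0.5, "n2": 0.5, "n3": 0.5}
reports = {"n1": 1, "n2": 1, "n3": 0}   # n3 disagrees with the fused decision
trust = update_trust(trust, reports, truth=1)
print(trust["n1"], trust["n3"])
```

Repeating this every reporting cycle makes persistently faulty nodes' trust decay toward zero, which is what lets the localization step discount their alarms.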

  4. Fault Locating, Prediction and Protection (FLPPS)

    Energy Technology Data Exchange (ETDEWEB)

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past, most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently, with deregulation and competition, that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned in three phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

  5. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms that achieve fault-tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user interface that allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary that controls the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.

  6. Accurate fault location algorithm on power transmission lines with use of two-end unsynchronized measurements

    Directory of Open Access Journals (Sweden)

    Mohamed Dine

    2012-01-01

Full Text Available This paper presents a new approach to fault location on power transmission lines. This approach uses two-end unsynchronised measurements of the line and benefits from the advantages of digital technology and numerical relaying, which are available today and can easily be applied for off-line analysis. The approach modifies the apparent-impedance method using a very simple first-order formula. The new method is independent of fault resistance, source impedances and pre-fault currents. In addition, the volume of data communicated between relays is small enough to be transmitted easily over a digital protection channel. The proposed approach is tested via digital simulation using MATLAB, and the test results corroborate the superior performance of the proposed approach.

  7. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the fault signals recorded in the relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly represented, so that the neural network does not face any issues in identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters...

  8. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy, Helsinki (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1996-12-31

In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First, the distribution data management systems and their interface with the substation telecontrol (SCADA) systems are studied. Then the integration of the substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  9. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1998-08-01

In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First, the distribution data management systems and their interface with the substation telecontrol (SCADA) systems are studied. Then the integration of the substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  10. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy, Helsinki (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1997-12-31

In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First, the distribution data management systems and their interface with the substation telecontrol (SCADA) systems are studied. Then the integration of the substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  11. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach to algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided a comprehensive component database is developed. (author)

  12. A New Fault Diagnosis Algorithm for PMSG Wind Turbine Power Converters under Variable Wind Speed Conditions

    Directory of Open Access Journals (Sweden)

    Yingning Qiu

    2016-07-01

Full Text Available Although Permanent Magnet Synchronous Generator (PMSG) wind turbines (WTs) mitigate gearbox impacts, they require high reliability of generators and converters. Statistical analysis shows that the failure rates of direct-drive PMSG wind turbines' generators and inverters are high. Intelligent fault diagnosis algorithms to detect inverter faults are a prerequisite for condition monitoring systems aimed at improving wind turbines' reliability and availability. The influence of random wind speed and diversified control strategies poses challenges for developing intelligent fault diagnosis algorithms for converters. This paper studies the open-circuit fault features of wind turbine converters under variable wind speed conditions through systematic simulation and experiment. A new fault diagnosis algorithm named Wind Speed Based Normalized Current Trajectory is proposed and used to accurately detect and locate the faulty IGBT in the circuit arms. It is compared to the direct current monitoring and current vector trajectory pattern approaches. The results show that the proposed method has advantages in fault diagnosis accuracy and has superior anti-noise capability under variable wind speed conditions. The impact of the control strategy is also identified. Experimental results demonstrate its applicability to practical WT condition monitoring systems, which are used to improve wind turbine reliability and reduce maintenance cost.
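
A hedged sketch of the idea behind normalized-current open-circuit diagnosis (this is a generic mean-current test, not the paper's exact Wind Speed Based Normalized Current Trajectory method; the function name, threshold, and toy waveforms are all illustrative): each phase current is normalized by its own peak so the test tolerates wind-speed-driven load changes, and a missing half-wave shifts the mean away from zero, identifying which switch in the arm is open.

```python
import math

def detect_open_igbt(i_a, i_b, i_c, threshold=0.1):
    """Classify each phase from one cycle of sampled current.  Each phase
    is normalized by its own peak so the test tolerates load changes; a
    missing positive half-wave (upper switch open) drives the mean
    negative, a missing negative half-wave drives it positive."""
    result = {}
    for name, phase in (("a", i_a), ("b", i_b), ("c", i_c)):
        peak = max(abs(x) for x in phase) or 1.0
        mean = sum(x / peak for x in phase) / len(phase)
        if mean < -threshold:
            result[name] = "upper IGBT open"
        elif mean > threshold:
            result[name] = "lower IGBT open"
        else:
            result[name] = "healthy"
    return result
```

On a clipped sine wave (positive half-wave removed) the mean drops to roughly -1/pi of the peak, well past the illustrative 0.1 threshold, while a healthy sinusoid averages to zero.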

  13. New algorithm to detect modules in a fault tree for a PSA

    International Nuclear Information System (INIS)

    Jung, Woo Sik

    2015-01-01

A module or independent subtree is a part of a fault tree whose child gates or basic events are not repeated in the remaining part of the fault tree. Modules are employed to reduce the computational cost of fault tree quantification. This paper presents a new linear-time algorithm to detect the modules of large fault trees. The size of cut sets can be substantially reduced by replacing independent subtrees in a fault tree with super-components. Chatterjee and Birnbaum developed the properties of modules and demonstrated their use in fault tree analysis. Locks expanded the concept of modules to non-coherent fault trees. Independent subtrees used to be identified manually while coding a fault tree for computer analysis; nowadays, they are identified automatically by the fault tree solver. The Dutuit and Rauzy (DR) algorithm for detecting the modules of a coherent or non-coherent fault tree was proposed in 1996, and it is well known to detect modules quickly since it is a linear-time algorithm. The new algorithm minimizes computational memory and quickly detects modules. Furthermore, it can easily be implemented in industry fault tree solvers that are based on traditional Boolean algebra, binary decision diagrams (BDDs), or zero-suppressed BDDs. The new algorithm employs only two scalar variables, which hold volatile information. After finishing the traversal and module detection of each node, the volatile information is destroyed. Thus, the new algorithm does not require any additional computational memory or operations. It is recommended that this method be implemented in fault tree solvers for efficient probabilistic safety assessment (PSA) of nuclear power plants.
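
The traversal-based module test described above can be sketched with a depth-first search that timestamps every visit: a gate is a module exactly when every visit to every one of its descendants falls strictly inside the gate's own first and last visit. This is a sketch in the spirit of the Dutuit-Rauzy algorithm, not the paper's exact two-variable implementation; all names are illustrative.

```python
from functools import lru_cache

def find_modules(children, root):
    """Detect modules of a fault tree given as a DAG.

    children : dict mapping each gate to a list of its inputs
               (sub-gates or basic events); basic events have no entry.
    Returns the set of gates that are modules, i.e. gates none of whose
    descendants are referenced from outside the gate's subtree.
    """
    first, last = {}, {}
    clock = [0]

    def dfs(node):
        clock[0] += 1
        if node not in first:            # descend only on the first visit
            first[node] = clock[0]
            for child in children.get(node, ()):
                dfs(child)
            clock[0] += 1
        last[node] = clock[0]            # updated on every encounter

    dfs(root)

    @lru_cache(maxsize=None)
    def span(node):
        """Earliest and latest visit time anywhere in node's subtree."""
        lo, hi = first[node], last[node]
        for child in children.get(node, ()):
            clo, chi = span(child)
            lo, hi = min(lo, clo), max(hi, chi)
        return lo, hi

    modules = set()
    for gate in children:
        lo = min(span(c)[0] for c in children[gate])
        hi = max(span(c)[1] for c in children[gate])
        if first[gate] < lo and hi < last[gate]:
            modules.add(gate)
    return modules
```

For example, with TOP = AND(G1, G2, G3), G1 = OR(A, B), G2 = OR(B, C), G3 = OR(D, E), the shared event B disqualifies G1 and G2, leaving TOP and G3 as modules.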

  14. New algorithm to detect modules in a fault tree for a PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong University, Seoul (Korea, Republic of)

    2015-05-15

A module or independent subtree is a part of a fault tree whose child gates or basic events are not repeated in the remaining part of the fault tree. Modules are employed to reduce the computational cost of fault tree quantification. This paper presents a new linear-time algorithm to detect the modules of large fault trees. The size of cut sets can be substantially reduced by replacing independent subtrees in a fault tree with super-components. Chatterjee and Birnbaum developed the properties of modules and demonstrated their use in fault tree analysis. Locks expanded the concept of modules to non-coherent fault trees. Independent subtrees used to be identified manually while coding a fault tree for computer analysis; nowadays, they are identified automatically by the fault tree solver. The Dutuit and Rauzy (DR) algorithm for detecting the modules of a coherent or non-coherent fault tree was proposed in 1996, and it is well known to detect modules quickly since it is a linear-time algorithm. The new algorithm minimizes computational memory and quickly detects modules. Furthermore, it can easily be implemented in industry fault tree solvers that are based on traditional Boolean algebra, binary decision diagrams (BDDs), or zero-suppressed BDDs. The new algorithm employs only two scalar variables, which hold volatile information. After finishing the traversal and module detection of each node, the volatile information is destroyed. Thus, the new algorithm does not require any additional computational memory or operations. It is recommended that this method be implemented in fault tree solvers for efficient probabilistic safety assessment (PSA) of nuclear power plants.

  15. Fault locator of an allyl chloride plant

    Directory of Open Access Journals (Sweden)

    Savković-Stevanović Jelenka B.

    2004-01-01

Full Text Available Process safety analysis, which includes qualitative fault event identification, the relative frequency and event probability functions, as well as consequence analysis, was performed on an allyl chloride plant. An event tree for fault diagnosis and cognitive reliability analysis, as well as a troubleshooting system, were developed. Fuzzy inductive reasoning illustrated its advantages compared to crisp inductive reasoning. A qualitative model forecast the future behavior of the system in the case of accident detection and then compared it with the actual measured data. A cognitive model, including qualitative and quantitative information via the fuzzy logic of the incident scenario, was derived as a fault locator for the allyl chloride plant. The obtained results showed the successful application of cognitive dispersion modeling to process safety analysis. The fuzzy inductive reasoner showed good performance in discriminating between different types of malfunctions. This fault locator allowed risk analysis and the construction of a fault-tolerant system. This study is the first report in the literature of the cognitive reliability analysis method.

  16. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

Full Text Available With rising living standards and increasingly complex environments, there is an urgent need for positioning technology that can adapt to new, complex situations. In recent years, RFID technology has found a wide range of applications in everyday life and production, such as logistics tracking, car alarms, and security. Using RFID technology for localization is a new research direction pursued by various research institutions and scholars. RFID positioning offers system stability, small error, and low cost, so its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID positioning methods are introduced; secondly, a higher-accuracy network location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better RFID positioning technology in the future.
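
The LANDMARC idea mentioned in the record can be illustrated with a short sketch: the target tag's RSSI vector (one reading per reader) is compared with those of fixed reference tags, and the estimated position is a weighted average of the k nearest reference tags. This is a simplified illustration; the reader geometry, the toy RSSI model in the test, and the 1/E^2 weighting follow the commonly cited LANDMARC formulation, not this particular article.

```python
import math

def landmarc_locate(target_rss, ref_tags, k=4):
    """Estimate a tag position from the k reference tags whose RSSI
    vectors are closest (Euclidean distance) to the target's, weighting
    each neighbour by 1 / E^2 (E = RSSI-space distance).

    target_rss : list of RSSI readings, one per reader.
    ref_tags   : list of ((x, y), rss_vector) for the reference tags.
    """
    ranked = sorted(ref_tags,
                    key=lambda ref: math.dist(target_rss, ref[1]))[:k]
    weights = [1.0 / (math.dist(target_rss, rss) ** 2 + 1e-9)
               for _, rss in ranked]
    total = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, ranked)) / total
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, ranked)) / total
    return x, y
```

A target whose RSSI vector coincides with one reference tag's is pulled almost entirely onto that tag's position, since its weight dominates the average.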

  17. Automatic identification of otological drilling faults: an intelligent recognition algorithm.

    Science.gov (United States)

    Cao, Tianyang; Li, Xisheng; Gao, Zhiqiang; Feng, Guodong; Shen, Peng

    2010-06-01

This article presents an intelligent recognition algorithm that can recognize the milling states of an otological drill by fusing multi-sensor information. An otological drill was modified by the addition of sensors. The algorithm was designed according to the features of the milling process and is composed of a characteristic curve, an adaptive filter and a rule base. The characteristic curve can weaken the impact of the unstable normal milling process and preserve the features of drilling faults. The adaptive filter is capable of suppressing interference in the characteristic curve by fusing multi-sensor information. The rule base identifies drilling faults from the filtered data. The experiments were repeated on fresh porcine scapulas, covering normal milling and two drilling faults. The algorithm achieved high identification rates. This study shows that the intelligent recognition algorithm can identify drilling faults under interference conditions. (c) 2010 John Wiley & Sons, Ltd.

  18. Hydraulic Pump Fault Diagnosis Control Research Based on PARD-BP Algorithm

    Directory of Open Access Journals (Sweden)

    LV Dongmei

    2014-12-01

Full Text Available Combining the working principle and failure mechanism of the RZU2000HM hydraulic press, and drawing on its collected fault cases, the working principle of the oil pressure and the fault phenomena of the hydraulic power unit's swash-plate axial piston pump were studied with some emphasis, since its faults directly affect the dynamic performance of the oil pressure and flow. In order to make the hydraulic power unit work reliably, the PARD-BP (Pruning Algorithm based on Random Degree) neural network fault algorithm was introduced, with the swash-plate axial piston pump's vibration fault sample data regarded as input and the fault mode matrix regarded as target output, so that the PARD-BP algorithm could be trained. In the end, the vibration results were verified by a vibration modal test, which showed that the biggest upward peaks of the vacuum pump in the X-, Y- and Z-directions fell by 30.49 %, 21.13 % and 18.73 % respectively, verifying that the PARD-BP algorithm can be used for online fault detection and diagnosis of the hydraulic pump.

  19. Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains

    Directory of Open Access Journals (Sweden)

    Volker Turau

    2018-05-01

Full Text Available The analysis of self-stabilizing algorithms is often limited to the worst-case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault; this event has a much higher probability than multiple concurrent faults. Therefore, the worst-case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms, based on Markov chains in combination with lumping. To illustrate their applicability, the techniques are applied to a new self-stabilizing coloring algorithm.
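
The mean recovery time the paper bounds is, in Markov-chain terms, a mean hitting time of the set of legitimate states: if Q holds the transition probabilities among the (possibly lumped) non-legitimate states, the vector of expected recovery times t solves (I - Q) t = 1. A minimal sketch of that computation, assuming a small dense Q (this illustrates the underlying linear algebra only, not the paper's lumping construction):

```python
def mean_recovery_times(Q):
    """Expected steps to reach the legitimate (absorbing) set from each
    transient state: solve (I - Q) t = 1, where Q is the row-stochastic
    restriction of the chain to the transient states."""
    n = len(Q)
    # augmented matrix [I - Q | 1]
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):                      # Gauss-Jordan elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for row in range(n):
            if row != col and A[row][col] != 0.0:
                f = A[row][col] / A[col][col]
                A[row] = [a - f * b for a, b in zip(A[row], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]
```

For instance, a single fault state that recovers with probability 0.5 per step (Q = [[0.5]]) has a mean recovery time of 2 steps.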

  20. Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meenakshi Panda

    2014-01-01

Full Text Available A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes present in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common Byzantine faults such as stuck at zero, stuck at one, and random data. In the proposed approach, each sensor node gathers the observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node detects the presence of a faulty sensor node, it compares its observed data with the data of its neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensors' data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency and message complexity.
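
The neighbour-comparison step can be sketched as follows: a node tentatively labels itself "likely good" when its reading agrees, within a tolerance theta, with at least half of its neighbours. This is a simplification of the paper's multi-stage scheme (mean test, comparison, then diffusion); the labels "LG"/"LF" and the threshold are illustrative.

```python
def probable_fault_status(readings, neighbors, theta=2.0):
    """One round of the neighbour-comparison test.

    readings  : dict node -> sensed value.
    neighbors : dict node -> list of neighbouring nodes.
    A node agreeing with at least half of its neighbours is tentatively
    'LG' (likely good); otherwise it is 'LF' (likely faulty)."""
    status = {}
    for node, nbrs in neighbors.items():
        agree = sum(1 for m in nbrs
                    if abs(readings[node] - readings[m]) <= theta)
        status[node] = "LG" if 2 * agree >= len(nbrs) else "LF"
    return status
```

In a small clique where three nodes read about 10 and one reads 50, only the outlier fails the agreement test.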

  1. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    Science.gov (United States)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

In this paper, a technique for detecting faults in underground distribution systems is presented. A Discrete Wavelet Transform (DWT) based on traveling waves is employed in order to detect the high-frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
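
The two ingredients of such a scheme can be sketched briefly: a level-1 DWT localizes the sharp wavefront in the measured signal (Haar here for simplicity; the paper's choice of mother wavelet may differ), and in a single-ended setup successive wavefront arrivals at the measuring bus are separated by the round-trip travel time 2d/v between the bus and the fault. The velocity value below is illustrative.

```python
import math

def first_arrival_index(signal):
    """Level-1 Haar DWT detail coefficients; the coefficient of largest
    magnitude marks the sharpest transient (a single-level sketch; real
    schemes inspect several decomposition levels)."""
    details = [(signal[i + 1] - signal[i]) / math.sqrt(2)
               for i in range(0, len(signal) - 1, 2)]
    k = max(range(len(details)), key=lambda i: abs(details[i]))
    return 2 * k  # sample index in the original signal

def fault_distance(t1, t2, v=1.8e8):
    """Single-ended traveling-wave estimate: arrivals t1 and t2 (s) of
    successive wavefronts are separated by the round trip 2d/v, so
    d = v * (t2 - t1) / 2 (v in m/s, d in metres)."""
    return v * (t2 - t1) / 2.0
```

For example, with v = 1.8e8 m/s, two arrivals 100 microseconds apart place the fault 9 km from the sending end.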

  2. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  3. Development of an accurate transmission line fault locator using the global positioning system satellites

    Science.gov (United States)

    Lee, Harry

    1994-01-01

A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling wave detectors located at key substations which detect and time-tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element in the success of this system; the fault locator derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
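
The two-end timing principle behind such a locator can be written down directly: with GPS-synchronised clocks, the wavefront from a fault at distance d along a line of length L reaches terminal A at d/v and terminal B at (L - d)/v, so d = (L + v(t_A - t_B)) / 2. A minimal sketch (the propagation velocity below is illustrative):

```python
def fault_distance_from_a(t_a, t_b, line_length, v=2.9e8):
    """Two-end synchronized traveling-wave location.

    t_a, t_b    : GPS time tags (s) of the first wavefront at terminals
                  A and B.
    line_length : line length in metres.
    v           : wave propagation velocity in m/s.
    Returns the fault distance from terminal A in metres."""
    return (line_length + v * (t_a - t_b)) / 2.0
```

On a 120 km line, a fault 40 km from A makes the wavefront reach A 40 km/v earlier than it reaches B 80 km away, and the formula recovers the 40 km distance.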

  4. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    Science.gov (United States)

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  5. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2017-09-01

Full Text Available The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  6. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

Full Text Available In practical applications, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method first uses the criterion of maximum energy-to-Shannon-entropy ratio to select the appropriate wavelet base for signal analysis. Then the multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can accurately locate a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
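
The entropy measures used here follow the standard recipe: normalize the squared wavelet coefficients of each subband into a probability distribution and take its Shannon entropy, once per decomposition level. A simplified sketch with a Haar decomposition (the paper's wavelet-base selection by the energy-to-entropy criterion, and its time vs. time-energy variants, are omitted; this shows only the common core):

```python
import math

def shannon_entropy(coeffs):
    """Shannon entropy of the energy distribution of one subband."""
    energy = [c * c for c in coeffs]
    total = sum(energy)
    if total == 0.0:
        return 0.0
    p = [e / total for e in energy]
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def multilevel_wavelet_entropy(signal, levels=3):
    """Haar wavelet decomposition; one entropy value per detail subband,
    ordered from the finest level to the coarsest."""
    root2 = math.sqrt(2.0)
    approx = list(signal)
    entropies = []
    for _ in range(levels):
        detail = [(approx[i] - approx[i + 1]) / root2
                  for i in range(0, len(approx) - 1, 2)]
        approx = [(approx[i] + approx[i + 1]) / root2
                  for i in range(0, len(approx) - 1, 2)]
        entropies.append(shannon_entropy(detail))
    return entropies
```

A constant signal yields zero entropy in every detail subband, while energy spread evenly over N coefficients yields the maximum entropy log N; fault transients change how this profile varies across levels.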

  7. Induced Voltages Ratio-Based Algorithm for Fault Detection, and Faulted Phase and Winding Identification of a Three-Winding Power Transformer

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-09-01

Full Text Available This paper proposes an algorithm for fault detection and faulted phase and winding identification of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary and secondary-tertiary windings is the same as the corresponding turns ratio during normal operating conditions, magnetic inrush, and over-excitation; it differs from the turns ratio during an internal fault. For a single-phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated using the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection. An additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault, but also identify the faulted phase and winding of a three-winding power transformer. Various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions including magnetic inrush and over-excitation. This paper concludes by implementing the algorithm in a prototype relay based on a digital signal processor.
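
The core test, that the induced-voltage ratio tracks the turns ratio except during internal faults, reduces to a simple per-winding-pair check. The sketch below is illustrative only: the tolerance and function name are invented, and the paper's six detectors operate on estimated induced voltages (or induced-voltage differences from line currents), not on raw terminal quantities.

```python
def internal_fault_suspected(v_induced_1, v_induced_2, turns_ratio,
                             tol=0.05):
    """Flag a winding pair when the measured induced-voltage ratio
    deviates from the turns ratio by more than `tol` per unit.  During
    normal operation, magnetic inrush, and over-excitation the induced
    voltages scale together, so the deviation stays near zero and the
    detector does not trip."""
    ratio = v_induced_1 / v_induced_2
    return abs(ratio - turns_ratio) / turns_ratio > tol
```

With a 10:1 winding pair, 1000 V against 100 V matches the turns ratio exactly, whereas 1000 V against 130 V deviates by about 23 percent and trips the detector.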

  8. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective for automatic fault extraction and noise suppression. The application to field data further illustrates its validity and superiority. (paper)

  9. An Algorithm for Fault-Tree Construction

    DEFF Research Database (Denmark)

    Taylor, J. R.

    1982-01-01

    An algorithm for performing certain parts of the fault tree construction process is described. Its input is a flow sheet of the plant, a piping and instrumentation diagram, or a wiring diagram of the circuits, to be analysed, together with a standard library of component functional and failure...

  10. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  11. Online Location of Faults on AC Cables in Underground Transmission Systems

    DEFF Research Database (Denmark)

    Jensen, Christian Flytkjær

A transmission grid is normally laid out as an almost pure overhead line (OHL) network. The introduction of transmission-voltage-level XLPE cables and the increasing interest in the environmental impact of OHLs has resulted in an increasing interest in the use of underground cables at transmission level...... under fault conditions well, but the accuracy of the calculated impedance is too low for fault location purposes. The neural networks can therefore not be trained, and no impedance-based fault location method can be used for crossbonded cables or hybrid lines. The use of travelling-wave-based methods...... connection to verify the proposed method. Faults at a reduced voltage are artificially applied in the cable system and the transient response is measured at the two terminals at the cable’s ends. The measurements are time-synchronised, and it is found that a very accurate estimation of the fault location can......

  12. The Study of Fault Location for Front-End Electronics System

    International Nuclear Information System (INIS)

    Zhang Fan; Wang Dong; Huang Guangming; Zhou Daicui

    2009-01-01

    Some devices on the latest batch of 250 ALICE/PHOS Front-end electronics (FEE) system cards had been partly or completely damaged during lead-free soldering. To alleviate the influence on the performance of the FEE system and to accurately locate FPGA-related faults, a method for locating faults in the FEE system was needed, based on an in-depth study of the FPGA configuration scheme. Emphasis is placed on problems such as JTAG configuration of multiple devices, PS configuration based on EPC-series configuration devices, and automatic re-configuration of the FPGA. The results of testing and repairing a large number of FEE system cards show that this location method can accurately and quickly target the FPGA-related fault points on the cards. (authors)

  13. Locating Very-Low-Frequency Earthquakes in the San Andreas Fault.

    Science.gov (United States)

    Peña-Castro, A. F.; Harrington, R. M.; Cochran, E. S.

    2016-12-01

    The portion of a tectonic fault where rheological properties transition from brittle to ductile hosts a variety of seismic signals suggesting a range of slip velocities. In subduction zones, the two dominantly observed seismic signals are very-low-frequency earthquakes (VLFEs) and low-frequency earthquakes (LFEs) or tectonic tremor. Tremor and LFEs are also commonly observed on transform faults; however, VLFEs have been reported dominantly in subduction zone environments. Here we show some of the first known observations of VLFEs occurring on a plate boundary transform fault, the San Andreas Fault (SAF), along the Cholame-Parkfield segment in California. We detect VLFEs using both permanent and temporary stations in 2010-2011 within approximately 70 km of Cholame, California. We search continuous waveforms filtered from 0.02-0.05 Hz and remove time windows containing teleseismic events and local earthquakes, as identified in the global Centroid Moment Tensor (CMT) and the Northern California Seismic Network (NCSN) catalogs. We estimate the VLFE locations by converting the signal into envelopes and cross-correlating them for phase-picking, similar to procedures used for locating tectonic tremor. We first perform epicentral location using a grid-search method and then estimate a hypocenter location using Hypoinverse and a shear-wave velocity model when the epicenter is located close to the SAF trace. We account for the velocity contrast across the fault using separate 1D velocity models for stations on each side. Estimated hypocentral VLFE depths are similar to tremor catalog depths (about 15-30 km). Only a few VLFEs produced robust hypocentral locations, presumably due to the difficulty in picking accurate phase arrivals with such a low-frequency signal. However, for events for which no location could be obtained, the moveout of phase arrivals across the stations was similar in character, suggesting that the other observed VLFEs occurred in close proximity.
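The grid-search epicentral location described above can be sketched in a few lines: for each trial epicenter, predicted travel times are compared with the picked arrivals, and scoring the fit with the variance of the residuals removes the unknown origin time (it shifts every residual equally). The station geometry, velocity and picks below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def locate_epicenter(stations, arrivals, v, grid):
    """Grid-search epicentral location: for each trial point, predict travel
    times and score the fit with the variance of the residuals, which cancels
    the unknown origin time."""
    best, best_misfit = None, np.inf
    for x, y in grid:
        tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
        misfit = np.var(arrivals - tt)
        if misfit < best_misfit:
            best, best_misfit = (float(x), float(y)), misfit
    return best

# Synthetic test: 5 stations, shear velocity 3.5 km/s, source at (12, -7) km,
# origin time 10 s (all values hypothetical)
stations = np.array([[0, 0], [30, 5], [-20, 15], [10, -30], [-15, -20]], float)
v = 3.5
arrivals = 10.0 + np.hypot(stations[:, 0] - 12.0, stations[:, 1] + 7.0) / v

grid = [(x, y) for x in np.arange(-40, 41, 1.0) for y in np.arange(-40, 41, 1.0)]
print(locate_epicenter(stations, arrivals, v, grid))   # → (12.0, -7.0)
```

With real picks the residual variance at the best node is non-zero, and the node spacing sets the location resolution.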

  14. SLG (Single-Line-to-Ground) Fault Location in NUGS (Neutral Un-effectively Grounded System)

    Directory of Open Access Journals (Sweden)

    Zhang Wenhai

    2018-01-01

    Full Text Available This paper reviews SLG (single-line-to-ground) fault location methods in NUGS (neutral un-effectively grounded systems), including ungrounded systems, resonant grounded systems and high-resistance grounded systems, which are widely used in Northern Europe and China. This type of fault is hard to detect and locate because the fault current is the sum of the capacitive currents of the system, which is always small (about tens of amperes). The characteristics of SLG faults in NUGS and the fault location methods are introduced in the paper.

  15. A fast BDD algorithm for large coherent fault trees analysis

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo

    2004-01-01

    Although binary decision diagram (BDD) algorithms have until quite recently been applied to solve large fault trees, the trees are not solved efficiently in a short time, since the size of a BDD structure grows exponentially with the number of variables. Furthermore, the truncation of If-Then-Else (ITE) connectives by a probability or size limit, and the subsuming needed to delete subsets, could not be directly applied to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm), by which truncation and subsuming can be performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND and OR operations between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of truncation and subsuming at every step of the calculation, large fault trees for coherent systems (coherent fault trees) are efficiently solved in a short time using less memory. Furthermore, with respect to the size of the BDD structure, the coherent BDD algorithm is much less sensitive to variable ordering than the conventional BDD algorithm
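As an illustration of the underlying BDD machinery (without the paper's truncation and subsuming formulae), the following sketch builds a reduced ordered BDD with hash-consing, combines ITE structures with AND/OR via Shannon expansion, and reads cut sets off as paths to the 1-terminal. The example fault tree is hypothetical.

```python
# Reduced ordered BDD sketch: a node is a terminal (True/False) or a tuple
# (var, low, high); hash-consing in `mk` keeps equal subfunctions shared.
NODES = {}

def mk(var, lo, hi):
    if lo == hi:                       # redundant test: skip the node
        return lo
    return NODES.setdefault((var, lo, hi), (var, lo, hi))

def var(i):
    return mk(i, False, True)

def apply_op(op, f, g):
    """AND/OR of two BDDs via Shannon expansion on the smallest top variable.
    (No memo cache and no truncation/subsuming here, unlike the paper.)"""
    if isinstance(f, bool) or isinstance(g, bool):
        if op == 'and':
            return g if f is True else (f if g is True else False)
        return g if f is False else (f if g is False else True)
    v = min(f[0], g[0])
    flo, fhi = (f[1], f[2]) if f[0] == v else (f, f)
    glo, ghi = (g[1], g[2]) if g[0] == v else (g, g)
    return mk(v, apply_op(op, flo, glo), apply_op(op, fhi, ghi))

def cut_sets(f, chosen=frozenset()):
    """Cut sets are the paths to the 1-terminal; `chosen` collects the
    variables taken on their high branches."""
    if f is True:
        yield chosen
    elif f is not False:
        yield from cut_sets(f[1], chosen)
        yield from cut_sets(f[2], chosen | {f[0]})

# TOP = A AND (B OR C): cut sets {A,B} and {A,C}
A, B, C = var(0), var(1), var(2)
top = apply_op('and', A, apply_op('or', B, C))
print(sorted(sorted(cs) for cs in cut_sets(top)))   # → [[0, 1], [0, 2]]
```

The paper's contribution is precisely what this sketch omits: rules that let subsets be deleted and low-probability ITE branches be cut while the structure is still being built.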

  16. Fiber fault location utilizing traffic signal in optical network.

    Science.gov (United States)

    Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi

    2013-10-07

    We propose and experimentally demonstrate a method for fault location in an optical communication network. The method utilizes the traffic signal transmitted across the network as the probe signal and locates the fault by a correlation technique. Compared with conventional techniques, our method has a simple structure and low operating expenditure, because no additional devices, such as a light source, modulator or signal generator, are used. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse-ranging techniques. Moreover, the signal extraction process improves the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and a detection range of over 23 km with a -8-dBm mean launched power in an optical network based on synchronous digital hierarchy protocols.
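The correlation-ranging idea can be sketched numerically: treat the (random-like) traffic signal as the probe, cross-correlate it with the received echo, and convert the lag of the correlation peak into a distance via the round-trip propagation speed. All signal parameters below are illustrative, not the experiment's.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e9                                  # 1 GS/s sampling (illustrative)
c_fiber = 2e8                             # light speed in fiber, ~c/1.5 (m/s)
n = 200_000
signal = rng.standard_normal(n)           # stand-in for random-like traffic

fault_dist = 8_250.0                      # metres (hypothetical fault)
delay = int(round(2 * fault_dist / c_fiber * fs))   # round-trip in samples
echo = np.zeros(n)
echo[delay:] = 0.05 * signal[:n - delay]  # weak reflected copy of the probe
received = echo + 0.01 * rng.standard_normal(n)     # plus receiver noise

# Cross-correlate via FFT and take the lag of the peak
corr = np.fft.irfft(np.fft.rfft(received) * np.conj(np.fft.rfft(signal)))
lag = int(np.argmax(corr[:n // 2]))
estimate = lag * c_fiber / (2 * fs)
print(round(estimate, 1))                 # → 8250.0 (metres)
```

The spatial resolution of this estimate is c_fiber/(2*fs), i.e. set by the sampling rate rather than by a probe-pulse width, which is the tradeoff the correlation approach sidesteps.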

  17. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    International Nuclear Information System (INIS)

    Wang, Y.

    2013-01-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high-resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. The method consists of fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit are comparable with the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  18. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.

    2013-07-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high-resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. The method consists of fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit are comparable with the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  19. Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm

    Directory of Open Access Journals (Sweden)

    Guangbin Wang

    2015-01-01

    Full Text Available In view of the problems of the uneven distribution of real fault samples and the dimension-reduction effect of the locally linear embedding (LLE) algorithm, which is easily affected by neighboring points, an improved local linear embedding algorithm with homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using the homogenization distance instead of the traditional Euclidean distance. This helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the fault recognition performance improvement of HLLE is limited and unstable, the paper further proposes a new local linear embedding algorithm with supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of the homogenization distance, supervised learning adds the category information of the sample points, so that sample points of the same category are gathered and sample points of heterogeneous categories are scattered. This effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by simulation experiments with rotor system fault diagnosis, and the results show that the SHLLE algorithm has superior fault recognition performance.

  20. Empirical Relationships Among Magnitude and Surface Rupture Characteristics of Strike-Slip Faults: Effect of Fault (System) Geometry and Observation Location, Derived From Numerical Modeling

    Science.gov (United States)

    Zielke, O.; Arrowsmith, J.

    2007-12-01

    In order to determine the magnitude of pre-historic earthquakes, surface rupture length and average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will cause surface features of correlated size. The well-known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations and are based on only a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This cannot be done from natural seismicity, however, because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992), to create synthetic seismic records for a given fault or fault system under the influence of either slip or stress boundary conditions. Our model features (A) the introduction of an upper and lower aseismic zone, (B) a simple Coulomb friction law, (C) bulk parameters simulating fault heterogeneity, and (D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well-behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics, well within the error of the results of Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magnitude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics, by incorporating the

  1. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets with an auxiliary dynamic structure. The presented algorithm enables one to find the minimal cut sets subject to defined requirements: according to the order of the minimal cut sets, to the number of minimal cut sets, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm
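For contrast with the CARA-style approach, a minimal top-down (MOCUS-style) expansion with subsumption shows what "finding minimal cut sets" involves; the tree and event names below are hypothetical.

```python
# MOCUS-style top-down expansion of a fault tree into cut sets, followed by
# subsumption (deleting supersets) to keep only the minimal cut sets.
def minimal_cut_sets(gates, top):
    """gates: dict mapping gate name -> ('AND'|'OR', [inputs]); inputs are
    gate names or basic-event names. Returns the minimal cut sets of `top`."""
    def expand(name):
        if name not in gates:                      # basic event
            return [frozenset([name])]
        op, inputs = gates[name]
        child_sets = [expand(i) for i in inputs]
        if op == 'OR':                             # union of children's cut sets
            return [cs for sets in child_sets for cs in sets]
        result = [frozenset()]                     # AND: cross-product merge
        for sets in child_sets:
            result = [a | b for a in result for b in sets]
        return result

    candidates = set(expand(top))
    return sorted((cs for cs in candidates
                   if not any(other < cs for other in candidates)),
                  key=sorted)

# Example: TOP = (P1 OR P2) AND (P1 OR V) -> minimal cut sets {P1}, {P2, V}
tree = {'TOP': ('AND', ['G1', 'G2']),
        'G1': ('OR', ['P1', 'P2']),
        'G2': ('OR', ['P1', 'V'])}
print([sorted(cs) for cs in minimal_cut_sets(tree, 'TOP')])   # → [['P1'], ['P2', 'V']]
```

The cost of the naive cross-product step is what order- and count-limited algorithms like the one described above are designed to control.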

  2. An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis

    Directory of Open Access Journals (Sweden)

    Qihang Wang

    2016-07-01

    Full Text Available In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for abrupt fault diagnosis of aircraft engine gas-path components. In order to avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure; it adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter solves the particle degradation problem and ensures the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults by taking the latest measurement information into account. A comparison of the proposed method with three different filter algorithms is carried out on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter completes the fault diagnosis within fewer sampling periods and reduces the root mean square error of the parameter estimates.
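A baseline for the comparison above is the standard bootstrap particle filter on the univariate nonstationary growth model. The sketch below implements that baseline with systematic resampling (not the paper's electromagnetism-like resampling), with illustrative noise settings.

```python
import numpy as np

# Bootstrap particle filter on the univariate nonstationary growth model.
rng = np.random.default_rng(1)
T, N = 50, 500
q, r = 10.0, 1.0                            # process / measurement noise variances

def f(x, k):                                # state transition
    return x / 2 + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

def h(x):                                   # measurement function
    return x**2 / 20

x_true, y = np.zeros(T), np.zeros(T)        # simulate a trajectory
x = 0.1
for k in range(T):
    x = f(x, k) + rng.normal(0, np.sqrt(q))
    x_true[k] = x
    y[k] = h(x) + rng.normal(0, np.sqrt(r))

particles, est = rng.normal(0, 2, N), np.zeros(T)
for k in range(T):
    particles = f(particles, k) + rng.normal(0, np.sqrt(q), N)  # propagate
    w = np.exp(-0.5 * (y[k] - h(particles))**2 / r) + 1e-300    # likelihood
    w /= w.sum()
    est[k] = np.sum(w * particles)
    pos = (rng.random() + np.arange(N)) / N                     # systematic resampling
    idx = np.minimum(np.searchsorted(np.cumsum(w), pos), N - 1)
    particles = particles[idx]

rmse = np.sqrt(np.mean((est - x_true)**2))
print(round(float(rmse), 2))                # typically a few units of state
```

Degeneracy shows up here as near-zero weights on most particles before resampling; the paper's variant replaces the plain resampling step with an attraction-repulsion adjustment of particle positions.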

  3. A Novel Dual Separate Paths (DSP) Algorithm Providing Fault-Tolerant Communication for Wireless Sensor Networks.

    Science.gov (United States)

    Tien, Nguyen Xuan; Kim, Semog; Rhee, Jong Myung; Park, Sang Yoon

    2017-07-25

    Fault tolerance has long been a major concern for sensor communications in fault-tolerant cyber physical systems (CPSs). Network failure problems often occur in wireless sensor networks (WSNs) due to various factors such as the insufficient power of sensor nodes, the dislocation of sensor nodes, the unstable state of wireless links, and unpredictable environmental interference. Fault tolerance is thus one of the key requirements for data communications in WSN applications. This paper proposes a novel path redundancy-based algorithm, called dual separate paths (DSP), that provides fault-tolerant communication with the improvement of the network traffic performance for WSN applications, such as fault-tolerant CPSs. The proposed DSP algorithm establishes two separate paths between a source and a destination in a network based on the network topology information. These paths are node-disjoint paths and have optimal path distances. Unicast frames are delivered from the source to the destination in the network through the dual paths, providing fault-tolerant communication and reducing redundant unicast traffic for the network. The DSP algorithm can be applied to wired and wireless networks, such as WSNs, to provide seamless fault-tolerant communication for mission-critical and life-critical applications such as fault-tolerant CPSs. The analyzed and simulated results show that the DSP-based approach not only provides fault-tolerant communication, but also improves network traffic performance. For the case study in this paper, when the DSP algorithm was applied to high-availability seamless redundancy (HSR) networks, the proposed DSP-based approach reduced the network traffic by 80% to 88% compared with the standard HSR protocol, thus improving network traffic performance.
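The core of a dual-path scheme (two node-disjoint routes between source and destination) can be illustrated with a greedy two-pass BFS: find a shortest path, ban its interior nodes, and search again. Note this greedy sketch is not the DSP algorithm itself and can fail on graphs where disjoint pairs exist but don't contain the first shortest path; a complete method would use Suurballe's algorithm.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest path by BFS, avoiding `banned` nodes; None if unreachable."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in banned:
                prev[v] = u
                q.append(v)
    return None

def dual_separate_paths(adj, src, dst):
    p1 = bfs_path(adj, src, dst)
    if p1 is None:
        return None
    p2 = bfs_path(adj, src, dst, banned=frozenset(p1[1:-1]))
    return (p1, p2) if p2 else None

# Ring of 6 nodes: two node-disjoint routes around the ring
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(dual_separate_paths(ring, 0, 3))   # → ([0, 5, 4, 3], [0, 1, 2, 3])
```

Delivering each unicast frame over both returned paths is what gives the seamless failover described above: a single node or link failure breaks at most one of the two routes.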

  4. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431

  5. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.

  6. Line-to-Line Fault Analysis and Location in a VSC-Based Low-Voltage DC Distribution Network

    Directory of Open Access Journals (Sweden)

    Shi-Min Xue

    2018-03-01

    Full Text Available A DC cable short-circuit fault is the most severe fault type that can occur in DC distribution networks, with a negative impact on transmission equipment and the stability of system operation. When a short-circuit fault occurs in a DC distribution network based on a voltage source converter (VSC), an in-depth analysis and characterization of the fault is of great significance to establish relay protection, devise fault current limiters and realize fault location. However, research on short-circuit faults in VSC-based low-voltage DC (LVDC) systems, which differ greatly from high-voltage DC (HVDC) systems, is scarce. The existing research in this area is not conclusive, and further study is required to explain findings from HVDC systems that do not fit the simulated results or that lack thorough theoretical analysis. In this paper, faults are divided into transient- and steady-state faults, and detailed formulas are provided. A more thorough and practical theoretical analysis with fewer errors can be used to develop protection schemes and short-circuit fault location based on the transient- and steady-state analytic formulas. Compared to classical methods, the fault analyses in this paper provide more accurate computed fault currents. Thus, the fault location method can rapidly evaluate the distance between the fault and the converter. An analysis of error growth and an improved handshaking method coordinating with the proposed location method are also presented.
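The transient stage lends itself to a worked example: for a pole-to-pole fault, the DC-link capacitor discharges through the cable resistance and inductance as a series RLC circuit, and in the underdamped case (with zero initial inductor current) the fault current follows the standard second-order response. The circuit values below are illustrative, not from the paper.

```python
import math

# Capacitor-discharge stage of a pole-to-pole DC-cable fault as a series RLC:
#   i(t) = (V0 / (w*L)) * exp(-d*t) * sin(w*t)
# with damping d = R/(2L) and ringing frequency w = sqrt(1/(L*C) - d^2).
V0, R, L, C = 800.0, 0.1, 1e-3, 10e-3   # volts, ohms, henries, farads (illustrative)

d = R / (2 * L)
w = math.sqrt(1 / (L * C) - d**2)

def i_fault(t):
    return V0 / (w * L) * math.exp(-d * t) * math.sin(w * t)

# First current peak: di/dt = 0  =>  tan(w*t) = w/d
t_peak = math.atan2(w, d) / w
print(round(t_peak * 1000, 3), "ms,", round(i_fault(t_peak), 1), "A")
```

The time and height of this first peak are exactly the quantities a protection scheme or current limiter must be rated against, which is why the transient-stage formulas matter.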

  7. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  8. Algorithms and programs for consequence diagram and fault tree construction

    International Nuclear Information System (INIS)

    Hollo, E.; Taylor, J.R.

    1976-12-01

    Algorithms and programs for consequence diagram and sequential fault tree construction, intended for reliability and disturbance analysis of large systems, are presented. The system to be analysed must be given as a block diagram formed from the mini fault trees of individual system components. The programs were written in the LISP programming language and run on a PDP-8 computer with 8k words of storage. The methods used and the construction and operation of the programs are described. (author)

  9. Fault Detection and Location of IGBT Short-Circuit Failure in Modular Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Bin Jiang

    2018-06-01

    Full Text Available Detecting and locating a single fault in a Modular Multilevel Converter (MMC) is of great significance, as large numbers of sub-modules (SMs) in an MMC are connected in series. In this paper, a novel fault detection and location method is proposed for the MMC in terms of Insulated Gate Bipolar Transistor (IGBT) short-circuit failure in an SM. The characteristics of IGBT short-circuit failures are analyzed, based on which a Differential Comparison Low-Voltage Detection Method (DCLVDM) is proposed to detect the short-circuit fault. Finally, the faulty IGBT is located based on the capacitor voltage of the faulty SM using the Continuous Wavelet Transform (CWT). Simulations have been carried out in the simulation software PSCAD/EMTDC and the results confirm the validity and reliability of the proposed method.

  10. Algorithms and programs for evaluating fault trees with multi-state components

    International Nuclear Information System (INIS)

    Wickenhaeuser, A.

    1989-07-01

    Parts 1 and 2 of the report contain a summary overview of methods and algorithms for the solution of fault tree analysis problems. The following points are treated in detail: treatment of fault tree components with more than two states; acceleration of the solution algorithms; decomposition and modularization of extensive systems; calculation of the structural function and the exact occurrence probability; and treatment of statistical dependencies. A flexible tool for solving these problems is the method of forming Boolean variables with restrictions. In this way, components with more than two states can be treated, the possibilities for forming modules expanded, and statistical dependencies treated. Part 3 contains descriptions of the MUSTAFA, MUSTAMO, PASPI and SIMUST computer programs based on these methods. (orig./HP) [de

  11. The 2009 MW 6.1 L'Aquila fault system imaged by 64k earthquake locations

    International Nuclear Information System (INIS)

    Valoroso, Luisa

    2016-01-01

    On April 6, 2009, a MW 6.1 normal-faulting earthquake struck the axial area of the Abruzzo region in central Italy. We investigate the complex architecture and mechanics of the activated fault system by using 64k high-resolution foreshock and aftershock locations. The fault system is composed of two major SW-dipping segments forming an en-echelon, NW-trending system about 50 km long: the high-angle L’Aquila fault and the listric Campotosto fault, located in the first 10 km of depth. From the beginning of 2009, foreshocks activated the deepest portion of the mainshock fault. A week before the MW 6.1 event, the largest (MW 4.0) foreshock triggered seismicity migration along a minor off-fault segment. Seismicity jumped back to the main plane a few hours before the mainshock. High-precision locations allowed us to peer into the fault zone, revealing complex geological structures from the metre to the kilometre scale, analogous to those observed in field studies and seismic profiles. We were also able to investigate important aspects of earthquake nucleation and propagation through the upper crust in carbonate-bearing rocks, such as the role of fluids in normal-faulting earthquakes, how crustal faults terminate at depth, and the key role of fault zone structure in the earthquake rupture evolution process.

  12. A seismic fault recognition method based on ant colony optimization

    Science.gov (United States)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important part of seismic interpretation, and although many methods exist for this task, none recognizes faults accurately enough. To address this problem, we propose a new fault recognition method based on ant colony optimization, which can locate faults precisely and extract them from the seismic section. First, seismic horizons are extracted by a connected-component labeling algorithm; second, fault locations are determined according to the horizontal endpoints of each horizon; third, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each block are taken as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as height. The optimal route from nest to food calculated by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and effectiveness of the proposed method were validated by the experimental results.
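The nest-to-food search can be miniaturized to show the mechanics: ants choose one node per layer with probability weighted by pheromone and inverse cost, pheromone evaporates each iteration, and good paths are reinforced in proportion to 1/cost. The layered graph and all parameters below are made up for illustration (the seismic method uses amplitude as terrain height on rectangular blocks).

```python
import random

random.seed(7)
layers = [[2.0, 5.0, 1.0], [4.0, 1.0, 6.0], [3.0, 2.0, 4.0]]  # node costs
alpha, beta, rho = 1.0, 2.0, 0.3        # pheromone weight, cost weight, evaporation
tau = [[1.0] * len(layer) for layer in layers]                # pheromone trails

def pick(weights):
    """Roulette-wheel selection proportional to `weights`."""
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

best_path, best_cost = None, float('inf')
for _ in range(30):                                           # iterations
    walks = []
    for _ in range(40):                                       # ants
        path = [pick([tau[k][j] ** alpha * (1.0 / layers[k][j]) ** beta
                      for j in range(len(layers[k]))])
                for k in range(len(layers))]
        cost = sum(layers[k][j] for k, j in enumerate(path))
        walks.append((path, cost))
        if cost < best_cost:
            best_path, best_cost = path, cost
    for k in range(len(layers)):                              # evaporation
        tau[k] = [t * (1 - rho) for t in tau[k]]
    for path, cost in walks:                                  # reinforcement
        for k, j in enumerate(path):
            tau[k][j] += 1.0 / cost

print(best_path, best_cost)   # converges on the cheapest node per layer
```

On this tiny graph the colony quickly concentrates pheromone on the minimum-cost route [2, 1, 1] (total cost 4.0); in the seismic application the analogous route traced through the amplitude terrain is interpreted as the fault.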

  13. Fault diagnosis in spur gears based on genetic algorithm and random forest

    Science.gov (United States)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build a robust system for multi-class fault diagnosis in spur gears by selecting the best set of condition parameters in the time, frequency and time-frequency domains, extracted from vibration signals. The diagnostic system is built using genetic algorithms and a random-forest-based classifier in a supervised environment. The original set of condition parameters is reduced by around 66% of its initial size using genetic algorithms, while still achieving an acceptable classification precision over 97%. The approach is tested on real vibration signals considering several fault classes, one of them an incipient fault, under different running conditions of load and velocity.
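A stripped-down version of GA-based feature selection can illustrate the wrapper idea: individuals are feature bitmasks evolved by selection, crossover and mutation. Since training a random forest is outside the scope of a short sketch, fitness here is a Fisher-ratio class-separation score with a per-feature penalty standing in for classifier accuracy; the data and GA settings are all illustrative.

```python
import random

random.seed(3)

def sample(cls):
    """Synthetic 5-feature sample; only features 0 and 3 carry class information."""
    x = [random.gauss(0, 1) for _ in range(5)]
    if cls:
        x[0] += 3.0
        x[3] -= 3.0
    return x

data = [(sample(c), c) for c in [0, 1] * 50]

def fitness(mask):
    score = 0.0
    for f in range(5):
        if not mask[f]:
            continue
        a = [x[f] for x, c in data if c == 0]
        b = [x[f] for x, c in data if c == 1]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((v - ma) ** 2 for v in a) / len(a)
        vb = sum((v - mb) ** 2 for v in b) / len(b)
        score += (ma - mb) ** 2 / (va + vb + 1e-9)   # Fisher ratio per feature
    return score - 0.3 * sum(mask)                   # parsimony penalty

def ga(pop_size=30, gens=40, pmut=0.1):
    pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randint(1, 4)               # one-point crossover
            children.append([1 - g if random.random() < pmut else g
                             for g in p1[:cut] + p2[cut:]])  # bit-flip mutation
        pop = survivors + children
    return max(pop, key=fitness)

best_mask = ga()
print(best_mask)   # expected to select only the informative features
```

Swapping the Fisher-ratio fitness for cross-validated random-forest accuracy recovers the paper's wrapper setup; the GA loop itself is unchanged.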

  14. Objective Function and Learning Algorithm for the General Node Fault Situation.

    Science.gov (United States)

    Xiao, Yi; Feng, Rui-Bin; Leung, Chi-Sing; Sum, John

    2016-04-01

    Fault tolerance is one interesting property of artificial neural networks. However, the existing fault models are able to describe only limited node fault situations, such as stuck-at-zero and stuck-at-one. There is no general model that is able to describe a large class of node fault situations. This paper studies the performance of faulty radial basis function (RBF) networks for the general node fault situation. We first propose a general node fault model that is able to describe a large class of node fault situations, such as stuck-at-zero, stuck-at-one, and a stuck-at level with arbitrary distribution. Afterward, we derive an expression to describe the performance of faulty RBF networks. An objective function is then identified from the formula. With the objective function, a training algorithm for the general node fault situation is developed. Finally, a mean prediction error (MPE) formula that is able to estimate the test set error of faulty networks is derived. The application of the MPE formula in the selection of basis width is elucidated. Simulation experiments are then performed to demonstrate the effectiveness of the proposed method.
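For the stuck-at-zero special case, the expected fault error has a simple closed form that shows how an objective function falls out of a node fault model: if each hidden node survives independently with probability (1-p), minimizing the expected squared error over fault patterns reduces to a regularized least-squares problem. The sketch below works through this special case only (not the paper's general-fault formulation); all data and network sizes are illustrative.

```python
import numpy as np

# Stuck-at-zero node faults: output nodes survive i.i.d. with prob (1-p).
# Taking the expectation over fault patterns gives the convex objective
#   J(w) = ||y - (1-p) Phi w||^2 + p(1-p) w^T diag(Phi^T Phi) w,
# whose minimizer is available in closed form.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 80)
y = np.sinc(x) + 0.05 * rng.standard_normal(80)

centers = np.linspace(-3, 3, 15)
Phi = np.exp(-(x[:, None] - centers[None, :])**2 / (2 * 0.5**2))  # RBF design matrix

p = 0.3                                       # node fault rate
G = np.diag(np.sum(Phi**2, axis=0))
w_plain = np.linalg.lstsq(Phi, y, rcond=None)[0]          # fault-unaware fit
w_ft = np.linalg.solve((1 - p)**2 * Phi.T @ Phi + p * (1 - p) * G,
                       (1 - p) * Phi.T @ y)               # fault-aware fit

def expected_fault_mse(w):
    """Expected MSE over stuck-at-zero fault patterns (closed form)."""
    resid = y - (1 - p) * Phi @ w
    return (resid @ resid + p * (1 - p) * w @ G @ w) / len(y)

print(expected_fault_mse(w_ft) <= expected_fault_mse(w_plain))   # → True
```

The fault-aware solution trades a little fault-free accuracy for smaller weights on nodes whose loss would hurt most, which is the general mechanism the paper's objective generalizes to arbitrary stuck-at distributions.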

  15. Using tensorial electrical resistivity survey to locate fault systems

    International Nuclear Information System (INIS)

    Monteiro Santos, Fernando A; Plancha, João P; Marques, Jorge; Perea, Hector; Cabral, João; Massoud, Usama

    2009-01-01

    This paper deals with the use of the tensorial resistivity method for fault orientation and macroanisotropy characterization. The rotational properties of the apparent resistivity tensor are presented using 3D synthetic models representing structures with a dominant direction of low resistivity and vertical discontinuities. It is demonstrated that polar diagrams of the tensor elements are effective in delineating those structures. Because the apparent resistivity tensor is largely ineffective at resolving the depth of the structures, it is advisable to complement tensorial surveys with other geophysical methods. An experimental example, including tensorial, dipole–dipole and time domain surveys, is presented to illustrate the potential of the method. The dipole–dipole model shows high-resistivity contrasts which were interpreted as faults crossing the area. The time domain electromagnetic (TEM) soundings show high-resistivity values down to depths of 40–60 m in the northern part of the area. In the southern part of the survey area the soundings show an upper layer with low resistivity (around 30 Ω m) followed by a more resistive bedrock (resistivity >100 Ω m) at a depth ranging from 15 to 30 m. The soundings in the central part of the survey area show more variability: a thin conductive overburden is followed by a more resistive layer with resistivity in the range of 80–1800 Ω m. The north and south limits of the central part of the area, as revealed by the TEM survey, are roughly E–W oriented and coincident with the northern fault scarp and the southernmost fault detected by the dipole–dipole survey. The pattern of the polar diagrams calculated from tensorial resistivity data clearly indicates the presence of a contact between two blocks in the south of the survey area, with the low-resistivity block located southwards. The presence of two other faults is not so clear from the polar diagram patterns, but

  16. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of each reaction flywheel is used for fault detection. If any reaction flywheel fault is detected, the corresponding faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite keeps working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation experiments demonstrates the performance superiority of the proposed robust fault-tolerant control algorithm.

  17. Stochastic Resonance algorithms to enhance damage detection in bearing faults

    Directory of Open Access Journals (Sweden)

    Castiglione Roberto

    2015-01-01

    Full Text Available Stochastic Resonance is a phenomenon, studied and mainly exploited in telecommunications, which permits the amplification and detection of weak signals with the assistance of noise. The first papers on this technique date from the early 1980s and were developed to explain the periodically recurrent ice ages. Other applications mainly concern neuroscience, biology, medicine and, obviously, signal analysis and processing. Recently, some researchers have applied the technique to detecting faults in mechanical systems and bearings. In this paper, we try to better understand the conditions of applicability and which algorithm is best suited for these purposes. In fact, to make the methodology profitable and efficient at enhancing the signal spikes due to faults in the rings and balls/rollers of bearings, some parameters have to be properly selected. This is a problem, since in system identification this procedure should be as blind as possible. Two algorithms are analysed: the first exploits classical SR with three mutually dependent parameters, while the other uses the Woods-Saxon potential, again with three parameters but with different meanings. The comparison of the performance of the two algorithms and the optimal choice of their parameters are the scopes of this paper. The algorithms are tested on simulated and experimental data, showing an evident capacity to increase the signal-to-noise ratio.
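
    The classical SR scheme with three mutually dependent parameters is usually written as an overdamped bistable system driven by the weak signal plus noise. The following is a minimal pure-Python sketch under assumed parameter values (a, b, amp, freq and D are illustrative, not taken from the paper); the Woods-Saxon variant would only change the potential term.

```python
import math, random

random.seed(1)

def simulate_sr(a=1.0, b=1.0, amp=0.3, freq=0.05, D=0.3, dt=0.01, n=20000):
    """Euler-Maruyama integration of the classical bistable SR system
        dx/dt = a*x - b*x**3 + amp*sin(2*pi*freq*t) + noise.
    a, b set the double well (minima at +/- sqrt(a/b)); amp is the weak
    periodic 'fault' signature; D is the noise intensity."""
    x, xs = 0.0, []
    for k in range(n):
        t = k * dt
        drift = a * x - b * x ** 3 + amp * math.sin(2 * math.pi * freq * t)
        x += drift * dt + math.sqrt(2 * D * dt) * random.gauss(0, 1)
        xs.append(x)
    return xs

xs = simulate_sr()
# Crude response measure: correlation of the output with the driving signal;
# near the SR optimum this correlation is maximized as D is varied.
resp = sum(x * math.sin(2 * math.pi * 0.05 * k * 0.01)
           for k, x in enumerate(xs)) / len(xs)
print("mean response:", resp)
```

    Parameter selection in practice means sweeping D (or rescaling a and b) and keeping the setting that maximizes such a response measure, which is exactly why a blind choice is difficult.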

  18. Improved algorithms for circuit fault diagnosis based on wavelet packet and neural network

    International Nuclear Information System (INIS)

    Zhang, W-Q; Xu, C

    2008-01-01

    In this paper, two improved BP neural network algorithms for fault diagnosis of analog circuits are presented, using the optimal wavelet packet transform (OWPT) or the incomplete wavelet packet transform (IWPT) as a preprocessor. The purpose of preprocessing is to reduce the number of nodes in the input and hidden layers of the BP neural network, so that the network gains faster training and convergence speed. First, we apply OWPT or IWPT to the response signal of the circuit under test (CUT), and then calculate the normalized energy of each frequency band. The normalized energy is used to train the BP neural network to diagnose faulty components in the analog circuit. These two algorithms require a small network size while achieving faster learning and convergence speed. Finally, simulation results illustrate that the two algorithms are effective for fault diagnosis.
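
    The band-energy preprocessing step can be illustrated with a full Haar wavelet packet tree: each level splits every band into approximation and detail halves, and the normalized energy of each leaf band becomes one input to the BP network. This is a hedged sketch; the Haar filter, the 3-level depth and the toy test signal are assumptions, and the paper's OWPT/IWPT variants would prune this full tree.

```python
import math

def haar_step(x):
    """One Haar analysis step: (approximation, detail) at half length."""
    s = 1 / math.sqrt(2)
    approx = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return approx, detail

def wavelet_packet_bands(x, level):
    """Full wavelet packet tree: decompose BOTH branches at every level,
    returning the 2**level leaf bands."""
    bands = [x]
    for _ in range(level):
        nxt = []
        for band in bands:
            a, d = haar_step(band)
            nxt.extend([a, d])
        bands = nxt
    return bands

def normalized_energies(x, level=3):
    bands = wavelet_packet_bands(x, level)
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies) or 1.0
    return [e / total for e in energies]   # feature vector for the BP network

# Toy response signal of a circuit under test: two tones.
n = 256
signal = [math.sin(2 * math.pi * 5 * k / n) + 0.5 * math.sin(2 * math.pi * 40 * k / n)
          for k in range(n)]
features = normalized_energies(signal, level=3)
print(["%.3f" % f for f in features])
```

    Because the Haar transform is orthogonal, the leaf-band energies sum to the signal energy, so the normalized vector always sums to one regardless of the input.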

  19. Characterizing the structural maturity of fault zones using high-resolution earthquake locations.

    Science.gov (United States)

    Perrin, C.; Waldhauser, F.; Scholz, C. H.

    2017-12-01

    We use high-resolution earthquake locations to characterize the three-dimensional structure of active faults in California and how it evolves with fault structural maturity. We investigate the distribution of aftershocks of several recent large earthquakes that occurred on immature faults (i.e., slow moving and with small cumulative displacement), such as the 1992 (Mw7.3) Landers and 1999 (Mw7.1) Hector Mine events, and earthquakes that occurred on mature faults, such as the 1984 (Mw6.2) Morgan Hill and 2004 (Mw6.0) Parkfield events. Unlike previous studies, which typically estimated the width of fault zones from the distribution of earthquakes perpendicular to the surface fault trace, we resolve fault zone widths with respect to the 3D fault surface estimated from principal component analysis of local seismicity. We find that the zone of brittle deformation around the fault core is narrower along mature faults than along immature faults. We observe a rapid fall-off of the number of events at a distance of 70-100 m from the main fault surface of mature faults (140-200 m fault zone width) and 200-300 m from the fault surface of immature faults (400-600 m fault zone width). These observations are in good agreement with fault zone widths estimated from guided waves trapped in low-velocity damage zones. The total width of the active zone of deformation surrounding the main fault plane reaches 1.2 km for mature faults and 2-4 km for immature faults. The wider zone of deformation presumably reflects the increased heterogeneity of the stress field along the complex and discontinuous fault strands that make up immature faults. In contrast, narrower deformation zones tend to align with the well-defined fault planes of mature faults, where most of the deformation is concentrated. Our results are in line with previous studies suggesting that surface fault traces become smoother, and thus fault zones simpler, as cumulative fault slip increases.

  20. Methodology for selection of attributes and operating conditions for SVM-Based fault locator's

    Directory of Open Access Journals (Sweden)

    Debbie Johan Arredondo Arteaga

    2017-01-01

    Full Text Available Context: Energy distribution companies must employ strategies to provide timely, high-quality service, and fault-locating techniques represent an agile alternative for restoring electric service in power distribution, given the generally large size of distribution systems and the usual interruptions in service. However, these techniques are not robust enough and present some limitations in both computational cost and the mathematical description of the models they use. Method: This paper performs an analysis based on a Support Vector Machine for evaluating the proper conditions to adjust and validate a fault locator for distribution systems, so that it is possible to determine the minimum number of operating conditions that achieve good performance with low computational effort. Results: We tested the proposed methodology on a prototypical distribution circuit located in a rural area of Colombia. This circuit has a voltage of 34.5 kV and is subdivided into 20 zones. Additionally, the characteristics of the circuit allowed us to obtain a database of 630,000 records of single-phase faults under different operating conditions. As a result, we could determine that the locator achieved a performance above 98% with 200 suitably selected operating conditions. Conclusions: It is possible to improve the performance of fault locators based on Support Vector Machines. Specifically, these improvements are achieved by properly selecting optimal operating conditions and attributes, since they directly affect performance in terms of efficiency and computational cost.

  1. Precise tremor source locations and amplitude variations along the lower-crustal central San Andreas Fault

    Science.gov (United States)

    Shelly, David R.; Hardebeck, Jeanne L.

    2010-01-01

    We precisely locate 88 tremor families along the central San Andreas Fault using a 3D velocity model and numerous P and S wave arrival times estimated from seismogram stacks of up to 400 events per tremor family. Maximum tremor amplitudes vary along the fault by at least a factor of 7, with by far the strongest sources along a 25 km section of the fault southeast of Parkfield. We also identify many weaker tremor families, which have largely escaped prior detection. Together, these sources extend 150 km along the fault, beneath creeping, transitional, and locked sections of the upper crustal fault. Depths are mostly between 18 and 28 km, in the lower crust. Epicenters are concentrated within 3 km of the surface trace, implying a nearly vertical fault. A prominent gap in detectible activity is located directly beneath the region of maximum slip in the 2004 magnitude 6.0 Parkfield earthquake.

  2. Fault Diagnosis of Power System Based on Improved Genetic Optimized BP-NN

    Directory of Open Access Journals (Sweden)

    Yuan Pu

    2015-01-01

    Full Text Available The BP neural network (Back-Propagation Neural Network, BP-NN) is one of the most widely used neural network models and is currently applied to fault diagnosis of power systems. BP neural networks have good self-learning, adaptive and generalization ability, but the training process easily falls into local minima. The genetic algorithm has global optimization features, and crossover is its most important operation. In this paper, we modify the crossover of the traditional genetic algorithm and use the improved genetic algorithm to optimize the initial weights and thresholds for BP neural network training, to avoid the problem of the BP neural network falling into local minima. Analysis of an example shows that the method can efficiently diagnose network fault locations and improve fault tolerance and the effectiveness of grid fault diagnosis.

  3. Performance Estimation and Fault Diagnosis Based on Levenberg–Marquardt Algorithm for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Junjie Lu

    2018-01-01

    Full Text Available Establishing schemes for accurate and computationally efficient performance estimation and fault diagnosis of turbofan engines has become a new research focus and challenge, as such schemes can increase the reliability and stability of the turbofan engine and reduce life cycle costs. Accurate estimation of turbofan engine performance depends on thoroughly understanding the components’ performance, which is described by component characteristic maps; the fault of each component can be regarded as a change in its characteristic maps. In this paper, a novel method based on the Levenberg–Marquardt (LM) algorithm is proposed to enhance the fidelity of performance estimation and the credibility of fault diagnosis for the turbofan engine. The presented method utilizes the LM algorithm to determine the operating point in the characteristic maps, in preparation for performance estimation and fault diagnosis. The accuracy of the proposed method is evaluated for estimating performance parameters in the transient case with Rayleigh process noise and Gaussian measurement noise. A comparison among the extended Kalman filter (EKF) method, the particle filter (PF) method and the proposed method is implemented in the abrupt fault case and the gradual degeneration case, and it is shown that the proposed method leads to more accurate results for performance estimation and fault diagnosis of turbofan engines than the currently popular EKF and PF diagnosis methods.
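
    The core LM iteration, damped Gauss-Newton with an adaptive damping factor, can be shown on a toy nonlinear least-squares problem. The exponential model, starting point and damping schedule below are illustrative assumptions; the paper applies the same kind of iteration to locate operating points on component characteristic maps.

```python
import math

def lm_fit(ts, ys, p, iters=50, lam=1e-2):
    """Levenberg-Marquardt for the toy model y = p0*exp(p1*t):
    damped Gauss-Newton steps, tightening the damping lam whenever a step
    fails to reduce the squared residual."""
    def residuals(p):
        return [y - p[0] * math.exp(p[1] * t) for t, y in zip(ts, ys)]
    def sse(r):
        return sum(v * v for v in r)
    r = residuals(p)
    for _ in range(iters):
        # Jacobian of the residuals w.r.t. (p0, p1)
        J = [(-math.exp(p[1] * t), -p[0] * t * math.exp(p[1] * t)) for t in ts]
        # Damped normal equations: (J^T J + lam*I) delta = -J^T r
        a = sum(j[0] * j[0] for j in J) + lam
        b = sum(j[0] * j[1] for j in J)
        c = sum(j[1] * j[1] for j in J) + lam
        g0 = -sum(j[0] * v for j, v in zip(J, r))
        g1 = -sum(j[1] * v for j, v in zip(J, r))
        det = a * c - b * b
        d0 = (c * g0 - b * g1) / det
        d1 = (a * g1 - b * g0) / det
        trial = [p[0] + d0, p[1] + d1]
        r_trial = residuals(trial)
        if sse(r_trial) < sse(r):      # accept step, loosen damping
            p, r, lam = trial, r_trial, lam * 0.5
        else:                          # reject step, tighten damping
            lam *= 10.0
    return p

ts = [0.1 * k for k in range(20)]
ys = [2.0 * math.exp(-1.5 * t) for t in ts]   # noiseless truth: p = (2, -1.5)
p = lm_fit(ts, ys, p=[1.0, -0.5])
print("estimated:", p)
```

    The accept/reject rule is what makes LM interpolate between Gauss-Newton (small lam, fast near the solution) and gradient descent (large lam, safe far from it).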

  4. An automatic procedure for high-resolution earthquake locations: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    Science.gov (United States)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide

    2014-05-01

    The characterization of the geometry, kinematics and rheology of fault zones by seismological data depends on our capability of accurately locate the largest number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquakes catalogues directly from continuous waveforms data. We use seismograms recorded at about 60 seismic stations located both at surface and at depth. The network covers an area of about 80x60 km with a mean inter-station distance of 6 km. These stations are part of a Near fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure managed by the INGV is devoted to studying the earthquakes preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation process active along the Alto Tiberina fault (ATF) located in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of active low-angle (picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single-events in a 3D geological model (see Latorre et al. same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure is now running off-line with a delay of 1 week to the real-time. We are now implementing this procedure to obtain high-resolution double-difference earthquake locations in real-time (DDRT). 
We show locations of ~30k low-magnitude earthquakes recorded during the past 4 years (2010-2013) of network operation, reaching a completeness magnitude of

  5. Precise Relative Location of San Andreas Fault Tremors Near Cholame, CA, Using Seismometer Clusters: Slip on the Deep Extension of the Fault?

    Science.gov (United States)

    Shelly, D. R.; Ellsworth, W. L.; Ryberg, T.; Haberland, C.; Fuis, G.; Murphy, J.; Nadeau, R.; Bürgmann, R.

    2008-12-01

    Non-volcanic tremor, similar in character to that generated at some subduction zones, was recently identified beneath the strike-slip San Andreas Fault (SAF) in central California (Nadeau and Dolenc, 2005). Using a matched filter method, we closely examine a 24-hour period of active SAF tremor and show that, like tremor in the Nankai Trough subduction zone, this tremor is composed of repeated similar events. We take advantage of this similarity to locate detected similar events relative to several chosen events. While low signal-to-noise makes location challenging, we compensate for this by estimating event-pair differential times at 'clusters' of nearby temporary and permanent stations rather than at single stations. We find that the relative locations consistently form a near-linear structure in map view, striking parallel to the surface trace of the SAF. Therefore, we suggest that at least a portion of the tremor occurs on the deep extension of the fault, similar to the situation for subduction zone tremor. Also notable is the small depth range (a few hundred meters or less) of many of the located tremors, a feature possibly analogous to earthquake streaks observed on the shallower portion of the fault. The close alignment of the tremor with the SAF slip orientation suggests a shear slip mechanism, as has been argued for subduction tremor. At times, we observe a clear migration of the tremor source along the fault, at rates of 15-40 km/hr.

  6. Planetary Gearbox Fault Detection Using Vibration Separation Techniques

    Science.gov (United States)

    Lewicki, David G.; LaBerge, Kelsen E.; Ehinger, Ryan T.; Fetty, Jason

    2011-01-01

    Studies were performed to demonstrate the capability to detect planetary gear and bearing faults in helicopter main-rotor transmissions. The work supported the Operations Support and Sustainment (OSST) program with the U.S. Army Aviation Applied Technology Directorate (AATD) and Bell Helicopter Textron. Vibration data from the OH-58C planetary system were collected on a healthy transmission as well as with various seeded-fault components. Planetary fault detection algorithms were used with the collected data to evaluate fault detection effectiveness. Planet gear tooth cracks and spalls were detectable using the vibration separation techniques. Sun gear tooth cracks were not discernibly detectable from the vibration separation process. Sun gear tooth spall defects were detectable. Ring gear tooth cracks were only clearly detectable by accelerometers located near the crack location or directly across from the crack. Enveloping provided an effective method for planet bearing inner- and outer-race spalling fault detection.

  7. Gravity Field Interpretation for Major Fault Depth Detection in a Region Located SW- Qa’im / Iraq

    Directory of Open Access Journals (Sweden)

    Wadhah Mahmood Shakir Al-Khafaji

    2017-09-01

    Full Text Available This research deals with the qualitative and quantitative interpretation of Bouguer gravity anomaly data for a region located SW of Qa’im City within Anbar province, using 2D mapping methods. The residual gravity field was obtained graphically by subtracting the regional gravity values from the values of the total Bouguer anomaly. The residual gravity field was processed to reduce noise by applying the gradient operator and first directional derivative filtering. This was helpful in assigning the locations of sudden variations in gravity values. Such variations may be produced by subsurface faults, fractures, cavities or the limits of lateral facies variations. A major fault was predicted to extend in the NE-SW direction. This fault is mentioned in previous studies as a subsurface fault of undefined depth within the sedimentary cover rocks. Quantitative interpretation of the gravity data indicates that the depth to the center of this major fault plane is about 2.4 km.

  8. Redundant and fault-tolerant algorithms for real-time measurement and control systems for weapon equipment.

    Science.gov (United States)

    Li, Dan; Hu, Xiaoguang

    2017-03-01

    Because of the high availability requirements of weapon equipment, an in-depth study has been conducted on the real-time fault tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and the relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion sum differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault tolerance, and the newly developed method can effectively improve system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    Science.gov (United States)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Programming Interface (API) for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  10. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Full Text Available Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable in multiple-fault conditions. The algorithm uses a calibration model based on an AANN, which can reconstruct the faulty sensor from the non-faulty sensors thanks to the correlation between the process variables; the mean of the difference between reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with low computational time, which makes the algorithm an appropriate candidate for online applications.
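
    The reconstruction-and-isolation logic can be sketched with a deliberately simple linear stand-in for the AANN: each sensor is reconstructed from the remaining correlated sensors, and the sensor with the largest mean reconstruction error is flagged. The shared-latent-variable data model, the noise level and the bias fault below are assumptions for illustration only.

```python
import random

random.seed(2)

def make_data(n=200, n_sensors=4, faulty=None, bias=2.0):
    """Correlated sensors: each reads a shared process variable z plus noise.
    If `faulty` is set, that sensor drifts by `bias` (a drift-type fault)."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        row = [z + random.gauss(0, 0.1) for _ in range(n_sensors)]
        if faulty is not None:
            row[faulty] += bias
        data.append(row)
    return data

def isolate_faulty(data):
    """Reconstruct each sensor from the mean of the others (a linear
    stand-in for the AANN) and flag the sensor with the largest mean
    reconstruction error."""
    n_sensors = len(data[0])
    errors = [0.0] * n_sensors
    for row in data:
        for i in range(n_sensors):
            recon = sum(row[j] for j in range(n_sensors) if j != i) / (n_sensors - 1)
            errors[i] += abs(row[i] - recon)
    errors = [e / len(data) for e in errors]
    return max(range(n_sensors), key=lambda i: errors[i]), errors

idx, errors = isolate_faulty(make_data(faulty=2))
print("isolated faulty sensor:", idx)
```

    The faulty sensor's error is roughly the full bias, while the healthy sensors only see the bias diluted through the mean, which is why the arg-max isolates the right channel; the S-AANN refines this by re-reconstructing with the flagged sensor removed.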

  11. Location of Faults in Power Transmission Lines Using the ARIMA Method

    Directory of Open Access Journals (Sweden)

    Danilo Pinto Moreira de Souza

    2017-10-01

    Full Text Available One of the major problems in transmission lines is the occurrence of failures that affect the quality of the electric power supplied, and the exact location of the fault must be known for correction. In order to streamline the work of maintenance teams and standardize services, this paper proposes a method of locating faults in power transmission lines by analyzing the voltage oscillographic signals extracted at the line monitoring terminals. The developed method relates time series models obtained specifically for each failure pattern. The parameters of the autoregressive integrated moving average (ARIMA) model are estimated in order to fit the voltage curves and calculate the distance from the initial fault location to the terminals. The failures are simulated with the ATPDraw® (5.5) software, and the analyses were completed using the RStudio® (1.0.143) software. The results obtained for failures not involving earth return were satisfactory when compared with techniques widely used in the literature, particularly as the fault distance from the beginning of the transmission line grew larger.
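
    The AR core of such an ARIMA fit can be sketched by estimating an AR(2) model with least squares; a damped sinusoid, used here as a rough stand-in for a post-fault voltage oscillation, satisfies an AR(2) recurrence exactly, so the fit recovers its coefficients. The signal parameters below are illustrative assumptions, not values from the paper.

```python
import math

def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + e[t],
    the AR core of the ARIMA models fitted per fault pattern."""
    # Accumulate the 2x2 normal equations for (a1, a2).
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]; s12 += x[t-1] * x[t-2]; s22 += x[t-2] * x[t-2]
        b1 += x[t] * x[t-1];    b2 += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    a1 = (s22 * b1 - s12 * b2) / det
    a2 = (s11 * b2 - s12 * b1) / det
    resid = [x[t] - a1 * x[t-1] - a2 * x[t-2] for t in range(2, len(x))]
    mse = sum(r * r for r in resid) / len(resid)
    return (a1, a2), mse

# Synthetic oscillatory 'voltage' trace: a damped sinusoid is exactly AR(2)
# with a1 = 2*r*cos(w) and a2 = -r**2.
n, f, damp = 500, 0.05, 0.999
x = [damp ** k * math.cos(2 * math.pi * f * k) for k in range(n)]
(a1, a2), mse = fit_ar2(x)
print(a1, a2, mse)
```

    In the paper's scheme, one such model is fitted per failure pattern, and the residual error of each model against a new oscillographic record indicates which pattern (and hence which distance) matches best.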

  12. A Diagnosis Method for Rotation Machinery Faults Based on Dimensionless Indexes Combined with K-Nearest Neighbor Algorithm

    Directory of Open Access Journals (Sweden)

    Jianbin Xiong

    2015-01-01

    Full Text Available It is difficult to distinguish, using dimensionless indexes, between normal petrochemical rotating machinery and equipment with complex faults. When the conflict of evidence is too big, it results in diagnostic uncertainty. This paper presents a diagnosis method for rotating machinery faults based on dimensionless indexes combined with the K-nearest neighbor (KNN) algorithm. The method uses the KNN algorithm and an evidence-fusion theoretical formula to process fuzzy, incomplete and accurate data, converting the signals from the petrochemical rotating machinery sensors into reliability measures using dimensionless indexes and the KNN algorithm. The input information is further integrated by an evidence synthesis formula to obtain the final data, and the type of fault is decided based on these data. The experimental results show that the proposed method can integrate data to provide a more reliable and reasonable result, thereby reducing the decision risk.
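
    The KNN voting step can be sketched in a few lines: each sample is a vector of dimensionless indexes, and the predicted fault class is the majority label among the k nearest training samples. The three condition classes, their index centers and the noise level below are invented for illustration; the paper additionally fuses these votes with an evidence synthesis formula.

```python
import math, random

random.seed(3)

# Toy 'dimensionless index' vectors (e.g. impulse, margin, kurtosis factors)
# for three machine conditions; the centers are invented for illustration.
CENTERS = {
    "normal":  (3.0, 4.0, 3.0),
    "bearing": (6.0, 8.0, 9.0),
    "gear":    (9.0, 5.0, 12.0),
}

def sample(label):
    cx = CENTERS[label]
    return tuple(c + random.gauss(0, 0.5) for c in cx)

train = [(sample(lbl), lbl) for lbl in CENTERS for _ in range(30)]

def knn_predict(x, train, k=5):
    """Plain KNN: majority vote among the k nearest training vectors."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = {}
    for _, lbl in nearest:
        votes[lbl] = votes.get(lbl, 0) + 1
    return max(votes, key=votes.get)

print(knn_predict(sample("bearing"), train))
```

    The vote counts themselves can serve as the per-class reliability measures that the evidence-fusion step then combines across sensors.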

  13. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Hardware fault location in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and the root of a parent test tree, identifying for each child compute node of the parent node a child test tree having that child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.

  14. Method of fault diagnosis in nuclear power plant base on genetic algorithm and knowledge base

    International Nuclear Information System (INIS)

    Zhou Yangping; Zhao Bingquan

    2000-01-01

    Using a knowledge base, combining a genetic algorithm with classical probability, and taking into account the characteristics of fault diagnosis in nuclear power plants (NPPs), the authors put forward a fault diagnosis method. In the diagnosis process, this method maps the state of the NPP onto the population of the GA and evolves the population to obtain the individual that fits the condition. Experiments on the 950 MW full-size simulator at the Beijing NPP simulation training center show that the method is comparatively robust to imperfect expert knowledge, spurious signals and other such problems.

  15. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    Directory of Open Access Journals (Sweden)

    Mehdi Shadaram

    2010-10-01

    Full Text Available In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field corresponds to an increased fault current at a certain location. The fault location can also be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.

  16. Mixed-fault diagnosis in induction motors considering varying load and broken bars location

    International Nuclear Information System (INIS)

    Faiz, Jawad; Ebrahimi, Bashir Mahdi; Toliyat, H.A.; Abu-Elhaija, W.S.

    2010-01-01

    Simultaneous static eccentricity and broken rotor bar faults, called a mixed fault, in a three-phase squirrel-cage induction motor are analyzed by a time-stepping finite element method using the fast Fourier transform. Generally, there is an inherent static eccentricity (below 10%) in an induction motor with broken rotor bars, so the mixed-fault case can be considered a realistic one. The stator current frequency spectrum is analyzed over low, medium and high frequencies; static eccentricity diagnosis, and how to distinguish it from rotor bar breakage in the mixed-fault case, are described. The contributions of the static eccentricity and broken rotor bar faults are precisely determined. The influence of the broken bars' location on the amplitudes of the harmonics due to the mixed fault is also investigated. It is shown that the amplitudes of harmonics due to broken bars placed on one pole are larger than in the case where the broken bars are distributed over different poles. In addition, the influence of varying load on the amplitudes of the harmonics due to the mixed fault is studied, indicating that higher load increases the amplitudes of the harmonic components due to the broken bars while the static eccentricity degree decreases. Simulation results are confirmed by the experimental results.

  17. Determining on-fault earthquake magnitude distributions from integer programming

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10⁶ variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
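
    The binary-assignment formulation can be illustrated at toy scale by brute force rather than a mixed-integer solver: each synthetic event is assigned to one fault, a length (maximum-magnitude) constraint is enforced, and the assignment minimizing the slip-budget misfit wins. The catalog, slip scaling and budgets below are invented for the example and are not the paper's values.

    ```python
    from itertools import product

    # Toy sketch (not the paper's solver): assign each synthetic earthquake to one of
    # two faults so that per-fault summed "slip" best matches target slip budgets,
    # subject to a maximum magnitude each fault can host. All numbers are illustrative.
    def slip_per_event(mag):
        # Illustrative scaling: slip grows exponentially with magnitude.
        return 10.0 ** (0.5 * (mag - 5.0))

    catalog = [5.0, 5.5, 6.0, 6.5]   # synthetic magnitudes
    max_mag = [6.0, 7.0]             # fault 0 is too short to host an M6.5
    target = [2.0, 6.0]              # target slip budgets (arbitrary units)

    best, best_misfit = None, float("inf")
    for assign in product(range(2), repeat=len(catalog)):      # binary decision vector
        if any(catalog[i] > max_mag[f] for i, f in enumerate(assign)):
            continue                                           # length constraint
        slip = [sum(slip_per_event(m) for m, f in zip(catalog, assign) if f == j)
                for j in range(2)]
        misfit = sum(abs(s - t) for s, t in zip(slip, target))
        if misfit < best_misfit:
            best, best_misfit = assign, misfit
    print(best, round(best_misfit, 3))
    ```

    The real problem replaces this enumeration with an integer-programming solver, which is what makes >10⁶ variables tractable.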

  18. Seismic Experiment at North Arizona To Locate Washington Fault - 3D Field Test

    KAUST Repository

    Hanafy, Sherif M

    2008-10-01

    No. of receivers in the inline direction: 80; number of lines: 6; receiver interval: 1 m near the fault, 2 m away from the fault (receivers 1 to 12 at 2 m intervals, receivers 12 to 51 at 1 m intervals, and receivers 51 to 80 at 2 m intervals); no. of shots in the inline direction: 40; shot interval: 2 and 4 m (every other receiver location). Data recording: the data are recorded using two Bison systems, each with 120 channels. We shot at all 240 shot locations and simultaneously recorded seismic traces at receivers 1 to 240 (using both Bisons), then we shot again at all 240 shot locations and recorded at receivers 241 to 480. The data are rearranged to match the receiver order shown in Figure 3, where receiver 1 is at the lower-left corner, receivers increase to 80 at the lower-right corner, then receiver 81 returns to the left side at Y = 1.5 m, etc.

  19. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  20. Research on fault diagnosis of nuclear power plants based on genetic algorithms and fuzzy logic

    International Nuclear Information System (INIS)

    Zhou Yangping; Zhao Bingquan

    2001-01-01

    Based on genetic algorithms and fuzzy logic, and using expert knowledge, a mini-knowledge-tree model and standard signals from a simulator, a new fuzzy-genetic method is developed for fault diagnosis in nuclear power plants. A new replacement method for genetic algorithms is adopted, and fuzzy logic is used to calculate the fitness of the strings in the genetic algorithm. Experiments on the simulator show that the method can deal with uncertainty and fuzzy factors.

  1. Power flow analysis and optimal locations of resistive type superconducting fault current limiters.

    Science.gov (United States)

    Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A

    2016-01-01

    Based on conventional approaches for the integration of resistive-type superconducting fault current limiters (SFCLs) into electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the temperature-dependent power-law relation between electric field and current density that is characteristic of high-temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built on the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the fault current reductions predicted by both fault-current-limiting models have been compared in terms of multiple current measuring points and allocation strategies. Consequently, we have shown that incorporating the E-J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimating reliability and determining the optimal locations of resistive-type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment in and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
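
    The E-J power-law behaviour referred to above can be sketched as a current-dependent resistance: below the critical current the element is essentially lossless, and above it the resistance rises steeply, unlike a step model's fixed post-quench value. The superconductor parameters below are generic assumed values, not those of the UK network model.

    ```python
    # Illustrative E-J power-law model of one resistive SFCL element (values assumed).
    E0, n = 1e-4, 21           # E-field criterion (V/m) and power-law index
    Jc = 2.5e7                 # critical current density (A/m^2)
    area, length = 1e-6, 50.0  # conductor cross-section (m^2) and length (m)

    def powerlaw_resistance(current):
        """R = E*L / I with E = E0 * (J/Jc)^n; rises steeply once I exceeds Ic."""
        J = current / area
        E = E0 * (J / Jc) ** n
        return E * length / current

    Ic = Jc * area             # 25 A critical current for this geometry
    for i in (0.5 * Ic, 1.0 * Ic, 2.0 * Ic):
        print(f"I = {i:5.1f} A -> R = {powerlaw_resistance(i):.3e} ohm")
    ```

    At twice the critical current the resistance is several orders of magnitude above its value at the critical current, which is the limiting action the paper models (thermal feedback omitted here for brevity).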

  2. An adaptive Phase-Locked Loop algorithm for faster fault ride through performance of interconnected renewable energy sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2013-01-01

    Interconnected renewable energy sources require fast and accurate fault ride-through operation in order to support the power grid when faults occur. This paper proposes an adaptive Phase-Locked Loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response of the grid-side converter control of a renewable energy source, especially under fault ride-through operation. The adaptive dαβPLL is based on modifying the control parameters of the dαβPLL according to the type and voltage characteristic of the grid fault, with the purpose of accelerating the performance...

  3. Fault diagnosis for wind turbine planetary ring gear via a meshing resonance based filtering algorithm.

    Science.gov (United States)

    Wang, Tianyang; Chu, Fulei; Han, Qinkai

    2017-03-01

    Identifying the differences between the spectra or envelope spectra of a faulty signal and a healthy baseline signal is an efficient planetary gearbox local fault detection strategy. However, causes other than local faults can also generate the characteristic frequency of a ring gear fault; this may further affect the detection of a local fault. To address this issue, a new filtering algorithm based on the meshing resonance phenomenon is proposed. In detail, the raw signal is first decomposed into different frequency bands and levels. Then, a new meshing index and an MRgram are constructed to determine which bands belong to the meshing resonance frequency band. Furthermore, an optimal filter band is selected from this MRgram. Finally, the ring gear fault can be detected according to the envelope spectrum of the band-pass filtering result. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

    Full Text Available A combined simulated annealing (CSA) algorithm is developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm for the DFLP and offers a reasonable alternative solution method for it.
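
    The two-layer structure can be sketched as follows: the outer annealing loop perturbs the open/close location decision, while the inner step solves the allocation exactly by sending each customer to its cheapest open facility. The costs, cooling schedule, and problem size below are illustrative assumptions, not the paper's 30 test instances.

    ```python
    import math
    import random

    # Toy two-layer sketch: outer SA over facility open/close decisions,
    # inner exact allocation of customers to the cheapest open facility.
    random.seed(1)
    open_cost = [4.0, 3.0, 5.0]
    serve_cost = [[1, 3, 5], [4, 1, 2], [3, 2, 1], [5, 4, 2]]  # customer x facility

    def total_cost(opened):
        if not any(opened):
            return float("inf")                  # at least one facility must be open
        alloc = sum(min(c for c, o in zip(row, opened) if o) for row in serve_cost)
        return alloc + sum(f for f, o in zip(open_cost, opened) if o)

    state = [True, True, True]
    best, best_cost, temp = state[:], total_cost(state), 10.0
    for _ in range(500):
        cand = state[:]
        cand[random.randrange(len(cand))] = not cand[random.randrange(0, 1) or 0] if False else not cand[0]
        # (flip one random open/close decision)
        j = random.randrange(len(cand))
        cand = state[:]
        cand[j] = not cand[j]
        delta = total_cost(cand) - total_cost(state)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = cand
            if total_cost(state) < best_cost:
                best, best_cost = state[:], total_cost(state)
        temp *= 0.99                             # geometric cooling
    print(best, best_cost)
    ```

    For this tiny instance the annealer settles on opening only the cheapest adequate facility; the paper's CSA applies the same idea with an annealing inner layer as well.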

  5. An efficient diagnostic technique for distribution systems based on under fault voltages and currents

    Energy Technology Data Exchange (ETDEWEB)

    Campoccia, A.; Di Silvestre, M.L.; Incontrera, I.; Riva Sanseverino, E. [Dipartimento di Ingegneria Elettrica elettronica e delle Telecomunicazioni, Universita degli Studi di Palermo, viale delle Scienze, 90128 Palermo (Italy); Spoto, G. [Centro per la Ricerca Elettronica in Sicilia, Monreale, Via Regione Siciliana 49, 90046 Palermo (Italy)

    2010-10-15

    Service continuity is one of the major aspects of electrical energy quality; for this reason, research in the field of fault diagnostics for distribution systems is spreading ever more. Moreover, the increasing interest in modern distribution system automation for management purposes gives fault diagnostics more tools to detect outages precisely and within short times. In this paper, the applicability of an efficient fault location and characterization methodology within a centralized monitoring system is discussed. The methodology, appropriate for any kind of fault, is based on the analytical model of the network lines and uses the fundamental-component rms values taken from transient measurements of line currents and voltages at the MV/LV substations. The fault location and identification algorithm, proposed by the authors and suitably restated, has been implemented on a microprocessor-based device that can be installed at each MV/LV substation. The speed and precision of the algorithm have been tested against the errors deriving from the fundamental extraction within the prescribed fault clearing times and against the inherent precision of the electronic device used for computation. The tests have been carried out using Matlab Simulink to simulate the faulted system. (author)

  6. Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations

    Science.gov (United States)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places which are frequently visited by social media users during their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20 %.

  7. FAST SS-ILM: A COMPUTATIONALLY EFFICIENT ALGORITHM TO DISCOVER SOCIALLY IMPORTANT LOCATIONS

    Directory of Open Access Journals (Sweden)

    A. S. Dokuz

    2017-11-01

    Full Text Available Socially important locations are places which are frequently visited by social media users during their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensionality, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20 %.

  8. The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    He Yan

    2016-01-01

    Full Text Available Particle swarm optimization (PSO) is an optimization algorithm based on swarm intelligence. Parameter selection plays an important role in the performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed as the control parameters vary, including particle number, acceleration constants, inertia weight and the maximum velocity limit. PSO with dynamic parameters is then applied to neural network training for gearbox fault diagnosis, and the results with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed to improve the performance of PSO.

  9. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    International Nuclear Information System (INIS)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine; Minca, Eugenia; Filip, Florin

    2009-01-01

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max technique.
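
    The second stage (center refinement) can be sketched with a plain k-means pass; here the initial centers are simply given as seeds, standing in for the Fuzzy Min-Max hyperbox initialization, and the 1-D data are invented for the example.

    ```python
    # Sketch of the refinement stage only: k-means iteration over 1-D points,
    # starting from supplied seed centers (the paper seeds via Fuzzy Min-Max).
    def kmeans(points, centers, iters=20):
        for _ in range(iters):
            clusters = [[] for _ in centers]
            for p in points:
                j = min(range(len(centers)), key=lambda k: (p - centers[k]) ** 2)
                clusters[j].append(p)                     # assign to nearest center
            centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]   # recompute means
        return centers

    data = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2]
    print(kmeans(data, centers=[0.0, 6.0]))   # converges to the two cluster means
    ```

    A good initialization matters because k-means only converges to a local minimum of the squared error; that is precisely what the Fuzzy Min-Max seeding addresses.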

  10. A New Method Presentation for Fault Location in Power Transformers

    OpenAIRE

    Hossein Mohammadpour; Rahman Dashti

    2011-01-01

    Power transformers are among the most important and expensive pieces of equipment in electric power systems. Consequently, transformer protection is an essential part of system protection. This paper presents a new method for locating transformer winding faults such as turn-to-turn, turn-to-core, turn-to-transformer-body, turn-to-earth, and high-voltage winding to low-voltage winding. In this study the current and voltage signals of input and output terminals of the tran...

  11. Groundwater penetrating radar and high resolution seismic for locating shallow faults in unconsolidated sediments

    International Nuclear Information System (INIS)

    Wyatt, D.E.

    1993-01-01

    Faults in shallow, unconsolidated sediments, particularly in coastal plain settings, are very difficult to discern during subsurface exploration, yet have a critical impact on groundwater flow, contaminant transport and geotechnical evaluations. This paper presents a case study using cross-over geophysical technologies in an area where shallow faulting is probable and known contamination exists. A comparison is made between Wenner and dipole-dipole resistivity data, ground penetrating radar, and high resolution seismic data. Data from these methods were verified with a cone penetrometer investigation of subsurface lithology and compared to existing monitoring well data. Interpretations from these techniques are compared with actual and theoretical shallow faulting found in the literature. The results of this study suggest that (1) the CPT study, combined with the monitoring well data, may suggest through discontinuities in correlatable zones that faulting is present; (2) the addition of the Wenner and dipole-dipole data may further suggest that offset zones exist in the shallow subsurface, but does not allow specific fault planes or fault strands to be mapped; (3) the high resolution seismic data will image faults to within a few feet of the surface but do not have the resolution to identify faulting on the scale of our models, although they will suggest locations for the upward continuation of faulted zones; (4) offset 100 MHz and 200 MHz CMP GPR will image zones and features that may be fault planes and strands similar to our models; (5) 300 MHz GPR will image higher resolution features that may suggest the presence of deeper faults and strands; and (6) the combination of all of the tools in this study, particularly the GPR and seismic, may allow the mapping of small-scale, shallow faulting in unconsolidated sediments.

  12. A Cubature-Principle-Assisted IMM-Adaptive UKF Algorithm for Maneuvering Target Tracking Caused by Sensor Faults

    Directory of Open Access Journals (Sweden)

    Huan Zhou

    2017-09-01

    Full Text Available Aimed at solving the problem of decreased filtering precision in maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. By dividing the IMM-UKF into two links, the algorithm introduces the cubature principle to approximate the probability density of the random variable after the interaction; considering the external link of the IMM-UKF, this constitutes the cubature-principle-assisted IMM method (CPIMM), which solves the non-Gaussian problem and leads to an adaptive matrix that balances the contribution of the state. The algorithm provides filtering solutions by considering the internal link of the IMM-UKF, which is a new adaptive UKF algorithm (NAUKF) addressing sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experiment results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves a very good tracking performance, and will be effective and applicable in the field of maneuvering target tracking.

  13. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems.

    Directory of Open Access Journals (Sweden)

    Hajara Idris

    Full Text Available The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in a Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate as well as a checkpoint-based rollback recovery strategy. Check-pointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was evaluated in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time.

  14. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    Full Text Available This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking of automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level, and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability to determine the secondary arc extinction time during transient faults as well as to block unsuccessful automatic reclosing during permanent faults.

  15. Design and development of an automated D.C. ground fault detection and location system for Cirus

    International Nuclear Information System (INIS)

    Marik, S.K.; Ramesh, N.; Jain, J.K.; Srivastava, A.P.

    2002-01-01

    Full text: The original design of the Cirus safety system provided for automatic detection of ground faults in the class I D.C. power supply system and their annunciation, followed by a delayed reactor trip. Identification of a faulty section had to be done manually by switching off various sections one at a time, thus requiring a lot of shutdown time. Since the class I supply feeds the safety control system, quick detection and location of ground faults in this supply is necessary, as these faults have the potential to bypass safety interlocks; hence the need for a new system for automatic location of a faulty section. Since such systems are not readily available in the market, in-house efforts were made to design and develop a plant-specific system, which has been installed and commissioned.

  16. Application of the Goertzel’s algorithm in the airgap mixed eccentricity fault detection

    Directory of Open Access Journals (Sweden)

    Reljić Dejan

    2015-01-01

    Full Text Available In this paper, a suitable method for the on-line detection of the airgap mixed eccentricity fault in a three-phase cage induction motor is proposed. The method is based on the Motor Current Signature Analysis (MCSA) approach, a technique often used for induction motor condition monitoring and fault diagnosis. It relies on the spectral analysis of the stator line current signal and the identification of specific frequency components, which are created as a result of motor faults. The most commonly used method for current signal spectral analysis is based on the Fast Fourier transform (FFT). However, due to its complexity and memory demands, the FFT algorithm is not always suitable for real-time systems. Instead of analyzing the whole spectrum, this paper suggests spectral analysis only at the expected airgap fault frequencies, employing Goertzel's algorithm to predict the magnitudes of these frequency components. The method is simple and can be implemented in real-time airgap mixed eccentricity monitoring systems without much computational effort. A low-cost data acquisition system, supported by LabView software, has been used for the hardware and software implementation of the proposed method. The method has been validated by laboratory experiments on both a line-connected and an inverter-fed three-phase four-pole cage induction motor operated at the rated frequency and under constant load at a few different values. In addition, the results of the proposed method have been verified through analysis of the motor's vibration signal. [Project of the Ministry of Science of the Republic of Serbia, no. III42004]
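
    As a minimal illustration of the single-bin evaluation the abstract describes, the sketch below applies the standard Goertzel recurrence to a synthetic 50 Hz tone; the signal parameters are assumed for the example and are not taken from the paper.

    ```python
    import math

    # Goertzel recurrence: evaluate the DFT magnitude at one target bin without
    # computing the whole spectrum (as done here for expected fault frequencies).
    def goertzel_magnitude(samples, target_freq, sample_rate):
        n = len(samples)
        k = round(n * target_freq / sample_rate)       # nearest DFT bin
        w = 2.0 * math.pi * k / n
        coeff = 2.0 * math.cos(w)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2           # second-order recurrence
            s_prev2, s_prev = s_prev, s
        power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
        return math.sqrt(power) * 2.0 / n              # single-sided amplitude

    fs, n = 1000.0, 1000
    sig = [0.8 * math.sin(2 * math.pi * 50 * t / fs) for t in range(n)]
    print(round(goertzel_magnitude(sig, 50.0, fs), 3))   # ≈ 0.8, the tone amplitude
    ```

    Per probed frequency this costs one multiply-accumulate per sample, which is why it suits a low-cost real-time monitor better than a full FFT.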

  17. Locating and decoding barcodes in fuzzy images captured by smart phones

    Science.gov (United States)

    Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    With the development of barcodes for commercial use, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones always affects the decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length, and a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the locating algorithm is based on the edge segment length of EAN-13 barcodes, while our decoding algorithm tolerates fuzzy regions in the barcode image. Experiments were performed on damaged, contaminated and scratched digital images, and provide quite promising results for EAN-13 barcode locating and decoding.
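
    One reason EAN-13 decoding can tolerate mis-read digits from a blurry region is the final checksum step, which is part of the EAN-13 standard (the sketch below is a generic validator, not the paper's algorithm).

    ```python
    # EAN-13 checksum validation: lets a decoder reject candidate digit strings
    # recovered from fuzzy regions that do not satisfy the standard check digit.
    def ean13_is_valid(code):
        if len(code) != 13 or not code.isdigit():
            return False
        digits = [int(c) for c in code]
        # Weights 1,3,1,3,... over the first 12 digits; check digit completes mod 10.
        total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
        return (10 - total % 10) % 10 == digits[12]

    print(ean13_is_valid("4006381333931"))  # True: a well-known valid example code
    print(ean13_is_valid("4006381333932"))  # False: corrupted check digit
    ```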

  18. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    Science.gov (United States)

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are among the most critical components of the satellite attitude control system; therefore, correct diagnosis of their faults is essential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this method fails to work correctly when a new or unknown fault occurs, in which case an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations. In this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be effectively used for fault diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determining exact class centers from the data of known faults, so that a discrete number of fault classes is fixed at the start. Similarity parameters are then derived and determined for each fault data point. Thereafter, depending on the similarity threshold, each data point is issued a class label: high-similarity points fall into one of the 'known-fault' classes, while low-similarity points are labeled as 'unknown faults'. Simulation results show that, compared to a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
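
    The labelling rule at the heart of this idea can be sketched as follows; for brevity, plain distance in 1-D stands in for the kernel similarity measure, and the data points, class names and threshold are all invented for the example.

    ```python
    # Sketch of the known/unknown labelling rule: points close enough to a known
    # fault-class center take that label; low-similarity points become 'unknown'.
    def label_points(points, centers, threshold):
        labels = []
        for p in points:
            dists = {name: abs(p - c) for name, c in centers.items()}
            name = min(dists, key=dists.get)            # nearest known class
            labels.append(name if dists[name] <= threshold else "unknown")
        return labels

    # Hypothetical class centers derived from historical (known) fault data.
    centers = {"friction": 1.0, "bus_fault": 5.0}
    print(label_points([0.8, 5.3, 9.0], centers, threshold=1.0))
    ```

    The first two points land in known classes while the third, being far from every center, is flagged as an unknown fault, which is the behaviour a purely supervised classifier cannot provide.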

  19. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge of fault mode behavior is extremely important for improving system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated, and several high resolution spectral estimation techniques have been developed and used to detect abnormal operating conditions of induction machines. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure; in particular, it is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
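
    The bearing characteristic frequencies such an estimator targets follow from standard bearing kinematics; the sketch below evaluates those textbook formulas for an assumed geometry (the shaft speed and bearing dimensions are illustrative, not from the paper).

    ```python
    import math

    # Standard bearing-defect characteristic frequencies (kinematic formulas).
    def bearing_fault_freqs(fr, n_balls, d_ball, d_pitch, contact_deg=0.0):
        """fr: shaft rotation frequency (Hz); dimensions in consistent units."""
        r = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
        return {
            "BPFO": 0.5 * n_balls * fr * (1 - r),                 # outer-race defect
            "BPFI": 0.5 * n_balls * fr * (1 + r),                 # inner-race defect
            "FTF":  0.5 * fr * (1 - r),                           # cage frequency
            "BSF":  0.5 * (d_pitch / d_ball) * fr * (1 - r * r),  # ball spin
        }

    # Illustrative geometry: 9 balls, 7.94 mm ball, 39.04 mm pitch, ~1797 rpm shaft.
    freqs = bearing_fault_freqs(fr=29.95, n_balls=9, d_ball=7.94, d_pitch=39.04)
    for name, f in freqs.items():
        print(f"{name}: {f:.2f} Hz")
    ```

    A spectral estimator like MD MUSIC then searches the current spectrum for components at (combinations of) these frequencies rather than scanning blindly.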

  20. Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.

    Science.gov (United States)

    Wang, Tao; Xie, Wenfang; Zhang, Youmin

    2012-05-01

    In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known; different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first, developed to accommodate uncertainties and faults without exact bounds information. The stability of the overall control system is proved using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
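
    The premise of the first algorithm (a known uncertainty bound) can be shown on a toy scalar plant: a switching law with gain above the disturbance bound drives the sliding variable to zero despite the unknown disturbance. The plant, gains and disturbance below are assumptions for illustration, not the paper's aircraft model.

    ```python
    import math

    # Toy sliding-mode sketch: plant x_dot = d(t) + u with |d| <= 1 unknown,
    # control u = -k*sign(s) with s = x and k = 2 > disturbance bound.
    def simulate(k=2.0, dt=1e-3, steps=5000):
        x = 1.0                                          # initial state
        for i in range(steps):
            d = math.sin(5.0 * i * dt)                   # unknown but bounded disturbance
            s = x                                        # sliding surface s = x
            u = -k * (1 if s > 0 else -1 if s < 0 else 0)
            x += (d + u) * dt                            # forward-Euler plant update
        return x

    print(abs(simulate()) < 0.01)   # state held in a small boundary layer around s = 0
    ```

    The adaptive version in the paper removes the need to know the bound by adjusting the gain online; the fixed-gain law above is the simpler baseline.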

  1. Radar Determination of Fault Slip and Location in Partially Decorrelated Images

    Science.gov (United States)

    Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Stough, Timothy; Pierce, Marlon; Wang, Jun

    2017-06-01

    Faced with the challenge of thousands of frames of radar interferometric images, automated feature extraction promises to spur data understanding and highlight geophysically active land regions for further study. We have developed techniques for automatically determining surface fault slip and location using deformation images from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), which is similar to satellite-based SAR but has more mission flexibility and higher resolution (pixels are approximately 7 m). This radar interferometry provides a highly sensitive method, clearly indicating faults slipping at levels of 10 mm or less. But interferometric images are subject to decorrelation between revisit times, creating spots of bad data in the image. Our method begins with freely available data products from the UAVSAR mission, chiefly unwrapped interferograms, coherence images, and flight metadata. The computer vision techniques we use assume no data gaps or holes; so a preliminary step detects and removes spots of bad data and fills these holes by interpolation and blurring. Detected and partially validated surface fractures from earthquake main shocks, aftershocks, and aseismic-induced slip are shown for faults in California, including El Mayor-Cucapah (M7.2, 2010), the Ocotillo aftershock (M5.7, 2010), and South Napa (M6.0, 2014). Aseismic slip is detected on the San Andreas Fault from the El Mayor-Cucapah earthquake, in regions of highly patterned partial decorrelation. Validation is performed by comparing slip estimates from two interferograms with published ground truth measurements.

  2. EXPERIMENT BASED FAULT DIAGNOSIS ON BOTTLE FILLING PLANT WITH LVQ ARTIFICIAL NEURAL NETWORK ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mustafa DEMETGÜL

    2008-01-01

    Full Text Available In this study, an artificial neural network is developed to rapidly detect faults in a pneumatic system and to protect the system against failure. Faults on the experimental bottle filling plant can be identified without any interference, using analog values taken from pressure sensors and linear potentiometers placed at different points of the plant. The network diagnoses the following fault conditions: no bottle present, cap-closing cylinder B not working, cap-closing cylinder C not working, insufficient air pressure, water not filling, and low air pressure. The faults are classified by an LVQ artificial neural network. A failure could also be found with conventional programming or a PLC; the reason for using an artificial neural network is that it indicates where the fault is, and the same approach can be applied to different systems. The aim is to detect faults with the ANN in real time. The fault data from the pneumatic system are collected by a data acquisition card. It is observed that the algorithm is well suited to many industrial plants that contain mechatronic systems.

  3. Fault Diagnosis for Actuators in a Class of Nonlinear Systems Based on an Adaptive Fault Detection Observer

    Directory of Open Access Journals (Sweden)

    Runxia Guo

    2016-01-01

    Full Text Available The problem of actuator fault diagnosis is pursued for a class of nonlinear control systems that are affected by bounded measurement noise and external disturbances. A novel fault diagnosis algorithm is proposed by combining ideas from adaptive control theory with a fault detection observer. The asymptotic stability of the fault detection observer is guaranteed by setting the adaptive adjusting law of the unknown fault vector, and a theoretically rigorous proof of asymptotic stability is given. Under the condition that random measurement noise generated by the sensors of the control system and external disturbances exist simultaneously, the designed fault diagnosis algorithm gives specific estimated values of the state variables and failures rather than just a simple fault warning. Moreover, the proposed algorithm is simple and concise and is easy to apply in practical engineering. Numerical experiments are carried out to evaluate the performance of the fault diagnosis algorithm, and the results show that the proposed diagnostic strategy has a satisfactory estimation effect.

  4. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

    Full Text Available A new fault-relevant kernel principal component analysis (KPCA) algorithm is proposed, and a fault detection approach is developed based on it. The proposed method further decomposes both the KPCA principal space and residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find fault-relevant principal directions and principal components of the systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process, and the simulation results show the effectiveness of the proposed method.
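    As a rough illustration of KPCA-based monitoring (not the paper's fault-relevant decomposition), one can fit a kernel PCA model on normal operating data and flag samples whose Hotelling-style T² in the retained kernel subspace exceeds an empirical control limit. The RBF kernel, gamma, component count, and 99% limit below are all assumptions for the sketch.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))            # data from normal operation
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit(X_train)

train_scores = kpca.transform(X_train)
score_var = train_scores.var(axis=0)           # per-component score variance

def t2(X):
    """Hotelling-style T^2 of samples X in the kernel principal subspace."""
    s = kpca.transform(X)
    return ((s ** 2) / score_var).sum(axis=1)

# empirical 99% control limit from the training data
limit = np.percentile(t2(X_train), 99)
X_new = X_train[:20] + rng.normal(scale=0.1, size=(20, 4))
alarms = t2(X_new) > limit                     # True where monitoring flags a sample
```

    A residual (SPE) statistic on the discarded subspace would normally accompany T²; it is omitted here for brevity.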

  5. Evaluation of the Location and Recency of Faulting Near Prospective Surface Facilities in Midway Valley, Nye County, Nevada

    Science.gov (United States)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2001-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block-the Bow Ridge fault on the west side of Exile Hill and the Paint-brush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  6. Evaluation of the location and recency of faulting near prospective surface facilities in Midway Valley, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2002-01-17

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block-the Bow Ridge fault on the west side of Exile Hill and the Paint-brush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  7. Evaluation of the location and recency of faulting near prospective surface facilities in Midway Valley, Nye County, Nevada

    International Nuclear Information System (INIS)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2002-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block-the Bow Ridge fault on the west side of Exile Hill and the Paint-brush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  8. Advanced features of the fault tree solver FTREX

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jae Joo

    2005-01-01

    This paper presents advanced features of the fault tree solver FTREX (Fault Tree Reliability Evaluation eXpert). Fault tree analysis is one of the most commonly used methods for the safety analysis of industrial systems, especially for the probabilistic safety analysis (PSA) of nuclear power plants. Fault trees are solved by classical Boolean algebra, the conventional Binary Decision Diagram (BDD) algorithm, the coherent BDD algorithm, and Bayesian networks. FTREX can optionally solve fault trees by the conventional BDD algorithm or the coherent BDD algorithm and can convert the fault trees into the form of Bayesian networks. The algorithm based on classical Boolean algebra solves a fault tree and generates minimal cut sets (MCSs). The conventional BDD algorithm generates a BDD structure of the top event and calculates the exact top event probability. The BDD structure is a factorized form of the prime implicants, and the MCSs of the top event can be extracted by reducing the prime implicants in the BDD structure. The coherent BDD algorithm was developed to overcome the shortcomings of the conventional BDD algorithm, such as its huge memory requirements and long run time.
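    For intuition, the MCS generation that a Boolean-algebra solver performs can be imitated on a toy fault tree: expand AND/OR gates top-down into candidate cut sets, then apply the absorption rule to keep only the minimal ones. The tree and the naive expansion below are illustrative only; FTREX's actual data structures and optimizations are far more sophisticated.

```python
from itertools import product

# gates: name -> (op, children); names absent from the dict are basic events
tree = {
    "TOP": ("AND", ["G1", "G2"]),
    "G1":  ("OR",  ["A", "B"]),
    "G2":  ("OR",  ["A", "C"]),
}

def cut_sets(node):
    """Return all cut sets (frozensets of basic events) failing `node`."""
    if node not in tree:                       # basic event
        return [frozenset([node])]
    op, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":                             # any child's cut set suffices
        return [cs for sets in child_sets for cs in sets]
    # AND: merge one cut set from each child (cross product)
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Drop non-minimal cut sets (the classical absorption rule)."""
    return [s for s in sets if not any(t < s for t in sets)]

mcs = sorted(minimize(cut_sets("TOP")), key=len)
# {A} alone, or B and C together, fail the top event
```

    Because TOP = (A + B)(A + C) = A + BC after absorption, the two minimal cut sets are {A} and {B, C}.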

  9. The Fault Detection, Localization, and Tolerant Operation of Modular Multilevel Converters with an Insulated Gate Bipolar Transistor (IGBT) Open Circuit Fault

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-04-01

    Full Text Available Reliability is one of the critical issues for a modular multilevel converter (MMC since it consists of a large number of series-connected power electronics submodules (SMs. In this paper, a complete control strategy including fault detection, localization, and tolerant operation is proposed for the MMC under an insulated gate bipolar transistor (IGBT open circuit fault. According to the output characteristics of the SM with the open-circuit fault of IGBT, a fault detection method based on the circulating current and output current observation is used. In order to further precisely locate the position of the faulty SM, a fault localization method based on the SM capacitor voltage observation is developed. After the faulty SM is isolated, the continuous operation of the converter is ensured by adopting the fault-tolerant strategy based on the use of redundant modules. To verify the proposed fault detection, fault localization, and fault-tolerant operation strategies, a 900 kVA MMC system under the conditions of an IGBT open circuit is developed in the Matlab/Simulink platform. The capabilities of rapid detection, precise positioning, and fault-tolerant operation of the investigated detection and control algorithms are also demonstrated.

  10. Integrated geophysical investigations in a fault zone located on southwestern part of İzmir city, Western Anatolia, Turkey

    Science.gov (United States)

    Drahor, Mahmut G.; Berge, Meriç A.

    2017-01-01

    Integrated geophysical investigations, consisting of the joint application of various geophysical techniques, have become a major tool of active tectonic studies. The choice of techniques depends on the geological features, tectonic and fault characteristics of the study area, the required resolution and penetration depth, and the available financial support. Fault geometry and offsets, sediment thickness and properties, features of folded strata, and the tectonic characteristics of near-surface sections of the subsurface can thus be thoroughly determined using integrated geophysical approaches. Although Ground Penetrating Radar (GPR), Electrical Resistivity Tomography (ERT), and Seismic Refraction Tomography (SRT) are the methods most commonly used in active tectonic investigations, other geophysical techniques also contribute to constraining different properties in the complex geological environments of tectonically active sites. In this study, six geophysical methods were used to define fault locations and characteristics around the study area: GPR, ERT, SRT, very low frequency electromagnetics (VLF), magnetics, and self-potential (SP). Overall, the integrated geophysical approaches yielded important results on the near-surface geological properties and faulting characteristics of the investigation area. After integrated interpretation of the geophysical surveys, we determined an optimal trench location for paleoseismological studies; the main geological properties associated with the faulting process were obtained from the subsequent trenching, and the geophysical results gave some indications of the active faulting mechanism in the area.
    Consequently, the trenching studies indicate that the integrated approach of geophysical techniques applied to the fault problem yields very useful, interpretable results describing the various properties of the fault zone at the investigation site.

  11. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  12. ALGORITHMIZATION OF PROBLEMS FOR OPTIMAL LOCATION OF TRANSFORMERS IN SUBSTATIONS OF DISTRIBUTED NETWORKS

    Directory of Open Access Journals (Sweden)

    M. I. Fursanov

    2014-01-01

    Full Text Available This article presents the algorithmization of search methods for the cost-effective replacement of consumer transformers in distribution networks. Like any electrical equipment in a power system, power transformers have a limited service life, determined by the natural degradation of materials as well as by unexpected wear under overload and overvoltage conditions. According to the standards adopted in the Republic of Belarus, the rated service life of a power transformer is 25 years, but situations arise in which it is economically efficient to replace a transformer earlier. The possibility of such replacement is considered as a way to increase the operating efficiency of an electrical network affected by physical wear and aging. The article discusses the shortcomings of earlier mathematical models of transformer replacement, in which partially worn transformers were simply withdrawn from use; in practice, a transformer removed from one substation can be used successfully at another, which matters especially when financial resources are limited and replacement requires a more detailed technical and economic justification. The authors developed an efficient algorithm for determining the optimal location of transformers at substations of distribution networks, based on searching for the best solution among all sets of displacements in an oriented graph. The suggested algorithm considerably reduces the design time for optimal transformer placement by using a set of simplifications. The result of the algorithm's work is a series of transformer displacements in the network, which yields a greater economic effect than the replacement of a single transformer.

  13. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of real-time locating and SLAM technology for ROS-based robots. It proposes an improved particle filter locating algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by about 5/6, largely eliminating the map-matching step of the robot's algorithm.

  14. A novel fault location scheme for power distribution system based on injection method and transient line voltage

    Science.gov (United States)

    Huang, Yuehua; Li, Xiaomin; Cheng, Jiangzhou; Nie, Deyu; Wang, Zhuoyuan

    2018-02-01

    This paper presents a novel fault location method based on injecting a travelling-wave current. The methodology relies on Time Difference Of Arrival (TDOA) measurements, which are available at the injection point and the end node of the main radial. In other words, the TDOA is the time of maximum correlation, at which the reflected wave crests of the injected and fault signals appear simultaneously; the fault distance is then the wave velocity multiplied by the TDOA. Furthermore, when transformers are connected to the end of the feeder, it is necessary to also compare the amplitudes of the transient voltages. Finally, to verify the effectiveness of this method, several simulations were undertaken using the MATLAB/SIMULINK software packages. The proposed fault location method shortens the positioning time while ensuring accuracy; the errors are 5.1% and 13.7%.
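    The core calculation — find the lag of the cross-correlation peak between the injected signal and the recorded trace, then multiply by the wave velocity — can be sketched on synthetic data. The sampling rate, wave speed, and pulse shape below are assumptions for illustration, not values from the paper.

```python
import numpy as np

fs = 1e6                       # sampling rate of the recorder, Hz (assumed)
v = 1.5e8                      # travelling-wave speed on the line, m/s (assumed)

# synthetic injected pulse and a measured trace containing its reflection
t = np.arange(2000)
pulse = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
true_delay = 40                                         # samples
trace = 0.6 * np.roll(pulse, true_delay)
trace += 0.001 * np.random.default_rng(1).normal(size=t.size)

# TDOA = lag of the cross-correlation peak
xcorr = np.correlate(trace, pulse, mode="full")
lag = int(xcorr.argmax()) - (pulse.size - 1)            # samples
tdoa = lag / fs                                         # seconds
distance = v * tdoa                                     # d = v * TDOA, per the scheme
print(lag, distance)                                    # 40 samples -> 6000.0 m
```

    With a 40-sample lag at 1 MHz and an assumed 1.5e8 m/s wave speed, the estimated fault distance is 6000 m.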

  15. Fault-tolerant control for current sensors of doubly fed induction generators based on an improved fault detection method

    DEFF Research Database (Denmark)

    Li, Hui; Yang, Chao; Hu, Yaogang

    2014-01-01

    Fault-tolerant control of current sensors is studied in this paper to improve the reliability of a doubly fed induction generator (DFIG). A fault-tolerant control system of current sensors is presented for the DFIG, which consists of a new current observer and an improved current sensor fault...... detection algorithm, and fault-tolerant control system are investigated by simulation. The results indicate that the outputs of the observer and the sensor are highly coherent. The fault detection algorithm can efficiently detect both soft and hard faults in current sensors, and the fault-tolerant control...

  16. An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective

    Directory of Open Access Journals (Sweden)

    Ľuboš Buzna

    2014-01-01

    Full Text Available We present a new approximation algorithm for the discrete facility location problem that provides solutions close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows finding an equitable location of facilities serving a large number of customers. The algorithm is independent of general-purpose solvers and instead uses algorithms originally designed to solve the p-median problem. Through numerical experiments, we demonstrate that our algorithm increases the size of solvable problems and provides high-quality solutions. The algorithm found an optimal solution for all tested instances where we could compare the results with an exact algorithm.

  17. Mobility-Assisted on-Demand Routing Algorithm for MANETs in the Presence of Location Errors

    Directory of Open Access Journals (Sweden)

    Trung Kien Vu

    2014-01-01

    Full Text Available We propose a mobility-assisted on-demand routing algorithm for mobile ad hoc networks in the presence of location errors. Location awareness enables mobile nodes to predict their mobility and enhances routing performance by estimating link duration and selecting reliable routes. However, measured locations intrinsically include measurement errors, which degrade mobility prediction and have been ignored in previous work. To mitigate the impact of location errors on routing, we propose an on-demand routing algorithm that takes location errors into account. To that end, we adopt the Kalman filter to estimate accurate locations and consider route confidence in discovering routes. Via simulations, we compare our algorithm with previous algorithms in various environments. Our proposed mobility prediction is robust to location errors.
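    The Kalman-filter step used to clean noisy location fixes can be illustrated with a one-dimensional constant-velocity model. The motion model and noise covariances below are assumptions for the sketch, not the paper's tuning.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 0.01 * np.eye(2)                      # process noise (assumed)
R = np.array([[4.0]])                     # measurement noise (location error)

def kalman_track(zs):
    """Filter a sequence of noisy 1-D position fixes; return the
    filtered position estimates."""
    x = np.array([[zs[0]], [0.0]])        # state: [position, velocity]
    P = np.eye(2)
    est = []
    for z in zs:
        x = F @ x                         # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x       # update with the new fix
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0, 0])
    return np.array(est)

rng = np.random.default_rng(0)
truth = np.arange(200) * 2.0                       # node moving at constant speed
meas = truth + rng.normal(scale=2.0, size=200)     # noisy position fixes
est = kalman_track(meas)
```

    The mean absolute error of the filtered track is lower than that of the raw fixes, which is the property the routing algorithm exploits when estimating link durations.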

  18. Multi-link faults localization and restoration based on fuzzy fault set for dynamic optical networks.

    Science.gov (United States)

    Zhao, Yongli; Li, Xin; Li, Huadong; Wang, Xinbo; Zhang, Jie; Huang, Shanguo

    2013-01-28

    Based on a distributed method of bit-error-rate (BER) monitoring, a novel multi-link faults restoration algorithm is proposed for dynamic optical networks. The concept of fuzzy fault set (FFS) is first introduced for multi-link faults localization, which includes all possible optical equipment or fiber links with a membership describing the possibility of faults. Such a set is characterized by a membership function which assigns each object a grade of membership ranging from zero to one. OSPF protocol extension is designed for the BER information flooding in the network. The BER information can be correlated to link faults through FFS. Based on the BER information and FFS, multi-link faults localization mechanism and restoration algorithm are implemented and experimentally demonstrated on a GMPLS enabled optical network testbed with 40 wavelengths in each fiber link. Experimental results show that the novel localization mechanism has better performance compared with the extended limited perimeter vector matching (LVM) protocol and the restoration algorithm can improve the restoration success rate under multi-link faults scenario.
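    A fuzzy fault set of the kind described — each suspect element carries a membership grade between zero and one — can be sketched by mapping each link's monitored BER to a grade. The log-scale interpolation and the healthy/faulty BER endpoints are illustrative assumptions, not the paper's membership function.

```python
import math

def fuzzy_fault_set(ber, ber_ok=1e-12, ber_fail=1e-3):
    """Grade each link's fault possibility in [0, 1] from its monitored
    bit-error-rate, interpolating on a log scale between a healthy BER
    and a clearly faulty BER (both endpoints are assumed values)."""
    lo, hi = math.log10(ber_ok), math.log10(ber_fail)
    grades = {}
    for link, b in ber.items():
        g = (math.log10(max(b, ber_ok)) - lo) / (hi - lo)
        grades[link] = min(max(g, 0.0), 1.0)
    # highest-membership suspects first
    return dict(sorted(grades.items(), key=lambda kv: -kv[1]))

suspects = fuzzy_fault_set({"L1": 1e-12, "L2": 5e-4, "L3": 1e-7})
print(suspects)   # L2 gets the highest grade, L1 the lowest
```

    Localization would then start from the highest-graded members of the set, which is the role the FFS plays in the restoration algorithm.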

  19. Fault Diagnosis of Plunger Pump in Truck Crane Based on Relevance Vector Machine with Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wenliao Du

    2013-01-01

    Full Text Available Promptly and accurately dealing with equipment breakdowns is very important for enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with the particle swarm optimization (PSO) algorithm, is proposed for the plunger pump in a truck crane. The particle swarm optimization algorithm is utilized to determine the kernel width parameter of the kernel function in the RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump in a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models such as the back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM). The experimental results show that the PSO-RVM is superior to the first three classical models and has performance comparable to the PSO-SVM, with corresponding diagnostic accuracies as high as 99.17% and 99.58%, respectively. However, the number of relevance vectors is far smaller than the number of support vectors, about 1/12–1/3 of the latter, which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.

  20. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for a multiple-robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot arms are conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  1. Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks

    National Research Council Canada - National Science Library

    Sorgaard, Duane

    2004-01-01

    A time-to-location algorithm can successfully resolve a geographic location of a computer node using only latency information from known sites and mathematically calculating the Euclidean distance...
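    The idea of resolving a node's position from latency alone — predict the delays a candidate coordinate would produce and pick the point whose predicted delays best match the measured ones in a Euclidean sense — can be sketched as a simple grid search. The landmark positions, propagation speed, and noise-free delays below are assumptions for illustration.

```python
import numpy as np

# known landmark coordinates (x, y in km)
landmarks = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
speed = 100.0   # assumed effective propagation speed, km per ms

# simulate the measured latencies from a node at an unknown position
target = np.array([40.0, 30.0])
latency = np.linalg.norm(landmarks - target, axis=1) / speed

# grid search minimizing the Euclidean mismatch between measured
# and predicted latency vectors
best, best_err = None, np.inf
for x in np.arange(0.0, 101.0, 1.0):
    for y in np.arange(0.0, 101.0, 1.0):
        pred = np.linalg.norm(landmarks - np.array([x, y]), axis=1) / speed
        err = np.linalg.norm(pred - latency)
        if err < best_err:
            best, best_err = (float(x), float(y)), err
print(best)   # (40.0, 30.0)
```

    With noise-free delays the search recovers the node's position exactly; real latencies include queuing and routing detours, which is why practical systems only approximate geographic distance.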

  2. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2017-01-01

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.

  3. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi

    2017-12-14

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
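    A minimal version of the residual-plus-CUSUM detection stage (not the authors' single-diode-model identification via the Artificial Bee Colony algorithm) can be sketched as follows. The reference value k, threshold h, and the simulated Pmpp residuals are assumptions for illustration.

```python
import numpy as np

def cusum_alarm(residuals, k=0.5, h=5.0):
    """One-sided CUSUM on residuals: return the index of the first alarm,
    or -1 if the statistic never exceeds the threshold h."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)        # accumulate only positive drift
        if s > h:
            return i
    return -1

rng = np.random.default_rng(0)
healthy = rng.normal(scale=0.1, size=200)            # Pmpp residuals, no fault
shifted = rng.normal(loc=1.5, scale=0.1, size=100)   # residuals after a fault
faulty = np.concatenate([healthy, shifted])

print(cusum_alarm(healthy), cusum_alarm(faulty))     # no alarm, then a prompt alarm
```

    The CUSUM statistic stays at zero while the measured and simulated peak powers agree, and accumulates quickly once an incipient fault biases the residuals, which is what makes the chart suited to small, persistent shifts.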

  4. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  6. Reconnaissance geophysics to locate major faults in clays

    International Nuclear Information System (INIS)

    Jackson, P.D.; Hallam, J.R.; Raines, M.G.; Rainsbury, M.P.; Greenwood, P.G.; Busby, J.P.

    1991-01-01

    Trial surveys using resistivity, seismic refraction and electromagnetic techniques have been carried out at two potential research sites on Jurassic clays. A previously unknown major fault has been detected at Down Ampney and mapped during the main survey to a precision of 5 to 10 metres by resistivity profiling using a Schlumberger array, optimized from pre-existing data and model studies. The fault is identified by a strong characteristic signature which results from the fault displacement and a zone of disturbance within the clay. This method is rapid, provides high resolution and permits immediate field interpretation.

  7. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Full Text Available Rotating machinery is the most common machinery in industry, and the root cause of its faults is often faulty rolling element bearings. This paper presents a technique that uses an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values) derived from the vibration signals of the test data. The results show that the performance of the proposed optimized system is better than that of most previous studies, even though it uses only two features. The effectiveness of the method is illustrated using the obtained bearing vibration data.

  8. Fault Detection and Isolation for Spacecraft

    DEFF Research Database (Denmark)

    Jensen, Hans-Christian Becker; Wisniewski, Rafal

    2002-01-01

    This article realizes nonlinear Fault Detection and Isolation for actuators, given that there is no measurement of the states in the actuators. The Fault Detection and Isolation of the actuators is instead based on angular velocity measurements of the spacecraft and knowledge about the dynamics of the satellite. The algorithms presented in this paper are based on a geometric approach to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study and the pros and cons of the algorithms are discussed.

  9. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    Science.gov (United States)

    Malekpour, Mahyar R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease; it is scalable with respect to the number of nodes in the system and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.

  10. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Science.gov (United States)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed: the mean current value of each block is used to calculate a fault weight factor, a fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checks, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, confirming the validity and practicality of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
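The block-wise screening idea can be sketched as follows; this is not the paper's exact weight-factor formula, and the threshold factor `alpha` is a hypothetical choice. Each block's mean string current is compared against a threshold derived from the array-wide mean.

```python
def locate_faulty_blocks(block_currents, alpha=0.8):
    """Flag PV array blocks whose mean string current falls below a fault
    threshold derived from the array-wide mean current.
    block_currents maps a block id (row, col) to the list of measured string
    currents in that block; alpha scales the array mean into a threshold."""
    means = {bid: sum(cs) / len(cs) for bid, cs in block_currents.items()}
    array_mean = sum(means.values()) / len(means)
    threshold = alpha * array_mean
    # blocks well below the array-wide mean are reported as fault candidates
    return sorted(bid for bid, m in means.items() if m < threshold)
```

A shading check (e.g. comparing neighbouring blocks or irradiance sensors, as the paper suggests) would be applied before declaring a flagged block faulty.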

  11. SENSORS FAULT DIAGNOSIS ALGORITHM DESIGN OF A HYDRAULIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Matej ORAVEC

    2017-06-01

    Full Text Available This article presents the design of a sensor fault diagnosis system for a hydraulic system, based on a group of three fault estimation filters. These filters are used to estimate the system states and the sensor fault magnitude. The article also briefly presents the design of a state controller with an integrator for the hydraulic system, which is an important assumption for the fault diagnosis system design. The sensor fault diagnosis system is implemented in the Matlab/Simulink environment and verified using a simulation model of the controlled hydraulic system. Verification of the designed fault diagnosis system is realized by a series of experiments that simulate sensor faults. The results of the experiments are briefly presented in the last part of this article.

  12. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin of clays in fault rocks and their distributions is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ∼1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that the formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  13. An Endosymbiotic Evolutionary Algorithm for the Hub Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Ji Ung Sun

    2015-01-01

    Full Text Available We consider a capacitated hub location-routing problem (HLRP) which combines the hub location problem and multihub vehicle routing decisions. The HLRP not only determines the locations of the capacitated p-hubs within a set of potential hubs but also deals with the routes of the vehicles to meet the demands of customers. This problem is formulated as a 0-1 mixed integer programming model with the objective of minimizing total cost, including routing cost, fixed hub cost, and fixed vehicle cost. As the HLRP is impractically demanding to solve exactly for large-sized problems, we develop a solution method based on the endosymbiotic evolutionary algorithm (EEA), which solves the hub location and vehicle routing problems simultaneously. The performance of the proposed algorithm is examined through a comparative study. The experimental results show that the proposed EEA can be a viable solution method for supply chain network planning.

  14. Diagnosis and Tolerant Strategy of an Open-Switch Fault for T-type Three-Level Inverter Systems

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo Beum; Blaabjerg, Frede

    2014-01-01

    This paper proposes a new diagnosis method for an open-switch fault and a fault-tolerant control strategy for T-type three-level inverter systems. The location of the faulty switch can be identified by the average of the normalized phase current and the change of the neutral-point voltage. The proposed fault-tolerant strategy is explained by dividing it into two cases: the faulty condition of the half-bridge switches and that of the neutral-point switches. The performance of the T-type inverter system improves considerably under the proposed fault-tolerant algorithm when a switch fails. The proposed method does not require additional components or complex calculations. Simulation and experimental results verify the feasibility of the proposed fault diagnosis and fault-tolerant control strategy.

  15. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already-located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be memory- and CPU-intensive in such cases, as is obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations of the training fingerprints which reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with the best results. We compare our proposal with other systems in the literature and draw conclusions about the improvements obtained. Moreover, some techniques, such as filtering nonstable access points to improve accuracy, are introduced, studied, and validated.
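A minimal room-level fingerprint classifier of the kind compared in such studies is k-nearest-neighbour over RSSI vectors; this sketch assumes a flat floor value for access points missing from a fingerprint, which is a common but hypothetical choice.

```python
from collections import Counter

def locate_room(fingerprint, training, k=3, missing=-100.0):
    """k-NN room classifier over Wi-Fi RSSI fingerprints.
    fingerprint: {ap_id: rssi_dbm}; training: list of (room, {ap_id: rssi_dbm}).
    Access points absent from a fingerprint are assigned the `missing` floor."""
    def dist(a, b):
        aps = set(a) | set(b)
        return sum((a.get(ap, missing) - b.get(ap, missing)) ** 2 for ap in aps)

    # majority vote among the k training fingerprints closest in RSSI space
    nearest = sorted(training, key=lambda rec: dist(fingerprint, rec[1]))[:k]
    return Counter(room for room, _ in nearest).most_common(1)[0][0]
```

The training-point selection studied in the paper would control which `(room, fingerprint)` records populate `training`.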

  16. What does fault tolerant Deep Learning need from MPI?

    Energy Technology Data Exchange (ETDEWEB)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.; Daily, Jeffrey A.

    2017-09-25

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large scale data analysis. DL algorithms are computationally expensive; even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults, requiring development of a fault tolerant system infrastructure in addition to fault tolerant DL algorithms. This raises an important question: what is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); a need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration of several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe with a ULFM-based implementation. Our evaluation using the ImageNet dataset and the AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.

  17. Multi-Stage Feature Selection by Using Genetic Algorithms for Fault Diagnosis in Gearboxes Based on Vibration Signal

    Directory of Open Access Journals (Sweden)

    Mariela Cerrada

    2015-09-01

    Full Text Available There are growing demands for condition-based monitoring of gearboxes, and techniques to improve the reliability, effectiveness and accuracy of fault diagnosis are considered valuable contributions. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance in the diagnosis system. The main aim of this research is to propose a multi-stage feature selection mechanism for selecting the best set of condition parameters in the time, frequency and time-frequency domains, which are extracted from vibration signals for fault diagnosis purposes in gearboxes. The selection is based on genetic algorithms, proposing in each stage a new subset of the best features regarding the classifier performance in a supervised environment. The selected features are augmented at each stage and used as input for a neural network classifier in the next step, while a new subset of feature candidates is treated by the selection process. As a result, the inherent exploration and exploitation of the genetic algorithms for finding the best solutions of the selection problem are locally focused. The approach is tested on a dataset from a real test bed with several fault classes under different running conditions of load and velocity. The model performance for diagnosis is over 98%.
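One stage of GA-driven feature selection can be sketched as follows. The encoding (bitmask chromosomes), the operators (truncation selection, one-point crossover, bit-flip mutation) and all hyperparameters are generic illustrations rather than the paper's configuration, and `fitness` stands in for the supervised classifier score used there.

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, p_mut=0.05, seed=0):
    """Select a feature subset by a simple genetic algorithm.
    Chromosomes are bitmasks over the candidate features; `fitness` scores a
    list of selected feature indices (e.g. cross-validated classifier accuracy).
    Returns the indices selected by the best chromosome found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]

    def score(chrom):
        return fitness([i for i, bit in enumerate(chrom) if bit])

    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=score)
    return [i for i, bit in enumerate(best) if bit]
```

In a multi-stage scheme like the paper's, the subset returned by one stage would be fixed as classifier input while the next stage searches a fresh pool of candidate features.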

  18. S-velocity structure in Cimandiri fault zone derived from neighbourhood inversion of teleseismic receiver functions

    Science.gov (United States)

    Syuhada; Anggono, T.; Febriani, F.; Ramdhan, M.

    2018-03-01

    The availability of information about a realistic velocity model of the Earth in the fault zone is crucial for quantitative seismic hazard analysis, such as ground motion modelling and the determination of earthquake locations and focal mechanisms. In this report, we use teleseismic receiver functions to invert for the S-velocity model beneath a seismic station located in the Cimandiri fault zone using the neighbourhood algorithm inversion method. The result suggests the crustal thickness beneath the station is about 32-38 km. Furthermore, low velocity layers with high Vp/Vs exist in the lower crust, which may indicate the presence of hot material ascending from the subducted slab.

  19. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  1. Brake fault diagnosis using Clonal Selection Classification Algorithm (CSCA – A statistical learning approach

    Directory of Open Access Journals (Sweden)

    R. Jegadeeshwaran

    2015-03-01

    Full Text Available In an automobile, the brake system is an essential part responsible for control of the vehicle. Any failure in the brake system impacts the vehicle's motion and can have catastrophic effects on vehicle and passenger safety. Thus the brake system plays a vital role in an automobile, and hence condition monitoring of the brake system is essential. Vibration-based condition monitoring using machine learning techniques is gaining momentum. This study is one such attempt to perform condition monitoring of a hydraulic brake system through vibration analysis. In this research, the performance of a Clonal Selection Classification Algorithm (CSCA) for brake fault diagnosis is reported. A hydraulic brake system test rig was fabricated. Under good and faulty conditions of the brake system, vibration signals were acquired using a piezoelectric transducer. Statistical parameters were extracted from the vibration signal, and the best feature set was identified for classification using an attribute evaluator. The selected features were then classified using the CSCA. The classification accuracy of this artificial intelligence technique has been compared with other machine learning approaches and discussed. The Clonal Selection Classification Algorithm performs better and gives the maximum classification accuracy (96%) for the fault diagnosis of a hydraulic brake system.
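The statistical parameters typically extracted from such vibration records can be computed directly; the particular features below (mean, RMS, kurtosis, crest factor, peak) are common choices in this literature, not necessarily the exact set used in the study.

```python
import math

def vibration_features(signal):
    """Compute a few standard statistical features of a vibration record.
    Returns a dict of feature name to value; these would feed an attribute
    evaluator and then a classifier such as the CSCA."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    # kurtosis is sensitive to the impulsive spikes a faulty component produces
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / (var * var) if var else 0.0
    return {"mean": mean, "std": math.sqrt(var), "rms": rms, "peak": peak,
            "kurtosis": kurt, "crest_factor": peak / rms if rms else 0.0}
```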

  2. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, M; Gu, F; Ball, A, E-mail: M.Ahmed@hud.ac.uk [Diagnostic Engineering Research Group, University of Huddersfield, HD1 3DH (United Kingdom)

    2011-07-19

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.

  4. Detecting Faults in Southern California using Computer-Vision Techniques and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Interferometry

    Science.gov (United States)

    Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.

    2013-12-01

    Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. 
Following an earthquake, the newly discovered faults can
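As a toy analogue of the edge-detection step in this record, a Sobel gradient-magnitude filter flags the linear discontinuity a fault leaves in an unwrapped interferogram; the threshold below is arbitrary, and a real pipeline would add phase unwrapping, noise filtering, and line extraction.

```python
def sobel_edges(img, thresh=1.0):
    """Sobel gradient-magnitude edge detector over a 2-D list of floats
    (e.g. line-of-sight displacement). Returns a binary edge mask of the
    same shape; border pixels are left unmarked."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical Sobel responses
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                mask[y][x] = 1
    return mask
```

On a synthetic interferogram with a displacement step along one column (the signature of slip on a vertical fault), only the pixels straddling the step are flagged.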

  5. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    Science.gov (United States)

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.
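The core of such a back-projection search, keeping the candidate source whose time-shifted traces stack most coherently, can be sketched in one dimension; the station geometry, velocity, sampling and grid below are invented for illustration, and the stacked amplitude is a crude stand-in for the paper's phase-coherence measure.

```python
def locate_by_stack(stations, traces, dt, v, grid):
    """Grid-search source location. For each candidate position x, shift each
    station's trace back by the predicted S travel time |x - station| / v and
    keep the position whose aligned stack reaches the largest amplitude.
    stations: 1-D positions; traces: equal-rate sample lists; dt: sample
    interval; v: S-wave speed; grid: candidate source positions."""
    best, best_amp = None, -1.0
    for x in grid:
        shifts = [int(round(abs(x - sx) / (v * dt))) for sx in stations]
        n = min(len(tr) - s for tr, s in zip(traces, shifts))
        if n <= 0:
            continue
        # amplitude of the travel-time-aligned stack over the usable window
        amp = max(abs(sum(tr[k + s] for tr, s in zip(traces, shifts)))
                  for k in range(n))
        if amp > best_amp:
            best, best_amp = x, amp
    return best
```

When the candidate matches the true source, the S arrivals align and stack constructively; elsewhere they cancel, which is what makes the approach workable for low-amplitude events with only S-phase energy.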

  6. A Location-Based Business Information Recommendation Algorithm

    Directory of Open Access Journals (Sweden)

    Shudong Liu

    2015-01-01

    Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both academia and industry. In this paper, we first construct a region-based location graph (RLG), in which each region node connects with user nodes and business information nodes, and then we propose a location-based recommendation algorithm based on the RLG. It combines users' short-range mobility, formed by daily activity, with their long-distance mobility, formed by social network ties, and can accordingly recommend both local and long-distance business information to users. Moreover, it combines user-based collaborative filtering with item-based collaborative filtering, and it can alleviate the cold start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in terms of recommendation accuracy.

  7. Analysis of lightning fault detection, location and protection on short and long transmission lines using Real Time Digital Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Andre Luiz Pereira de [Siemens Ltda., Sao Paulo, SP (Brazil)], E-mail: andreluiz.oliveira@siemens.com

    2007-07-01

    The purpose of this paper is to present an analysis of lightning fault detection, location and protection using numeric distance relays applied to high voltage transmission lines, more specifically the 500 kV transmission lines of CEMIG (a Brazilian energy utility) between the Vespasiano 2 - Neves 1 (short line - 23.9 km) and Vespasiano 2 - Mesquita (long line - 148.6 km) substations. The analysis was based on the simulation results of numeric distance protective relays on power transmission lines, obtained from September 2 to 6, 2002, at Siemens AG's facilities (Erlangen - Germany), using the Real Time Digital Simulator (RTDS(TM)). Several lightning fault simulations were carried out under various conditions of the electrical power system in which the protective relays would be installed. The results are presented not only with the lightning fault clearing times, but also the full functionality of a protection system, including correct detection, location and other advantages that these modern protection devices make possible for the power system. (author)

  8. Finite-Fault and Other New Capabilities of CISN ShakeAlert

    Science.gov (United States)

    Boese, M.; Felizardo, C.; Heaton, T. H.; Hudnut, K. W.; Hauksson, E.

    2013-12-01

    Over the past 6 years, scientists at Caltech, UC Berkeley, the Univ. of Southern California, the Univ. of Washington, the US Geological Survey, and ETH Zurich (Switzerland) have developed the 'ShakeAlert' earthquake early warning demonstration system for California and the Pacific Northwest. We have now started to transform this system into a stable end-to-end production system that will be integrated into the daily routine operations of the CISN and PNSN networks. To quickly determine the earthquake magnitude and location, ShakeAlert currently processes and interprets real-time data-streams from several hundred seismic stations within the California Integrated Seismic Network (CISN) and the Pacific Northwest Seismic Network (PNSN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently being shared with around 160 individuals, companies, and emergency response organizations to gather feedback about the system performance, to educate potential users about EEW, and to identify needs and applications of EEW in a future operational warning system. To improve the performance during large earthquakes (M>6.5), we have started to develop, implement, and test a number of new algorithms for the ShakeAlert system: the 'FinDer' (Finite Fault Rupture Detector) algorithm provides real-time estimates of the locations and extents of finite-fault ruptures from high-frequency seismic data. The 'GPSlip' algorithm estimates the fault slip along these ruptures using high-rate real-time GPS data. And, third, a new type of ground-motion prediction model, derived from over 415,000 rupture simulations along active faults in southern California, improves MMI intensity predictions for large earthquakes by accounting for finite-fault, rupture directivity, and basin response effects. FinDer and GPSlip are currently being real-time and offline tested in a separate internal

  9. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing the fault points in poorly sampled areas can not only efficiently construct fault models, but also reduce manual intervention. By using fault-based interpolation and remeshing of the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  10. Accuracy of algorithms to predict accessory pathway location in children with Wolff-Parkinson-White syndrome.

    Science.gov (United States)

    Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric

    2012-02-01

    The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.

  11. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and the weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to solve the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.

  12. Approximation algorithms for facility location problems with discrete subadditive cost functions

    NARCIS (Netherlands)

    Gabor, A.F.; van Ommeren, Jan C.W.

    2005-01-01

    In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present two facility location problems with stochastic demand and exponential servers, respectively inventory. We present a $(1+\\epsilon,1)$- reduction of the

  13. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
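
    The distance calculation that such synchronized two-end schemes rely on can be sketched as follows; this is a hedged illustration, not the patent's actual method: the function name, the surge propagation speed, and the use of the arrival-time difference are assumptions.

```python
def fault_distance(line_length_km, t_master_s, t_remote_s, wave_speed_km_s):
    """Two-end fault location with synchronized clocks: the fault surge
    travels x to the master unit and (L - x) to the remote unit, so
    x/v - (L - x)/v = t_master - t_remote, which solves to the line below."""
    return (line_length_km + wave_speed_km_s * (t_master_s - t_remote_s)) / 2.0
```

    For example, on a 100 km line a fault 30 km from the master unit reaches the master 40/v seconds before it reaches the remote unit, from which the formula recovers the 30 km distance.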

  14. Decision making algorithms for hydro-power plant location

    CERN Document Server

    Majumder, Mrinmoy

    2013-01-01

    The present study attempts to apply the advantages of neuro-genetic algorithms to optimal decision making in the maximum utilization of natural resources. Hydro-power is an inexpensive but reliable source of alternative energy, which is foreseen as a possible answer to the present crisis in the energy sector. However, the major problem related to hydro-energy is its dependency on location: an ideal location can produce maximum energy with minimum loss. Besides, such a power-plant also requires a substantial amount of land, which is a precious resource nowadays due to the rapid and unco

  15. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

    International Nuclear Information System (INIS)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C.

    2004-01-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis

  16. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.

  17. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    Full Text Available One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause the values of received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from the IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and can achieve the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm can achieve greater than 95% correct floor determination under the scenario in which 40% of RNs failed.
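
    The core "mean of the strongest RSSs" statistic can be sketched as below; the data layout (per-floor RSS lists with None marking failed reference nodes) and the choice k = 3 are illustrative assumptions, not details from the paper.

```python
def determine_floor(rss_by_floor, k=3):
    """Return the floor whose k strongest RSS readings have the highest
    mean. None entries (failed reference nodes) are skipped, which is
    what keeps the statistic usable under RN-failure scenarios."""
    best_floor, best_score = None, float("-inf")
    for floor, readings in rss_by_floor.items():
        valid = sorted((r for r in readings if r is not None), reverse=True)
        if not valid:
            continue  # every RN on this floor failed
        strongest = valid[:k]
        score = sum(strongest) / len(strongest)
        if score > best_score:
            best_floor, best_score = floor, score
    return best_floor
```

    Because failed nodes are simply excluded before averaging, a floor with some failed RNs is still scored from its surviving readings rather than being penalized by missing values.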

  18. Nonlinear observer based fault detection and isolation for a momentum wheel

    DEFF Research Database (Denmark)

    Jensen, Hans-Christian Becker; Wisniewski, Rafal

    2001-01-01

    This article realizes nonlinear Fault Detection and Isolation for a momentum wheel. The Fault Detection and Isolation is based on a Failure Mode and Effect Analysis, which states which faults might occur and can be detected. The algorithms presented in this paper are based on a geometric approach...... to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study and the pros and cons of the algorithm are discussed....

  19. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  20. Rectifier Fault Diagnosis and Fault Tolerance of a Doubly Fed Brushless Starter Generator

    Directory of Open Access Journals (Sweden)

    Liwei Shi

    2015-01-01

    Full Text Available This paper presents a rectifier fault diagnosis method based on wavelet packet analysis to improve the reliability of the fault-tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and fault-tolerant principle of the highly reliable DFBLSG are given, and common rectifier faults are analyzed. The wavelet packet transform fault detection/identification algorithm is introduced in detail. Fault-tolerant performance and output voltage experiments were conducted to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, an energy-eigenvalue tolerance is introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault-tolerant four-phase DFBLSG can detect the usual open-circuit fault and operate in fault-tolerant mode if a fault occurs. The results indicate that the fault analysis techniques in this paper are accurate and effective.

  1. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    Science.gov (United States)

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet high reliability demand for application in electric vehicles. The voltage-source inverter (VSI) supplying the motor is subjected to open circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. A FD method based on available information from the control block is proposed; this method is simple, robust to common transients in motor and able to localize multiple open circuit faults. The proposed FD and FT control algorithm are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulation and experimental results are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid

    2017-05-12

    The estimation of angular-location and range of a target is a joint optimization problem. In this work, to estimate these parameters, by meticulously evaluating the phase of the received samples, low complexity sequential and joint estimation algorithms are proposed. We use a single-input and multiple-output (SIMO) system and transmit frequency-modulated continuous-wave signal. In the proposed algorithm, it is shown that by ignoring very small value terms in the phase of the received samples, fast-Fourier-transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. Sequential estimation algorithm uses FFT and requires only one received snapshot to estimate the angular-location. Joint estimation algorithm uses two-dimensional FFT to estimate the angular-location and range of the target. Simulation results show that joint estimation algorithm yields better mean-squared-error (MSE) for the estimation of angular-location and much lower run-time compared to conventional MUltiple SIgnal Classification (MUSIC) algorithm.
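
    The sequential angle-estimation step can be illustrated with a toy single-target sketch: for a uniform linear array the phase advances by 2π(d/λ)sin θ per element, so the peak of a zero-padded DFT across the elements gives the spatial frequency. The half-wavelength spacing, the DFT length, and the noiseless snapshot are assumptions for illustration, and a plain O(N·n_fft) DFT stands in for the FFT.

```python
import cmath
import math

def estimate_angle(snapshot, d_over_lambda=0.5, n_fft=1024):
    """Estimate a single target's angular location (degrees) from one
    array snapshot by locating the peak of a zero-padded DFT taken over
    the element index."""
    best_k, best_mag = 0, -1.0
    for k in range(n_fft):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n_fft)
                for i, x in enumerate(snapshot))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    f = best_k / n_fft
    if f > 0.5:               # map bins above Nyquist to negative frequencies
        f -= 1.0
    return math.degrees(math.asin(f / d_over_lambda))
```

    Zero-padding refines the frequency grid, so the peak bin lands close to the true spatial frequency even with only 16 elements.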

  3. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid; Jardak, Seifallah; Alouini, Mohamed-Slim

    2017-01-01

    The estimation of angular-location and range of a target is a joint optimization problem. In this work, to estimate these parameters, by meticulously evaluating the phase of the received samples, low complexity sequential and joint estimation algorithms are proposed. We use a single-input and multiple-output (SIMO) system and transmit frequency-modulated continuous-wave signal. In the proposed algorithm, it is shown that by ignoring very small value terms in the phase of the received samples, fast-Fourier-transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. Sequential estimation algorithm uses FFT and requires only one received snapshot to estimate the angular-location. Joint estimation algorithm uses two-dimensional FFT to estimate the angular-location and range of the target. Simulation results show that joint estimation algorithm yields better mean-squared-error (MSE) for the estimation of angular-location and much lower run-time compared to conventional MUltiple SIgnal Classification (MUSIC) algorithm.

  4. Fault Diagnosis of Power Systems Using Intelligent Systems

    Science.gov (United States)

    Momoh, James A.; Oliver, Walter E. , Jr.

    1996-01-01

    The power system operator's need for a reliable power delivery system calls for a real-time or near-real-time AI-based fault diagnosis tool. Such a tool will allow NASA ground controllers to re-establish a normal or near-normal degraded operating state of the EPS (a DC power system) for Space Station Alpha by isolating the faulted branches and loads of the system and, after isolation, re-energizing those branches and loads found to be fault-free. A proposed solution involves using the Fault Diagnosis Intelligent System (FDIS) to perform near-real-time fault diagnosis of Alpha's EPS by downloading power transient telemetry at fault time from onboard data loggers. The FDIS uses an ANN clustering algorithm augmented with a wavelet transform feature extractor. This combination enables the system to perform pattern recognition of the power transient signatures to diagnose the fault type and its location down to the orbital replaceable unit. FDIS has been tested using a simulation of the LeRC Testbed Space Station Freedom configuration, including the topology from the DDCUs to the electrical loads attached to the TPDUs. FDIS works in conjunction with the Power Management Load Scheduler to determine the state of the system at the time of the fault condition. This information is used to activate the appropriate diagnostic section and, if necessary, to refine the solution obtained. In the latter case, if the FDIS reports that the faulty device is equally likely to be 'star tracker #1' or 'time generation unit', then, based on a priori knowledge of the system's state, the refined solution would be 'star tracker #1', located in cabinet ITAS2. It is concluded from the present studies that artificial intelligence diagnostic abilities are improved with the addition of the wavelet transform, and that when a system such as FDIS is coupled to the Power Management Load Scheduler, a faulty device can be located and isolated.

  5. A novel algorithm for discrimination between inrush current and internal faults in power transformer differential protection based on discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Eldin, A.A. Hossam; Refaey, M.A. [Electrical Engineering Department, Alexandria University, Alexandria (Egypt)

    2011-01-15

    This paper proposes a novel methodology for transformer differential protection, based on wave-shape recognition of a discriminating criterion extracted from the instantaneous differential currents. The discrete wavelet transform is applied to the differential currents due to internal faults and inrush currents. The diagnosis criterion is based on the median absolute deviation (MAD) of the wavelet coefficients over a specified frequency band. The proposed algorithm is examined using various simulated inrush and internal fault current cases on a power transformer modeled using the electromagnetic transients program EMTDC. Results of the evaluation study show that the proposed wavelet-based differential protection scheme can discriminate internal faults from inrush currents. (author)
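
    The MAD-over-wavelet-coefficients criterion can be illustrated with one Haar decomposition level; the Haar wavelet and the single-level detail band are simplifications for illustration, not the authors' actual wavelet or frequency band.

```python
import statistics

def haar_detail(signal):
    """One level of the Haar DWT: the detail coefficients capture the
    high-frequency content that differs between inrush and fault currents."""
    return [(signal[i] - signal[i + 1]) / 2 ** 0.5
            for i in range(0, len(signal) - 1, 2)]

def mad(values):
    """Median absolute deviation, the robust dispersion statistic the
    criterion applies to the wavelet coefficients."""
    m = statistics.median(values)
    return statistics.median(abs(v - m) for v in values)
```

    A hypothetical decision rule along these lines would declare an internal fault when mad(haar_detail(differential_current)) exceeds a calibrated threshold; MAD is used because, unlike the standard deviation, it is barely affected by a few large outlier coefficients.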

  6. Improved Tensor-Based Singular Spectrum Analysis Based on Single Channel Blind Source Separation Algorithm and Its Application to Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2017-04-01

    Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case that the observed signals are under-determined, a novel approach for single-channel blind source separation (SCBSS) based on an improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract multi-fault features from the measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly unsatisfactory convergence of TSSA in many cases and difficulty in accurately estimating the number of source signals. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrix using a first-order optimization approach instead of the original least-squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition—singular value decomposition—Bayesian information criterion) method is introduced in place of the SVD in the conventional TSSA. To validate the proposed method, we applied it to the analysis of a numerical simulation signal and multi-fault rolling bearing signals.

  7. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent with the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which...

  8. An improved fault detection classification and location scheme based on wavelet transform and artificial neural network for six phase transmission line using single end data only.

    Science.gov (United States)

    Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit

    2015-01-01

    Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission, which offers a viable alternative for transmitting more power without major modification of the existing structure of the three-phase double-circuit transmission system. In spite of these advantages, the low acceptance of six-phase systems is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single-end data only. The standard deviations of the approximate coefficients of the voltage and current signals obtained using the discrete wavelet transform are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance and fault inception angle. Variation in power system parameters, viz. short-circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it ideal for real-time implementation.

  9. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    Science.gov (United States)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    The receiver autonomous integrity monitoring (RAIM) is one of the most important parts in an avionic navigation system. Two problems need to be addressed to improve this system, namely, the degeneracy phenomenon and lack of samples for the standard particle filter (PF). However, the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.
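
    The consistency check between the main filter and the auxiliary filters can be sketched as a simple exclusion test; the scalar estimates, the threshold, and the function name are illustrative assumptions rather than the paper's exact test statistic.

```python
def raim_check(main_est, aux_ests, threshold):
    """aux_ests maps each satellite id to the estimate of the auxiliary
    filter that excludes that satellite. If excluding some satellite moves
    the estimate away from the main filter's by more than the threshold,
    that satellite is isolated as faulty; otherwise no fault is declared."""
    deviations = {sat: abs(est - main_est) for sat, est in aux_ests.items()}
    worst = max(deviations, key=deviations.get)
    return worst if deviations[worst] > threshold else None
```

    The intuition: a faulty satellite contaminates the main filter, so the one auxiliary filter that excludes it disagrees most strongly with the main estimate.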

  10. Iowa Bedrock Faults

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This fault coverage locates and identifies all currently known/interpreted fault zones in Iowa that demonstrate offset of geologic units in exposure or subsurface...

  11. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    Science.gov (United States)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

    The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares some characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.
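
    One common way to separate sensor faults from process faults is by the pattern of residual violations; the toy rule below (a single hot residual implicates that sensor, several simultaneous violations implicate the process) is a generic illustration, not the procedure designed in the paper.

```python
def classify_fault(residuals, threshold):
    """Classify by residual signature: a single out-of-bounds residual
    points at that sensor, while several simultaneous violations suggest
    a process fault affecting the whole plant model."""
    hot = sorted(name for name, r in residuals.items() if abs(r) > threshold)
    if not hot:
        return "no fault"
    if len(hot) == 1:
        return "sensor fault: " + hot[0]
    return "process fault"
```

    In a tank system, for instance, a stuck level sensor perturbs only the level residual, whereas a leak perturbs both the level and flow residuals.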

  12. Location and Position Determination Algorithm For Humanoid Soccer Robot

    Directory of Open Access Journals (Sweden)

    Oei Kurniawan Utomo

    2016-03-01

    Full Text Available An algorithm for location and position determination was designed for a humanoid soccer robot. The robots have to be able to control the ball effectively on the field of the Indonesian Robot Soccer Competition, which has a size of 900 cm x 600 cm. The algorithm uses parameters such as the goalpost’s thickness, the compass value, and the robot’s head servo value. The goalpost’s thickness is detected using the Centre of Gravity method. The detected goalpost width is analyzed using the principles of camera geometry to determine the distance between the robot and the goalpost. The tangent of the head servo’s tilt angle is used to determine the distance between the robot and the ball. The robot-goalpost and robot-ball distances are then combined with the difference between the head servo’s pan angle and the compass value, using trigonometric formulas, to determine the coordinates of the robot and the ball in Cartesian coordinates.
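
    The trigonometric step can be sketched as follows; the camera height, the field frame (bearings measured clockwise from north), and the function names are assumptions for illustration, not values from the paper.

```python
import math

def ball_distance(camera_height_cm, tilt_deg):
    """Distance to the ball from the head servo's tilt angle: the camera
    looks down at the ball, so distance = height / tan(tilt)."""
    return camera_height_cm / math.tan(math.radians(tilt_deg))

def ball_position(robot_x, robot_y, compass_deg, pan_deg,
                  camera_height_cm, tilt_deg):
    """Cartesian ball coordinates from the robot pose: the bearing to the
    ball is the compass heading plus the head servo's pan angle."""
    d = ball_distance(camera_height_cm, tilt_deg)
    bearing = math.radians(compass_deg + pan_deg)
    return robot_x + d * math.sin(bearing), robot_y + d * math.cos(bearing)
```

    With a 45 cm camera height and a 45° tilt, the ball is 45 cm away; panning the head 90° while facing north places it due east of the robot.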

  13. Data-driven design of fault diagnosis and fault-tolerant control systems

    CERN Document Server

    Ding, Steven X

    2014-01-01

    Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

  14. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    Science.gov (United States)

    Ding, Bo; Fang, Huajing

    2017-05-01

    This paper is concerned with the fault prediction for the nonlinear stochastic system with incipient faults. Based on the particle filter and the reasonable assumption about the incipient faults, the modified fault estimation algorithm is proposed, and the system state is estimated simultaneously. According to the modified fault estimation, an intuitive fault detection strategy is introduced. Once each of the incipient fault is detected, the parameters of which are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by the simulations of the Three-tank system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
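
    The flavor of particle-filter-based fault estimation can be shown with a toy bootstrap filter that estimates a constant sensor bias on a scalar system; the model, noise levels, particle count, and the constant (rather than incipient) fault are all illustrative assumptions, not the paper's setup.

```python
import math
import random

def estimate_fault(measurements, n_particles=500, seed=1):
    """Bootstrap particle filter for y_t = x_t + fault + noise with
    x_{t+1} = 0.9 * x_t + process noise. The state is augmented with the
    unknown fault, so the posterior mean over particles estimates it."""
    rng = random.Random(seed)
    parts = [(0.0, rng.uniform(-5.0, 5.0)) for _ in range(n_particles)]  # (x, fault)
    for y in measurements:
        # propagate: system dynamics on x, tiny random walk on the fault
        parts = [(0.9 * x + rng.gauss(0, 0.1), f + rng.gauss(0, 0.02))
                 for x, f in parts]
        # weight by the Gaussian measurement likelihood (sigma = 0.1)
        w = [math.exp(-0.5 * ((y - x - f) / 0.1) ** 2) for x, f in parts]
        if sum(w) == 0.0:
            continue  # all particles far off: skip resampling this step
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n_particles)
    return sum(f for _, f in parts) / n_particles
```

    Augmenting the state with the fault is the standard trick: the filter then estimates the fault jointly with the system state instead of needing a separate estimator.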

  15. An Immune Cooperative Particle Swarm Optimization Algorithm for Fault-Tolerant Routing Optimization in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yifan Hu

    2012-01-01

    Full Text Available The fault-tolerant routing problem is an important consideration in the design of heterogeneous wireless sensor network (H-WSN) applications, and has recently been attracting growing research interest. In order to maintain k disjoint communication paths from source sensors to the macronodes, we present a hybrid routing scheme and model in which multiple paths are calculated and maintained in advance, and alternate paths are created once the previous routing is broken. Then, we propose an immune cooperative particle swarm optimization algorithm (ICPSOA) in the model to provide fast routing recovery and reconstruct the network topology for path failure in H-WSNs. In the ICPSOA, the mutation direction of each particle is determined by a multi-swarm evolution equation, and its diversity is improved by an immune mechanism, which can enhance the capacity of global search and improve the convergence rate of the algorithm. We then validate this theoretical model with simulation results, which indicate that the ICPSOA-based fault-tolerant routing protocol outperforms several other protocols due to its fast routing recovery, reliable communications, and prolonging of the network lifetime.

  16. 40 CFR 258.13 - Fault areas.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in Holocene...

  17. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. The interval type-II Takagi–Sugeno fuzzy model is first developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted in both the modeling and fault estimation aspects; the validity and availability of the method are verified by comparison of numerical simulation results.

  18. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2018-01-01

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a

  19. A Survey of Wireless Fair Queuing Algorithms with Location-Dependent Channel Errors

    Directory of Open Access Journals (Sweden)

    Anca VARGATU

    2011-01-01

    Full Text Available The rapid development of wireless networks has brought growing attention to topics related to the fair allocation of resources: the creation of suitable algorithms that take into account the special characteristics of the wireless environment, and the assurance of fair access to the transmission channel with bounded delay and guaranteed throughput. Fair allocation of resources in wireless networks poses significant challenges because of errors that occur only in these networks, such as location-dependent and bursty channel errors. In wireless networks it frequently happens, because of radio-wave interference, that a user experiencing bad radio conditions during a period of time receives no resources in that period. This paper analyzes several resource allocation algorithms for wireless networks with location-dependent errors, specifying the basic idea behind each algorithm and the way it works. The analyzed fair queuing algorithms differ in how they treat the following aspects: how to select the flows that should receive additional services, how to allocate these resources, what proportion is received by error-free flows, and how the flows affected by errors are compensated.

  20. Constraining fault interpretation through tomographic velocity gradients: application to northern Cascadia

    Directory of Open Access Journals (Sweden)

    K. Ramachandran

    2012-02-01

    Full Text Available Spatial gradients of tomographic velocities are seldom used in the interpretation of subsurface fault structures. This study shows that spatial velocity gradients can be used effectively to identify subsurface discontinuities in the horizontal and vertical directions. Three-dimensional velocity models constructed through tomographic inversion of active source and/or earthquake traveltime data are generally built from an initial 1-D velocity model that varies only with depth. Regularized tomographic inversion algorithms impose constraints on the roughness of the model that help to stabilize the inversion process. Final velocity models obtained from regularized tomographic inversions have smooth three-dimensional structures that are required by the data. Final velocity models are usually analyzed and interpreted either as perturbation velocity models or as absolute velocity models. Compared to perturbation velocity models, absolute velocity models have the advantage of providing constraints on lithology. Both types of model, however, lack the ability to provide sharp constraints on subsurface faults. An interpretational approach utilizing spatial velocity gradients, applied to northern Cascadia, shows that subsurface faults that are not clearly interpretable from velocity model plots can be identified by sharp contrasts in velocity gradient plots. This interpretation allowed the locations of the Tacoma, Seattle, Southern Whidbey Island, and Darrington Devil's Mountain faults to be inferred much more clearly. The Coast Range Boundary fault, previously hypothesized on the basis of sedimentological and tectonic observations, is inferred clearly from the gradient plots. Many of the fault locations imaged from gradient data correlate with earthquake hypocenters, indicating their seismogenic nature.

  1. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  2. Partial discharge location technique for covered-conductor overhead distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Isa, M.

    2013-02-01

    In Finland, covered-conductor (CC) overhead lines are commonly used in medium voltage (MV) networks because the loads are widely distributed in the forested terrain. Such parts of the network are exposed to leaning trees, which produce partial discharges (PDs) in CC lines. This thesis presents a technique to locate the PD source on CC overhead distribution line networks. The algorithm is developed and tested using a simulation study and experimental measurements. The Electromagnetic Transient Program-Alternative Transient Program (EMTP-ATP) is used to simulate and analyze a three-phase PD monitoring system, while MATLAB is used for post-processing of the measured high-frequency signals. A Rogowski coil is used as the measuring sensor. A multi-end correlation-based technique for PD location is implemented using the maximum correlation factor to find the time difference of arrival (TDOA) between signal arrivals at three synchronized measuring points. The three stages of signal analysis used are: (1) denoising by applying the discrete wavelet transform (DWT); (2) extracting the PD features using the absolute or windowed standard deviation (STD); and (3) locating the PD point. The advantage of this technique is the ability to locate the PD source without the need to know the first arrival time or the propagation velocity of the signals. In addition, the faulty section of the CC line between the three measuring points can be identified based on the degrees of correlation. An experimental analysis is performed to evaluate the performance of the PD measurement system for PD location on CC overhead lines. The measuring set-up is arranged in a high voltage (HV) laboratory. A multi-end measuring method is chosen as the technique to locate the PD source point on the line. A 110/20 kV power transformer was used to energize the line up to 11.5 kV/phase (20 kV system). The tests were designed to cover different conditions such as offline and online
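    The correlation-based TDOA step can be sketched as follows. This is a minimal two-sensor illustration with synthetic signals: the sampling rate, propagation velocity, line length, pulse shape, and noise level are all assumed values, not the thesis's measurement parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 100e6                     # assumed sampling rate, 100 MHz
v = 1.7e8                      # assumed pulse propagation velocity, m/s
L = 1000.0                     # assumed line length between sensors A and B, m
n = 2048
t = np.arange(n) / fs

# Gaussian stand-in for a PD pulse; sensor B sees it 120 samples later
pulse = np.exp(-((t - 2e-6) / 2e-7) ** 2)
true_lag = 120
sig_a = pulse + 0.01 * rng.standard_normal(n)
sig_b = np.roll(pulse, true_lag) + 0.01 * rng.standard_normal(n)

# The peak of the cross-correlation gives the time difference of arrival
xcorr = np.correlate(sig_b, sig_a, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
tdoa = lag / fs

# PD at distance d from A: t_B - t_A = (L - 2d)/v  =>  d = (L - v*tdoa)/2
d_from_a = (L - v * tdoa) / 2
print(f"TDOA {tdoa * 1e6:.2f} us -> PD at {d_from_a:.1f} m from sensor A")
```

Note that the two-sensor form shown here still needs the propagation velocity; the thesis's three-point multi-end scheme is what removes that requirement.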

  3. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

    Full Text Available This paper deals with the modeling of fault for analog circuits. A two-dimensional (2D fault model is first proposed based on collaborative analysis of supply current and output voltage. This model is a family of circle loci on the complex plane, and it simplifies greatly the algorithms for test point selection and potential fault simulations, which are primary difficulties in fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D complex space is proposed, which achieves a far better fault detection ratio (FDR against measurement error and parametric tolerance. To address the problem of fault masking in both 2D and 3D fault models, this paper proposes an effective design for testability (DFT method. By adding redundant bypassing-components in the circuit under test (CUT, this method achieves excellent fault isolation ratio (FIR in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through experimental results provided in this paper.

  4. Monitoring microearthquakes with the San Andreas fault observatory at depth

    Science.gov (United States)

    Oye, V.; Ellsworth, W.L.

    2007-01-01

    In 2005, the San Andreas Fault Observatory at Depth (SAFOD) was drilled through the San Andreas Fault zone at a depth of about 3.1 km. The borehole has subsequently been instrumented with high-frequency geophones in order to better constrain locations and source processes of nearby microearthquakes that will be targeted in the upcoming phase of SAFOD. The microseismic monitoring software MIMO, developed by NORSAR, has been installed at SAFOD to provide near-real time locations and magnitude estimates using the high sampling rate (4000 Hz) waveform data. To improve the detection and location accuracy, we incorporate data from the nearby, shallow borehole (~250 m) seismometers of the High Resolution Seismic Network (HRSN). The event association algorithm of the MIMO software incorporates HRSN detections provided by the USGS real time earthworm software. The concept of the new event association is based on the generalized beam forming, primarily used in array seismology. The method requires the pre-computation of theoretical travel times in a 3D grid of potential microearthquake locations to the seismometers of the current station network. By minimizing the differences between theoretical and observed detection times an event is associated and the location accuracy is significantly improved.
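    The grid-based association idea can be sketched as follows; the station geometry, grid, and homogeneous velocity below are hypothetical stand-ins for the pre-computed 3-D travel-time tables described in the abstract.

```python
import numpy as np

# Hypothetical 2-D grid of candidate hypocenters and station positions (km)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
v = 6.0                                   # assumed homogeneous P velocity, km/s

gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Pre-computed theoretical travel times, grid node -> station
tt = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2) / v

# Observed arrival times for a true source (origin time unknown)
src = np.array([3.0, 4.0])
t_obs = np.linalg.norm(stations - src, axis=1) / v + 12.345   # origin time

# Generalized-beamforming-style association: for each node, remove the
# best-fitting origin time and score the residual of the detection times
t0 = (t_obs[None, :] - tt).mean(axis=1)           # per-node origin estimate
resid = np.abs(t_obs[None, :] - tt - t0[:, None]).sum(axis=1)
best = grid[np.argmin(resid)]
print("located at", best)
```

The residual minimum over the grid plays the role of the beam maximum: the node whose travel-time pattern best explains the observed detection times is taken as the event location.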

  5. Active fault traces along Bhuj Fault and Katrol Hill Fault, and ...

    Indian Academy of Sciences (India)

    face, passing through the alluvial-colluvial fan at location 2. The gentle warping of the surface was completely modified because of severe cultivation practice. Therefore, it was difficult to confirm it in field. To the south ... scarp has been modified by present day farming. At location 5 near Wandhay village, an active fault trace ...

  6. Product quality management based on CNC machine fault prognostics and diagnosis

    Science.gov (United States)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which combines ideas from neuro-fuzzy networks (NF), dynamic Bayesian networks (DBN), and the particle filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two classes, namely first- and second-degree faults. First-degree faults are instantaneous in nature, while second-degree faults are evolutionary and appear as a developing phenomenon that starts at an initial stage, goes through a development stage, and finally ends at a mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to the modified version of Taylor's equation. For fault diagnosis, the framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on condition-based maintenance (CBM) or fault diagnosis, based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold, called the CBM limit, and a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.

  7. Multiple resolution chirp reflectometry for fault localization and diagnosis in a high voltage cable in automotive electronics

    Science.gov (United States)

    Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae

    2016-12-01

    A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of a fault in a target cable. The multiple resolution algorithm is able to localize faults regardless of fault location. The time delay information, derived from the normalized cross-correlation between the incident signal and bandpass-filtered reflected signals, is converted into a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering the mixture of the incident signal and the reflected signal. Based on the in-phase and quadrature components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. The measurement uncertainty of the experiment is also analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, comparative experiments are conducted to detect and measure faults under different conditions. The target cable length and fault position are designed to reflect the installation environment of the high voltage cable used in an actual vehicle. To simulate the degree of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) are used and estimated by the proposed method. The proposed method demonstrates two advantages: multiple resolution that overcomes the blind-spot problem, and the ability to assess the state of the fault.
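    The in-phase/quadrature estimation of the reflection coefficient magnitude can be sketched as follows; a single tone stands in for one chirp sub-band, and the carrier, sample rate, delay, and true reflection coefficient are assumed values for the illustration.

```python
import numpy as np

# Single-tone stand-in for one chirp sub-band: the reflected wave is the
# incident wave scaled by the reflection coefficient and delayed by tau.
fs, f0 = 1e9, 50e6                 # assumed sample rate and carrier
gamma, tau = 0.4, 217e-9           # assumed reflection coefficient and delay
n = 100_000
t = np.arange(n) / fs
reflected = gamma * np.cos(2 * np.pi * f0 * (t - tau))

# Mix with in-phase and quadrature references, then low-pass filter (here the
# mean over an integer number of carrier cycles acts as the low-pass filter)
i_comp = np.mean(reflected * np.cos(2 * np.pi * f0 * t))   # = (gamma/2) cos(w*tau)
q_comp = np.mean(reflected * np.sin(2 * np.pi * f0 * t))   # = (gamma/2) sin(w*tau)

gamma_hat = 2 * np.hypot(i_comp, q_comp)     # magnitude of the reflection
print("estimated |reflection coefficient|:", gamma_hat)
```

The recovered magnitude indicates the severity of the impedance mismatch (degree of fault), while the phase angle of (I, Q) carries the delay, and hence distance, information.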

  8. Using the time domain reflectometer to check for and locate a fault

    International Nuclear Information System (INIS)

    Ramphal, M.; Sadok, E.

    1995-01-01

    The Time Domain Reflectometer (TDR) is one of the most useful tools for finding cable faults (opens, shorts, bad cable splices). The TDR is connected to the end of the line and shows the distance to the fault. It uses a low voltage signal that will not damage the line or interfere with nearby lines. The TDR sends a pulse of energy down the cable under test; when the pulse encounters the end of the cable or any cable fault, a portion of the pulse energy is reflected. The elapsed time of the reflected pulse is an indication of the distance to the fault. The shape of the reflected pulse uniquely identifies the type of cable fault. (author)
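    The distance computation behind a TDR reading can be sketched in a few lines; the velocity-of-propagation factor below is an assumed, cable-dependent value, and real instruments do this conversion internally.

```python
# TDR distance estimate: the pulse travels to the fault and back, so the
# one-way distance is half the round-trip time times the propagation velocity.
C = 299_792_458.0          # speed of light in vacuum, m/s
VOP = 0.66                 # assumed velocity-of-propagation factor (cable dependent)

def fault_distance(elapsed_s: float) -> float:
    """Distance to the reflection point in meters."""
    return C * VOP * elapsed_s / 2.0

def fault_type(reflection_polarity: float) -> str:
    """Open circuits reflect in phase (+); shorts reflect inverted (-)."""
    if reflection_polarity > 0:
        return "open"
    if reflection_polarity < 0:
        return "short"
    return "matched (no reflection)"

# Example: a reflection observed 1.0 microsecond after the incident pulse
d = fault_distance(1.0e-6)
print(f"{d:.1f} m, {fault_type(+1)}")
```

The polarity rule is the quantitative version of "the shape of the reflected pulse identifies the type of fault": an upward reflection indicates an open, a downward one a short.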

  9. A Method for Aileron Actuator Fault Diagnosis Based on PCA and PGC-SVM

    Directory of Open Access Journals (Sweden)

    Wei-Li Qin

    2016-01-01

    Full Text Available Aileron actuators are pivotal components for aircraft flight control system. Thus, the fault diagnosis of aileron actuators is vital in the enhancement of the reliability and fault tolerant capability. This paper presents an aileron actuator fault diagnosis approach combining principal component analysis (PCA, grid search (GS, 10-fold cross validation (CV, and one-versus-one support vector machine (SVM. This method is referred to as PGC-SVM and utilizes the direct drive valve input, force motor current, and displacement feedback signal to realize fault detection and location. First, several common faults of aileron actuators, which include force motor coil break, sensor coil break, cylinder leakage, and amplifier gain reduction, are extracted from the fault quadrantal diagram; the corresponding fault mechanisms are analyzed. Second, the data feature extraction is performed with dimension reduction using PCA. Finally, the GS and CV algorithms are employed to train a one-versus-one SVM for fault classification, thus obtaining the optimal model parameters and assuring the generalization of the trained SVM, respectively. To verify the effectiveness of the proposed approach, four types of faults are introduced into the simulation model established by AMESim and Simulink. The results demonstrate its desirable diagnostic performance which outperforms that of the traditional SVM by comparison.
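    The PCA dimension-reduction step of such a pipeline can be sketched in NumPy; the grid-search, cross-validation, and SVM-training stages are omitted here, and the data are synthetic stand-ins for the valve-input, current, and displacement signals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "actuator signal" data: 200 samples x 6 channels, where most variance
# lies in 2 latent directions (stand-ins for the dominant signal components)
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 6))
X = latent @ mixing + 0.05 * rng.standard_normal((200, 6))

def pca_reduce(X, k):
    """Project X onto its first k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S**2) / (S**2).sum()     # variance ratio per component
    return Xc @ Vt[:k].T, explained[:k].sum()

Z, var_kept = pca_reduce(X, 2)
print(f"2 components keep {var_kept:.1%} of the variance")
```

The reduced features `Z` would then be fed to a one-versus-one SVM whose hyperparameters are chosen by grid search under 10-fold cross validation, as described in the abstract.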

  10. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  11. Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs

    Directory of Open Access Journals (Sweden)

    Enrico Creaco

    2018-04-01

    Full Text Available The paper presents a comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm is a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside both SA and GA to search for the optimal valve settings at the various time slots in the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge: for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.
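    The sequential-addition strategy can be illustrated on a toy instance; the per-site leakage gains and shared-branch penalties below are invented, and an exhaustive search stands in for the GA. On this small instance SA and the exhaustive optimum coincide, mirroring the paper's observation for small Nval.

```python
from itertools import combinations

# Hypothetical leakage reduction (m^3/day) for each candidate valve site,
# with diminishing returns when sites on the same branch are combined
sites = ["A", "B", "C", "D", "E"]
gain = {"A": 10.0, "B": 8.0, "C": 7.5, "D": 4.0, "E": 2.0}
overlap = {frozenset("AB"): 5.0, frozenset("BC"): 3.0}  # shared-branch penalty

def reduction(combo):
    total = sum(gain[s] for s in combo)
    for pair, penalty in overlap.items():
        if pair <= set(combo):
            total -= penalty
    return total

# Sequential addition (SA): keep the previous valves, add the single best new one
chosen = []
for _ in range(3):
    best = max((s for s in sites if s not in chosen),
               key=lambda s: reduction(chosen + [s]))
    chosen.append(best)

# Exhaustive search (what the GA approximates): best combination of 3 valves
best_combo = max(combinations(sites, 3), key=reduction)
print("SA:", chosen, reduction(chosen))
print("exhaustive:", best_combo, reduction(best_combo))
```

For larger Nval the greedy commitment of SA can lock in a site that an unconstrained search would avoid, which is exactly the reduced-exploration limitation the paper reports.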

  12. Staged-Fault Testing of Distance Protection Relay Settings

    Science.gov (United States)

    Havelka, J.; Malarić, R.; Frlan, K.

    2012-01-01

    In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.

  13. A novel vibration-based fault diagnostic algorithm for gearboxes under speed fluctuations without rotational speed measurement

    Science.gov (United States)

    Hong, Liu; Qu, Yongzhi; Dhupia, Jaspreet Singh; Sheng, Shuangwen; Tan, Yuegang; Zhou, Zude

    2017-09-01

    The localized failures of gears introduce cyclic-transient impulses in the measured gearbox vibration signals. These impulses are usually identified from the sidebands around gear-mesh harmonics through the spectral analysis of cyclo-stationary signals. However, in practice, several high-powered applications of gearboxes like wind turbines are intrinsically characterized by nonstationary processes that blur the measured vibration spectra of a gearbox and deteriorate the efficacy of spectral diagnostic methods. Although order-tracking techniques have been proposed to improve the performance of spectral diagnosis for nonstationary signals measured in such applications, the required hardware for the measurement of rotational speed of these machines is often unavailable in industrial settings. Moreover, existing tacho-less order-tracking approaches are usually limited by the high time-frequency resolution requirement, which is a prerequisite for the precise estimation of the instantaneous frequency. To address such issues, a novel fault-signature enhancement algorithm is proposed that can alleviate the spectral smearing without the need of rotational speed measurement. This proposed tacho-less diagnostic technique resamples the measured acceleration signal of the gearbox based on the optimal warping path evaluated from the fast dynamic time-warping algorithm, which aligns a filtered shaft rotational harmonic signal with respect to a reference signal assuming a constant shaft rotational speed estimated from the approximation of operational speed. The effectiveness of this method is validated using both simulated signals from a fixed-axis gear pair under nonstationary conditions and experimental measurements from a 750-kW planetary wind turbine gearbox on a dynamometer test rig. The results demonstrate that the proposed algorithm can identify fault information from typical gearbox vibration measurements carried out in a resource-constrained industrial environment.
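    The alignment step can be illustrated with the classic dynamic-programming form of dynamic time warping (the paper uses a fast DTW variant); the signals below are synthetic stand-ins for a speed-fluctuating shaft harmonic and its constant-speed reference.

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic-programming DTW; returns cost and optimal warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# A "shaft harmonic" under speed fluctuation vs. a constant-speed reference
t = np.linspace(0, 1, 200)
reference = np.sin(2 * np.pi * 5 * t)                     # constant speed
fluctuating = np.sin(2 * np.pi * 5 * (t + 0.03 * np.sin(2 * np.pi * t)))

cost, path = dtw_path(fluctuating, reference)
# Resampling the measured signal along the warping path aligns it with the
# constant-speed reference, which is the core of the tacho-less resampling idea
aligned = np.array([fluctuating[i] for i, _ in path])
print("alignment cost:", cost)
```

In the paper, the same warping path, evaluated between a filtered shaft-harmonic signal and a constant-speed reference, drives the resampling of the full acceleration signal, removing the spectral smearing without a tachometer.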

  14. Active Fault Detection and Isolation for Hybrid Systems

    DEFF Research Database (Denmark)

    Gholami, Mehdi; Schiøler, Henrik; Bak, Thomas

    2009-01-01

    An algorithm for active fault detection and isolation is proposed. In order to observe failures hidden by the normal operation of the controllers or the systems, an optimization problem based on the minimization of a test signal is used. The optimization-based method drives the normal and faulty models' predicted outputs apart so that their discrepancies are observable by a passive fault diagnosis technique. Isolation of the different faults is done by implementing a bank of extended Kalman filters (EKF), where the convergence criterion for the EKF is confirmed by a genetic algorithm (GA). The method is applied...

  15. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    Science.gov (United States)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots, and other noise, and even the gray-level variation of the images is not obvious, so general iris location methods cannot localize the iris reliably. A method for iris location based on the Canny operator and the gradient Hough transform is therefore proposed. First, the images are pre-processed; then, using the gradient information of the images, the inner and outer edges of the iris are coarsely located with the Canny operator; finally, the gradient Hough transform is applied to precisely localize the inner and outer edges of the iris. The experimental results show that our algorithm localizes the inner and outer edges of the iris well, has strong anti-interference ability, greatly reduces the location time, and achieves higher accuracy and stability.
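    The voting step of a gradient Hough transform can be sketched on a synthetic ring image; here a simple gradient-magnitude threshold stands in for the Canny stage, and the circle radius is assumed known, whereas a full implementation would also sweep the radius.

```python
import numpy as np

# Synthetic "iris boundary": a smooth bright ring of radius 15 centered (32, 32)
n, cx, cy, r = 64, 32.0, 32.0, 15.0
yy, xx = np.mgrid[0:n, 0:n]
dist = np.hypot(xx - cx, yy - cy)
img = np.exp(-((dist - r) ** 2) / 2.0)

# Gradient field (a stand-in for the Canny edge/gradient stage)
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
edges = mag > 0.5 * mag.max()

# Gradient Hough transform for a known radius: each edge pixel votes for
# centers one radius away along +/- its gradient direction
acc = np.zeros((n, n))
for y, x in zip(*np.nonzero(edges)):
    ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
    for s in (+1, -1):
        vx, vy = int(round(x + s * r * ux)), int(round(y + s * r * uy))
        if 0 <= vx < n and 0 <= vy < n:
            acc[vy, vx] += 1

est_y, est_x = np.unravel_index(np.argmax(acc), acc.shape)
print("estimated center:", (est_x, est_y))
```

Restricting the votes to the gradient direction is what makes this cheaper and more noise-tolerant than the full three-parameter circular Hough transform, since each edge pixel votes along a line rather than over a whole circle of candidate centers.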

  16. Fault-ignorant quantum search

    International Nuclear Information System (INIS)

    Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M

    2014-01-01

    We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)

  17. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2010-06-18

    Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among the many available layout algorithms, the spring embedder algorithm is widely used, not only for pathway drawing but also for circuit placement, www visualization, and so on, because of the harmonized appearance of its results. For pathway drawing, location information is essential for comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as the nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. We propose a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with a harmonized appearance. In grid-layout algorithms, a mapping of nodes to grid points that minimizes a cost function is searched for. By imposing positional constraints on grid points, location information including complex shapes can easily be taken into account. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways: an endothelial cell model, a Fas-induced apoptosis model, and a C. elegans cell fate simulation model. Owing to the positional constraints, all the results of our algorithm satisfy the location information, and hence more comprehensible layouts are obtained as compared to the spring embedder algorithm. From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to

  18. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost

  19. Determining on-fault magnitude distributions for a connected, multi-fault system

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2017-12-01

    A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0-7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0-6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults
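    The flavor of the assignment problem can be conveyed by a brute-force toy version: synthetic per-event slip contributions are assigned to two hypothetical faults so as to minimize the total slip-rate misfit. All numbers below are invented, and a real instance would use a BIP solver together with the rupture-length and slip-rate-bound constraints described above.

```python
from itertools import product

# Hypothetical faults with target slip rates (mm/yr)
targets = {"FaultA": 4.0, "FaultB": 2.0}

# Synthetic catalog: per-event slip contribution over the catalog period
# (mm/yr), simplified to be the same whichever fault the event is placed on
event_slip = [1.5, 1.5, 1.0, 1.0, 0.5, 0.5]

faults = list(targets)
best_assign, best_misfit = None, float("inf")
# Enumerate every assignment of events to faults (the BIP decision variables)
for assign in product(faults, repeat=len(event_slip)):
    rates = {f: 0.0 for f in faults}
    for f, s in zip(assign, event_slip):
        rates[f] += s
    misfit = sum(abs(rates[f] - targets[f]) for f in faults)
    if misfit < best_misfit:
        best_assign, best_misfit = assign, misfit

print(best_assign, best_misfit)
```

Brute force grows as 2^N and is only viable for toy catalogs; the binary-integer formulation lets a solver handle thousands of events, and setting the objective to zero turns the same formulation into the constraint (feasibility) problem mentioned in the abstract.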

  20. Methodology for locating faults in the Eastern distribution system PDVSA, Punta de Mata and Furrial Divisions; Metodologia para la localizacion de fallas en el sistema de distribucion de PDVSA Oriente, Divisiones Punta de Mata y Furrial

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, F [Universidad Nacional Experimental Politecnica, Antonio Jose de Sucre, Guayana, Bolivar (Venezuela)]. E-mail: fco_martinez@outlook.com; Vasquez, C [Petroleos de Venezuela S.A., Maturin, Monagas (Venezuela)]. E-mail: vasquezcp@pdvsa.com

    2013-03-15

    Fault location in distribution systems has received a lot of attention in recent years as a means of increasing the availability of the electricity supply. Due to the characteristics of distribution networks, fault location is a complicated task, so methods have been developed based on the variation of current and voltage values measured at the source substation, both in normal operating conditions and during short circuits. This article presents a MATLAB implementation of a fault location algorithm for distribution systems based on graphical analysis of the fault reactance, whose minimum value determines the fault location, using the series impedance matrix and measurements of the prefault and fault voltages and currents. The accuracy of the developed tool was verified by comparing its results with actual recorded event data (Multilin SR 760) and the distance to a known failure point. Additionally, the method was applied to an experimental case and compared with a network fault simulation in the ETAP software. In both evaluated cases, the absolute error did not exceed 7%.
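    The reactance-minimum idea can be illustrated with a single-phase toy (the paper's method works on the full multi-phase series impedance matrix; the line impedance, fault position, and fault resistance below are invented): at the true fault point the apparent fault impedance seen from the substation is purely resistive, so the computed fault reactance passes through zero.

```python
import numpy as np

# Minimal single-phase sketch of the reactance-minimum method (invented numbers).
z = 0.1 + 0.4j               # series line impedance per km (ohm/km), assumed
d_true, r_fault = 8.0, 5.0   # fault at 8 km through a 5 ohm fault resistance

# Synthetic substation measurements for a radial feeder where the fault is
# the only load, so V_s = (d*z + R_f) * I_s and the prefault current is ~0.
i_pre = 0.0 + 0.0j
i_s = 1.0 + 0.0j             # normalized fault-state current
v_s = (d_true * z + r_fault) * i_s

i_f = i_s - i_pre                      # superposition estimate of fault current
dists = np.linspace(0.0, 20.0, 2001)   # candidate fault distances (km)
x_f = np.imag((v_s - dists * z * i_s) / i_f)
d_est = dists[np.argmin(np.abs(x_f))]  # distance where fault reactance ~ 0

print(round(float(d_est), 2))  # 8.0
```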

  1. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as inputs to train the ANN; such an approach has not been used in earlier fault analysis algorithms. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as inputs to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in the test power system model at different fault inception angles and over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.

  2. Layered clustering multi-fault diagnosis for hydraulic piston pump

    Science.gov (United States)

    Du, Jun; Wang, Shaoping; Zhang, Haiyan

    2013-04-01

    Efficient diagnosis is very important for improving the reliability and performance of an aircraft hydraulic piston pump, and it is one of the key technologies in prognostic and health management systems. In practice, due to the harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long-time operation. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, a new method needs to be developed to realize effective diagnosis of simultaneous multiple faults in an aircraft hydraulic pump. In this paper, a new method based on a layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. Intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. A three-layered diagnosis reasoning engine is designed according to the faults' risk priority numbers and the characteristics of the different fault feature extraction methods. The most serious failures are first distinguished with individual signal processing. For the less distinct faults, i.e., swash plate eccentricity and incremental clearance increase between piston and slipper, a clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing; the ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose multiple faults that occur simultaneously with higher precision and reliability.
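    A band-power version of the average relative power difference can be sketched as follows; the exact ARPD definition in the paper may differ, so this is an assumption: compare the spectral band powers of a vibration signal against a healthy baseline and average the relative differences. The sampling rate, tone frequencies, and band count are all invented.

```python
import numpy as np

def band_powers(x, n_bands):
    spec = np.abs(np.fft.rfft(x)) ** 2
    return np.array([band.sum() for band in np.array_split(spec, n_bands)])

def arpd(signal, baseline, n_bands=8):
    """Average relative power difference vs. a healthy baseline (assumed form)."""
    p_sig = band_powers(signal, n_bands)
    p_ref = band_powers(baseline, n_bands)
    return float(np.mean((p_sig - p_ref) / (p_ref + 1e-12)))

rng = np.random.default_rng(0)
t = np.arange(2048) / 1000.0                           # 1 kHz sampling, assumed
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 240 * t)   # extra fault-band tone

print(arpd(healthy, healthy) == 0.0, arpd(faulty, healthy) > 0.0)  # True True
```

    In the paper, ARPDs from different faults follow different distributions, so a hypothesis test on this statistic separates the fault classes.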

  3. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    Science.gov (United States)

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.

  4. Locating non-volcanic tremor along the San Andreas Fault using a multiple array source imaging technique

    Science.gov (United States)

    Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.

    2010-01-01

    Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array consisting of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International © 2010 RAS.

  5. Rupture preparation process controlled by surface roughness on meter-scale laboratory fault

    Science.gov (United States)

    Yamashita, Futoshi; Fukuyama, Eiichi; Xu, Shiqing; Mizoguchi, Kazuo; Kawakata, Hironori; Takizawa, Shigeru

    2018-05-01

    We investigate the effect of fault surface roughness on rupture preparation characteristics using meter-scale metagabbro specimens. We repeatedly conducted the experiments with the same pair of rock specimens to make the fault surface rough. We obtained three experimental results under the same experimental conditions (6.7 MPa of normal stress and 0.01 mm/s of loading rate) but at different roughness conditions (smooth, moderately roughened, and heavily roughened). During each experiment, we observed many stick-slip events preceded by precursory slow slip. We investigated when and where slow slip initiated by using the strain gauge data processed by the Kalman filter algorithm. The observed rupture preparation processes on the smooth fault (i.e. the first experiment among the three) showed high repeatability of the spatiotemporal distributions of slow slip initiation. Local stress measurements revealed that slow slip initiated around the region where the ratio of shear to normal stress (τ/σ) was the highest as expected from finite element method (FEM) modeling. However, the exact location of slow slip initiation was where τ/σ became locally minimum, probably due to the frictional heterogeneity. In the experiment on the moderately roughened fault, some irregular events were observed, though the basic characteristics of other regular events were similar to those on the smooth fault. Local stress data revealed that the spatiotemporal characteristics of slow slip initiation and the resulting τ/σ drop for irregular events were different from those for regular ones even under similar stress conditions. On the heavily roughened fault, the location of slow slip initiation was not consistent with τ/σ anymore because of the highly heterogeneous static friction on the fault, which also decreased the repeatability of spatiotemporal distributions of slow slip initiation. These results suggest that fault surface roughness strongly controls the rupture preparation process.

  6. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    Science.gov (United States)

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    Railway freight center location is an important problem in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed, which takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the encoding and the steps of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.

  7. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available Railway freight center location is an important problem in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed, which takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the encoding and the steps of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.
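    The contrast between the expected value model and the robust objective can be shown with a tiny scenario example (the cost figures, probabilities, and penalty weight are invented; the paper's deviation measure may be defined differently):

```python
# Toy sketch of a robust objective: expected scenario cost plus a penalty on
# how far the worst scenario deviates from that expectation.

def robust_score(scenario_costs, probs, lam=1.0):
    expected = sum(c * p for c, p in zip(scenario_costs, probs))
    deviation = max(c - expected for c in scenario_costs)  # worst-case gap
    return expected + lam * deviation

# Two candidate freight-center locations under three demand scenarios.
probs = [0.5, 0.3, 0.2]
loc_a = [100, 110, 180]   # cheap on average, bad in the rare scenario
loc_b = [120, 125, 130]   # slightly dearer but stable

exp_a = sum(c * p for c, p in zip(loc_a, probs))
print(exp_a)  # 119.0 -- the expected value model prefers A
print(robust_score(loc_a, probs) > robust_score(loc_b, probs))  # True -- robust model prefers B
```

    Lower scores are better, so the robust model picks the stable location B even though A has the lower expected cost, mirroring the reduced count of disadvantageous scenarios reported above.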

  8. A Comparison of Hybrid Approaches for Turbofan Engine Gas Path Fault Diagnosis

    Science.gov (United States)

    Lu, Feng; Wang, Yafan; Huang, Jinquan; Wang, Qihang

    2016-09-01

    A hybrid diagnostic method utilizing an Extended Kalman Filter (EKF) and an Adaptive Genetic Algorithm (AGA) is presented for performance degradation estimation and sensor anomaly detection in a turbofan engine. The EKF is used to estimate engine component performance degradation for gas path fault diagnosis. The AGA is introduced in the integrated architecture and applied for sensor bias detection. The contribution of this work is the comparison of Kalman Filter (KF)-AGA algorithms and Neural Network (NN)-AGA algorithms within a unified framework for gas path fault diagnosis. The NN needs to be trained off-line with a large amount of prior fault mode data, and when a new fault mode occurs, the estimation accuracy of the NN evidently decreases; the application of the Linearized Kalman Filter (LKF) and EKF is not restricted in such cases. The crossover factor and the mutation factor are adapted to the fitness function at each generation in the AGA, which takes less time to search for the optimal sensor bias value than the Genetic Algorithm (GA). In summary, we conclude that the hybrid EKF-AGA algorithm is the best choice for gas path fault diagnosis of a turbofan engine among the algorithms discussed.

  9. Geophysical Imaging of Fault Structures Over the Qadimah Fault, Saudi Arabia

    KAUST Repository

    AlTawash, Feras

    2011-06-01

    The purpose of this study is to use geophysical imaging methods to identify the conjectured location of the ‘Qadimah fault’ near the ‘King Abdullah Economic City’, Saudi Arabia. Towards this goal, 2-D resistivity and seismic surveys were conducted at two different locations, site 1 and site 2, along the proposed trace of the ‘Qadimah fault’. Three processing techniques were used to validate the fault (i) 2-D travel time tomography, (ii) resistivity imaging, and (iii) reflection trim stacking. The refraction traveltime tomograms at site 1 and site 2 both show low-velocity zones (LVZ’s) next to the conjectured fault trace. These LVZ’s are interpreted as colluvial wedges that are often observed on the downthrown side of normal faults. The resistivity tomograms are consistent with this interpretation in that there is a significant change in resistivity values along the conjectured fault trace. Processing the reflection data did not clearly reveal the existence of a fault, partly owing to the sub-optimal design of the reflection experiment. Overall, the results of this study strongly, but not definitively, suggest the existence of the Qadimah fault in the ‘King Abdullah Economic City’ region of Saudi Arabia.

  10. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    Science.gov (United States)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  11. Is the Multigrid Method Fault Tolerant? The Two-Grid Case

    Energy Technology Data Exchange (ETDEWEB)

    Ainsworth, Mark [Brown Univ., Providence, RI (United States). Division of Applied Mathematics; Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division; Glusa, Christian [Brown Univ., Providence, RI (United States). Division of Applied Mathematics

    2016-06-30

    The predicted reduced resiliency of next-generation high performance computers means that it will become necessary to take into account the effects of randomly occurring faults on numerical methods. Further, in the event of a hard fault occurring, a decision has to be made as to what remedial action should be taken in order to resume the execution of the algorithm. The action that is chosen can have a dramatic effect on the performance and characteristics of the scheme. Ideally, the resulting algorithm should be subjected to the same kind of mathematical analysis that was applied to the original, deterministic variant. The purpose of this work is to provide an analysis of the behaviour of the multigrid algorithm in the presence of faults. Multigrid is arguably the method of choice for the solution of large-scale linear algebra problems arising from discretization of partial differential equations and it is of considerable importance to anticipate its behaviour on an exascale machine. The analysis of resilience of algorithms is in its infancy and the current work is perhaps the first to provide a mathematical model for faults and analyse the behaviour of a state-of-the-art algorithm under the model. It is shown that the Two Grid Method fails to be resilient to faults. Attention is then turned to identifying the minimal necessary remedial action required to restore the rate of convergence to that enjoyed by the ideal fault-free method.
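    The deterministic algorithm whose faulty variant is analysed above can be sketched as a two-grid cycle for the 1-D Poisson problem -u'' = f; the fault model itself is not reproduced here, and the smoother choice (weighted Jacobi) and grid sizes are our assumptions.

```python
import numpy as np

def smooth(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoother for the 1-D discrete Poisson operator."""
    for _ in range(sweeps):
        jac = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u[1:-1] += w * (jac - u[1:-1])
    return u

def two_grid(u, f, h):
    u = smooth(u, f, h, 2)                                        # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
    rc = np.zeros((u.size - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])       # full weighting
    H, m = 2 * h, rc.size
    A = (np.diag(2 * np.ones(m - 2)) - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / (H * H)
    ec = np.zeros(m)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                       # exact coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                                   # linear interpolation
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                                                        # coarse-grid correction
    return smooth(u, f, h, 2)                                     # post-smoothing

n, h = 65, 1.0 / 64.0
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # -u'' = f with exact solution sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err < 1e-3)                         # only O(h^2) discretization error remains
```

    The paper's question is what happens to the convergence of a cycle like this when entries of u, r, or e are randomly corrupted, and what minimal remedial action restores the fault-free rate.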

  12. Model based Fault Detection and Isolation for Driving Motors of a Ground Vehicle

    Directory of Open Access Journals (Sweden)

    Young-Joon Kim

    2016-04-01

    Full Text Available This paper proposes a model-based current-sensor and position-sensor fault detection and isolation algorithm for the driving motors of an in-wheel independent-drive electric vehicle. From a low-level perspective, fault diagnosis is conducted and analyzed to enhance robustness and stability. Composing the state equation of the interior permanent magnet synchronous motor (IPMSM), current sensor and position sensor faults are diagnosed with parity equations. The validity and usefulness of the algorithm are confirmed with IPMSM fault-occurrence simulation data.
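    The parity-equation step can be sketched generically (this is not the paper's IPMSM state equation; the sensor names, values, and thresholds are invented): a residual is the gap between a measured signal and the value reconstructed from the model, and a residual exceeding its threshold isolates the corresponding sensor.

```python
# Minimal parity-residual sketch for sensor fault detection and isolation.

def parity_residuals(measured, predicted):
    return {name: abs(measured[name] - predicted[name]) for name in measured}

def isolate_faults(residuals, thresholds):
    return [name for name, r in residuals.items() if r > thresholds[name]]

measured = {"i_d": 1.02, "i_q": 2.70, "theta": 0.501}   # current/position sensors
predicted = {"i_d": 1.00, "i_q": 2.00, "theta": 0.500}  # model-based estimates
thresholds = {"i_d": 0.1, "i_q": 0.1, "theta": 0.05}

print(isolate_faults(parity_residuals(measured, predicted), thresholds))  # ['i_q']
```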

  13. The application of soil-gas geochemistry to precisely locate La Victoria fault near Paracotos (Venezuela)

    International Nuclear Information System (INIS)

    LaBrecque, J.J.; Rosales, P.A.; Cordoves, P.R.

    1999-01-01

    Full text: Measurements of radon (total radon, radon-222 and radon-220) and other soil gases (CO2, O2 and H2) were performed routinely during 1998 and 1999 across a narrow valley near Paracotos, Venezuela, in an attempt to precisely locate the La Victoria fault. The transect was about 300 meters long with eleven sampling points. The soil-gas probes were inserted to a depth of 45 cm at the beginning and later to a depth of 63 cm. The radon sampling and measurements were accomplished with a Pylon AB-5 radiation monitor and Lucas scintillation cells. The other soil gases were determined directly with an Anagas CD95 monitor and an infrared gas analyzer (MKIIC), both coupled with a hydrogen pod. The radon values for more than twenty different sampling periods over a two-year period showed anomalous values between 75 and 150 meters along the transect, with three consecutive anomalous values each time. Strangely, the radon anomalies took the form of a doublet at 116 and 141 meters rather than a single peak in the middle, and the gas flow was similar for the sampling points between 75 and 150 meters. The graph of the relative CO2 values was usually similar to the radon graphs, but in some cases the anomalous values appeared as a single peak corresponding to the 141-meter sampling point. The anomalous hydrogen values were usually a single peak corresponding to the 141-meter sampling point; only a few times were hydrogen values higher than 100 ppm detected at most of the sampling points, and usually only one or two points showed small values near the 141-meter sampling point. Based on the radon values alone, we would have to conclude that the fault probably lies near or between the 116- and 141-meter sampling points, but with the additional CO2 and H2 soil-gas data one could say that the fault is probably near the 141-meter sampling point. Thus, we have

  14. Development of a Fault Monitoring Technique for Wind Turbines Using a Hidden Markov Model.

    Science.gov (United States)

    Shin, Sung-Hwan; Kim, SangRyul; Seo, Yun-Ho

    2018-06-02

    Regular inspection for the maintenance of wind turbines is difficult because of their remote locations. For this reason, condition monitoring systems (CMSs) are typically installed to monitor their health condition. The purpose of this study is to propose a fault detection algorithm for the mechanical parts of the wind turbine. To this end, long-term vibration data were collected over two years by a CMS installed on a 3 MW wind turbine. The vibration distribution at a specific rotating speed of the main shaft is approximated by the Weibull distribution, and its cumulative distribution function is utilized to determine the threshold levels that indicate impending failure of mechanical parts. A hidden Markov model (HMM) is employed to build the statistical fault detection algorithm in the time domain, and the method whereby the input sequence for the HMM is extracted is also introduced by considering the threshold levels and the correlation between the signals. Finally, it was demonstrated that the proposed HMM algorithm achieved a greater than 95% detection success rate using the long-term signals.
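    The HMM scoring step can be sketched as follows, assuming vibration levels have already been quantized into symbols by the Weibull-derived thresholds; the transition, emission, and prior matrices below are invented, not the trained model from the paper.

```python
import numpy as np

A = np.array([[0.95, 0.05],        # healthy-model state transitions
              [0.10, 0.90]])
B = np.array([[0.80, 0.15, 0.05],  # P(symbol | state), symbols = low/mid/high
              [0.10, 0.30, 0.60]])
pi = np.array([0.9, 0.1])

def log_likelihood(obs):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return float(ll)

quiet = [0, 0, 1, 0, 0, 0, 1, 0]   # mostly below the derived thresholds
noisy = [2, 2, 1, 2, 2, 2, 1, 2]   # sustained high-level symbols

print(log_likelihood(quiet) > log_likelihood(noisy))  # healthy model fits "quiet" better
```

    A sequence whose likelihood under the healthy model drops below a calibrated bound would be flagged as impending failure.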

  15. Application of support vector machine based on pattern spectrum entropy in fault diagnostics of rolling element bearings

    International Nuclear Information System (INIS)

    Hao, Rujiang; Chu, Fulei; Peng, Zhike; Feng, Zhipeng

    2011-01-01

    This paper presents a novel pattern classification approach for the fault diagnostics of rolling element bearings, which combines morphological multi-scale analysis and 'one to others' support vector machine (SVM) classifiers. The morphological pattern spectrum describes the shape characteristics of the inspected signal based on the morphological opening operation with multi-scale structuring elements. The pattern spectrum entropy and the barycenter scale location of the spectrum curve are extracted as the feature vectors representing different faults of the bearing, which are more effective and representative than the kurtosis and the envelope demodulation spectrum. The 'one to others' SVM algorithm is adopted to distinguish six kinds of fault signals measured on the experimental test rig under eight different working conditions. The recognition results of the SVM are ideal and more precise than those of an artificial neural network, even though the training samples are few. The combination of the morphological pattern spectrum parameters and the 'one to others' multi-class SVM algorithm is suitable for the on-line automated fault diagnosis of rolling element bearings. This application is promising and well worth exploiting.
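    The pattern-spectrum-entropy feature can be sketched for a 1-D signal with flat structuring elements (rolling min/max); the padding scheme and scale range below are our assumptions, not the paper's exact recipe, and the SVM stage is omitted.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _rolling(x, size, fn):
    left = size // 2
    xe = np.pad(x, (left, size - 1 - left), mode="edge")
    return fn(sliding_window_view(xe, size), axis=1)

def opening(x, size):
    """Morphological opening: erosion then dilation with a flat element."""
    return _rolling(_rolling(x, size, np.min), size, np.max)

def pattern_spectrum_entropy(x, max_scale=10):
    areas = [opening(x, s).sum() for s in range(1, max_scale + 2)]
    ps = np.clip(-np.diff(np.array(areas)), 0.0, None)  # area removed per scale
    p = ps / ps.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())                # entropy of the spectrum

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.3 * rng.standard_normal(512)
print(bool(np.allclose(opening(x, 1), x)))              # size-1 opening is the identity
print(0.0 <= pattern_spectrum_entropy(x) <= np.log(10))
```

    Impulsive bearing-fault signatures concentrate the removed area at small scales, so their spectrum entropy differs from that of smooth or broadband signals, which is what makes it a usable feature vector component.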

  16. A Review Of Fault Tolerant Scheduling In Multicore Systems

    Directory of Open Access Journals (Sweden)

    Shefali Malhotra

    2015-05-01

    Full Text Available Abstract In this paper we discuss various fault-tolerant task scheduling algorithms for multicore systems, based on hardware and software. The hardware-based algorithm is a blend of Triple Modular Redundancy and Double Modular Redundancy, in which the Architectural Vulnerability Factor is considered in scheduling decisions in addition to the EDF and LLF scheduling algorithms. In most real-time systems the dominant part is shared memory. A low-overhead software-based fault tolerance approach can be implemented at user-space level so that it does not require any changes at the application level. Here, redundant multi-threaded processes are used, with which soft errors can be detected and recovered from. This method gives low-overhead, fast error detection and recovery; the overhead incurred ranges from 0 to 18 for the selected benchmarks. The hybrid scheduling method is another scheduling approach for real-time systems: dynamic fault-tolerant scheduling gives a high feasibility rate, whereas task criticality is used to select the type of fault recovery method in order to tolerate the maximum number of faults.
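    The two redundancy schemes named above differ in capability, which a minimal sketch makes concrete: Triple Modular Redundancy (TMR) masks a single faulty replica by majority vote, while Double Modular Redundancy (DMR) can only detect a mismatch, not correct it. The replica values are invented.

```python
from collections import Counter

def tmr_vote(results):
    """TMR: return the majority value of three replicas, or None if no majority."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None

def dmr_check(a, b):
    """DMR: True if the two replicas agree; False signals a detected error."""
    return a == b

print(tmr_vote([42, 42, 7]))   # 42 -- the faulty third replica is outvoted
print(dmr_check(42, 7))        # False -- DMR only detects the disagreement
```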

  17. Recording real case data of earth faults in distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S. [VTT Energy, Espoo (Finland)

    1996-12-31

    The most common fault type in electrical distribution networks is the single-phase earth fault. According to earlier studies, for instance in the Nordic countries, about 80% of all faults are of this type. To develop protection and fault location systems, it is important to obtain real case data of disturbances and faults occurring in the networks; for example, the earth fault initial transients can be used for earth fault location. The aim of this project was to collect and analyze real case data of earth fault disturbances in medium voltage distribution networks (20 kV). Data of fault occurrences were therefore recorded at two substations, one with an unearthed and the other with a compensated neutral, measured as follows: (a) the phase currents and neutral current for each line in the case of low fault resistance; (b) the phase voltages and neutral voltage from the voltage measuring bay in the case of low fault resistance; and (c) the neutral voltage and the 50 Hz components at the substation in the case of high fault resistance. In addition, basic data of the fault occurrences were collected (data of the line, fault location, cause and so on). The data will be used in the development of fault location and earth fault protection systems.

  18. Recording real case data of earth faults in distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S [VTT Energy, Espoo (Finland)

    1997-12-31

    The most common fault type in electrical distribution networks is the single-phase earth fault. According to earlier studies, for instance in the Nordic countries, about 80% of all faults are of this type. To develop protection and fault location systems, it is important to obtain real case data of disturbances and faults occurring in the networks; for example, the earth fault initial transients can be used for earth fault location. The aim of this project was to collect and analyze real case data of earth fault disturbances in medium voltage distribution networks (20 kV). Data of fault occurrences were therefore recorded at two substations, one with an unearthed and the other with a compensated neutral, measured as follows: (a) the phase currents and neutral current for each line in the case of low fault resistance; (b) the phase voltages and neutral voltage from the voltage measuring bay in the case of low fault resistance; and (c) the neutral voltage and the 50 Hz components at the substation in the case of high fault resistance. In addition, basic data of the fault occurrences were collected (data of the line, fault location, cause and so on). The data will be used in the development of fault location and earth fault protection systems.

  19. A knowledge-based approach to the evaluation of fault trees

    International Nuclear Information System (INIS)

    Hwang, Yann-Jong; Chow, Louis R.; Huang, Henry C.

    1996-01-01

    A list of critical components is useful for determining the potential problems of a complex system. However, finding this list by evaluating fault trees is expensive and time consuming. This paper proposes an integrated software program which consists of a fault tree constructor, a knowledge base, and an efficient algorithm for evaluating the minimal cut sets of a large fault tree. The proposed algorithm uses top-down heuristic searching and probability-based truncation, which makes the evaluation of fault trees considerably more efficient and identifies critical components for solving potential problems in complex systems. Finally, some practical fault trees are included to illustrate the results.
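    A MOCUS-style top-down expansion with probability-based truncation, of the kind described above, can be sketched on a toy tree (the gates, events, and probabilities are illustrative, not from the paper):

```python
import math
from itertools import product

tree = {                     # gate -> (type, children); other names are basic events
    "TOP": ("OR", ["G1", "E3"]),
    "G1": ("AND", ["E1", "E2"]),
}
prob = {"E1": 0.1, "E2": 0.2, "E3": 0.001}   # basic-event probabilities

def cut_sets(node):
    """Top-down expansion: OR gates union their children's cut sets,
    AND gates cross-combine them."""
    if node not in tree:
        return [frozenset([node])]
    kind, kids = tree[node]
    kid_sets = [cut_sets(k) for k in kids]
    if kind == "OR":
        return [cs for sets in kid_sets for cs in sets]
    return [frozenset().union(*combo) for combo in product(*kid_sets)]

def minimal(sets):           # drop any cut set that contains a smaller one
    return [s for s in sets if not any(o < s for o in sets)]

def truncate(sets, cutoff):  # probability-based truncation of rare cut sets
    return [s for s in sets if math.prod(prob[e] for e in s) >= cutoff]

mcs = minimal(cut_sets("TOP"))
print(sorted(sorted(s) for s in mcs))                      # [['E1', 'E2'], ['E3']]
print(sorted(sorted(s) for s in truncate(mcs, cutoff=1e-2)))  # [['E1', 'E2']]
```

    The truncation step is what keeps large trees tractable: branches whose cut-set probability falls below the cutoff are discarded before they multiply.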

  20. Method research of fault diagnosis based on rough set for nuclear power plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Xia Hong

    2005-01-01

    Nuclear power equipment fault features are complicated and uncertain. Rough set theory can express and deal with vagueness and uncertainty, so it can be introduced into nuclear power fault diagnosis to analyze and process historical data and find rules of the fault features. The rough set treatment steps are data preprocessing, attribute reduction, attribute value reduction and rule generation. According to the definition and properties of the discernibility matrix, it can be used in the reduction algorithm for both attribute and attribute value reduction, which reduces the algorithmic complexity and simplifies programming. This algorithm is applied to nuclear power fault diagnosis to generate diagnosis rules. Using these rules, five kinds of model faults were diagnosed correctly. (authors)
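    The discernibility-matrix step can be sketched on a toy decision table (the attributes, values, and decisions are invented): for each pair of objects with different decisions, the matrix entry is the set of condition attributes on which they differ, and any attribute reduct must intersect every non-empty entry.

```python
from itertools import combinations

attrs = ["temp", "pressure", "flow"]
table = [                       # (condition values, decision)
    ((1, 0, 0), "normal"),
    ((1, 1, 0), "fault"),
    ((0, 0, 1), "fault"),
]

def discernibility(table):
    entries = []
    for (u, du), (v, dv) in combinations(table, 2):
        if du != dv:
            entries.append({a for a, (x, y) in zip(attrs, zip(u, v)) if x != y})
    return entries

def is_reduct(subset, entries):
    return all(subset & e for e in entries)   # must hit every entry

entries = discernibility(table)
core = [a for a in attrs if {a} in entries]   # singleton entries are mandatory
print(core, is_reduct(set(core), entries), is_reduct({"pressure", "temp"}, entries))
```

    Here the core alone is not a reduct, but adding either remaining attribute from the two-element entry completes one; rules are then generated from the reduced table.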

  1. Adaptive Observer-Based Fault-Tolerant Control Design for Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Huaming Qian

    2015-01-01

    Full Text Available This study focuses on the design of a robust fault-tolerant control (FTC) system based on an adaptive observer for uncertain linear time-invariant (LTI) systems. In order to improve the robustness, rapidity, and accuracy of the traditional fault estimation algorithm, an adaptive fault estimation algorithm (AFEA) using an augmented observer is presented. By utilizing a new fault estimator model, an improved AFEA based on the linear matrix inequality (LMI) technique is proposed to increase the performance. Furthermore, an observer-based state feedback fault-tolerant control strategy is designed, which guarantees the stability and performance of the faulty system. Moreover, the adaptive observer and the fault-tolerant controller are designed separately, so that their performance can be considered independently. Finally, simulation results of an aircraft application are presented to illustrate the effectiveness of the proposed design methods.

  2. A Location-Aware Service Deployment Algorithm Based on K-Means for Cloudlets

    Directory of Open Access Journals (Sweden)

    Tyng-Yeu Liang

    2017-01-01

Full Text Available Cloudlets were recently proposed to push data centers towards the network edge in order to reduce the latency of delivering cloud services to mobile devices. Because of user mobility, services must be deployed and handed off anytime and anywhere to achieve minimal network latency for users' service requests. However, the cost of this solution is usually too high for service providers and exploits resources ineffectively. To resolve this problem, we propose a location-aware service deployment algorithm based on K-means for cloudlets in this paper. Simply speaking, the proposed algorithm divides mobile devices into a number of device clusters according to their geographical locations and then deploys service instances onto the edge cloud servers nearest to the cluster centers. Our performance evaluation shows that the proposed algorithm can effectively reduce not only the network latency of edge cloud services but also the number of service instances needed to satisfy a tolerable network latency.
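The cluster-then-deploy idea can be sketched in a few lines. The plain 2-D coordinates, Euclidean distance, fixed iteration count, and the specific device/server positions below are simplifying assumptions for illustration, not details from the paper.

```python
# Sketch of K-means-based service placement (illustrative assumptions).
import math, random

def kmeans(points, k, iters=20, seed=1):
    """Plain Lloyd's algorithm on 2-D points; returns cluster centres."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

def deploy(device_locations, servers, k):
    """Place one service instance on the edge server nearest to each
    cluster centre of the device positions."""
    return [min(servers, key=lambda s: math.dist(c, s))
            for c in kmeans(device_locations, k)]

# Two groups of devices and three candidate edge servers (made up).
devices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
           (10.0, 10.0), (9.0, 10.0), (10.0, 9.0)]
servers = [(1.0, 1.0), (9.0, 9.0), (5.0, 0.0)]
placement = deploy(devices, servers, k=2)
```

With two well-separated device groups, the two instances land on the two servers closest to the group centroids, leaving the third server unused.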

  3. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M; Haenninen, S [VTT Energy, Espoo (Finland); Seppaenen, M [North-Carelian Power Co (Finland); Antila, E; Markkila, E [ABB Transmit Oy (Finland)

    1998-08-01

An automatic computer model, called the FI/FL model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium-voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined with the data obtained from the fault indicators at the line branching points in order to find the actual fault point. As a third technique, in the absence of better fault location data, statistical information on line-section fault frequencies can also be used. Fuzzy logic is used to combine the different sources of fault location information; as a result, probability weights for the fault being located in the different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches, and supply is then restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked, among them the load-carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter lists the practical experiences gained during the test use period; the benefits of this kind of automation are also assessed and future developments are outlined.
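The fusion of the three evidence sources into per-section probability weights can be sketched as below. The section names, membership grades, and the simple product combination rule are illustrative assumptions, not the FI/FL model's actual fuzzy rules.

```python
# Hypothetical sketch of fusing fault-location evidence into weights.

def combine_evidence(*memberships):
    """Fuse per-section membership grades from several evidence sources
    and normalise them into probability weights."""
    sections = memberships[0].keys()
    raw = {s: 1.0 for s in sections}
    for m in memberships:
        for s in sections:
            raw[s] *= m.get(s, 0.0)
    total = sum(raw.values()) or 1.0
    return {s: raw[s] / total for s in sections}

distance_estimate = {'S1': 0.2, 'S2': 0.7, 'S3': 0.1}  # fault-current comparison
indicators        = {'S1': 0.1, 'S2': 0.8, 'S3': 0.1}  # branch fault indicators
fault_frequency   = {'S1': 0.4, 'S2': 0.3, 'S3': 0.3}  # section fault statistics

weights = combine_evidence(distance_estimate, indicators, fault_frequency)
print(max(weights, key=weights.get))  # section with the highest fault weight
```

The section with the highest combined weight would be the first candidate for isolation by remote switching.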

  4. Automatic Fault Characterization via Abnormality-Enhanced Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    2010-12-20

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  5. Location capability of a sparse regional network (RSTN) using a multi-phase earthquake location algorithm (REGLOC)

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.

    1994-01-01

The Regional Seismic Test Network (RSTN) was deployed by the US Department of Energy (DOE) to determine whether data recorded by a regional network could be used to detect and accurately locate seismic events that might be clandestine nuclear tests. The purpose of this paper is to evaluate the location capability of the RSTN. A major part of this project was the development of the location algorithm REGLOC and the application of Bayesian a priori statistics for determining the accuracy of the location estimates. REGLOC utilizes all identifiable phases, including backazimuth, in the location. Ninety-four events distributed throughout the network area, detected by the RSTN and also located by local networks, were used in the study. The location capability of the RSTN was evaluated by estimating the location accuracy, the error-ellipse accuracy, and the percentage of events that could be located, as a function of magnitude. The location accuracy was verified by comparing the RSTN results for the 94 events with published locations based on data from the local networks. The error-ellipse accuracy was evaluated by determining whether the error ellipse includes the actual location. The percentage of events located was assessed by combining detection capability with location capability to determine the percentage of events that could be located within the study area. Events were located with both an average crustal model for the entire region and with regional velocity models along with station corrections obtained from master events. Most events with a magnitude <3.0 can only be located with arrivals from one station. Their average location errors are 453 and 414 km for the average- and regional-velocity-model locations, respectively. Single-station locations are very unreliable because they depend on accurate backazimuth estimates, and backazimuth proved to be a very unreliable computation.
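The sensitivity of single-station locations to backazimuth error can be illustrated with simple geometry (this is a toy, not REGLOC): the cross-track mislocation grows roughly as distance times angle error, so at regional distances even a few degrees of azimuth error produce tens of kilometres of mislocation.

```python
# Toy single-station location from distance plus backazimuth.
import math

def single_station_location(station, distance_km, backazimuth_deg):
    """Epicentre from one station: epicentral distance (e.g. from S-P
    time) plus backazimuth, measured clockwise from north."""
    az = math.radians(backazimuth_deg)
    x, y = station
    return (x + distance_km * math.sin(az), y + distance_km * math.cos(az))

station = (0.0, 0.0)
true_loc = single_station_location(station, 400.0, 45.0)
biased   = single_station_location(station, 400.0, 50.0)  # 5 deg azimuth error
err = math.dist(true_loc, biased)
print(round(err, 1))  # tens of km of mislocation from a small angle error
```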

  6. A multi-objective location routing problem using imperialist competitive algorithm

    Directory of Open Access Journals (Sweden)

    Amir Mohammad Golmohammadi

    2016-06-01

Full Text Available Nowadays, most manufacturing units try to optimize the location of their depots together with the associated vehicle routing in order to transport goods at optimum cost. Needless to say, the locations of the warehouses influence the performance of the vehicle routing. In this paper, a mathematical programming model that jointly optimizes storage location and vehicle routing is presented. The first objective function of the model minimizes the total cost associated with transportation and storage, and the second objective function minimizes the differences among the distances traveled by the vehicles. The study uses the Imperialist Competitive Algorithm (ICA) to solve the resulting problems in different sizes. The preliminary results indicate that the proposed approach performs better than the NSGA-II and PAES methods in terms of the Quality and Spacing metrics.

  7. Fault Diagnosis System of Induction Motors Based on Neural Network and Genetic Algorithm Using Stator Current Signals

    Directory of Open Access Journals (Sweden)

    Tian Han

    2006-01-01

Full Text Available This paper proposes an online fault diagnosis system for induction motors based on the combination of discrete wavelet transform (DWT), feature extraction, genetic algorithm (GA), and artificial neural network (ANN) techniques. The wavelet transform improves the signal-to-noise ratio during preprocessing. Features are extracted from the motor stator current, reducing data transfer and making online application feasible. The GA is used to select the most significant features from the whole feature database and to optimize the ANN structure parameters. The optimized ANN is trained and tested with the selected features of the measured stator current. The combination of these advanced techniques reduces the learning time and increases the diagnosis accuracy. The efficiency of the proposed system is demonstrated on induction motor faults of electrical and mechanical origin. The test results indicate that the proposed system is promising for real-time application.

  8. Integration of Fault Detection and Isolation with Control Using Neuro-fuzzy Scheme

    Directory of Open Access Journals (Sweden)

    A. Asokan

    2009-10-01

Full Text Available In this paper an algorithm is developed for fault diagnosis, together with a fault-tolerant control strategy, for nonlinear systems subjected to an unknown time-varying fault. First, the fault diagnosis scheme is designed using a model-based fault detection technique. A neuro-fuzzy chi-square scheme is applied for fault detection and isolation, yielding the fault magnitude and the time of occurrence of the fault. The estimated fault magnitude is normalized and used by a feed-forward control algorithm to make appropriate changes in the manipulated variable so as to keep the controlled variable near its set value. The feed-forward controller acts along with the feedback controller to control the multivariable system. The proposed scheme is applied to a three-tank process with various types of fault inputs to show the effectiveness of the approach.

  9. Genetic algorithm based on optimization of neural network structure for fault diagnosis of the clutch retainer mechanism of MF 285 tractor

    Directory of Open Access Journals (Sweden)

    S. F Mousavi

    2016-09-01

Full Text Available Introduction The diagnosis of agricultural machinery faults must be performed in a timely manner so that agricultural operations can be completed on schedule; to maintain the accuracy and integrity of a system, proper monitoring and fault diagnosis of its rotating parts are required. With the development of fault diagnosis methods for rotating equipment, especially for bearing failures, the security, performance and availability of machines have been improving. In general, fault detection is conducted through a specific procedure which starts with data acquisition and continues with feature extraction, after which a failure of the machine can be detected. Several practical methods have been introduced for fault detection in the rotating parts of machinery. The review of the literature shows that both Artificial Neural Networks (ANN) and Support Vector Machines (SVM) have been used for this purpose, and that SVM tends to be more effective than ANN in fault detection for such machinery. In some smart detection systems, incorporating an optimization method such as a Genetic Algorithm into the neural network model can improve the fault detection procedure; the fault detection performance of a neural network combined with a Genetic Algorithm may then become comparable with that of a Support Vector Machine. In this study, a Genetic Algorithm (GA) was used to optimize the structure of an Artificial Neural Network (ANN) for fault detection of the clutch retainer mechanism of the Massey Ferguson 285 tractor. Materials and Methods The test rig consists of electromechanical parts including the clutch retainer mechanism of a Massey Ferguson 285 tractor, a supporting shaft, a single-phase electric motor, a loading mechanism to model the load of the tractor clutch, and the corresponding power train gears.
The data acquisition section consists of a

  10. Incipient fault detection and identification in process systems using accelerating neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Muthusami, J.; Atiya, A.F.

    1994-01-01

The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises from the desire to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive backpropagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct-current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has significantly high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent, and alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary.

  11. Dynamics Modeling and Analysis of Local Fault of Rolling Element Bearing

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2015-01-01

Full Text Available This paper presents a nonlinear vibration model of rolling element bearings with 5 degrees of freedom, based on Hertzian contact theory and the relevant kinematics and dynamics of bearings. The slipping of the balls, the oil-film stiffness, and the nonlinear time-varying stiffness of the bearing are taken into consideration in the proposed model. A single-point local fault model of a rolling element bearing is introduced into the nonlinear 5-degree-of-freedom model, accounting for the loss of contact deformation as a ball rolls into and out of the local fault location. The functions of spall depth corresponding to defects of different shapes are discussed separately. An ODE solver in MATLAB is then used to solve the nonlinear vibration model numerically and simulate the vibration response of rolling element bearings with a local fault. The simulated signals show behavior and patterns similar to those observed in processed experimental signals of rolling element bearings, in both the time domain and the frequency domain, which validates the ability of the proposed nonlinear vibration model to generate typical local-fault signals of rolling element bearings for research on effective fault diagnostic algorithms.
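Simulated local-fault signals like these are usually checked against the standard bearing defect frequencies. The formulas below are textbook kinematic expressions (not taken from this paper), and the sample bearing geometry is made up for illustration.

```python
# Standard bearing defect frequencies for a shaft frequency fr (Hz).
import math

def bearing_frequencies(fr, n_balls, d_ball, d_pitch, contact_deg=0.0):
    """Return FTF, BPFO, BPFI, BSF in Hz from bearing geometry."""
    g = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    ftf = fr / 2 * (1 - g)                           # cage (fundamental train)
    bpfo = n_balls * fr / 2 * (1 - g)                # outer-race defect
    bpfi = n_balls * fr / 2 * (1 + g)                # inner-race defect
    bsf = fr * d_pitch / (2 * d_ball) * (1 - g * g)  # ball spin
    return ftf, bpfo, bpfi, bsf

# Illustrative geometry: 9 balls, 7.9 mm ball, 34.5 mm pitch diameter.
ftf, bpfo, bpfi, bsf = bearing_frequencies(fr=30.0, n_balls=9,
                                           d_ball=7.9, d_pitch=34.5)
```

A spectrum of the simulated faulty-bearing response should show peaks at the frequency matching the seeded defect (e.g. at BPFO for an outer-race spall), which is how such models are typically validated.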

  12. Approximation algorithms for facility location problems with a special class of subadditive cost functions

    NARCIS (Netherlands)

    Gabor, Adriana F.; van Ommeren, Jan C.W.

    2006-01-01

    In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present three facility location problems with stochastic demand and exponential servers, respectively inventory. We present a $(1+\\varepsilon, 1)$-reduction of

  13. Geophysical and isotopic mapping of preexisting crustal structures that influenced the location and development of the San Jacinto fault zone, southern California

    Science.gov (United States)

    Langenheim, V.E.; Jachens, R.C.; Morton, D.M.; Kistler, R.W.; Matti, J.C.

    2004-01-01

We examine the role of preexisting crustal structure within the Peninsular Ranges batholith in determining the location of the San Jacinto fault zone by analysis of geophysical anomalies and initial strontium ratio data. A 1000-km-long boundary within the Peninsular Ranges batholith, separating relatively mafic, dense, and magnetic rocks of the western Peninsular Ranges batholith from the more felsic, less dense, and weakly magnetic rocks of the eastern Peninsular Ranges batholith, strikes north-northwest toward the San Jacinto fault zone. Modeling of the gravity and magnetic field anomalies caused by this boundary indicates that it extends to depths of at least 20 km. The anomalies do not cross the San Jacinto fault zone, but instead trend northwesterly and coincide with the fault zone. A 75-km-long gradient in initial strontium ratios (Sri) in the eastern Peninsular Ranges batholith coincides with the San Jacinto fault zone. Here rocks east of the fault are characterized by Sri greater than 0.706, indicating a source of largely continental crust, sedimentary materials, or different lithosphere. We argue that the physical property contrast produced by the Peninsular Ranges batholith boundary provided a mechanically favorable path for the San Jacinto fault zone, bypassing the San Gorgonio structural knot as slip was transferred from the San Andreas fault 1.0-1.5 Ma. Two historical M6.7 earthquakes may have nucleated along the Peninsular Ranges batholith discontinuity in San Jacinto Valley, suggesting that Peninsular Ranges batholith crustal structure may continue to affect how strain is accommodated along the San Jacinto fault zone. © 2004 Geological Society of America.

  14. An inverse source location algorithm for radiation portal monitor applications

    International Nuclear Information System (INIS)

    Miller, Karen A.; Charlton, William S.

    2010-01-01

    Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
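The optimization loop described above can be illustrated with a drastically simplified analogue. The assumptions here are mine, not the paper's: a 2-D geometry, a 1/r² point-source response in place of the deterministic transport solver, and a finite-difference gradient in place of adjoint calculations.

```python
# Toy least-squares source localization by steepest descent.
import math

detectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
S = 100.0                 # assumed known source strength
true_src = (6.0, 3.0)

def predict(p):
    """Predicted detector readings for a source at p (1/r^2 model)."""
    return [S / ((p[0] - dx) ** 2 + (p[1] - dy) ** 2) for dx, dy in detectors]

measured = predict(true_src)  # stand-in for actual measurements

def objective(p):
    """Least-squares mismatch between measured and predicted readings."""
    return sum((m - q) ** 2 for m, q in zip(measured, predict(p)))

def locate(start, step=0.05, tol=1e-12, h=1e-6):
    """Steepest descent with a central-difference gradient."""
    p = list(start)
    for _ in range(20000):
        gx = (objective((p[0] + h, p[1])) - objective((p[0] - h, p[1]))) / (2 * h)
        gy = (objective((p[0], p[1] + h)) - objective((p[0], p[1] - h))) / (2 * h)
        p[0] -= step * gx
        p[1] -= step * gy
        if gx * gx + gy * gy < tol:   # gradient small: converged
            break
    return tuple(p)

est = locate((2.0, 7.0))
```

In the paper the gradient comes from adjoint transport calculations rather than finite differences, which is what makes the approach affordable for a full 3-D transport model.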

  15. Support vector machine based fault classification and location of a long transmission line

    Directory of Open Access Journals (Sweden)

    Papia Ray

    2016-09-01

Full Text Available This paper investigates a support vector machine based fault-type and distance estimation scheme for a long transmission line. The proposed technique uses the post-fault single-cycle current waveform, and preprocessing of the samples is done by the wavelet packet transform. Energy and entropy are obtained from the decomposed coefficients and a feature matrix is prepared. The redundant features are then removed from the matrix by the forward feature selection method and the remainder normalized. Test and training data are developed by varying simulation conditions such as fault type, fault resistance, inception angle, and distance. In this paper 10 different types of short-circuit fault are analyzed. The test data are examined by a support vector machine whose parameters are optimized by the particle swarm optimization method. The proposed method is checked on a 400 kV, 300 km long transmission line with a voltage source at both ends. Two cases were examined: faults very near to either source end (front and rear), and the support vector machine with and without optimized parameters. Simulation results indicate that the proposed method gives high fault classification accuracy (99.21%) and a very low fault distance estimation error (0.29%).
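The energy and entropy features mentioned above can be formed from one sub-band of wavelet-packet coefficients as sketched below. The exact feature definitions used in the paper are not reproduced here; this is one common choice (sub-band energy plus Shannon entropy of normalised coefficient energies).

```python
# Illustrative energy/entropy features from one coefficient sub-band.
import math

def band_features(coeffs):
    """Energy of a sub-band and Shannon entropy of its normalised
    per-coefficient energies."""
    energies = [c * c for c in coeffs]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return total, entropy

# Made-up coefficients standing in for one wavelet-packet node.
energy, entropy = band_features([0.5, -1.0, 2.0, 0.1])
```

One such (energy, entropy) pair per sub-band, stacked over all nodes of the decomposition, yields the feature matrix that the feature-selection step then prunes.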

  16. Improved protection system for phase faults on marine vessels based on ratio between negative sequence and positive sequence of the fault current

    DEFF Research Database (Denmark)

    Ciontea, Catalin-Iosif; Hong, Qiteng; Booth, Campbell

    2018-01-01

This study presents a new method to protect the radial feeders on marine vessels. The proposed protection method is effective against phase–phase (PP) faults and is based on evaluation of the ratio between the negative sequence and positive sequence of the fault currents. It is shown that the magnitude of the introduced ratio increases significantly during a PP fault, hence indicating the presence of a fault in an electric network. Here, the theoretical background of the new method of protection is first discussed, based on which the new protection algorithm is described. The proposed algorithm is implemented in a programmable digital relay embedded in a hardware-in-the-loop (HIL) test set-up that emulates a typical maritime feeder using a real-time digital simulator. The HIL set-up allows testing of the new protection method under a wide range of faults and network conditions.
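The negative-to-positive-sequence ratio behind such a scheme follows from the standard symmetrical-component transform. The phasor values below are made-up examples, not the paper's test data, and the thresholds are arbitrary.

```python
# Negative/positive sequence ratio via symmetrical components.
import cmath

A = cmath.exp(2j * cmath.pi / 3)   # 120-degree rotation operator 'a'

def sequence_ratio(ia, ib, ic):
    """|I_negative| / |I_positive| from three phase-current phasors."""
    i_pos = (ia + A * ib + A * A * ic) / 3
    i_neg = (ia + A * A * ib + A * ic) / 3
    return abs(i_neg) / abs(i_pos)

# Healthy feeder: balanced a-b-c currents of equal magnitude.
balanced = (1 + 0j, A * A, A)
# Crude phase-b-to-c fault shape: Ib ~ -Ic, small residual in phase a.
pp_fault = (0.2 + 0j, 1.5 * A * A, -1.5 * A * A)

print(sequence_ratio(*balanced) < 0.01)   # near zero when healthy
print(sequence_ratio(*pp_fault) > 0.5)    # large during a PP fault
```

For a perfectly balanced system the negative-sequence component vanishes, while an ideal phase-phase fault (Ib = -Ic) drives the ratio to 1, which is why its magnitude is a sensitive PP-fault indicator.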

  17. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding.

    Science.gov (United States)

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-07-06

Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so implementing feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, an extension of LLE that exploits the fault class label information. The approach first extracts the intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space by the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, pattern classification and fault diagnosis are carried out easily and rapidly by a classifier in the reduced feature space. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches.

  18. Approximation algorithms for facility location problems with a special class of subadditive cost functions

    NARCIS (Netherlands)

    Gabor, A.F.; Ommeren, van J.C.W.

    2006-01-01

    In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present three facility location problems with stochastic demand and exponential servers, respectively inventory. We present a (1+e,1)-reduction of the facility

  19. Communication-based fault handling scheme for ungrounded distribution systems

    International Nuclear Information System (INIS)

    Yang, X.; Lim, S.I.; Lee, S.J.; Choi, M.S.

    2006-01-01

The requirement for high-quality and highly reliable power supplies has been increasing as a result of the growing demand for power. When a fault occurs in a distribution system, a protection method is dedicated to fault section isolation and service restoration. However, if many areas suffer outages while the protection method is performed, customers are inconvenienced. The conventional method of determining a fault section in ungrounded systems requires many successive outage invocations. This paper proposed an efficient fault section isolation method and service restoration method for single line-to-ground faults in an ungrounded distribution system that is faster than the conventional one, using information exchange between connected feeders. The proposed algorithm can be performed without any interruption of the power supply and decreases the number of switching operations, so that customers do not experience outages very frequently. The method involves the use of an intelligent communication method and a sequential switching control scheme. The proposed algorithm is applicable to both single-tie and multi-tie distribution systems and has been verified through fault simulations in a simple model of an ungrounded multi-tie distribution system. The method proposed in this paper was proven to offer more efficient fault identification and much less outage time than the conventional method, and it could contribute to system design since it is valid in multi-tie systems. 5 refs., 2 tabs., 8 figs

  20. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

Deformation along faults in the shallow crust (<1 km) introduces permeability heterogeneity and anisotropy, and evaluating its impact on fluid flow requires a combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  1. Locating one pairwise interaction: Three recursive constructions

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2016-09-01

Full Text Available In a complex component-based system, choices (levels) for components (factors) may interact to cause faults in the system behaviour. When faults may be caused by interactions among few factors at specific levels, covering arrays provide a combinatorial test suite for discovering the presence of faults. While well studied, covering arrays do not enable one to determine the specific levels of factors causing the faults; locating arrays ensure that the results from test suite execution suffice to determine the precise levels and factors causing faults, when the number of such causes is small. Constructions for locating arrays are at present limited to heuristic computational methods and quite specific direct constructions. In this paper three recursive constructions are developed for locating arrays to locate one pairwise interaction causing a fault.

  2. Fault Detection for Shipboard Monitoring and Decision Support Systems

    DEFF Research Database (Denmark)

    Lajic, Zoran; Nielsen, Ulrik Dam

    2009-01-01

In this paper the basic idea of a fault-tolerant monitoring and decision support system will be explained. Fault detection is an important part of the fault-tolerant design of in-service monitoring and decision support systems for ships. In the paper, a virtual example of fault detection will be presented for a containership with a real decision support system onboard. All possible faults can be simulated and detected using residuals and the generalized likelihood ratio (GLR) algorithm.
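A minimal GLR detector for a jump in the mean of Gaussian residuals can be sketched as below. This is a common textbook form of the statistic; the onboard system's actual residual generators and thresholds are not described in this abstract, and the numbers here are made up.

```python
# Minimal GLR test for a step change in the mean of residuals.
import math

def glr_mean_jump(residuals, sigma):
    """Max log-likelihood ratio over candidate change points, testing a
    step change in the mean against a zero-mean hypothesis (known sigma)."""
    best = 0.0
    n = len(residuals)
    for j in range(n):                       # candidate change time
        seg = residuals[j:]
        m = sum(seg) / len(seg)              # ML estimate of the jump
        best = max(best, len(seg) * m * m / (2 * sigma * sigma))
    return best

healthy = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05]
faulty  = healthy + [1.1, 0.9, 1.05, 0.95]   # a sensor bias appears

threshold = 5.0   # arbitrary illustrative threshold
print(glr_mean_jump(healthy, sigma=0.15) > threshold)   # no alarm
print(glr_mean_jump(faulty,  sigma=0.15) > threshold)   # alarm raised
```

Because the statistic maximizes over the unknown change time, it detects the fault shortly after onset while staying small on healthy, zero-mean residuals.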

  3. A fault diagnosis prototype for a bioreactor for bioinsecticide production

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1995-01-01

The objective of this work is to develop an algorithm for fault diagnosis in a process of animal cell cultivation for bioinsecticide production. Generally, these are batch processes, and diagnosis for a batch process involves dividing the process evolution (time horizon) into partial processes defined as pseudocontinuous blocks (PCBs). A PCB therefore represents the evolution of the system over a time interval in which its qualitative behavior is similar to that of a continuous process; thus each PCB into which the process is divided can be handled in a conventional way (like a continuous process). The process model for each PCB is a Signed Directed Graph (SDG). To achieve generality and to allow computational implementation, a modular approach was used in the synthesis of the bioreactor digraph. The SDGs were then used to carry out qualitative simulations of faults, the results of which are the fault patterns. A special fault symptom dictionary (SM) has been adopted as the database organization for fault pattern storage, and an effective algorithm is presented for searching the fault patterns. The system studied, as a particular application, is a bioreactor for cell cultivation for bioinsecticide production. In this work we concentrate on the SDG construction and on obtaining real fault patterns by the elimination of spurious patterns. The algorithm has proved to be effective, in both resolution and accuracy, in diagnosing different kinds of simulated faults.
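The SDG idea, propagating a root deviation through signed edges to predict a fault pattern and then matching observed symptoms against stored patterns, can be sketched as below. The bioreactor graph here is a made-up toy, not the authors' digraph, and real SDG diagnosis must also handle conflicting paths and spurious patterns, which this sketch ignores.

```python
# Toy SDG qualitative simulation and fault-pattern matching.

def propagate(sdg, root, deviation):
    """Iterative sign propagation from a root deviation (+1 high, -1 low);
    the first sign reaching a node is kept."""
    pattern, frontier = {root: deviation}, [root]
    while frontier:
        node = frontier.pop()
        for target, sign in sdg.get(node, []):
            if target not in pattern:
                pattern[target] = sign * pattern[node]
                frontier.append(target)
    return pattern

# Edges: variable -> [(affected variable, sign of influence)].
sdg = {
    'feed_rate':   [('substrate', +1)],
    'substrate':   [('growth', +1)],
    'growth':      [('oxygen', -1), ('product', +1)],
    'cooling':     [('temperature', -1)],
    'temperature': [('growth', -1)],
}

# Simulated fault patterns for "low" deviations of two candidate roots.
patterns = {f: propagate(sdg, f, -1) for f in ('feed_rate', 'cooling')}

def diagnose(observed):
    """Faults whose predicted pattern matches every observed symptom."""
    return [f for f, p in patterns.items()
            if all(p.get(v) == s for v, s in observed.items())]

print(diagnose({'substrate': -1, 'oxygen': +1}))  # consistent root cause(s)
```

Here low substrate with high dissolved oxygen is consistent only with a low feed rate, mirroring how the stored fault patterns act as the symptom dictionary.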

  4. Fault detection using transmission tomography - Evaluation on the Experimental Platform of Tournemire

    International Nuclear Information System (INIS)

    Vi-Nhu-Ba, Elise

    2014-01-01

    Deep argillaceous formations have physical properties suited to radioactive waste disposal, but their permeability can be modified by the presence of fractured zones; detection of these faulted zones is thus of primary importance. Several experiments have been conducted by IRSN at the Experimental Platform of Tournemire, where faults with small vertical offsets in the deep argillaceous formation have been identified from underground installations. Previous studies have shown the difficulty of detecting this fractured zone from surface acquisitions using reflection or refraction seismics, as well as with electrical methods. We here propose a new seismic transmission acquisition geometry in which seismic sources are deployed at the surface and receivers are installed in the underground installations. To process these data, a new tomography algorithm has been developed in order to control the inversion parameters and also to introduce a priori information. Several synthetic tests were conducted to reliably analyze the results in terms of resolution and relevance of the final image. By applying the algorithm to the recently acquired data, a discontinuity of the seismic velocities in the limestones and argillites of the Tournemire Platform is evidenced for the first time. This low velocity anomaly is located just above the fracture zone visible from the underground installations, and its location is also consistent with observations from the surface. (author)

  5. An improved cut-and-solve algorithm for the single-source capacitated facility location problem

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Klose, Andreas; Nielsen, Lars Relund

    2018-01-01

    In this paper, we present an improved cut-and-solve algorithm for the single-source capacitated facility location problem. The algorithm consists of three phases. The first phase strengthens the integer program by a cutting plane algorithm to obtain a tight lower bound. The second phase uses a two-level local branching heuristic to find an upper bound, and if optimality has not yet been established, the third phase uses the cut-and-solve framework to close the optimality gap. Extensive computational results are reported, showing that the proposed algorithm runs 10–80 times faster on average compared to existing approaches.

  6. Model-based fault diagnosis approach on external short circuit of lithium-ion battery used in electric vehicles

    International Nuclear Information System (INIS)

    Chen, Zeyu; Xiong, Rui; Tian, Jinpeng; Shang, Xiong; Lu, Jiahuan

    2016-01-01

    Highlights: • The characteristics of the ESC fault of lithium-ion batteries are investigated experimentally. • The proposed method to simulate the electrical behavior of the ESC fault is viable. • Ten parameters in the presented fault model were optimized using a DPSO algorithm. • A two-layer model-based fault diagnosis approach for battery ESC is proposed. • The effectiveness and robustness of the proposed algorithm have been evaluated. - Abstract: This study investigates the external short circuit (ESC) fault characteristics of lithium-ion batteries experimentally. An experiment platform is established and ESC tests are implemented on ten 18650-type lithium cells at different state-of-charges (SOCs). Based on the experimental results, several efforts have been made. (1) The ESC process can be divided into two periods, and the electrical and thermal behaviors within these two periods are analyzed. (2) A modified first-order RC model is employed to simulate the electrical behavior of the lithium cell in the ESC fault process. The model parameters are re-identified by a dynamic-neighborhood particle swarm optimization (DPSO) algorithm. (3) A two-layer model-based ESC fault diagnosis algorithm is proposed. The first layer conducts preliminary fault detection and the second layer gives a precise model-based diagnosis. Four new cells are short-circuited to evaluate the proposed algorithm. It shows that the ESC fault can be diagnosed within 5 s, and the error between the model and the measured data is less than 0.36 V. The effectiveness of the fault diagnosis algorithm is not sensitive to the precision of the battery SOC. The proposed algorithm can still make the correct diagnosis even if there is 10% error in the SOC estimation.
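The first-order RC equivalent-circuit model of step (2) can be sketched with forward-Euler integration. The parameter values (OCV, R0, R1, C1) and the load current below are hypothetical placeholders, not the DPSO-identified values from the study:

```python
import numpy as np

def simulate_rc_model(current, dt, ocv, r0, r1, c1):
    """Terminal voltage of a first-order RC equivalent-circuit battery model.
    current > 0 means discharge; forward-Euler integration of the RC branch."""
    v1 = 0.0                      # polarization (RC-branch) voltage
    tau = r1 * c1
    out = []
    for i in current:
        v1 += dt * (-v1 / tau + i / c1)
        out.append(ocv - i * r0 - v1)
    return np.array(out)

# constant 50 A discharge (an ESC-like heavy load) for 5 s at 10 ms steps
v = simulate_rc_model(np.full(500, 50.0), dt=0.01,
                      ocv=3.7, r0=0.01, r1=0.02, c1=100.0)
```

A diagnosis layer of the kind described would compare the measured terminal voltage against such a model prediction and flag an ESC fault when the residual grows abnormal.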

  7. Fault Detection for Automotive Shock Absorber

    Science.gov (United States)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be of multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.

  8. Modularization of fault trees: a method to reduce the cost of analysis

    International Nuclear Information System (INIS)

    Chatterjee, P.

    1975-01-01

    The problem of analyzing large fault trees is considered. The concept of the finest modular representation of a fault tree is introduced, and an algorithm is presented for finding this representation. The algorithm will also identify trees which cannot be modularized. Applications of such modularizations are discussed.

  9. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    Science.gov (United States)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of a CDC in a supply chain network has become a matter of high concern these days. Present methods for CDC location have been based mainly on manual spreadsheet calculations aimed at the goal of minimum logistics cost. This study is focused on the development of a new processing algorithm to overcome the limits of present methods, and on examining the suitability of this algorithm by case study. The algorithm suggested by this study is based on the principle of optimization over the directed graph of the SCM model and utilizes the traditionally established MST and shortest-path-finding methods. As a result, this study helps to assess the suitability of the present, on-going SCM network and could provide the criterion in the decision-making process for building an optimal SCM network for future demand.
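The shortest-path building block mentioned above can be sketched with Dijkstra's algorithm; the toy network below (a factory, two candidate CDCs and two demand cities with made-up edge costs) is purely illustrative:

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from source in a directed graph.
    graph[u] = list of (v, weight) pairs with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# hypothetical SCM network: factory -> candidate CDCs -> demand cities
graph = {
    'factory': [('cdc1', 4.0), ('cdc2', 2.0)],
    'cdc1': [('city_a', 1.0), ('city_b', 5.0)],
    'cdc2': [('city_a', 4.0), ('city_b', 1.0)],
}
dist = dijkstra(graph, 'factory')
```

Summing such shortest-path costs per candidate CDC over all demand nodes gives the logistics-cost comparison that a spreadsheet would otherwise compute by hand.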

  10. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  11. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    Science.gov (United States)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.

  12. A New Fault Location Approach for Acoustic Emission Techniques in Wind Turbines

    Directory of Open Access Journals (Sweden)

    Carlos Quiterio Gómez Muñoz

    2016-01-01

    Full Text Available The renewable energy industry is undergoing continuous improvement and development worldwide, wind energy being one of the most relevant renewable energies. This industry requires high levels of reliability, availability, maintainability and safety (RAMS) for wind turbines. The blades are critical components in wind turbines. The objective of this research work is focused on the fault detection and diagnosis (FDD) of the wind turbine blades. The FDD approach is composed of a robust condition monitoring system (CMS) and a novel signal processing method. The CMS collects and analyses the data from different non-destructive tests based on acoustic emission. The acoustic emission signals are collected applying macro-fiber composite (MFC) sensors to detect and locate cracks on the surface of the blades. Three MFC sensors are set in a section of a wind turbine blade. The acoustic emission signals are generated by breaking a pencil lead on the blade surface. This method is used to simulate the acoustic emission due to a breakdown of the composite fibers. The breakdown generates a set of mechanical waves that are collected by the MFC sensors. A graphical method is employed to obtain a system of non-linear equations that will be used for locating the emission source. This work demonstrates that a fiber breakage in the wind turbine blade can be detected and located by using only three low cost sensors. It allows the detection of potential failures at an early stage, and it can also reduce corrective maintenance tasks and downtimes and increase the RAMS of the wind turbine.
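The source-location step can be sketched numerically. The paper derives a system of non-linear equations via a graphical method; as a stand-in, a brute-force grid search below minimizes the mismatch between measured and predicted arrival-time differences across the three sensors. The sensor layout, wave speed and source position are made-up numbers:

```python
import numpy as np

def locate_source(sensors, tdoa, wave_speed, grid):
    """Grid search for an emission source: minimize the mismatch between
    measured time differences (relative to sensor 0) and predicted ones."""
    best, best_xy = np.inf, None
    for xy in grid:
        d = np.linalg.norm(sensors - xy, axis=1)      # distances to each sensor
        pred = (d[1:] - d[0]) / wave_speed            # predicted TDOAs
        err = np.sum((pred - tdoa) ** 2)
        if err < best:
            best, best_xy = err, xy
    return best_xy

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # three MFC positions (m)
true_src = np.array([0.3, 0.6])                            # pencil-lead break point
c = 2.0                                                    # assumed wave speed
d = np.linalg.norm(sensors - true_src, axis=1)
tdoa = (d[1:] - d[0]) / c                                  # "measured" differences
xs = np.linspace(0.0, 1.0, 101)
grid = np.array([[x, y] for x in xs for y in xs])
est = locate_source(sensors, tdoa, c, grid)
```

In practice the arrival-time differences would come from thresholding or cross-correlating the MFC signals, and the wave speed in the composite would itself have to be estimated.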

  13. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2014-01-01

    Full Text Available In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master station the amplitudes of the fault component currents of the transmission lines which are closest to the fault element. Then the master station identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master station identifies the actual faulty line with improved D-S evidence theory based on the action states of traditional protections and the direction components of these suspicious faulty lines. The simulation examples based on the IEEE 10-generator-39-bus system show that the proposed WABP algorithm has an excellent performance. The algorithm has a low requirement of sampling synchronization, small wide-area communication flow, and high fault tolerance.

  14. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    Science.gov (United States)

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master station the amplitudes of the fault component currents of the transmission lines which are closest to the fault element. Then the master station identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master station identifies the actual faulty line with improved D-S evidence theory based on the action states of traditional protections and the direction components of these suspicious faulty lines. The simulation examples based on the IEEE 10-generator-39-bus system show that the proposed WABP algorithm has an excellent performance. The algorithm has a low requirement of sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
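The evidence-fusion step rests on Dempster's rule of combination. A minimal sketch for two suspicious lines L1 and L2, with illustrative (not paper-derived) mass assignments from the protection action states and the directional components:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to incompatible hypotheses
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

L1, L2 = frozenset({'L1'}), frozenset({'L2'})
both = L1 | L2                                # "either line" (ignorance)
# hypothetical masses: one source from protection action states, one from direction
m_prot = {L1: 0.6, L2: 0.1, both: 0.3}
m_dir  = {L1: 0.7, L2: 0.1, both: 0.2}
fused = dempster_combine(m_prot, m_dir)
```

The conflicting mass is discarded and the remainder renormalized; the faulty line is the hypothesis with the largest fused mass (here L1, with mass about 0.86).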

  15. Reliable Fault Diagnosis of Rotary Machine Bearings Using a Stacked Sparse Autoencoder-Based Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Muhammad Sohaib

    2018-01-01

    Full Text Available Due to enhanced safety, cost-effectiveness, and reliability requirements, fault diagnosis of bearings using vibration acceleration signals has been a key area of research over the past several decades. Many fault diagnosis algorithms have been developed that can efficiently classify faults under constant speed conditions. However, the performance of these traditional algorithms deteriorates with fluctuations of the shaft speed. In the past couple of years, deep learning algorithms have not only improved the classification performance in various disciplines (e.g., in image processing and natural language processing), but also reduced the complexity of feature extraction and selection processes. In this study, using complex envelope spectra and stacked sparse autoencoder (SSAE)-based deep neural networks (DNNs), a fault diagnosis scheme is developed that can overcome fluctuations of the shaft speed. The complex envelope spectrum makes the frequency components associated with each fault type vibrant, hence helping the autoencoders to learn the characteristic features from the given input signals more readily. Moreover, the implementation of SSAE-DNN for bearing fault diagnosis avoids the need for the handcrafted features that are used in traditional fault diagnosis schemes. The experimental results demonstrate that the proposed scheme outperforms conventional fault diagnosis algorithms in terms of fault classification accuracy when tested with variable shaft speed data.
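The envelope-spectrum front end can be sketched with an FFT-based analytic signal. The synthetic signal below, a resonance amplitude-modulated at an assumed fault frequency of 37 Hz, mimics the repetitive impacts of a bearing defect; all frequencies are made up:

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the signal envelope (analytic signal via FFT)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # Hilbert-transform weighting
    h[0] = 1
    h[1:n // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    env = np.abs(np.fft.ifft(X * h))       # envelope = |analytic signal|
    env -= env.mean()                      # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs, spec

fs, f_fault, f_res = 10_000, 37.0, 1_500.0
t = np.arange(0, 1.0, 1 / fs)
# resonance carrier amplitude-modulated at the fault frequency
x = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_res * t)
freqs, spec = envelope_spectrum(x, fs)
peak = freqs[np.argmax(spec)]
```

The envelope spectrum peaks at the modulation (fault) frequency even though the raw spectrum is dominated by the resonance; vectors of such spectra would form the input to the SSAE-DNN.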

  16. A hybrid nested partitions algorithm for banking facility location problems

    KAUST Repository

    Xia, Li

    2010-07-01

    The facility location problem has been studied in many industries including banking network, chain stores, and wireless network. Maximal covering location problem (MCLP) is a general model for this type of problems. Motivated by a real-world banking facility optimization project, we propose an enhanced MCLP model which captures the important features of this practical problem, namely, varied costs and revenues, multitype facilities, and flexible coverage functions. To solve this practical problem, we apply an existing hybrid nested partitions algorithm to the large-scale situation. We further use heuristic-based extensions to generate feasible solutions more efficiently. In addition, the upper bound of this problem is introduced to study the quality of solutions. Numerical results demonstrate the effectiveness and efficiency of our approach. © 2010 IEEE.
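The coverage objective of MCLP can be illustrated with a plain greedy heuristic (not the hybrid nested partitions algorithm of the paper); the sites, coverage sets and demand weights below are hypothetical:

```python
def greedy_mclp(coverage, demand, k):
    """Greedily pick k facility sites maximizing covered demand.
    coverage[j] = set of demand points that candidate site j covers."""
    chosen, covered = [], set()
    for _ in range(k):
        best_j, best_gain = None, 0
        for j, pts in coverage.items():
            if j in chosen:
                continue
            gain = sum(demand[p] for p in pts - covered)   # newly covered demand
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:
            break
        chosen.append(best_j)
        covered |= coverage[best_j]
    return chosen, sum(demand[p] for p in covered)

coverage = {'A': {1, 2, 3}, 'B': {3, 4}, 'C': {4, 5, 6}, 'D': {1, 6}}
demand = {1: 10, 2: 5, 3: 5, 4: 20, 5: 5, 6: 10}
sites, total = greedy_mclp(coverage, demand, 2)
```

The greedy choice maximizes newly covered demand at each step; the nested-partitions framework instead partitions the solution space and samples promising regions, but optimizes the same kind of coverage objective, extended with varied costs, revenues and multitype facilities.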

  17. Fault Isolation and quality assessment for shipboard monitoring

    DEFF Research Database (Denmark)

    Lajic, Zoran; Nielsen, Ulrik Dam; Blanke, Mogens

    2010-01-01

    The aim of the paper is to present fault isolation for a shipboard monitoring system and to improve multi-sensor data fusion for the particular system. Fault isolation is an important part of the fault-tolerant design for in-service monitoring and decision support systems for ships. In the paper, a virtual example of fault isolation will be presented. Several possible faults will be simulated and isolated using residuals and the generalized likelihood ratio (GLR) algorithm. It will be demonstrated that the approach can be used to increase the accuracy of sea state estimations employing a sensor fusion quality test.

  18. Quantitative evaluation of fault coverage for digitalized systems in NPPs using simulated fault injection method

    International Nuclear Information System (INIS)

    Kim, Suk Joon

    2004-02-01

    Even though digital systems have numerous advantages, such as precise processing of data and enhanced calculation capability over conventional analog systems, there is a strong restriction on the application of digital systems to the safety systems in nuclear power plants (NPPs). This is because we do not fully understand the reliability of digital systems, and therefore we cannot guarantee their safety. But as the need to introduce digital systems into the safety systems of NPPs increases, the need for quantitative analysis of the safety of digital systems is also increasing. NPPs, which are quite conservative in terms of safety, require proof of the reliability of digital systems when they are applied to NPPs. Moreover, digital systems applied to NPPs are required to increase the overall safety of the plants. However, it is very difficult to evaluate the reliability of digital systems because they include complex fault processing mechanisms at various levels of the system. Software is another obstacle in the reliability assessment of systems that require ultra-high reliability. In this work, the fault detection coverage of a digital system is evaluated using a simulated fault injection method. The target system is the Local Coincidence Logic (LCL) processor in the Digital Plant Protection System (DPPS). Since the LCL processor is difficult to reproduce exactly for evaluating the fault detection coverage, the LCL system had to be simplified. The simulations for evaluating the fault detection coverage of components are performed in two cases, and the failure rates of the components are evaluated using MIL-HDBK-217F. Using these results, the fault detection coverage of the simplified LCL system is evaluated. In the experiments, heartbeat signals were emitted at regular intervals after executing the logic without a self-checking algorithm. When faults are injected into the simplified system, fault occurrence can be detected by monitoring these heartbeat signals.
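The idea of simulated fault injection can be sketched as follows: flip a random bit of a register and count how often a simple self-check notices. The 16-bit word, the plausibility check and the trial count below are hypothetical stand-ins for the VHDL-level LCL model:

```python
import random

WORD = 16  # register width in bits (assumed)

def inject_bit_flip(value, bit):
    """Single bit-flip fault model."""
    return value ^ (1 << bit)

def self_check(value):
    """Toy plausibility check standing in for the system's self-diagnostics:
    the register is supposed to hold a value below 2**12."""
    return value < 2 ** 12

def estimate_coverage(trials=10_000, seed=1):
    """Monte Carlo estimate of fault detection coverage."""
    random.seed(seed)
    detected = 0
    for _ in range(trials):
        good = random.randrange(2 ** 12)                 # fault-free register contents
        bad = inject_bit_flip(good, random.randrange(WORD))
        if not self_check(bad):
            detected += 1
    return detected / trials

cov = estimate_coverage()
```

In this toy model only flips in the four high-order bits violate the range check, so the estimated coverage converges to about 0.25; a heartbeat-based check as used in the experiments would be modeled analogously, with detection meaning a missing or ill-timed heartbeat.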

  19. Fault-tolerant clock synchronization in distributed systems

    Science.gov (United States)

    Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.

    1990-01-01

    Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
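A convergence-averaging algorithm of the kind surveyed can be sketched in a few lines: each node discards the f highest and f lowest of the exchanged clock readings and averages the remainder, which bounds the influence of up to f arbitrarily faulty clocks. The readings below are made up:

```python
def fta_correction(readings, f):
    """Fault-tolerant averaging: drop the f highest and f lowest clock
    readings, then average the rest (tolerates up to f arbitrary faults)."""
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2f clocks")
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

# four good clocks within 2 time units of each other, one wildly faulty clock
readings = [100.0, 101.5, 99.5, 100.5, 1e6]
agreed = fta_correction(readings, 1)
```

Because the faulty reading is trimmed, the agreed value stays inside the range spanned by the correct clocks, which is the key convergence property such algorithms guarantee.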

  20. Faults detection approach using PCA and SOM algorithm in PMSG-WT system

    Directory of Open Access Journals (Sweden)

    Mohamed Lamine FADDA

    2016-07-01

    Full Text Available This paper presents a new approach for fault detection in an observable-data wind turbine with permanent magnet synchronous generator (WT-PMSG) system. The objective of the study is to illustrate how the combination SOM-PCA can be used to build multiple local PCA models for fault detection in the WT-PMSG system. The performance of the suggested method for fault detection in the system data is demonstrated by good results in simulation experiments.
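The PCA side of the approach can be sketched with a single global model and the squared prediction error (SPE) statistic; the SOM step of the paper, which partitions the operating space into multiple local PCA models, is omitted here, and the three-channel data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# healthy training data: 3 correlated sensor channels driven by one latent factor
factor = rng.normal(size=(500, 1))
X = factor @ np.array([[1.0, 0.8, -0.5]]) + 0.05 * rng.normal(size=(500, 3))
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:1].T                        # retain one principal direction

def spe(sample):
    """Squared prediction error: squared distance to the PCA subspace."""
    x = sample - mean
    return float(x @ x - (x @ P) @ (P.T @ x))

limit = np.quantile([spe(x) for x in X], 0.99)   # empirical 99% control limit
healthy = spe(X[0])
faulty = spe(X[0] + np.array([0.0, 1.0, 0.0]))   # sensor 2 biased by a fault
```

A sample whose SPE exceeds the control limit no longer obeys the correlation structure learned from healthy data and is flagged as faulty; a multi-local-PCA scheme applies the same test within the SOM cell closest to the sample's operating point.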

  1. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  2. Protection algorithm for a wind turbine generator based on positive- and negative-sequence fault components

    DEFF Research Database (Denmark)

    Zheng, Tai-Ying; Cha, Seung-Tae; Crossley, Peter A.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on positive- and negative-sequence fault components is proposed in the paper. The relay uses the magnitude of the positive-sequence component in the fault current to detect a fault on a parallel WTG, connected to the same power collection feeder, or a fault on an adjacent feeder; for these faults, the relay remains stable and inoperative. A fault on the power collection feeder or a fault on the collection bus, both of which require an instantaneous tripping response, are distinguished from an inter-tie fault or a grid fault, which require a delayed response; the magnitude of the negative-sequence component in the fault current is used to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using EMTP-RV. The scenarios involve changes in the position and type of fault, and the faulted phases. Results confirm that the relay operates as intended.
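The sequence quantities such a relay operates on come from the symmetrical-component transformation; a minimal sketch with unit phasors (illustrative values, not EMTP-RV waveforms):

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)   # the 120-degree rotation operator a

def sequence_components(ia, ib, ic):
    """Zero-, positive- and negative-sequence phasors of a three-phase set."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + A * ib + A**2 * ic) / 3
    i2 = (ia + A**2 * ib + A * ic) / 3
    return i0, i1, i2

# balanced set (pure positive sequence): negative sequence vanishes
_, i1_bal, i2_bal = sequence_components(1 + 0j, A**2, A)
# single energized phase (ib = ic = 0): all three components are equal
i0_f, i1_f, i2_f = sequence_components(1 + 0j, 0j, 0j)
```

A balanced load leaves no negative-sequence component, while an unbalanced fault produces one; this asymmetry between the positive- and negative-sequence magnitudes is exactly what the relay exploits to classify fault location and choose instantaneous or delayed tripping.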

  3. Research on the target coverage algorithms for 3D curved surface

    International Nuclear Information System (INIS)

    Sun, Shunyuan; Sun, Li; Chen, Shu

    2016-01-01

    To solve target covering problems in three-dimensional space, this paper innovatively puts forward a deployment strategy for the target points and draws on the differential evolution (DE) algorithm to optimize the location coordinates of the sensor nodes, so as to realize coverage of all the target points on a 3-D surface with a minimal number of sensor nodes. Firstly, the three-dimensional perception model of the sensor nodes is built, and the blind area that exists when sensor nodes sense target points on a 3-D surface is innovatively put forward; the feasibility of solving the target coverage problem on a 3-D surface with the DE algorithm is then proved theoretically, reflecting the fault tolerance of the algorithm.
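A minimal DE/rand/1/bin sketch applied to a toy sensor-placement objective (minimize the worst distance from one sensor to three fixed target points); the population size, F, CR and the geometry are illustrative assumptions, not the paper's setup:

```python
import random

def differential_evolution(f, bounds, pop_size=20, gens=100, F=0.7, CR=0.9, seed=3):
    """Minimal DE/rand/1/bin for box-constrained minimization."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)          # force at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)         # clamp to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                       # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# place one sensor to minimize the worst distance to three fixed target points
targets = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
def worst_dist(x):
    return max(((x[0] - tx) ** 2 + (x[1] - ty) ** 2) ** 0.5 for tx, ty in targets)

xy, d = differential_evolution(worst_dist, [(-5, 5), (-5, 5)])
```

For this right triangle the minimax point is the midpoint of the hypotenuse, (2, 1.5), at distance 2.5 from all three targets, which the DE run should recover to within a small tolerance.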

  4. Bayesian fault detection and isolation using Field Kalman Filter

    Science.gov (United States)

    Baranowski, Jerzy; Bania, Piotr; Prasad, Indrajeet; Cong, Tian

    2017-12-01

    Fault detection and isolation is crucial for the efficient operation and safety of any industrial process. A variety of methods from all areas of data analysis have been employed to solve this kind of task, such as Bayesian reasoning and the Kalman filter. In this paper, the authors use a discrete Field Kalman Filter (FKF) to detect and recognize faulty conditions in a system. The proposed approach, devised for stochastic linear systems, allows for the analysis of faults that can be expressed both as parameter and as disturbance variations. The approach is formulated for situations when the fault catalog is known, resulting in an algorithm that allows the estimation of probability values. Additionally, a variant of the algorithm with greater numerical robustness is presented, based on the computation of logarithmic odds. The operation of the proposed algorithm is illustrated with numerical examples, and both its merits and limitations are critically discussed and compared with the traditional EKF.

  5. A survey of fault diagnosis technology

    Science.gov (United States)

    Riedesel, Joel

    1989-01-01

    Existing techniques and methodologies for fault diagnosis are surveyed. The techniques run the gamut from theoretical artificial intelligence work to conventional software engineering applications. They are shown to define a spectrum of implementation alternatives where tradeoffs determine their position on the spectrum. Various tradeoffs include execution time limitations and memory requirements of the algorithms as well as their effectiveness in addressing the fault diagnosis problem.

  6. Fault Diagnosis and Fault-Tolerant Control of Wind Turbines via a Discrete Time Controller with a Disturbance Compensator

    Directory of Open Access Journals (Sweden)

    Yolanda Vidal

    2015-05-01

    Full Text Available This paper develops fault diagnosis (FD) and fault-tolerant control (FTC) of pitch actuators in wind turbines. This is accomplished by combining a disturbance compensator with a controller, both of which are formulated in the discrete time domain. The disturbance compensator has a dual purpose: to estimate the actuator fault (which is used by the FD algorithm) and to design the discrete time controller to obtain an FTC. That is, the pitch actuator faults are estimated, and then the pitch control laws are appropriately modified to achieve an FTC with behavior comparable to the fault-free case. The performance of the FD and FTC schemes is tested in simulations with the aero-elastic code FAST.

  7. Searching for Seismically Active Faults in the Gulf of Cadiz

    Science.gov (United States)

    Custodio, S.; Antunes, V.; Arroucau, P.

    2015-12-01

    The repeated occurrence of large magnitude earthquakes in southwest Iberia in historical and instrumental times suggests the presence of active fault segments in the region. However, due to an apparently diffuse seismicity pattern defining a broad region of distributed deformation west of Gibraltar Strait, the question of the location, dimension and geometry of such structures is still open to debate. We recently developed a new algorithm for earthquake location in 3D complex media with laterally varying interface depths, which allowed us to relocate 2363 events having occurred from 2007 to 2013, using P- and S-wave catalog arrival times obtained from the Portuguese Meteorological Institute (IPMA, Instituto Portugues do Mar e da Atmosfera), for a study area lying between 8.5˚W and 5˚W in longitude and 36˚ and 37.5˚ in latitude. The most remarkable change in the seismicity pattern after relocation is an apparent concentration of events, in the North of the Gulf of Cadiz, along a low angle northward-dipping plane rooted at the base of the crust, which could indicate the presence of a major fault. If confirmed, this would be the first structure clearly illuminated by seismicity in a region that has unleashed large magnitude earthquakes. Here, we present results from the joint analysis of focal mechanism solutions and waveform similarity between neighboring events from waveform cross-correlation in order to assess whether those earthquakes occur on the same fault plane.

  8. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    Science.gov (United States)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned traditional decision tree problems. Cl-DT overcomes the shortcomings of the existing DT algorithms, which create a node for each value of a given attribute, and has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome these shortcomings, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines Isomap and Cl-DT together. Because isometric mapping methods use geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction method for exposed carbonate rocks is of high accuracy; (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  9. AN ISOMETRIC MAPPING BASED CO-LOCATION DECISION TREE ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Zhou

    2018-05-01

    Full Text Available Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned traditional decision tree problems. Cl-DT overcomes the shortcomings of the existing DT algorithms, which create a node for each value of a given attribute, and has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome these shortcomings, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines Isomap and Cl-DT together. Because isometric mapping methods use geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction method for exposed carbonate rocks is of high accuracy; (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  10. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  11. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), a simulation model for fault injection is developed in this work to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, and the distribution of faults over locations is chosen randomly from a uniform probability distribution. Using this model, we predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS) of a nuclear power plant, considering four software operational profiles. The results show that the software masking effect on hardware faults should be properly accounted for in order to predict system dependability accurately in the operational phase, because the masking effect takes different values depending on the operational profile

  12. Algorithm for locating the extremum of a multi-dimensional constrained function and its application to the PPPL Hybrid Study

    International Nuclear Information System (INIS)

    Bathke, C.

    1978-03-01

    A description is presented of a general algorithm for locating the extremum of a multi-dimensional constrained function. The algorithm employs a series of techniques dominated by random shrinkage, steepest descent, and adaptive creeping. A discussion follows of the algorithm's application to a real-world problem, namely the optimization of the price of electricity, P/sub eh/, from a hybrid fusion-fission reactor. On the basis of comparisons with other optimization schemes of a survey nature, the algorithm is concluded to yield a good approximation to the location of a function's optimum

  13. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of time and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).
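
    The hidden-Markov decoding step described above can be sketched with the classic Viterbi algorithm over a two-state (normal/faulty) model. All transition and emission probabilities below are illustrative values, not parameters from the article.

```python
# Hypothetical two-state fault model; observations are a discretized
# sensor residual, "low" or "high".
states = ["normal", "faulty"]
start = {"normal": 0.95, "faulty": 0.05}
trans = {"normal": {"normal": 0.98, "faulty": 0.02},
         "faulty": {"normal": 0.05, "faulty": 0.95}}
emit = {"normal": {"low": 0.9, "high": 0.1},
        "faulty": {"low": 0.2, "high": 0.8}}

def viterbi(obs):
    # Most likely hidden state sequence given the observation sequence.
    v = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] * trans[p][s])
            col[s] = v[-1][prev] * trans[prev][s] * emit[s][o]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = ["low", "low", "high", "high", "high"]
print(viterbi(obs))  # ['normal', 'normal', 'faulty', 'faulty', 'faulty']
```

    Note that, as the abstract says, no dynamic model of the monitored system is needed, only the state-transition and emission statistics.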

  14. Optimal Location and Sizing of UPQC in Distribution Networks Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Seyed Abbas Taher

    2012-01-01

    Full Text Available Differential evolution (DE) algorithm is used to determine the optimal location of a unified power quality conditioner (UPQC), considering its size, in radial distribution systems. The problem is formulated to find the optimum location of the UPQC based on an objective function (OF) defined for improving voltage and current profiles, reducing power loss, and minimizing investment costs, considering the OF's weighting factors. Hence, a steady-state model of the UPQC is derived and embedded in a forward/backward sweep load flow. Studies are performed on the IEEE 33-bus and 69-bus standard distribution networks. Accuracy was evaluated by reapplying the procedures using both the genetic algorithm (GA) and the immune algorithm (IA). Comparative results indicate that DE is more capable of approaching the global optimum of the OF and reaching all the desired conditions than GA and IA.
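
    The DE/rand/1/bin scheme the abstract relies on is compact enough to sketch. This is a generic minimizer on a toy sphere function, not the UPQC placement formulation; population size, F, and CR are illustrative defaults.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.7, CR=0.9, gens=200, seed=1):
    # DE/rand/1/bin: mutate with a scaled difference of two random members,
    # apply binomial crossover, then greedy selection against the parent.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([x for x in range(np_) if x != i], 3)
            trial, jr = [], rng.randrange(dim)
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            if f(trial) <= f(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5, 5)] * 3)
print(sphere(best))  # close to 0
```

    In the paper's setting, the decision vector would encode the candidate bus and UPQC size, and f would be the weighted OF evaluated through the sweep load flow.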

  15. Data Structures: Sequence Problems, Range Queries, and Fault Tolerance

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund

    The focus of this dissertation is on algorithms, in particular data structures that give provably efficient solutions for sequence analysis problems, range queries, and fault tolerant computing. The work presented in this dissertation is divided into three parts. In Part I we consider algorithms... performance and money in the design of today's high-speed memory technologies. Hardware, power failures, and environmental conditions such as cosmic rays and alpha particles can all alter the memory in unpredictable ways. In applications where large memory capacities are needed at low cost, it makes sense to assume that the algorithms themselves are in charge of dealing with memory faults. We investigate searching, sorting and counting algorithms and data structures that provably return sensible information in spite of memory corruptions...
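
    A minimal illustration of computing reliably in spite of memory corruptions, assuming values are stored in 2k+1 replicated cells so that up to k corrupted copies are outvoted. This replication sketch is a standard fault-tolerance idea, not a specific construction from the dissertation.

```python
from collections import Counter

def resilient_read(copies):
    # Read a value stored in 2k+1 replicated memory cells; as long as at
    # most k copies were corrupted, the majority value is the true one.
    value, votes = Counter(copies).most_common(1)[0]
    assert votes > len(copies) // 2, "too many corruptions to recover"
    return value

stored = [42, 42, 999, 42, -1]   # two of five copies corrupted
print(resilient_read(stored))    # 42
```

    Resilient data structures aim to get the same guarantee with far less redundancy than full replication.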

  16. Fault tolerant control of multivariable processes using auto-tuning PID controller.

    Science.gov (United States)

    Yu, Ding-Li; Chang, T K; Yu, Ding-Wen

    2005-02-01

    Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed comprising an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn the system's post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method; therefore, the model-predicted tracking error is guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.

  17. A method of reconstructing complex stratigraphic surfaces with multitype fault constraints

    Science.gov (United States)

    Deng, Shi-Wu; Jia, Yu; Yao, Xing-Miao; Liu, Zhi-Ning

    2017-06-01

    The construction of complex stratigraphic surfaces is widely employed in many fields, such as petroleum exploration, geological modeling, and geological structure analysis. It also serves as an important foundation for data visualization and visual analysis in these fields. The existing surface construction methods have several deficiencies and face various difficulties, such as the presence of multitype faults and the roughness of the resulting surfaces. In this paper, a surface modeling method that uses geometric partial differential equations (PDEs) is introduced for the construction of stratigraphic surfaces. It effectively solves the problem of surface roughness caused by the irregularity of stratigraphic data distribution. To cope with the presence of multitype complex faults, a two-way projection algorithm between three-dimensional space and a two-dimensional plane is proposed. Using this algorithm, a unified method based on geometric PDEs is developed for dealing with multitype faults. Moreover, the corresponding geometric PDE is derived, and an algorithm based on an evolutionary solution is developed. Tests of the proposed algorithm on real data verify its computational efficiency and its ability to handle irregular data distributions. In particular, it can reconstruct faulted surfaces, especially those with overthrust faults.

  18. Adaptive Fault-Tolerant Routing in 2D Mesh with Cracky Rectangular Model

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2014-01-01

    Full Text Available This paper mainly focuses on routing in two-dimensional mesh networks. We propose a novel faulty block model, the cracky rectangular block, for fault-tolerant adaptive routing. All the faulty nodes and faulty links are surrounded by this type of block, which is a convex structure, in order to avoid routing livelock. Additionally, the model constructs an interior spanning forest for each block in order to keep in touch with the nodes inside each block. The procedure for block construction is dynamic and fully distributed, and the construction algorithm is simple and easy to implement. The block is fully adaptive: it dynamically adjusts its scale in accordance with the state of the network, whether faults emerge or recover, without shutdown of the system. Based on this model, we also develop a distributed fault-tolerant routing algorithm. We then give a formal proof that this algorithm guarantees messages will always reach their destinations if and only if the destination nodes remain connected to the mesh network. The new model and routing algorithm thus maximize the availability of the nodes in the network, a noticeable overall improvement in the fault tolerance of the system.

  19. Fault trees for decision making in systems analysis

    International Nuclear Information System (INIS)

    Lambert, H.E.

    1975-01-01

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure. (80 figures, 20 tables)

  20. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...

  1. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Full Text Available Fault diagnosis in rotating machinery is significant for avoiding serious accidents; thus, an accurate and timely diagnosis method is necessary. With the breakthroughs in deep learning algorithms, some intelligent methods, such as the deep belief network (DBN) and deep convolutional neural network (DCNN), have been developed with satisfactory performance for machinery fault diagnosis. However, only a few of these methods properly deal with the noise that exists in practical situations, and the denoising methods require extensive professional experience. Accordingly, rethinking fault diagnosis methods based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. An integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE) is applied both to denoise random noise in the raw signals and to represent fault features in fault pattern diagnosis, for both rolling bearing and gearbox faults, and is trained in a greedy layer-wise fashion. Finally, experimental validation demonstrates that the proposed method has better diagnosis accuracy than DBN, particularly in the presence of noise, with an advantage of approximately 7% in fault diagnosis accuracy.

  2. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  3. Fault Reconnaissance Agent for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Elhadi M. Shakshuki

    2010-01-01

    Full Text Available One of the key prerequisites for a scalable, effective and efficient sensor network is the utilization of low-cost, low-overhead and highly resilient fault-inference techniques. To this end, we propose an intelligent agent system with problem-solving capability to address the issue of fault inference in sensor network environments. The intelligent agent system is designed and implemented at the base-station side. The core of the agent system, the problem solver, implements a fault-detection inference engine which harnesses the Expectation Maximization (EM) algorithm to estimate the fault probabilities of sensor nodes. To validate the correctness and effectiveness of the intelligent agent system, a set of experiments was conducted in a wireless sensor testbed. The experimental results show that our intelligent agent system is able to precisely estimate the fault probability of sensor nodes.
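
    A toy version of EM-based fault-probability estimation: each node's anomaly count is modeled as a two-component binomial mixture (healthy vs. faulty), and EM alternates between the posterior fault probability per node and the faulty-prior update. The anomaly rates and counts below are hypothetical, not the paper's model.

```python
def em_fault_probability(counts, trials, p_ok=0.1, p_bad=0.9, iters=50):
    # EM for a two-component binomial mixture: each node is faulty with
    # unknown prior theta; faulty nodes flag anomalies at rate p_bad,
    # healthy ones at rate p_ok (both rates assumed known here).
    theta = 0.5
    post = []
    for _ in range(iters):
        post = []
        for c, n in zip(counts, trials):
            lb = theta * (p_bad ** c) * ((1 - p_bad) ** (n - c))
            lh = (1 - theta) * (p_ok ** c) * ((1 - p_ok) ** (n - c))
            post.append(lb / (lb + lh))          # E-step: P(faulty | data)
        theta = sum(post) / len(post)            # M-step: update the prior
    return theta, post

# Four nodes, 20 readings each; node 3 flags far more anomalies.
counts = [2, 1, 3, 17]
theta, post = em_fault_probability(counts, [20] * 4)
print(post[3] > 0.99 and post[0] < 0.01)  # True
```

    The agent's inference engine would run a step like this at the base station over the nodes' reported statistics.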

  4. Navigation System Fault Diagnosis for Underwater Vehicle

    DEFF Research Database (Denmark)

    Falkenberg, Thomas; Gregersen, Rene Tavs; Blanke, Mogens

    2014-01-01

    This paper demonstrates fault diagnosis on unmanned underwater vehicles (UUV) based on analysis of the structure of the nonlinear dynamics. Residuals are generated using different approaches in structural analysis followed by statistical change detection. Hypothesis testing thresholds are made signal-based to cope with non-ideal properties seen in real data. Detection of both sensor and thruster failures is demonstrated. Isolation is performed using the residual signature of detected faults, and the change detection algorithm is used to assess the severity of faults by estimating their magnitude...

  5. High-resolution fault image from accurate locations and focal mechanisms of the 2008 swarm earthquakes in West Bohemia, Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Vavryčuk, Václav; Bouchaala, Fateh; Fischer, Tomáš

    2013-01-01

    Roč. 590, April (2013), s. 189-195 ISSN 0040-1951 R&D Projects: GA ČR(CZ) GAP210/12/1491; GA MŠk LM2010008 EU Projects: European Commission(XE) 230669 - AIM Institutional support: RVO:67985530 Keywords: earthquake location * failure criterion * fault friction * focal mechanism * tectonic stress Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 2.866, year: 2013

  6. Eigenvector/eigenvalue analysis of a 3D current referential fault detection and diagnosis of an induction motor

    International Nuclear Information System (INIS)

    Pires, V. Fernao; Martins, J.F.; Pires, A.J.

    2010-01-01

    In this paper an integrated approach for on-line induction motor fault detection and diagnosis is presented. The need to ensure continuous and safe operation of induction motors involves preventive maintenance procedures combined with fault diagnosis techniques. The proposed approach uses an automatic three-step algorithm. Firstly, the induction motor stator currents are measured, giving typical patterns that can be used to identify the fault. Secondly, the eigenvectors/eigenvalues of the 3D current referential are computed. Finally, the proposed algorithm discerns whether the motor is healthy or not and reports the extent of the fault. Furthermore, the algorithm is able to distinguish different faults (stator winding faults or broken bars). The proposed approach was experimentally implemented and its performance verified under various types of working conditions.
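
    The eigen-analysis step can be sketched with power iteration, which extracts the dominant eigenvalue/eigenvector of a symmetric matrix such as the covariance of the three phase currents. The matrix below is an illustrative placeholder, not measured motor data.

```python
import math
import random

def dominant_eigenvector(m, iters=100, seed=0):
    # Power iteration: repeated multiplication by the (symmetric) matrix
    # converges to the eigenvector of the largest-magnitude eigenvalue.
    rng = random.Random(seed)
    v = [rng.random() for _ in m]
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in m]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient gives the corresponding eigenvalue.
    lam = sum(v[i] * sum(m[i][j] * v[j] for j in range(len(v)))
              for i in range(len(v)))
    return lam, v

# Covariance-like symmetric 3x3 matrix of three phase currents (hypothetical).
M = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
lam, v = dominant_eigenvector(M)
print(round(lam, 3))  # 4.732, i.e. 3 + sqrt(3)
```

    Deviations of the eigenvalue/eigenvector pattern from the healthy-motor signature are what the third step of the algorithm classifies.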

  7. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and investigate its properties through numerical simulations. From the numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and that the dip of the maximum eigenvector closely follows the dip of a normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor is used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponds to conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
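
    On a 2D profile the gravity gradient tensor is a symmetric 2x2 matrix, so the maximum eigenvector and its dip have a closed form. The tensor components below are hypothetical values chosen so the principal axis dips 45 degrees; this is a sketch of the geometry, not the paper's processing chain.

```python
import math

def max_eigenvector_dip(gxx, gxz, gzz):
    # Closed-form eigen-decomposition of a symmetric 2x2 gravity gradient
    # tensor on a profile; returns the dip (degrees from horizontal) of
    # the maximum eigenvector.
    half_trace = (gxx + gzz) / 2.0
    diff = (gxx - gzz) / 2.0
    r = math.hypot(diff, gxz)
    lam_max = half_trace + r
    # An eigenvector for lam_max is (gxz, lam_max - gxx).
    vx, vz = gxz, lam_max - gxx
    if vx == 0 and vz == 0:      # isotropic tensor: direction undefined
        return 0.0
    return abs(math.degrees(math.atan2(vz, vx)))

print(round(max_eigenvector_dip(1.0, 2.0, 1.0), 1))  # 45.0
```

    For a reverse fault one would instead take the minimum eigenvector (half_trace - r), per the abstract's fault-type rule.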

  8. Quaternary faulting in the Tatra Mountains, evidence from cave morphology and fault-slip analysis

    Directory of Open Access Journals (Sweden)

    Szczygieł Jacek

    2015-06-01

    Full Text Available Tectonically deformed cave passages in the Tatra Mts (Central Western Carpathians) indicate some fault activity during the Quaternary. Displacements occur in the youngest passages of the caves, indicating (based on previous U-series dating of speleothems) an Eemian or younger age for those faults, and so one tectonic stage. On the basis of stress analysis and geomorphological observations, two different mechanisms are proposed as responsible for the development of these displacements. The first mechanism concerns faults that are located above the valley bottom and at a short distance from the surface, with fault planes oriented sub-parallel to the slopes. The radial, horizontal extension and vertical σ1, which is identical with gravity, indicate that these faults are the result of gravity sliding, probably caused by relaxation after incision of valleys, and not directly of tectonic activity. The second mechanism is tilting of the Tatra Mts. The faults operated under WNW-ESE oriented extension with σ1 plunging steeply toward the west. Such a stress field led to normal dip-slip or oblique-slip displacements. The faults are located under the valley bottom and/or opposite or oblique to the slopes. The process involved the pre-existing weakest planes in the rock complex: (i) in massive limestone, mostly faults and fractures; (ii) in thin-bedded limestone, mostly inter-bedding planes. Thin-bedded limestones dipping steeply to the south are of particular interest. Tilting toward the N caused the hanging walls to move under the massif and not toward the valley, proving that the cause of these movements was tectonic activity and not gravity.

  9. Searching for Active Faults in the Western Eurasia-Nubia plate boundary

    Science.gov (United States)

    Antunes, Veronica; Custodio, Susana; Arroucau, Pierre; Carrilho, Fernando

    2016-04-01

    The repeated occurrence of large magnitude earthquakes in southwest Iberia in historical and instrumental times suggests the presence of active faults in the region. However, the region undergoes slow deformation, which results in low rates of seismic activity, and the location, dimension and geometry of active structures remains unsettled. We recently developed a new algorithm for earthquake location in 3D complex media with laterally varying interface depths, which allowed us to relocate 2363 events that occurred from 2007 to 2013. The method takes as inputs P- and S-wave catalog arrival times obtained from the Portuguese Meteorological Institute (IPMA, Instituto Portugues do Mar e da Atmosfera), for a study area defined by 8.5°W < lon < 5°W and 36° < lat < 37.5°. After relocation, we obtain a lineation of events in the Guadalquivir bank region, in the northern Gulf of Cadiz. The lineation defines a low-angle northward-dipping plane rooted at the base of the crust, which could indicate the presence of a major fault. We provide seismological evidence for the existence of this seemingly active structure based on earthquake relocations, focal mechanisms and waveform similarity between neighboring events.

  10. Real-Time Fault Tolerant Networking Protocols

    National Research Council Canada - National Science Library

    Henzinger, Thomas A

    2004-01-01

    We made significant progress in the areas of video streaming, wireless protocols, mobile ad-hoc and sensor networks, peer-to-peer systems, fault tolerant algorithms, dependability and timing analysis...

  11. Integrated fault tree development environment

    International Nuclear Information System (INIS)

    Dixon, B.W.

    1986-01-01

    Probabilistic Risk Assessment (PRA) techniques are utilized in the nuclear industry to perform safety analyses of complex defense-in-depth systems. A major effort in PRA development is fault tree construction. The Integrated Fault Tree Environment (IFTREE) is an interactive, graphics-based tool for fault tree design. IFTREE provides integrated building, editing, and analysis features on a personal workstation. The design philosophy of IFTREE is presented, and the interface is described. IFTREE utilizes a unique rule-based solution algorithm founded in artificial intelligence (AI) techniques. The impact of the AI approach on the program design is stressed. IFTREE has been developed to handle the design and maintenance of full-size living PRAs and is currently in use

  12. Identification tibia and fibula bone fracture location using scanline algorithm

    Science.gov (United States)

    Muchtar, M. A.; Simanjuntak, S. E.; Rahmat, R. F.; Mawengkang, H.; Zarlis, M.; Sitompul, O. S.; Winanto, I. D.; Andayani, U.; Syahputra, M. F.; Siregar, I.; Nasution, T. H.

    2018-03-01

    Fracture is a condition in which the continuity of the bone is damaged, usually caused by stress, trauma or weak bones. The tibia and fibula are two separate long bones in the lower leg, closely linked at the knee and ankle. Tibia/fibula fractures often happen when more force is applied to the bone than it can withstand. One way to identify the location of a tibia/fibula fracture is to read the X-ray image manually. Visual examination requires more time and allows for errors in identification due to noise in the image. In addition, reading an X-ray requires enhancement against the background to make the objects in the X-ray image appear more clearly. Therefore, a method is required to help the radiologist identify the location of a tibia/fibula fracture. We propose several image-processing techniques for processing the cruris image and a scanline algorithm for identifying the fracture location. The results show that our proposed method is able to identify the fracture location with an accuracy of up to 87.5%.
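
    A scanline pass over a segmented bone mask can be sketched as follows: scan row by row, measure the bone width on each line, and flag rows whose width collapses relative to the median. The synthetic mask and threshold are illustrative, not the paper's pipeline.

```python
def fracture_rows(mask, drop=0.5):
    # Scan the binary bone mask line by line; rows whose bone width falls
    # well below the median width are flagged as a possible fracture site.
    widths = [sum(row) for row in mask]
    median = sorted(widths)[len(widths) // 2]
    return [i for i, w in enumerate(widths) if w < drop * median]

# Synthetic 8-row mask of a long bone; row 4 is nearly discontinuous.
mask = [
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
]
print(fracture_rows(mask))  # [4]
```

    In the real method, the mask would come from the preceding image-processing steps (background enhancement and segmentation of the cruris X-ray).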

  13. Sequential fault diagnosis for mechatronics system using diagnostic hybrid bond graph and composite harmony search

    Directory of Open Access Journals (Sweden)

    Ming Yu

    2015-12-01

    Full Text Available This article proposes a sequential fault diagnosis method to handle asynchronous distinct faults using a diagnostic hybrid bond graph and composite harmony search. The faults under consideration include fault modes, abrupt faults, and intermittent faults. The faults can occur at different time instants, which adds to the difficulty of decision making for fault diagnosis, because an earlier occurring fault can exhibit a symptom that masks the symptom of a fault occurring later. To solve this problem, a sequential identification algorithm is developed in which the identification task is reactivated based on two conditions. The first condition is that the later occurring fault has at least one inconsistent coherence-vector element that is consistent in the coherence vector of the earlier occurring fault; the second condition is that the existing fault coherence vector has the ability to hide other faults and the second-level residual exceeds the threshold. A new composite harmony search, capable of handling continuous variables and binary variables simultaneously, is proposed for the identification task. Experiments on a mobile robot system are conducted to assess the proposed sequential fault diagnosis algorithm.
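
    The continuous part of a harmony search can be sketched in a few lines: new harmonies are drawn from the harmony memory with rate hmcr, pitch-adjusted with rate par, or sampled at random, and replace the worst memory member if better. This is a generic minimizer on a toy function, not the composite (continuous plus binary) variant of the article; all parameters are illustrative.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=3):
    # Basic continuous harmony search.
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # memory consideration
                v = rng.choice(hm)[j]
                if rng.random() < par:           # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                # random selection
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):
            hm[worst] = new                      # replace the worst harmony
    return min(hm, key=f)

sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere, [(-5, 5)] * 2)
print(sphere(best))  # close to 0
```

    The composite variant of the paper additionally carries binary decision variables in each harmony, sampled and adjusted with discrete rules.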

  14. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route-discovery flooding to a cylindrical request zone, which reduces the routing overhead and decreases broadcast storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem. This makes WSNHA-LBAR more adaptable to changes of the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.
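
    The cylindrical request zone reduces to a simple geometric test in 2D: a node forwards a route request only if it lies within a given radius of the source-destination segment. The function and coordinates below are an illustrative sketch of that membership test, not the WSNHA-LBAR implementation.

```python
import math

def in_request_zone(node, src, dst, radius):
    # A node joins route discovery only if it lies within the given radius
    # of the src-dst segment (a cylinder in 2D, with end caps).
    (px, py), (ax, ay), (bx, by) = node, src, dst
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0,
        ((px - ax) * abx + (py - ay) * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby   # closest point on the segment
    return math.hypot(px - cx, py - cy) <= radius

src, dst = (0, 0), (10, 0)
print(in_request_zone((5, 1), src, dst, 2),   # True: near the axis
      in_request_zone((5, 5), src, dst, 2))   # False: outside the cylinder
```

    The Bayesian self-adaptation in the paper then widens or narrows the radius based on the observed success rate of route discovery.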

  15. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    Science.gov (United States)

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
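
    The marching step can be illustrated with a drastically simplified, linear 1D Kalman filter tracking one utility's depth from one scan cross-section to the next; the paper uses a full EKF with association rules, and the noise variances and measurements below are hypothetical.

```python
def march_track(depths, q=0.01, r=0.25):
    # 1D Kalman filter marching a utility's depth across scan
    # cross-sections (constant-depth model; q = process noise variance,
    # r = measurement noise variance).
    x, p = depths[0], 1.0
    track = [x]
    for z in depths[1:]:
        p += q                    # predict: depth assumed locally constant
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the new hypothesized detection
        p *= (1 - k)
        track.append(x)
    return track

# Noisy depth detections (metres) of one utility across six cross-sections.
measured = [1.2, 1.0, 1.3, 1.1, 1.2, 1.15]
track = march_track(measured)
print(all(abs(d - 1.15) < 0.15 for d in track))  # True
```

    The MCS algorithm extends this idea to full 3D utility state vectors, with rules for spawning new tracks and associating detections on each new cross-section.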

  16. Stafford fault system: 120 million year fault movement history of northern Virginia

    Science.gov (United States)

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  17. Integrated seismic interpretation of the Carlsberg Fault zone, Copenhagen, Denmark

    DEFF Research Database (Denmark)

    Nielsen, Lars; Thybo, Hans; Jørgensen, Mette Iwanouw

    2005-01-01

We locate the concealed Carlsberg Fault zone along a 12-km-long trace in the Copenhagen city centre by seismic refraction, reflection and fan profiling. The Carlsberg Fault is located in a NNW-SSE striking fault system in the border zone between the Danish Basin and the Baltic Shield. Recent earthquakes indicate that this area is tectonically active. A seismic refraction study across the Carlsberg Fault shows that the fault zone is a low-velocity zone and marks a change in seismic velocity structure. A normal incidence reflection seismic section shows a coincident flower-like structure. The fault zone is a shadow zone to shots detonated outside the fault zone. Finite-difference wavefield modelling supports the interpretations of the fan recordings. Our fan recording approach facilitates cost-efficient mapping of fault zones in densely urbanized areas where seismic normal...

  18. Locating Leaks with TrustRank Algorithm Support

    Directory of Open Access Journals (Sweden)

    Luísa Ribeiro

    2015-03-01

Full Text Available This paper presents a methodology to quantify and to locate leaks. The original contribution is the use of a tool based on the TrustRank algorithm for the selection of nodes for pressure monitoring. The results from the methodologies presented here are: (I) a sensitivity analysis of the number of pressure transducers on the quality of the final solution; (II) a reduction of the number of pipes to be inspected; and (III) a focus on the problematic pipes, which allows better office planning of the inspection works to perform in the field. To obtain these results, a methodology for the identification of probable leaky pipes and an estimate of their leakage flows is also presented. The potential of the methodology is illustrated with several case studies, considering different levels of water losses and different sets of pressure monitoring nodes. The results are discussed and the solutions obtained show the benefits of the developed methodologies.
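The core of TrustRank is a PageRank iteration whose teleportation is biased toward a trusted seed set. A minimal sketch follows; the graph, seed set and damping factor below are hypothetical examples, not the paper's network model:

```python
def trustrank(adj, seeds, damping=0.85, iters=100):
    """Biased PageRank: teleportation is restricted to the trusted seed nodes."""
    n = len(adj)
    out_deg = [sum(row) for row in adj]
    # Teleport distribution: uniform over the seed set only.
    d = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    t = d[:]
    for _ in range(iters):
        nxt = [0.0] * n
        for i in range(n):
            if out_deg[i] == 0:
                # Dangling node: return its mass to the seed distribution.
                for j in range(n):
                    nxt[j] += t[i] * d[j]
            else:
                for j in range(n):
                    if adj[i][j]:
                        nxt[j] += t[i] / out_deg[i]
        t = [damping * nxt[i] + (1 - damping) * d[i] for i in range(n)]
    return t

# 4-node example network: node 0 is the trusted seed.
adj = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]
scores = trustrank(adj, seeds={0})
ranking = sorted(range(4), key=lambda i: -scores[i])
print(ranking)
```

Nodes ranked this way can then be short-listed as candidate pressure-monitoring locations.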

  19. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    Science.gov (United States)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and applying it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in the case of inconsistent discontinuities between these models, joint inversion can retain the advantages of the individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust, probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low-velocity layers and a high-conductivity layer are observed in the middle-lower crust, probably indicating mid-crustal channel flow. Along the Longmenshan fault zone, a high-conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  20. Automatic picking of direct P, S seismic phases and fault zone head waves

    Science.gov (United States)

    Ross, Z. E.; Ben-Zion, Y.

    2014-10-01

    We develop a set of algorithms for automatic detection and picking of direct P and S waves, as well as fault zone head waves (FZHW), generated by earthquakes on faults that separate different lithologies and recorded by local seismic networks. The S-wave picks are performed using polarization analysis and related filters to remove P-wave energy from the seismograms, and utilize STA/LTA and kurtosis detectors in tandem to lock on the phase arrival. The early portions of P waveforms are processed with STA/LTA, kurtosis and skewness detectors for possible first-arriving FZHW. Identification and picking of direct P and FZHW is performed by a multistage algorithm that accounts for basic characteristics (motion polarities, time difference, sharpness and amplitudes) of the two phases. The algorithm is shown to perform well on synthetic seismograms produced by a model with a velocity contrast across the fault, and observed data generated by earthquakes along the Parkfield section of the San Andreas fault and the Hayward fault. The developed techniques can be used for systematic processing of large seismic waveform data sets recorded near major faults.
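The STA/LTA stage of such pickers can be sketched as follows; the window lengths, trigger threshold and synthetic trace are illustrative choices, not the authors' settings:

```python
import math

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA: ratio of short-term to long-term average signal energy."""
    e = [v * v for v in x]
    ratio = [0.0] * len(x)
    for i in range(nlta, len(x)):
        sta = sum(e[i - nsta:i]) / nsta
        lta = sum(e[i - nlta:i]) / nlta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Synthetic trace: weak background oscillation, "P arrival" at sample 200.
trace = [0.1 * math.sin(0.3 * i) for i in range(400)]
for i in range(200, 260):
    trace[i] += 2.0 * math.sin(1.1 * i)

r = sta_lta(trace, nsta=10, nlta=100)
onset = next(i for i, v in enumerate(r) if v > 5.0)
print(onset)
```

In practice a kurtosis or skewness detector is run alongside the ratio above to refine the pick onset.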

  1. Simulation-driven machine learning: Bearing fault classification

    Science.gov (United States)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high-resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, ranging from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
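The DTW distance underlying such a classifier can be sketched in a few lines; the toy waveforms below are hypothetical, standing in for measured bearing vibration segments:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences, O(len(a)*len(b))."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow match, insertion, or deletion along the warping path.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A time-shifted copy of a waveform stays close under DTW,
# while a genuinely different waveform does not.
healthy = [0, 1, 2, 1, 0, 0, 0, 0]
shifted = [0, 0, 0, 1, 2, 1, 0, 0]
faulty  = [0, 3, 0, 3, 0, 3, 0, 3]
print(dtw_distance(healthy, shifted))
print(dtw_distance(healthy, faulty))
```

This shift-invariance is what makes DTW attractive as a parameter-free template matcher: a nearest-template rule over DTW distances already yields a classifier.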

  2. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    Science.gov (United States)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small quantity of large-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated in time and space to determine hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to small aperture, a minimum aperture threshold is employed. The algorithm refines location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. The method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic

  3. Earthquake disaster mitigation of Lembang Fault West Java with electromagnetic method

    International Nuclear Information System (INIS)

    Widodo

    2015-01-01

The Lembang fault is located around eight kilometers from Bandung City, West Java, Indonesia. The fault runs through densely populated settlements and a tourism area. It is an active fault structure with increasing seismic activity, where the 28 August 2011 earthquake occurred. The seismic response at the site is strongly influenced by local geological conditions. The ambient noise measurements from the western part of this fault strongly imply a complex 3-D tectonic setting. Hence, near-surface electromagnetic (EM) measurements are carried out to understand the location of the local active fault and the top of the basement structure of the research area. The transient electromagnetic (TEM) measurements are carried out along three profiles, which include 35 TEM soundings. The results indicate that TEM data give a detailed conductivity distribution of the fault structure in the study area

  4. Earthquake disaster mitigation of Lembang Fault West Java with electromagnetic method

    Energy Technology Data Exchange (ETDEWEB)

    Widodo, E-mail: widodo@gf.itb.ac.id [Geophysical Engineering, Bandung Institute of Technology, 40132, Bandung (Indonesia)

    2015-04-24

The Lembang fault is located around eight kilometers from Bandung City, West Java, Indonesia. The fault runs through densely populated settlements and a tourism area. It is an active fault structure with increasing seismic activity, where the 28 August 2011 earthquake occurred. The seismic response at the site is strongly influenced by local geological conditions. The ambient noise measurements from the western part of this fault strongly imply a complex 3-D tectonic setting. Hence, near-surface electromagnetic (EM) measurements are carried out to understand the location of the local active fault and the top of the basement structure of the research area. The transient electromagnetic (TEM) measurements are carried out along three profiles, which include 35 TEM soundings. The results indicate that TEM data give a detailed conductivity distribution of the fault structure in the study area.

  5. INVESTIGATION OF HOLOCENE FAULTING PROPOSED C-746-U LANDFILL EXPANSION

    Energy Technology Data Exchange (ETDEWEB)

    Lettis, William [William Lettis & Associates, Inc.

    2006-07-01

    This report presents the findings of a fault hazard investigation for the C-746-U landfill's proposed expansion located at the Department of Energy's (DOE) Paducah Gaseous Diffusion Plant (PGDP), in Paducah, Kentucky. The planned expansion is located directly north of the present-day C-746-U landfill. Previous geophysical studies within the PGDP site vicinity interpret possible northeast-striking faults beneath the proposed landfill expansion, although prior to this investigation the existence, locations, and ages of these inferred faults have not been confirmed through independent subsurface exploration. The purpose of this investigation is to assess whether or not Holocene-active fault displacement is present beneath the footprint of the proposed landfill expansion.

  6. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify single open-switch faults is proposed in this paper. The introduced fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to conventional modulation strategies such as the carrier-based modulation method, indirect space vector modulation and optimum Alesina-Venturini, the FCS-MPC has a known and unchanged switching state within a sampling period. It is therefore simpler to diagnose the exact location of the open switch in an MC with FCS-MPC. To achieve better quality of the output current under single open...

  7. Quaternary Fault Lines

    Data.gov (United States)

    Department of Homeland Security — This data set contains locations and information on faults and associated folds in the United States that are believed to be sources of M>6 earthquakes during the...

  8. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
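A generic median-based disqualification and down-selection scheme of this kind can be sketched as follows; the threshold, channel count and fusion rule are illustrative assumptions, not the SLS flight algorithms:

```python
def down_select(rates, threshold):
    """Flag measurements far from the channel median as faulted; fuse the rest.

    Returns (fused_value, healthy_flags). A generic median/mid-value scheme,
    not the flight implementation.
    """
    s = sorted(rates)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    healthy = [abs(r - median) <= threshold for r in rates]
    good = [r for r, ok in zip(rates, healthy) if ok]
    return sum(good) / len(good), healthy

# Four redundant rate-gyro channels (deg/s); channel 2 has a hard-over fault.
rates = [0.51, 0.49, 5.00, 0.50]
fused, healthy = down_select(rates, threshold=0.2)
print(fused, healthy)
```

Real implementations add persistence counters and hardware health words before permanently disqualifying a channel, so a single noisy sample does not remove a good sensor.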

  9. Mechanical Fault Diagnosis Using Color Image Recognition of Vibration Spectrogram Based on Quaternion Invariable Moment

    Directory of Open Access Journals (Sweden)

    Liang Hua

    2015-01-01

Full Text Available Automatic extraction of the time-frequency spectral image of mechanical faults can be achieved, and faults can consequently be identified, when spectral image processing technology is applied to the fault diagnosis of rotating machinery. Acquired mechanical vibration signals can be converted into color time-frequency spectrum images by processing with the pseudo Wigner-Ville distribution. A feature extraction method based on the quaternion invariant moment is then proposed, combining image processing technology and multiweight neural network technology. The paper adopted the quaternion invariant moment and the gray level-gradient co-occurrence matrix feature extraction methods, combined them with the geometric learning algorithm and the probabilistic neural network algorithm, respectively, and compared the recognition rates for rolling bearing faults. The experimental results show that the recognition rates of the quaternion invariant moment are higher than those of the gray level-gradient co-occurrence matrix under the same recognition method, and the recognition rates of the geometric learning algorithm are higher than those of the probabilistic neural network algorithm under the same feature extraction method. The method based on the quaternion invariant moment, geometric learning and a multiweight neural network is therefore superior. Moreover, this algorithm has preferable generalization performance under the condition of fewer samples, and it has practical value and acceptance in the field of fault diagnosis for rotating machinery as well.

  10. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    Science.gov (United States)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

The protection system for a transformer plays a significant role in avoiding severe damage to equipment when disturbances occur and in ensuring overall system reliability. One of the methodologies widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore presents a study and analysis of wavelet coefficient characteristics when faults occur in a transformer, in both the high- and low-frequency components of the discrete wavelet transform. The effect of internal and external faults on the wavelet coefficients of both the faulted and normal phases has been taken into consideration. The fault signals were simulated using a laboratory-scale experimental setup of a transmission line connected to a transformer, modelled after an actual system. The results, in terms of wavelet coefficients, show a clear differentiation between the wavelet characteristics in the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform methodology in the future.
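A one-level Haar DWT illustrates how a fault transient concentrates in the high-frequency (detail) coefficients; the signal and fault instant below are hypothetical, and the wavelet family and decomposition depth used in the paper may differ:

```python
import math

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

# Smooth "load current" with a sharp transient at sample 50 (hypothetical fault instant).
signal = [math.sin(2 * math.pi * i / 32) for i in range(100)]
signal[50] += 5.0

_, detail = haar_dwt(signal)
k = max(range(len(detail)), key=lambda i: abs(detail[i]))
print(2 * k)
```

The smooth sinusoid produces only small detail coefficients, so the largest detail magnitude points directly at the fault instant; this contrast between the two frequency bands is what a DWT-based detector exploits.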

  11. A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.

    Science.gov (United States)

    Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang

    2018-03-28

Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
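With the sparse basis fixed as a unit matrix, the basic building block of such MM/BP iterations is the soft-thresholding operator. A minimal sketch follows; the signal, impulse pattern and threshold are hypothetical, and this is not the authors' full algorithm:

```python
import math

def soft_threshold(y, lam):
    """Soft-thresholding: the closed-form minimiser of 0.5*(y - x)**2 + lam*|x|
    per sample, the elementary step of MM/ISTA sparse recovery with an
    identity basis."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0) for v in y]

# Weak periodic fault impulses (period 25) buried in a smooth oscillation.
y = [0.3 * math.sin(0.7 * i) for i in range(100)]
for i in range(0, 100, 25):
    y[i] += 1.5  # hypothetical fault impulses

x = soft_threshold(y, lam=0.5)
support = [i for i, v in enumerate(x) if v != 0.0]
print(support)
```

Everything below the threshold is suppressed to exactly zero, so the surviving coefficients contain only the transient components; envelope analysis would then be run on this sparse sequence.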

  12. Validation of algorithm used for location of electrodes in CT images

    International Nuclear Information System (INIS)

    Bustos, J; Graffigna, J P; Isoardi, R; Gómez, M E; Romo, R

    2013-01-01

A noninvasive technique has been implemented to detect and delineate the focus of electric discharge in patients with mono-focal epilepsy. For the detection of these sources, an electroencephalogram (EEG) with a 128-electrode cap is used. With the EEG data and the electrode positions, it is possible to locate this focus on MR volumes. The technique locates the electrodes on CT volumes using image processing algorithms to obtain descriptors of the electrodes, such as the centroid, which determines their position in space. Finally, these points are transformed into the coordinate space of the MR through a registration, for better understanding by the physician. Due to the medical implications of this technique, it is of utmost importance to validate the results of the detection of the electrode coordinates. For that, this paper presents a comparison between the actual values measured physically (measures including electrode size and spatial location) and the values obtained in the processing of CT and MR images

  13. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
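For a single threshold, Otsu's objective can still be evaluated exhaustively; that exhaustive scan is the baseline the evolutionary search replaces when several thresholds make it combinatorially expensive. A sketch with a hypothetical 8-bin histogram:

```python
def otsu_threshold(hist):
    """Exhaustive Otsu: the threshold maximising between-class variance.

    Returns the last bin index assigned to the dark class.
    """
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0   # weight (pixel count) of the dark class
    sum0 = 0.0 # intensity sum of the dark class
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2  # between-class variance (unnormalised)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal 8-bin "image histogram": dark mode around bin 1, bright mode around bin 6.
hist = [10, 40, 10, 0, 0, 10, 40, 10]
print(otsu_threshold(hist))
```

With L gray levels and k thresholds the exhaustive search costs O(L^k), which is exactly why the paper swaps it for a flower pollination search over candidate threshold vectors.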

  14. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

Full Text Available As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign) can be allocated to a network in order to improve performance and availability. Previous fault-tolerant schemes (denoted by PRT schemes) to mask failures of the mobility agents use passive replication techniques. However, they result in high failure-free latency during the registration process if the number of mobility agents in the same network increases, and force each mobility agent to manage bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted by CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent to recover the bindings of the mobile nodes registering with it when it is repaired, even if all the other mobility agents in the same network fail concurrently.

  15. Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.

    Science.gov (United States)

    1995-06-01


  16. LTREE - a lisp-based algorithm for cutset generation using Boolean reduction

    International Nuclear Information System (INIS)

    Finnicum, D.J.; Rzasa, P.W.

    1985-01-01

    Fault tree analysis is an important tool for evaluating the safety of nuclear power plants. The basic objective of fault tree analysis is to determine the probability that an undesired event or combination of events will occur. Fault tree analysis involves four main steps: (1) specifying the undesired event or events; (2) constructing the fault tree which represents the ways in which the postulated event(s) could occur; (3) qualitative evaluation of the logic model to identify the minimal cutsets; and (4) quantitative evaluation of the logic model to determine the probability that the postulated event(s) will occur given the probability of occurrence for each individual fault. This paper describes a LISP-based algorithm for the qualitative evaluation of fault trees. Development of this algorithm is the first step in a project to apply expert systems technology to the automation of the fault tree analysis process. The first section of this paper provides an overview of LISP and its capabilities, the second section describes the LTREE algorithm and the third section discusses the on-going research areas
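Step (3), the qualitative evaluation of the fault tree into minimal cutsets, can be sketched by top-down Boolean expansion with absorption. The tiny fault tree below is hypothetical, and this Python sketch illustrates only the Boolean reduction, not the LISP-based LTREE implementation:

```python
def minimal_cutsets(gate, gates):
    """Top-down expansion of a fault tree into minimal cutsets.

    gates maps a gate name to ('AND' | 'OR', [children]); any name not in
    gates is treated as a basic event.
    """
    if gate not in gates:
        return [frozenset([gate])]
    op, children = gates[gate]
    child_sets = [minimal_cutsets(c, gates) for c in children]
    if op == 'OR':
        # OR gate: union of the children's cutset lists.
        result = [cs for sets in child_sets for cs in sets]
    else:
        # AND gate: cross-product of the children's cutsets.
        result = [frozenset()]
        for sets in child_sets:
            result = [acc | cs for acc in result for cs in sets]
    # Boolean absorption: drop any cutset that strictly contains another.
    minimal = [cs for cs in result if not any(other < cs for other in result)]
    return sorted(set(minimal), key=sorted)

# TOP = (A or B) and (A or C): minimal cutsets are {A} and {B, C}.
gates = {
    'TOP': ('AND', ['G1', 'G2']),
    'G1': ('OR', ['A', 'B']),
    'G2': ('OR', ['A', 'C']),
}
cutsets = minimal_cutsets('TOP', gates)
print([sorted(c) for c in cutsets])
```

The absorption step is where the Boolean reduction happens: the raw cross-product contains {A, B} and {A, C}, both absorbed by the smaller cutset {A}.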

  17. Using marine magnetic survey data to identify a gold ore-controlling fault: a case study in Sanshandao fault, eastern China

    Science.gov (United States)

    Yan, Jiayong; Wang, Zhihui; Wang, Jinhui; Song, Jianhua

    2018-06-01

The Jiaodong Peninsula has the greatest concentration of gold ore in China and is characterized by altered tectonite-type gold ore deposits. This type of gold deposit is mainly formed in fracture zones and is strictly controlled by faults. Three major ore-controlling faults occur in the Jiaodong Peninsula—the Jiaojia, Zhaoping and Sanshandao faults; the former two are located on land and the latter is located near Sanshandao and its adjacent offshore area. The discovery of the world's largest marine gold deposit in northeastern Sanshandao indicates that the shallow offshore area has great potential for gold prospecting. However, as the two ends of the Sanshandao fault extend to the Bohai Sea, conventional geological survey methods cannot determine the distribution of the fault, and this constrains the discovery of new gold deposits. To explore the southwestward extension of the Sanshandao fault, we performed a 1:25 000 scale marine magnetic survey in this region and obtained high-quality magnetic survey data covering 170 km2. Multi-scale edge detection and three-dimensional inversion of magnetic anomalies identify the characteristics of the southwestward extension of the Sanshandao fault and the three-dimensional distribution of the main lithologies, providing significant evidence for the deployment of marine gold deposit prospecting in the southern segment of the Sanshandao fault. Moreover, three other faults were identified in the study area, and faults F2 and F4 are inferred to be ore-controlling faults: there may exist other altered tectonite-type gold ore deposits along these two faults.

  18. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    Science.gov (United States)

    Toushmalani, Reza

    2013-01-01

The purpose of this study was to compare the performance of two methods for the gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it derives from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to the Newton method, also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present the application of both to solving the inverse problem of a fault. Most importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement has been found between the predicted model anomaly and the observed gravity anomaly with the PSO method than with the LM method.
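A minimal global-best PSO of the kind compared here can be sketched as follows; the two-parameter quadratic misfit stands in for the fault gravity misfit and is purely illustrative, as are the swarm parameters:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser (global-best topology)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical two-parameter misfit with its minimum at depth = 3.0, offset = -1.0.
misfit = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
best, val = pso(misfit, bounds=[(-10, 10), (-10, 10)])
print(best, val)
```

Unlike LM, no derivatives of the misfit are needed, which is the practical trade-off the paper examines: PSO explores globally at a higher function-evaluation cost, while LM converges quickly but only locally.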

  19. Fault-tolerant Sensor Fusion for Marine Navigation

    DEFF Research Database (Denmark)

    Blanke, Mogens

    2006-01-01

Reliability of navigation data is critical for steering and manoeuvring control, in particular at high speed or in critical phases of a mission. Should faults occur, faulty instruments need to be autonomously isolated and faulty information discarded. This paper designs a navigation solution where essential navigation information is provided even with multiple faults in instrumentation. The paper proposes a provably correct implementation through auto-generated state-event logics in a supervisory part of the algorithms. Test results from naval vessels document the performance and show events where the fault-tolerant sensor fusion provided uninterrupted navigation data despite temporal instrument defects...

  20. Fault Diagnosis of Complex Industrial Process Using KICA and Sparse SVM

    Directory of Open Access Journals (Sweden)

    Jie Xu

    2013-01-01

Full Text Available New approaches are proposed for complex industrial process monitoring and fault diagnosis based on kernel independent component analysis (KICA) and sparse support vector machine (SVM). The KICA method is a two-phase algorithm: first, whitened kernel principal component analysis (KPCA) maps the data into a high-dimensional feature subspace; then, the ICA algorithm seeks the projection directions in the KPCA-whitened space. Performance monitoring is implemented by constructing the statistical index and control limit in the feature space. If the statistical indexes exceed the predefined control limit, a fault may have occurred. Then, the nonlinear score vectors are calculated and fed into the sparse SVM to identify the faults. The proposed method is applied to the simulation of the Tennessee Eastman (TE) chemical process. The simulation results show that the proposed method can identify various types of faults accurately and rapidly.

  1. A Novel Data Hierarchical Fusion Method for Gas Turbine Engine Performance Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Feng Lu

    2016-10-01

Full Text Available Gas path fault diagnosis involves the effective utilization of condition-based sensor signals along the engine gas path to accurately identify engine performance failure. The rapid development of information processing technology has led to the use of multiple-source information fusion for fault diagnostics. Numerous efforts have been made to develop data-based fusion methods, such as neural network fusion, while little research has focused on fusion architecture or the fusion of different kinds of methods. In this paper, a data hierarchical fusion using improved weighted Dempster–Shafer evidence theory (WDS) is proposed, and the integration of data-based and model-based methods is presented for engine gas-path fault diagnosis. For the purpose of simplifying the learning machine topology, a recursive reduced kernel based extreme learning machine (RR-KELM) is developed to produce the fault probability, which is considered as the data-based evidence. Meanwhile, the model-based evidence is achieved using a particle filter-fuzzy logic algorithm (PF-FL) by engine health estimation and component fault location at the feature level. The outputs of the two evidences are integrated using WDS evidence theory at the decision level to reach a final recognition decision of the gas-path fault pattern. The characteristics and advantages of the two evidences are analyzed and used as guidelines for the data hierarchical fusion framework. Our goal is that the proposed methodology provides much better performance of gas-path fault diagnosis compared to solely relying on a data-based or model-based method. The hierarchical fusion framework is evaluated in terms of fault diagnosis accuracy and robustness through a case study involving a fault mode dataset of a turbofan engine generated by a general gas turbine simulation. These applications confirm the effectiveness and usefulness of the proposed approach.

  2. An algorithm for leak locating using coupled vibration of pipe-water

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2004-01-01

    Leak noise is a good source for identifying the exact location of a leak point in underground water pipelines. A water leak generates broadband sound at the leak location, and the propagating sound is no longer non-dispersive because of the surrounding pipe and soil. However, the need for long-range detection of the leak location makes it necessary to identify low-frequency acoustic waves rather than high-frequency ones. Acoustic wave propagation coupled with the surrounding boundaries, including cast iron pipes, is analyzed theoretically, and the wave velocity was confirmed by experiment. The leak locations were identified both by the acoustic emission (AE) method and by the cross-correlation method. Over short ranges, both the AE method and the cross-correlation method are effective for detecting the leak position. However, detection over long ranges required lower-frequency accelerometers, because higher-frequency waves were attenuated very quickly as the propagation path lengthened. Two algorithms for the cross-correlation function are suggested, and long-range detection has been achieved on real underground water pipelines longer than 300 m.
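
    The cross-correlation approach can be sketched briefly: the time delay between the two sensor signals is estimated from the peak of their cross-correlation, and the leak position follows from the sensor spacing and the wave speed. A minimal illustration with synthetic signals (the distances, sampling rate and wave speed below are made-up numbers, not the paper's measurements):

```python
import numpy as np

def locate_leak(sig_a, sig_b, fs, sensor_distance, wave_speed):
    """Distance of the leak from sensor A, estimated from the
    cross-correlation peak of the two leak-noise signals."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # delay of A relative to B (samples)
    tau = lag / fs                             # delay in seconds
    # arrival times: x/v at A, (D - x)/v at B  =>  x = (D + v*tau)/2
    return 0.5 * (sensor_distance + wave_speed * tau)

# Synthetic check: leak 30 m from sensor A on a 100 m section, wave speed 500 m/s
rng = np.random.default_rng(0)
burst = rng.standard_normal(50)
sig_a = np.zeros(1000)
sig_a[60:110] = burst        # arrives at A after 60 samples (30 m / 500 m/s)
sig_b = np.zeros(1000)
sig_b[140:190] = burst       # arrives at B after 140 samples (70 m / 500 m/s)
x = locate_leak(sig_a, sig_b, fs=1000.0, sensor_distance=100.0, wave_speed=500.0)
```

    In practice the signals would be band-pass filtered to the low-frequency range that survives long propagation paths before correlating.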

  3. A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol

    Science.gov (United States)

    Malekpour, Mahyar R.

    2015-01-01

    This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults of various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault-tolerant algorithm to solve the general case of the problem. A protocol (algorithm) is also presented that tolerates symmetric faults, provided that there are more good nodes than faulty ones. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of the proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.

  4. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    Science.gov (United States)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system depend heavily on the state information supplied by various sensors, but sensor faults occur frequently. Based on a nonlinear 3-DOF 1/4-vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include an extended Kalman filter (EKF), notable for its concise algorithm; a strong tracking filter (STF), with robust tracking ability; and a cubature Kalman filter (CKF), with high numerical precision. The three filters are used to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite the presence of environmental noise, although the FDI time delay and fault sensitivity differ among the algorithms; compared with the EKF and STF, the CKF method performs best in FDI of sensor faults for the ECAS system.
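
    A common thread in all three observers is residual (innovation) evaluation: the filter predicts the next measurement, and a fault is declared when the innovation exceeds a threshold tied to its predicted variance. A minimal scalar sketch with an ordinary Kalman filter (the random-walk model and the noise values are illustrative, not the paper's 3-DOF vehicle model):

```python
import numpy as np

def detect_sensor_fault(measurements, q=1e-4, r=0.01, thresh=3.0):
    """Flag samples whose normalized innovation exceeds `thresh` sigma.
    Scalar random-walk model: x_k = x_{k-1} + w,  z_k = x_k + v."""
    x, p = measurements[0], 1.0
    flags = []
    for z in measurements:
        p = p + q                    # predict: variance grows by process noise
        s = p + r                    # innovation variance
        nu = z - x                   # innovation (measurement residual)
        flags.append(abs(nu) / np.sqrt(s) > thresh)
        k = p / s                    # Kalman gain
        x = x + k * nu               # measurement update
        p = (1.0 - k) * p
    return flags

z = np.ones(100)
z[70:] += 1.0                        # inject a sensor bias fault at sample 70
flags = detect_sensor_fault(z)
```

    The same residual test generalizes to the vector case, where the normalized innovation becomes a chi-square statistic, and isolation follows from checking which sensor's residual fires.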

  5. An improved contour symmetry axes extraction algorithm and its application in the location of picking points of apples

    Energy Technology Data Exchange (ETDEWEB)

    Wang, D.; Song, H.; Yu, X.; Zhang, W.; Qu, W.; Xu, Y.

    2015-07-01

    The key problem for picking robots is to locate the picking points of fruit. A method based on the moment of inertia and the symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving the accuracy of the location, were carried out to remove noise and smooth the edges of the apples. The moment of inertia method has the disadvantage of high computational complexity, so the convex hull was used to mitigate this problem. To verify the validity of the algorithm, a test was conducted using four types of apple images containing 107 apple targets: single unblocked apples, single blocked apples, adjacent apples, and apples in panoramas. The root mean square error values for these four types of apple images were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was effective in terms of average runtime, at 3.7 ms and 9.2 ms for single unblocked and single blocked apple images, respectively. For the other two types of apple images, the runtime was determined by the number of apples and blocked apples contained in the images. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples. (Author)

  6. Location-Aware Mobile Learning of Spatial Algorithms

    Science.gov (United States)

    Karavirta, Ville

    2013-01-01

    Learning an algorithm--a systematic sequence of operations for solving a problem with given input--is often difficult for students due to the abstract nature of the algorithms and the data they process. To help students understand the behavior of algorithms, a subfield in computing education research has focused on algorithm…

  7. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support the modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and the different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
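
    The trace-back idea can be illustrated compactly: each component's function table maps a deviation at its output to the input deviations or internal failures that can cause it, and the algorithm expands an undesired top event recursively until only basic events remain. A toy sketch loosely modeled on a fire sprinkler system (the component names and tables are invented for illustration, not the paper's actual model):

```python
# Each component's function table: output deviation -> possible causes
# (input deviations propagated from upstream, or internal basic failures).
components = {
    "sprinkler_head": {"no_water_out": ["no_water_in", "head_blocked"]},
    "pipe":           {"no_water_in":  ["no_pump_flow", "pipe_rupture"]},
    "pump":           {"no_pump_flow": ["no_power", "pump_failure"]},
}

def trace_back(event, tables):
    """Expand an undesired event into a nested OR-tree of causes."""
    for table in tables.values():
        if event in table:
            return {event: [trace_back(cause, tables) for cause in table[event]]}
    return event  # no component produces this event: it is a basic event

tree = trace_back("no_water_out", components)
```

    A full implementation would additionally consult state transition tables for multi-state components and introduce AND gates where several conditions must hold simultaneously.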

  8. Verification of Small Hole Theory for Application to Wire Chafing Resulting in Shield Faults

    Science.gov (United States)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2011-01-01

    Our work focuses on developing methods for wire-chafe fault detection through the use of reflectometry to assess shield integrity. When shielded electrical aircraft wiring first begins to chafe, the resulting evidence is typically one or more small holes in the shielding. We are focused on developing the algorithms and signal processing necessary to detect these small holes before damage occurs to the inner conductors. Our approach has been to develop a first-principles physics model combined with probabilistic inference, and to verify this model with laboratory experiments as well as through simulation. Previously we presented the electromagnetic small-hole theory and how it might be applied to coaxial cable. In this presentation, we describe our efforts to verify this theoretical approach with high-fidelity electromagnetic simulations (COMSOL). Laboratory observations are used to parameterize the computationally efficient theoretical model with probabilistic inference, resulting in quantification of hole size and location. Our efforts in characterizing faults in coaxial cable are subsequently leading to fault detection in shielded twisted pair, as well as analysis of intermittently faulty connectors using similar techniques.

  9. A Two-Stage Algorithm for the Closed-Loop Location-Inventory Problem Model Considering Returns in E-Commerce

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

    Full Text Available Facility location and inventory control are critical and closely related problems in the design of logistics systems for e-commerce. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Focusing on these problems in e-commerce logistics systems, we formulate a closed-loop location-inventory model considering returned merchandise to minimize the total cost produced in both the forward and reverse logistics networks. To solve this nonlinear mixed programming model, an effective two-stage heuristic algorithm named LRCAC is designed by combining Lagrangian relaxation with the ant colony algorithm (AC). Results of numerical examples show that LRCAC outperforms the ant colony algorithm (AC) in solution quality and computing stability. The proposed model is able to help managers make the right decisions in an e-commerce environment.

  10. A Novel Hierarchical Model to Locate Health Care Facilities with Fuzzy Demand Solved by Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Mehdi Alinaghian

    2014-08-01

    Full Text Available In the health field, failure to establish facilities in suitable locations and in the required number affects not only cost and quality of service but also results in increased mortality and the spread of disease, so facility location models have special importance in this area. In this paper, a successively inclusive hierarchical model is developed for locating health centers, in terms of the transfer of patients from a lower level to a higher level of health centers. Since determining the exact future demand for health care is difficult, and in order to bring the model closer to the real conditions of demand uncertainty, a fuzzy programming model based on credibility theory is considered. To evaluate the proposed model, several small-sized numerical examples are solved. In order to solve large-scale problems, a meta-heuristic algorithm based on harmony search was developed in conjunction with the GAMS software, which indicates the good performance of the proposed algorithm.
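
    For readers unfamiliar with harmony search, the core loop is small: each new candidate ("harmony") is assembled dimension by dimension from the harmony memory (with probability HMCR), optionally pitch-adjusted (with probability PAR), or drawn at random, and it replaces the worst member of the memory if it is better. A generic minimization sketch on a toy objective (all parameter values are illustrative, not those of the paper):

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=1):
    """Minimize obj over box bounds with a basic harmony search."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                x = mem[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(x)
        worst = max(range(hms), key=lambda i: obj(mem[i]))
        if obj(new) < obj(mem[worst]):               # replace worst harmony
            mem[worst] = new
    return min(mem, key=obj)

# Toy objective with optimum at (1, -2)
best = harmony_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                      bounds=[(-5, 5), (-5, 5)])
```

    In the paper's setting the objective evaluation would be the fuzzy location model's cost rather than an analytic function.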

  11. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to a calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
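
    As a brief illustration of the subspace idea behind MUSIC (shown here in its standard direction-finding form for a uniform linear array rather than the paper's substrate Green-function setting), the pseudospectrum peaks wherever the steering vector is orthogonal to the noise subspace of the data covariance:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum over candidate angles for a uniform linear
    array (element spacing d in wavelengths). X: (n_sensors, n_snapshots)."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    _, v = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = v[:, : n - n_sources]                 # noise-subspace eigenvectors
    k = np.arange(n)
    spectrum = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(th))      # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# One source at 20 degrees, 8 sensors, light noise (synthetic data)
rng = np.random.default_rng(0)
n, snaps = 8, 200
a_true = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20.0)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
X = np.outer(a_true, s) + noise
grid = np.arange(-90.0, 90.5, 0.5)
spectrum = music_spectrum(X, n_sources=1, angles_deg=grid)
```

    In the scatterer-location setting of the paper, the steering vector is replaced by the background Green function evaluated at each candidate point, but the orthogonality test is the same.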

  12. Transient fault tolerant control for vehicle brake-by-wire systems

    International Nuclear Information System (INIS)

    Huang, Shuang; Zhou, Chunjie; Yang, Lili; Qin, Yuanqing; Huang, Xiongfeng; Hu, Bowen

    2016-01-01

    Brake-by-wire (BBW) systems, which have no mechanical linkage between the brake pedal and the brake mechanism, are expected to improve vehicle safety through better braking capability. However, transient faults in BBW systems can cause dangerous driving situations. Most existing research in this area focuses on the brake control mechanism, but very few studies try to solve the problems associated with transient fault propagation and evolution in the brake control system hierarchy. In this paper, a hierarchical transient fault tolerant scheme with embedded intelligence and resilient coordination for BBW systems is proposed based on an analysis of transient fault propagation characteristics. In this scheme, most transient faults are tackled rapidly by a signature-based detection method at the node level, and the remaining transient faults, which cannot be detected directly at the node level and could degrade system performance through fault propagation and evolution, are detected and recovered through function and structure models at the system level. To jointly accommodate these BBW transient faults at the system level, a sliding mode control algorithm and a task reallocation strategy are designed. A simulation platform based on the Architecture Analysis and Design Language (AADL) is established to evaluate the task reallocation strategy, and a hardware-in-the-loop simulation is carried out to validate the proposed scheme systematically. Experimental results show the effectiveness of this new approach to BBW systems. - Highlights: • We propose a hierarchical transient fault tolerant scheme for BBW systems. • A sliding mode algorithm and a task reallocation strategy are designed to tackle transient faults. • The effectiveness of the scheme is verified in both simulation and HIL environments.

  13. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    Full Text Available The condition diagnosis of rotating machinery depends largely on feature analysis of the vibration signals measured for condition diagnosis. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise, so the useful fault features are hidden in heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features from background noise, and a method for adaptively selecting an appropriate multiwavelet threshold based on the energy ratio of the multiwavelet coefficients is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory has also been defined to evaluate the sensitivity of each SP for condition diagnosis. An MPSO algorithm with adaptive inertia weight adjustment and particle mutation is proposed for condition identification. The MPSO algorithm effectively solves the local-optimum and premature-convergence problems of the conventional particle swarm optimization (PSO) algorithm, and it can provide a more accurate estimate for fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.
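
    The mutation step is what separates MPSO from plain PSO: after the usual velocity and position update, each coordinate is reset to a random value with small probability, which helps the swarm escape local optima. A generic sketch of such a mutation-augmented PSO with linearly decreasing inertia weight, run on a toy objective (all parameter values are illustrative, not the paper's):

```python
import random

def mpso(obj, bounds, n=20, iters=300, w_max=0.9, w_min=0.4,
         c1=2.0, c2=2.0, pm=0.1, seed=3):
    """PSO with linearly decreasing inertia weight and particle mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=obj)[:]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters      # adaptive inertia weight
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                if rng.random() < pm:                # mutation: fight premature
                    pos[i][d] = rng.uniform(lo, hi)  # convergence
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i][:]
                if obj(pbest[i]) < obj(gbest):
                    gbest = pbest[i][:]
    return gbest

best = mpso(lambda v: sum(x * x for x in v), [(-10, 10)] * 3)
```

    In the paper's application the objective would score candidate condition labels against the symptom parameters rather than a test function.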

  14. Solving fault diagnosis problems: linear synthesis techniques

    CERN Document Server

    Varga, Andreas

    2017-01-01

    This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

  15. Identification and location of catenary insulator in complex background based on machine vision

    Science.gov (United States)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Locating the insulator precisely is an important prerequisite for fault detection. Because current location algorithms for insulators in catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator is located in a complex environment, SURF features are used to achieve coarse positioning of the target. Then, the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine location. Finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.
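
    The fine-location step relies on standard binocular triangulation: once a SURF keypoint is matched across a rectified stereo pair, depth follows from the disparity. A minimal pinhole-model sketch (the focal length, baseline and pixel coordinates below are hypothetical, not calibration values from the paper):

```python
def stereo_point(xl, xr, y, f, baseline, cx=0.0, cy=0.0):
    """3-D coordinates (camera frame) of a matched keypoint in a rectified
    stereo pair. xl, xr: x-coordinates in the left/right image (pixels);
    f: focal length (pixels); baseline: camera separation (metres)."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = f * baseline / disparity          # depth from disparity
    x = (xl - cx) * z / f                 # back-project through the left camera
    y3d = (y - cy) * z / f
    return x, y3d, z

# Hypothetical match: principal point (400, 300), f = 800 px, 12 cm baseline
x, y3d, z = stereo_point(xl=420.0, xr=380.0, y=320.0, f=800.0,
                         baseline=0.12, cx=400.0, cy=300.0)
```

    In the described pipeline, this computation would be applied to the matched center of mass of the coarsely located insulator region.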

  16. Modeling and Fault Diagnosis of Interturn Short Circuit for Five-Phase Permanent Magnet Synchronous Motor

    Directory of Open Access Journals (Sweden)

    Jian-wei Yang

    2015-01-01

    Full Text Available Taking advantage of their high reliability, multiphase permanent magnet synchronous motors (PMSMs), such as five-phase and six-phase PMSMs, are widely used in fault-tolerant control applications. One of the important fault-tolerant control problems is fault diagnosis. Most of the existing literature focuses on fault diagnosis for the three-phase PMSM. In this paper, in contrast to most existing fault diagnosis approaches, a fault diagnosis method for the interturn short circuit (ITSC) fault of a five-phase PMSM based on the trust region algorithm is presented. This paper makes two contributions. (1) By analyzing the physical parameters of the motor, such as resistances and inductances, a novel mathematical model for the ITSC fault of a five-phase PMSM is established. (2) By introducing an objective function related to the interturn short circuit ratio, the fault parameter identification problem is reformulated as an extremum-seeking problem. A trust region algorithm based parameter estimation method is proposed for tracking the actual interturn short circuit ratio. The simulation and experimental results validate the effectiveness of the proposed parameter estimation method.

  17. A Novel Arc Fault Detector for Early Detection of Electrical Fires.

    Science.gov (United States)

    Yang, Kai; Zhang, Rencheng; Yang, Jianhong; Liu, Canhua; Chen, Shouhong; Zhang, Fujiang

    2016-04-09

    Arc faults can produce very high temperatures and can easily ignite combustible materials; thus, they represent one of the most important causes of electrical fires. The application of arc fault detection, as an emerging early fire detection technology, is required by the National Electrical Code to reduce the occurrence of electrical fires. However, the concealment, randomness and diversity of arc faults make them difficult to detect. To improve the accuracy of arc fault detection, a novel arc fault detector (AFD) is developed in this study. First, an experimental arc fault platform is built to study electrical fires. A high-frequency transducer and a current transducer are used to measure typical load signals of arc faults and normal states. After the common features of these signals are studied, high-frequency energy and current variations are extracted as an input eigenvector for use by an arc fault detection algorithm. Then, the detection algorithm based on a weighted least squares support vector machine is designed and successfully applied in a microprocessor. Finally, an AFD is developed. The test results show that the AFD can detect arc faults in a timely manner and interrupt the circuit power supply before electrical fires can occur. The AFD is not influenced by cross talk or transient processes, and the detection accuracy is very high. Hence, the AFD can be installed in low-voltage circuits to monitor circuit states in real-time to facilitate the early detection of electrical fires.

  19. Study of expert system of fault diagnosis for nuclear power plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Xia Hong; Liu Miao

    2005-01-01

    Based on the fault features of a nuclear power plant, an expert system (ES) for fault diagnosis has been programmed. Knowledge in the ES is represented with production rules, which can express both certain and uncertain knowledge. For certain knowledge, a simple reasoning mechanism based on propositional logic is adopted. For uncertain knowledge, a certainty factor (CF) is used to express the uncertainty, and the reasoning mechanism is set up accordingly. In order to solve the 'bottleneck' problem of knowledge acquisition, rough set theory is incorporated into the fault diagnosis system and the reduction algorithm based on the discernibility matrix is improved. In the improved algorithm, the measure of attribute importance first considers attributes that have the same value within the same decision class, and then calculates the degree of each attribute in the discernibility matrix. Several different faults have been diagnosed on an emulator with this expert system. (authors)

  20. Sensor fault diagnosis of aero-engine based on divided flight status

    Science.gov (United States)

    Zhao, Zhen; Zhang, Jun; Sun, Yigang; Liu, Zhexu

    2017-11-01

    Fault diagnosis and safety analysis of an aero-engine have attracted more and more attention in modern society, whose safety directly affects the flight safety of an aircraft. In this paper, the problem concerning sensor fault diagnosis is investigated for an aero-engine during the whole flight process. Considering that the aero-engine is always working in different status through the whole flight process, a flight status division-based sensor fault diagnosis method is presented to improve fault diagnosis precision for the aero-engine. First, aero-engine status is partitioned according to normal sensor data during the whole flight process through the clustering algorithm. Based on that, a diagnosis model is built for each status using the principal component analysis algorithm. Finally, the sensors are monitored using the built diagnosis models by identifying the aero-engine status. The simulation result illustrates the effectiveness of the proposed method.
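
    The per-status diagnosis models described above are, in essence, PCA monitors: a model fitted on normal data for one flight status, with new samples scored by their squared prediction error (SPE, the Q statistic) against the retained principal components. A minimal sketch on synthetic data (the three correlated channels stand in for sensor signals; they are not aero-engine data):

```python
import numpy as np

def pca_monitor(train, k):
    """Fit PCA on normal-status data; return a scorer giving the squared
    prediction error (SPE) of a new sample against the k retained PCs."""
    mu = train.mean(axis=0)
    sd = train.std(axis=0) + 1e-12
    Z = (train - mu) / sd
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    P = vt[:k].T                         # loading matrix (retained PCs)
    def spe(x):
        z = (x - mu) / sd
        resid = z - P @ (P.T @ z)        # residual outside the PCA model
        return float(resid @ resid)
    return spe

# Synthetic "normal status": three correlated sensor channels
rng = np.random.default_rng(0)
t = rng.standard_normal(500)
train = np.column_stack([t, 2.0 * t, -t]) + 0.01 * rng.standard_normal((500, 3))
spe = pca_monitor(train, k=1)
normal_sample = np.array([1.0, 2.0, -1.0])
faulty_sample = np.array([1.0, -2.0, -1.0])   # sensor 2 violates the correlation
```

    A fault is declared when the SPE exceeds a control limit estimated from the training residuals; with one model per clustered flight status, the sample is scored against the model of its identified status.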

  2. FAULT DETECTION AND LOCALIZATION IN MOTORCYCLES BASED ON THE CHAIN CODE OF PSEUDOSPECTRA AND ACOUSTIC SIGNALS

    Directory of Open Access Journals (Sweden)

    B. S. Anami

    2013-06-01

    Full Text Available Vehicles produce sound signals with varying temporal and spectral properties under different working conditions. These sounds are indicative of the condition of the engine. Fault diagnosis is a significantly difficult task in geographically remote places where expertise is scarce. Automated fault diagnosis can assist riders to assess the health condition of their vehicles. This paper presents a method for fault detection and location in motorcycles based on the chain code of the pseudospectra and Mel-frequency cepstral coefficient (MFCC features of acoustic signals. The work comprises two stages: fault detection and fault location. The fault detection stage uses the chain code of the pseudospectrum as a feature vector. If the motorcycle is identified as faulty, the MFCCs of the same sample are computed and used as features for fault location. Both stages employ dynamic time warping for the classification of faults. Five types of faults in motorcycles are considered in this work. Observed classification rates are over 90% for the fault detection stage and over 94% for the fault location stage. The work identifies other interesting applications in the development of acoustic fingerprints for fault diagnosis of machinery, tuning of musical instruments, medical diagnosis, etc.
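
    Both stages classify with dynamic time warping (DTW), which compares feature sequences of different lengths by finding the minimum-cost alignment between them. A textbook implementation for 1-D sequences is sketched below; the MFCC case uses a feature vector per frame, with a vector distance in place of the absolute difference:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])              # local mismatch
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]
```

    Classification then reduces to nearest-template search: a test sequence is assigned the fault class of the reference template with the smallest DTW distance.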

  3. Mechanical fault diagnostics for induction motor with variable speed drives using Adaptive Neuro-fuzzy Inference System

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Z. [Department of Electrical & Computer Engineering, Queen's University, Kingston, Ont. (Canada K7L 3N6)]; Sadeghian, A. [Department of Computer Science, Ryerson University, Toronto, Ont. (Canada M5B 2K3)]; Wu, B. [Department of Electrical & Computer Engineering, Ryerson University, Toronto, Ont. (Canada M5B 2K3)]

    2006-06-15

    A novel online diagnostic algorithm for mechanical faults of electrical machines with variable speed drive systems is presented in this paper. Using Wavelet Packet Decomposition (WPD), a set of feature coefficients, represented with different frequency resolutions, related to the mechanical faults is extracted from the stator current of the induction motors operating over a wide range of speeds. A new integrated diagnostic system for electrical machine mechanical faults is then proposed using multiple Adaptive Neuro-fuzzy Inference Systems (ANFIS). This paper shows that using multiple ANFIS units significantly reduces the scale and complexity of the system and speeds up the training of the network. The diagnostic algorithm is validated on a three-phase induction motor drive system, and it is proven to be capable of detecting rotor bar breakage and air gap eccentricity faults with high accuracy. The algorithm is applicable to a variety of industrial applications where either continuous on-line monitoring or off-line fault diagnostics is required. (author)

  4. Fault geometry, rupture dynamics and ground motion from potential earthquakes on the North Anatolian Fault under the Sea of Marmara

    KAUST Repository

    Oglesby, David D.

    2012-03-01

    Using the 3-D finite-element method, we develop dynamic spontaneous rupture models of earthquakes on the North Anatolian Fault system in the Sea of Marmara, Turkey, considering the geometrical complexity of the fault system in this region. We find that the earthquake size, rupture propagation pattern and ground motion all strongly depend on the interplay between the initial (static) regional pre-stress field and the dynamic stress field radiated by the propagating rupture. By testing several nucleation locations, we observe that those far from an oblique normal fault stepover segment (near Istanbul) lead to large through-going rupture on the entire fault system, whereas nucleation locations closer to the stepover segment tend to produce ruptures that die out in the stepover. However, this pattern can change drastically with only a 10° rotation of the regional stress field. Our simulations also reveal that while dynamic unclamping near fault bends can produce a new mode of supershear rupture propagation, this unclamping has a much smaller effect on the speed of the peak in slip velocity along the fault. Finally, we find that the complex fault geometry leads to a very complex and asymmetric pattern of near-fault ground motion, including greatly amplified ground motion on the insides of fault bends. The ground-motion pattern can change significantly with different hypocentres, even beyond the typical effects of directivity. The results of this study may have implications for seismic hazard in this region, for the dynamics and ground motion of geometrically complex faults, and for the interpretation of kinematic inverse rupture models.

  6. Seismicity and Tectonics of the West Kaibab Fault Zone, AZ

    Science.gov (United States)

    Wilgus, J. T.; Brumbaugh, D. S.

    2014-12-01

The West Kaibab Fault Zone (WKFZ) is the westernmost bounding structure of the Kaibab Plateau of northern Arizona. The WKFZ is a branching complex of high-angle normal faults downthrown to the west. There are three main faults within the WKFZ: the Big Springs fault with a maximum of 165 m of offset, the Muav fault with 350 m of displacement, and the North Road fault with a maximum throw of approximately 90 m. Mapping of geologically recent surface deposits at or crossing the fault contacts indicates that the faults are likely Quaternary, with the most recent offsets occurring in that period. The WKFZ lies in one of the most seismically active areas in Arizona, within the Northern Arizona Seismic Belt (NASB), which stretches across northern Arizona trending NW-SE. The data set for this study includes 156 well-documented events, the largest being a M5.75 in 1959, and includes a swarm of seven earthquakes in 2012. The seismic data set (1934-2014) reveals that seismic activity clusters in two regions within the study area: the Fredonia cluster in the NW corner and the Kaibab cluster in the south-central portion. The fault plane solutions to date indicate that NE-SW to E-W extension is occurring in the study area. Source relationships between earthquakes and faults within the WKFZ have not previously been studied in detail. The goal of this study is to use the seismic data set, the available data on faults, and the regional physiography to search for source relationships for the seismicity. Analysis includes source parameters of the earthquake data (location, depth, and fault plane solutions) and comparison of this output with the known faults and areal physiographic framework to identify any active faults of the WKFZ, or to suggest active unmapped faults. This research contributes to a better understanding of the present nature of the WKFZ and the NASB as well.

  7. Mapping the Qademah Fault with Traveltime, Surface-wave, and Resistivity Tomograms

    KAUST Repository

    Hanafy, Sherif M.

    2015-08-19

Traveltime, surface-wave, and resistivity tomograms are used to track the buried Qademah fault located near King Abdullah Economic City (KAEC), Saudi Arabia. The fault location is confirmed by 1) the resistivity tomogram obtained from an electrical resistivity experiment, 2) the refraction traveltime tomogram, 3) the reflection image computed from a 2D seismic data set recorded at the northern part of the fault, and 4) the surface-wave tomogram.

  8. Mapping the Qademah Fault with Traveltime, Surface-wave, and Resistivity Tomograms

    KAUST Repository

    Hanafy, Sherif M.

    2015-01-01

Traveltime, surface-wave, and resistivity tomograms are used to track the buried Qademah fault located near King Abdullah Economic City (KAEC), Saudi Arabia. The fault location is confirmed by 1) the resistivity tomogram obtained from an electrical resistivity experiment, 2) the refraction traveltime tomogram, 3) the reflection image computed from a 2D seismic data set recorded at the northern part of the fault, and 4) the surface-wave tomogram.

  9. A Novel Fault Identification Using WAMS/PMU

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2012-05-01

Full Text Available The important premise of the novel adaptive backup protection based on wide-area information is to identify the fault in a real-time, on-line way. In this paper, principal components analysis theory is introduced into the field of fault detection to locate the fault precisely by means of the voltage and current phasor data from the PMUs. Extensive simulation experiments have demonstrated that fault identification can be performed successfully by principal component analysis and calculation. Our research indicates that the variable with the biggest coefficient in the principal component usually corresponds to the fault. Under the influence of noise, the results remain accurate and reliable. Thus, principal components fault identification has strong anti-interference ability and great redundancy.
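The abstract's central claim, that the variable carrying the largest coefficient in the leading principal component usually corresponds to the fault, can be sketched numerically. The following is an illustrative reconstruction, not the paper's code; the synthetic phasor data and the faulted variable index are made up:

```python
# Sketch: after PCA on PMU measurement data, the variable with the largest
# loading in the leading principal component points at the faulted quantity.
import random

random.seed(1)
n_vars, n_samples = 3, 300
data = [[random.gauss(0, 0.01) for _ in range(n_vars)] for _ in range(n_samples)]
for row in data:
    row[2] += random.gauss(0, 0.5)        # fault drives variable 2

means = [sum(row[j] for row in data) / n_samples for j in range(n_vars)]
cov = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / (n_samples - 1)
        for j in range(n_vars)] for i in range(n_vars)]

v = [1.0] * n_vars                         # power iteration for the leading eigenvector
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(n_vars)) for i in range(n_vars)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

fault_var = max(range(n_vars), key=lambda i: abs(v[i]))
print(fault_var)   # -> 2
```

Because the faulted variable dominates the covariance, the leading eigenvector aligns with its axis and the argmax of the loadings recovers it.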

  10. SmartFix: Indoor Locating Optimization Algorithm for Energy-Constrained Wearable Devices

    Directory of Open Access Journals (Sweden)

    Xiaoliang Wang

    2017-01-01

Full Text Available Indoor localization technology based on Wi-Fi has been a hot research topic over the past decade. Despite numerous solutions, new challenges have arisen along with the trend of smart homes and wearable computing. For example, power efficiency needs to be significantly improved for resource-constrained wearable devices, such as smart watches and wristbands. For a Wi-Fi-based locating system, most of the energy consumption can be attributed to real-time radio scans; however, simply reducing radio data collection causes a serious loss of locating accuracy because of unstable Wi-Fi signals. In this paper, we present SmartFix, an optimization algorithm for indoor locating based on Wi-Fi RSS. SmartFix utilizes user motion features, extracts characteristic values from the history trajectory, and corrects deviations caused by unstable Wi-Fi signals. We implemented a prototype of SmartFix both on a Moto 360 2nd-generation smartwatch and on an HTC One smartphone. We conducted experiments both in a large open area and in an office hall. Experimental results demonstrate that the average locating error is less than 2 meters in more than 80% of cases, and energy consumption is only 30% of that of the Wi-Fi fingerprinting method under the same experimental circumstances.

  11. Ocean Economy and Fault Diagnosis of Electric Submersible Pump applied in Floating platform

    Directory of Open Access Journals (Sweden)

    Panlong Zhang

    2017-04-01

Full Text Available The ocean economy plays a crucial role in strengthening the maritime safety industry and in the welfare of human beings. Electric Submersible Pumps (ESPs) have been widely used in floating platforms at sea to provide oil for machines. An ESP fault, however, may lead to ocean environment pollution; on the other hand, timely fault diagnosis of the ESP can improve the ocean economy. In order to meet the strict regulations of the ocean economy and environmental protection, fault diagnosis of ESP systems has become more and more popular in many countries. Vibration mechanical models of typical faults have been able to diagnose ESP faults successfully, and different types of sensors are used to monitor the vibration signal for signal analysis and fault diagnosis in the ESP system. However, adding physical sensors increases the cost and complexity of fault diagnosis. Nowadays, neural network methods for ESP fault diagnosis are widely applied and can diagnose electric pump faults accurately given a large database. To reduce the number of sensors and avoid the need for a large database, in this paper algorithms are designed based on feature extraction to diagnose faults of the ESP system. Simulation results show that the algorithms achieve the prospective objectives well.

  12. Study on Fault Diagnosis of Rolling Bearing Based on Time-Frequency Generalized Dimension

    Directory of Open Access Journals (Sweden)

    Yu Yuan

    2015-01-01

Full Text Available Condition monitoring and fault diagnosis technology for mechanical equipment play an important role in modern engineering. The rolling bearing is the most common component of mechanical equipment, sustaining and transferring the load; therefore, fault diagnosis of rolling bearings has great significance. Fractal theory provides an effective method to describe the complexity and irregularity of the vibration signals of rolling bearings. In this paper, a novel multifractal fault diagnosis approach based on time-frequency domain signals is proposed, and the method and numerical algorithm of multifractal analysis in the time-frequency domain are provided. According to the grid type J and order parameter q in the algorithm, the value range of J and the cut-off condition of q were optimized based on their effect on the dimension calculation. Simulation experiments demonstrated that effective signal identification can be completed by the multifractal method in the time-frequency domain, which is related to factors such as signal energy and distribution. Further fault diagnosis experiments on bearings showed that the multifractal method in the time-frequency domain can accomplish fault diagnosis, including fault judgment and identification of fault types, and that faults can be detected at an early stage. Therefore, the multifractal method in the time-frequency domain is a practicable method for fault diagnosis of bearings.
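The generalized dimension underlying this kind of multifractal analysis is the Rényi dimension D_q = (1/(q-1)) * ln(sum_i p_i^q) / ln(eps) for box probabilities p_i at box size eps. A minimal numerical sketch, using a uniform toy measure rather than real bearing vibration signals (the grid size is arbitrary):

```python
# Sketch of the generalized (Renyi) dimension D_q used in multifractal analysis.
import math

def generalized_dimension(probs, eps, q):
    """Renyi/generalized dimension of a box-probability distribution."""
    if abs(q - 1.0) < 1e-12:                      # q -> 1: information dimension
        s = sum(p * math.log(p) for p in probs if p > 0)
        return s / math.log(eps)
    s = sum(p ** q for p in probs if p > 0)
    return (1.0 / (q - 1.0)) * math.log(s) / math.log(eps)

# Uniform measure over an m x m grid of boxes of side eps = 1/m:
m = 32
probs = [1.0 / (m * m)] * (m * m)
for q in (0, 1, 2):
    print(q, round(generalized_dimension(probs, 1.0 / m, q), 6))  # -> 2.0 for all q
```

For a uniform planar measure all D_q collapse to 2; a multifractal signal would instead produce a q-dependent spectrum, which is the feature the diagnosis exploits.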

  13. Fault Detection and Isolation and Fault Tolerant Control of Wind Turbines Using Set-Valued Observers

    DEFF Research Database (Denmark)

    Casau, Pedro; Rosa, Paulo Andre Nobre; Tabatabaeipour, Seyed Mojtaba

    2012-01-01

Research on wind turbine Operations & Maintenance (O&M) procedures is critical to the expansion of Wind Energy Conversion systems (WEC). In order to reduce O&M costs and increase the lifespan of the turbine, we study the application of Set-Valued Observers (SVO) to the problem of Fault Detection and Isolation (FDI) and Fault Tolerant Control (FTC) of wind turbines, by taking advantage of the recent advances in SVO theory for model invalidation. A simple wind turbine model is presented along with possible faulty scenarios. The FDI algorithm is built on top of the described model, taking into account...

  14. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  15. Thermal-hydraulic modeling of deaerator and fault detection and diagnosis of measurement sensor

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Park, Jae Chang; Kim, Jung Taek; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2003-05-01

An effective means of assuring the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop model-based fault detection and diagnosis algorithms for the deaerator and to evaluate the performance of the developed algorithms. The scope of the work can be classified into two categories: the first is a state-space model-based FDD algorithm using an Adaptive Estimator (AE); the second is an input-output model-based FDD algorithm using an ART neural network. Extensive computer simulations on real data obtained from the Younggwang 3 and 4 FSAR are carried out to evaluate the performance in terms of speed and accuracy.

  16. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves localization accuracy while reducing computational complexity.
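The final matching step, comparing an online RSS reading against the surveyed fingerprint database under a Maximum Likelihood criterion, can be illustrated with a toy example. The database, AP count, and Gaussian noise model below are invented; the paper's pipeline additionally applies AP selection, KPCA, and APC clustering before this step:

```python
# Minimal sketch of the online ML matching step of Wi-Fi fingerprinting.
import math

fingerprints = {                    # (x, y) -> mean RSS from 3 APs, in dBm
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -45, -75],
    (0.0, 5.0): [-75, -72, -42],
}
SIGMA = 4.0                          # assumed RSS noise std-dev (dB)

def locate(observed):
    """Maximum-likelihood match of an observed RSS vector against the database."""
    def log_likelihood(stored):
        return -sum((o - s) ** 2 for o, s in zip(observed, stored)) / (2 * SIGMA ** 2)
    return max(fingerprints, key=lambda pos: log_likelihood(fingerprints[pos]))

print(locate([-42, -68, -79]))   # -> (0.0, 0.0)
```

Under the independent-Gaussian noise assumption, maximizing likelihood reduces to picking the reference point with the smallest squared RSS distance.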

  17. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

Full Text Available With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves localization accuracy while reducing computational complexity.

  18. Identification of active fault using analysis of derivatives with vertical second based on gravity anomaly data (Case study: Seulimeum fault in Sumatera fault system)

    Science.gov (United States)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

Gravity is a non-destructive geophysical technique with numerous applications in engineering and environmental fields, such as locating fault zones. The purpose of this study is to identify the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using a gravity technique, to correlate the result with the geologic map, and to understand the trend pattern of the fault system. An estimate of the subsurface geological structure of the Seulimeum fault has been made using gravity field anomaly data. The gravity anomaly data used in this study are from Topex, processed up to the Free Air Correction. In the next data processing step, the Bouguer correction and terrain correction were applied to obtain the complete Bouguer anomaly, which is topographically dependent. Subsurface modeling was done using the Grav2DC for Windows software. The results showed a low residual gravity value in the northern half compared to the southern part of the study area, which indicates a pattern of fault zones. The gravity residual was successfully correlated with the geologic map, which shows the existence of the Seulimeum fault in this study area. The study of earthquake records can be used to differentiate active and inactive fault elements, and gives an indication that the delineated fault elements are active.

  19. Detection of arcing ground fault location on a distribution network connected PV system; Hikarihatsuden renkei haidensen ni okeru koko chiryaku kukan no kenshutsuho

    Energy Technology Data Exchange (ETDEWEB)

    Sato, M; Iwaya, K; Morooka, Y [Hachinohe Institute of Technology, Aomori (Japan)

    1996-10-27

In the near future, it is expected that a great number of small-scale distributed power sources, such as photovoltaic power generation for general houses, will be interconnected with the ungrounded-neutral distribution system in Japan. Once a ground fault at the commercial frequency occurs, severe damage can easily be expected. This paper discusses the effect of the ground fault on the ground phase current using a 6.6 kV high-voltage model system, considering the non-linear self-inductance in the line and the non-linear frequency dependence of the arcing ground-fault current. In the present method, the marked difference in the series resonance frequency, determined by the inductance and earth capacitance, between the source side and the load side is utilized for detecting the high-voltage arcing ground-fault location. In this method, there are some cases in which the non-linear effect obtained by measuring the inductance of the sound phases, including the secondary winding of the transformer, cannot be neglected. In particular, for an actual high-voltage system, it was shown that the frequency characteristics of the distribution transformer inductance should be theoretically derived in the frequency range between 2 kHz and 6 kHz. 2 refs., 5 figs., 1 tab.

  20. Data Fault Detection in Medical Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2015-03-01

Full Text Available Medical body sensors can be implanted in or attached to the human body to monitor the physiological parameters of patients continuously. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnoses; therefore, detecting sensor data faults has been widely researched in recent years. Most typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients do not change synchronously, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) the proposal of a novel judgment criterion of fault state according to the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M.

  1. A fault-tolerant software strategy for digital systems

    Science.gov (United States)

    Hitt, E. F.; Webb, J. J.

    1984-01-01

    Techniques developed for producing fault-tolerant software are described. Tolerance is required because of the impossibility of defining fault-free software. Faults are caused by humans and can appear anywhere in the software life cycle. Tolerance is effected through error detection, damage assessment, recovery, and fault treatment, followed by return of the system to service. Multiversion software comprises two or more versions of the software yielding solutions which are examined by a decision algorithm. Errors can also be detected by extrapolation from previous results or by the acceptability of results. Violations of timing specifications can reveal errors, or the system can roll back to an error-free state when a defect is detected. The software, when used in flight control systems, must not impinge on time-critical responses. Efforts are still needed to reduce the costs of developing the fault-tolerant systems.
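The multiversion scheme described above, in which two or more versions yield solutions that a decision algorithm examines, is commonly realized as majority voting with a comparison tolerance. A hedged sketch, with illustrative version outputs and tolerance:

```python
# Sketch of a multiversion decision algorithm: accept the majority result
# among independently developed versions, within a numeric tolerance.
def vote(results, tol=1e-6):
    """Return the majority value among version outputs, or None on disagreement."""
    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tol]
        if len(agreeing) > len(results) // 2:
            return sum(agreeing) / len(agreeing)   # consolidated answer
    return None                                     # no majority: signal failure

v1, v2, v3 = 0.7071067, 0.7071068, 0.9   # version 3 is faulty
print(vote([v1, v2, v3], tol=1e-5))       # majority of v1 and v2
print(vote([0.1, 0.5, 0.9], tol=1e-5))   # -> None (no agreement)
```

Returning None on disagreement is the hook for the recovery actions the abstract mentions, such as rolling back to an error-free state.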

  2. Tectonic tremor and LFEs on a reverse fault in Taiwan

    Science.gov (United States)

    Aguiar, Ana C.; Chao, Kevin; Beroza, Gregory C.

    2017-07-01

    We compare low-frequency earthquakes (LFEs) from triggered and ambient tremor under the southern Central Range, Taiwan. We apply the PageRank algorithm used by Aguiar and Beroza (2014) that exploits the repetitive nature of the LFEs to find repeating LFEs in both ambient and triggered tremor. We use these repeaters to create LFE templates and find that the templates created from both tremor types are very similar. To test their similarity, we use both interchangeably and find that most of both the ambient and triggered tremor match the LFE templates created from either data set, suggesting that LFEs for both events have a common origin. We locate the LFEs by using local earthquake P wave and S wave information and find that LFEs from triggered and ambient tremor locate to between 20 and 35 km on what we interpret as the deep extension of the Chaochou-Lishan Fault.

  3. HTCRL: A Range-Free Location Algorithm Based on Homothetic Triangle Cyclic Refinement in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dan Zhang

    2017-03-01

Full Text Available Wireless sensor networks (WSNs) have become a significant technology in recent years and can be widely used in many applications. A WSN consists of a large number of sensor nodes, each of which is energy-constrained with low power dissipation. Most of the sensor nodes are tiny sensors with small memories that do not know their own locations, so determining the locations of the unknown sensor nodes is one of the key issues in WSNs. In this paper, an improved APIT algorithm, HTCRL (Homothetic Triangle Cyclic Refinement Location), is proposed, based on the principle of the homothetic triangle. It adopts perpendicular median surface cutting to narrow down the target area in order to decrease the average localization error rate, and it reduces the probability of misjudgment by adding conditions of judgment. It achieves relatively high accuracy compared with the typical APIT algorithm without any additional hardware equipment or increased communication overhead.
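HTCRL refines the classic APIT idea, whose core primitive is the point-in-triangulation (PIT) test against triangles formed by anchor nodes. A minimal sketch of that geometric test (the anchor coordinates are arbitrary):

```python
# Point-in-triangle test at the core of APIT-style range-free localization:
# a node lies inside the triangle of three anchors iff it is on the same
# side of all three directed edges.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_triangle(p, a, b, c):
    """True if point p lies inside (or on the edge of) triangle abc."""
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = min(d1, d2, d3) < 0
    has_pos = max(d1, d2, d3) > 0
    return not (has_neg and has_pos)   # same sign everywhere -> inside

anchors = ((0, 0), (10, 0), (0, 10))
print(inside_triangle((2, 2), *anchors))   # -> True
print(inside_triangle((9, 9), *anchors))   # -> False
```

APIT-family algorithms intersect many such in/out decisions to shrink the candidate area; HTCRL's homothetic-triangle refinement further subdivides it.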

  4. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thus to detect and diagnose single faults in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show that the fault detection rate was 100% and the maximum error in estimating the parameter values was 0.7%. This technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.
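The inverse-problem idea, fitting circuit parameters so that simulated responses match measurements and then comparing the estimates against nominal values to flag a soft fault, can be sketched for a first-order RC stage. The paper works on a Sallen-Key band-pass filter with sensitivity-aided optimization; the component values, test frequencies, and brute-force search here are illustrative only:

```python
# Inverse-problem sketch: estimate R of an RC low-pass from gain samples by
# 1-D least squares, then flag a soft fault if the estimate drifts from nominal.
import math

C_F = 100e-9                         # known capacitance (farads)
NOMINAL_R = 10e3                     # nominal resistance (ohms)
FREQS = [100.0, 1e3, 10e3]           # test frequencies (Hz)

def gain(r, f):
    """|H(jw)| of a first-order RC low-pass."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * f * r * C_F) ** 2)

measured = [gain(13e3, f) for f in FREQS]   # "measured" circuit: R drifted to 13k

# Brute-force 1-D least squares over a plausible range of R.
best_r = min((r for r in range(5000, 20001, 10)),
             key=lambda r: sum((gain(r, f) - m) ** 2 for f, m in zip(FREQS, measured)))
fault = abs(best_r - NOMINAL_R) / NOMINAL_R > 0.05   # >5% deviation = soft fault
print(best_r, fault)   # -> 13000 True
```

A real implementation would replace the grid search with gradient-based optimization guided by sensitivity analysis, as the abstract describes.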

  5. A Weibull-based compositional approach for hierarchical dynamic fault trees

    International Nuclear Information System (INIS)

    Chiacchio, F.; Cacioppo, M.; D'Urso, D.; Manno, G.; Trapani, N.; Compagno, L.

    2013-01-01

The solution of a dynamic fault tree (DFT) for reliability assessment can be achieved using a wide variety of techniques. These techniques have a strong theoretical foundation, as both the analytical and the simulation methods have been extensively developed. Nevertheless, they all present the same limits, which appear as the size of the fault tree increases (i.e., state-space explosion, time-consuming simulations), compromising the resolution. We have tested the feasibility of a composition algorithm based on a Weibull distribution, aimed at the resolution of a general class of dynamic fault trees characterized by non-repairable basic events and generally distributed failure times. The proposed composition algorithm is used to generalize the traditional hierarchical technique that, as previous literature has extensively confirmed, is able to reduce the computational effort for a large DFT through the modularization of independent parts of the tree. The results of this study are achieved both through simulation and analytical techniques, thus confirming the capability to solve a quite general class of dynamic fault trees and overcome the limits of traditional techniques.
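The building blocks of such a Weibull-based composition are the closed-form gate formulas for statically composed, non-repairable basic events: F(t) = 1 - exp(-(t/eta)^beta) per event, products of CDFs for AND, and one minus the product of survivals for OR. The dynamic-gate generalization in the paper goes beyond this sketch, and the shape/scale parameters below are arbitrary:

```python
# Static AND/OR gate composition over Weibull-distributed basic events.
import math

def weibull_cdf(t, beta, eta):
    """Unreliability of a basic event at time t (shape beta, scale eta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def and_gate(t, events):
    p = 1.0
    for beta, eta in events:
        p *= weibull_cdf(t, beta, eta)       # all inputs must have failed
    return p

def or_gate(t, events):
    s = 1.0
    for beta, eta in events:
        s *= 1.0 - weibull_cdf(t, beta, eta)  # all inputs must have survived
    return 1.0 - s

events = [(1.5, 1000.0), (2.0, 800.0)]   # (shape, scale) per basic event
t = 500.0
print(round(and_gate(t, events), 4))
print(round(or_gate(t, events), 4))
```

Modularization then amounts to replacing an independent subtree by a single equivalent distribution, which is where the Weibull compositional approximation enters.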

  6. Formal verification of algorithms for critical systems

    Science.gov (United States)

    Rushby, John M.; Von Henke, Friedrich

    1993-01-01

    We describe our experience with formal, machine-checked verification of algorithms for critical applications, concentrating on a Byzantine fault-tolerant algorithm for synchronizing the clocks in the replicated computers of a digital flight control system. First, we explain the problems encountered in unsynchronized systems and the necessity, and criticality, of fault-tolerant synchronization. We give an overview of one such algorithm, and of the arguments for its correctness. Next, we describe a verification of the algorithm that we performed using our EHDM system for formal specification and verification. We indicate the errors we found in the published analysis of the algorithm, and other benefits that we derived from the verification. Based on our experience, we derive some key requirements for a formal specification and verification system adequate to the task of verifying algorithms of the type considered. Finally, we summarize our conclusions regarding the benefits of formal verification in this domain, and the capabilities required of verification systems in order to realize those benefits.

  7. ASCS online fault detection and isolation based on an improved MPCA

    Science.gov (United States)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling (T2) statistic are realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of different variables. For fault isolation of the subspace based on the T2 statistic, the relationship between the statistical indicator and the state variables is constructed, and constraint conditions are presented to check the validity of fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces to increase the correct rate of fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and improve the robustness of the principal component model, and it relates the state variables to fault detection indicators for fault isolation.
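The two monitoring statistics named in the abstract can be computed from any trained PCA model: SPE measures the residual outside the retained subspace, and Hotelling's T2 measures distance within it. A hedged sketch assuming a one-component model with a known loading vector; the model and samples are toy values, not ASCS data:

```python
# SPE and Hotelling's T^2 for a one-component PCA monitoring model.
import math

p = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # retained loading vector (trained offline)
lam = 2.0                                   # variance of the retained score

def monitor(x):
    t = x[0] * p[0] + x[1] * p[1]           # score of sample x
    xhat = [t * p[0], t * p[1]]             # reconstruction from the model
    spe = sum((xi - xh) ** 2 for xi, xh in zip(x, xhat))
    t2 = t * t / lam                        # Hotelling's T^2 for one component
    return spe, t2

normal = [1.0, 1.1]        # follows the trained correlation structure
faulty = [1.0, -1.2]       # breaks the correlation -> large residual
spe_n, _ = monitor(normal)
spe_f, _ = monitor(faulty)
print(round(spe_n, 4), round(spe_f, 4))   # faulty sample has a much larger SPE
```

Residual contribution analysis, as used for isolation in the paper, then asks which variable's term in the SPE sum dominates.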

  8. A new digital ground-fault protection system for generator-transformer unit

    Energy Technology Data Exchange (ETDEWEB)

    Zielichowski, Mieczyslaw; Szlezak, Tomasz [Institute of Electrical Power Engineering, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50370 Wroclaw (Poland)

    2007-08-15

Ground faults are one of the most frequent causes of damage to the stator windings of large generators. Under certain conditions, as a result of ground-fault protection system maloperation, ground faults convert into high-current faults, causing severe failures in the power system. Numerous publications in renowned journals and magazines testify to the importance of the ground-fault problem, and issues reported by operators confirm the opinion that some matters concerning ground-fault protection of large generators have not yet been solved, or have been solved insufficiently. In this paper, a new concept for a digital ground-fault protection system for the stator winding of a large generator is proposed. The process of an intermittent arcing ground fault in the stator winding is briefly discussed, and actual ground-fault voltage waveforms are presented. A new relaying algorithm based on third-harmonic voltage measurement is also described, together with the methods of its implementation and testing. (author)
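A third-harmonic relaying criterion of the kind mentioned can be illustrated with a one-cycle DFT: the third-harmonic component of the neutral-point voltage collapses when a ground fault occurs near the neutral end of the winding. The waveforms, amplitudes, and pickup level below are invented for illustration, not taken from the paper:

```python
# Third-harmonic undervoltage sketch for stator ground-fault detection.
import math

def harmonic_magnitude(samples, k, n_per_cycle):
    """DFT magnitude of harmonic k from one cycle of n_per_cycle samples."""
    re = sum(s * math.cos(2 * math.pi * k * i / n_per_cycle)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n_per_cycle)
             for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n_per_cycle

N = 64
def neutral_voltage(third_amp):
    """Fundamental plus a third-harmonic component of given amplitude (p.u.)."""
    return [math.sin(2 * math.pi * i / N) + third_amp * math.sin(6 * math.pi * i / N)
            for i in range(N)]

healthy = harmonic_magnitude(neutral_voltage(0.05), 3, N)
faulted = harmonic_magnitude(neutral_voltage(0.005), 3, N)
TRIP_LEVEL = 0.02                    # per-unit third-harmonic pickup (assumed)
print(healthy > TRIP_LEVEL, faulted < TRIP_LEVEL)   # -> True True
```

The full-cycle DFT rejects the fundamental exactly, so the decision depends only on the extracted third-harmonic magnitude.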

  9. Semi-supervised weighted kernel clustering based on gravitational search for fault diagnosis.

    Science.gov (United States)

    Li, Chaoshun; Zhou, Jianzhong

    2014-09-01

Supervised learning methods, like the support vector machine (SVM), have been widely applied in diagnosing known faults; however, this kind of method fails to work correctly when a new or unknown fault occurs. Traditional unsupervised kernel clustering can be used for unknown fault diagnosis, but it cannot make use of historical classification information to improve diagnosis accuracy. In this paper, a semi-supervised kernel clustering model is designed to diagnose known and unknown faults. First, a novel semi-supervised weighted kernel clustering algorithm based on gravitational search (SWKC-GS) is proposed for clustering of a dataset composed of labeled and unlabeled fault samples. The clustering model of SWKC-GS is defined based on the misclassification rate of labeled samples and a fuzzy clustering index on the whole dataset. The gravitational search algorithm (GSA) is used to solve the clustering model, with the centers of clusters, feature weights, and the parameter of the kernel function selected as optimization variables. New fault samples are then identified and diagnosed by calculating the weighted kernel distance between them and the fault cluster centers. If the fault samples are unknown, they are added to the historical dataset and SWKC-GS is used to partition the mixed dataset and update the clustering results for diagnosing the new fault. In experiments, the proposed method has been applied to fault diagnosis for rotary bearings, and SWKC-GS has been compared not only with traditional clustering methods but also with SVM and neural networks for known fault diagnosis. In addition, the proposed method has been applied to unknown fault diagnosis. The results show the effectiveness of the proposed method in achieving the expected diagnosis accuracy for both known and unknown faults of rotary bearings. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    Science.gov (United States)

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds in remote areas. Faults occur frequently in these harsh environments, and the local staff generally lack professional knowledge and pay little attention to maintenance, so when faults happen, expert personnel must travel to carry out repairs outdoors. This study therefore presents an intelligent fault-diagnosis method based on fault tree analysis and a fuzzy neural network. In the proposed method, the fault tree first presents the logical structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is trained on the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be diagnosed rapidly and with high precision, while one-symptom-to-two-faults patterns perform less well but remain worth investigating. The model implements diagnosis for most kinds of faults in the aquaculture IoT.
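
    The fault-tree half of such a scheme, mapping observed symptoms to candidate faults through AND/OR gate logic, can be illustrated with a toy tree (the gates and symptom names are hypothetical, and the fuzzy-neural-network training stage is not reproduced):

```python
# toy fault tree for an aquaculture IoT node: each top-level fault is an
# AND/OR gate over observed symptoms (gates and names are hypothetical)
FAULT_TREE = {
    "sensor offline": ("OR", ["no heartbeat", "no data packets"]),
    "pump failure": ("AND", ["low flow reading", "motor current spike"]),
}

def diagnose(symptoms):
    """Return every fault whose gate is satisfied by the observed symptoms."""
    observed = set(symptoms)
    gate_fn = {"OR": any, "AND": all}
    return [fault
            for fault, (gate, inputs) in FAULT_TREE.items()
            if gate_fn[gate](s in observed for s in inputs)]

print(diagnose(["no heartbeat"]))                             # ['sensor offline']
print(diagnose(["low flow reading", "motor current spike"]))  # ['pump failure']
```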

  11. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    Directory of Open Access Journals (Sweden)

    Yingyi Chen

    2017-01-01

    Full Text Available In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds in remote areas. Faults occur frequently in these harsh environments, and the local staff generally lack professional knowledge and pay little attention to maintenance, so when faults happen, expert personnel must travel to carry out repairs outdoors. This study therefore presents an intelligent fault-diagnosis method based on fault tree analysis and a fuzzy neural network. In the proposed method, the fault tree first presents the logical structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is trained on the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be diagnosed rapidly and with high precision, while one-symptom-to-two-faults patterns perform less well but remain worth investigating. The model implements diagnosis for most kinds of faults in the aquaculture IoT.

  12. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    Science.gov (United States)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  13. Assessment of faulting and seismic hazards at Yucca Mountain

    International Nuclear Information System (INIS)

    King, J.L.; Frazier, G.A.; Grant, T.A.

    1989-01-01

    Yucca Mountain is being evaluated for the nation's first high-level nuclear-waste repository. Local faults appear to be capable of moderate earthquakes at recurrence intervals of tens of thousands of years. The major issues identified for the preclosure phase (<100 yrs) are the location and seismic design of surface facilities for handling incoming waste. It is planned to address surface fault rupture by locating facilities where no discernible recent (<100,000 yrs) faulting has occurred and to base the ground motion design on hypothetical earthquakes, postulated on nearby faults, that represent 10,000 yrs of average cumulative displacement. The major tectonic issues identified for the postclosure phase (10,000 yrs) are volcanism (not addressed here) and potential changes to the hydrologic system resulting from a local faulting event which could trigger potential thermal, mechanical, and chemical interactions with the ground water. Extensive studies are planned for resolving these issues. 33 refs., 3 figs

  14. Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data.

    Science.gov (United States)

    Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong

    2018-02-05

    The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system, so data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. To solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of the data. The method processes the data vectors from two aspects, the attribute features and the sample dimension, and after processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we extract statistical characteristics from the original bearing signals. Then, the Decision Tree algorithm is used to select the important features of the time-domain signal, and features with low correlation are selected. Next, a Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of the data is effective for bearing fault diagnosis.
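
    The final classification stage can be sketched with a generic Gaussian Naive Bayes on synthetic two-class data (not the paper's bearing pipeline; the Decision Tree and SSVM selection stages are omitted):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and variances,
    classification by maximum log-posterior under the independence assumption."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log N(x | mu, var) summed over the (assumed independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[0, 0, 0], [4, 4, 4]])))  # [0 1]
```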

  16. A set of particle locating algorithms not requiring face belonging to cell connectivity data

    Science.gov (United States)

    Sani, M.; Saidi, M. S.

    2009-10-01

    Existing efficient directed particle locating (host determination) algorithms rely on the face belonging to cell relationship (F2C) to find the next cell on the search path and the cell in which the target is located. Recently, finite volume methods have been devised which do not need F2C. Therefore, existing search algorithms are not directly applicable (unless F2C is included). F2C is a major memory burden in grid description. If the memory benefit from these finite volume methods are desirable new search algorithms should be devised. In this work two new algorithms (line of sight and closest cell) are proposed which do not need F2C. They are based on the structure of the sparse coefficient matrix involved (stored for example in the compressed row storage, CRS, format) to determine the next cell. Since F2C is not available, testing a cell for the presence of the target is not possible. Therefore, the proposed methods may wrongly mark a nearby cell as the host in some rare cases. The issue of importance of finding the correct host cell (not wrongly hitting its neighbor) is addressed. Quantitative measures are introduced to assess the efficiency of the methods and comparison is made for typical grid types used in computational fluid dynamics. In comparison, the closest cell method, having a lower computational cost than the family of line of sight and the existing efficient maximum dot product methods, gives a very good performance with tolerable and harmless wrong hits. If more accuracy is needed, the method of approximate line of sight then closest cell (LS-A-CC) is recommended.
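
    The 'closest cell' idea, walking through neighbours taken from the CRS sparsity pattern (`indptr`/`indices`) of the coefficient matrix instead of face-to-cell connectivity, can be sketched on a one-dimensional grid (grid and numbers are illustrative):

```python
import numpy as np

def closest_cell(start, target, centroids, indptr, indices):
    """Greedy walk: from the current cell, move to the neighbour (read off
    the CRS sparsity pattern of the coefficient matrix) whose centroid is
    closest to the target point; stop when no neighbour improves. As noted
    in the abstract, this may settle on a near neighbour of the true host."""
    cell = start
    best = np.linalg.norm(centroids[cell] - target)
    while True:
        nbrs = indices[indptr[cell]:indptr[cell + 1]]
        d = np.linalg.norm(centroids[nbrs] - target, axis=1)
        i = int(np.argmin(d))
        if d[i] >= best:
            return cell
        cell, best = int(nbrs[i]), float(d[i])

# 1-D chain of 5 unit cells; each matrix row couples a cell to itself and
# its immediate neighbours, as a finite-volume stencil would
centroids = np.array([[0.5], [1.5], [2.5], [3.5], [4.5]])
indptr = np.array([0, 2, 5, 8, 11, 13])
indices = np.array([0, 1, 0, 1, 2, 1, 2, 3, 2, 3, 4, 3, 4])
print(closest_cell(0, np.array([3.7]), centroids, indptr, indices))  # 3
```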

  17. New development in relay protection for smart grid : new principles of fault distinction

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B.; Hao, Z. [Xi' an Jiaotong Univ., Xian (China); Klimek, A. [Powertech Labs Inc., Surrey, BC (Canada); Bo, Z. [Alstom Grid Automation (United Kingdom)

    2010-07-01

    China plans to integrate a smart grid into a proposed 750/1000 kV transmission network, but transmitting at this capacity requires assured performance from the protection relays. Many protection strategies address the demands of a smart grid, including ultra-high-speed transient-based fault discrimination; new coordination principles for main and back-up protection to suit the diversification of the power network; optimal coordination between relay protection and autoreclosure to enhance the robustness of the power network; and new early-warning and tripping functions in protection concepts based on wide-area information. This paper presented the principles, algorithms and techniques of single-ended, transient-based, ultra-high-speed protection for extra-high-voltage (EHV) transmission lines, buses and DC transmission lines, and for faulted-line selection in non-solidly earthed networks. Test results verified that the proposed methods can determine fault characteristics at ultra-high speed (5 ms) and that the new fault-discrimination principles satisfy the demands of EHV systems within a smart grid. High-speed Digital Signal Processor (DSP) embedded system techniques combined with optical sensors provide the ability to record and compute detailed fault transients, making it possible to implement protection principles based on transient information. Because the wave impedances of the various power apparatuses differ, and because of the reflection and refraction characteristics at their interconnection points, the fault transients contain abundant information about the fault location and type. Correct analysis of this information makes it possible to construct ultra-high-speed and more sensitive AC, DC and busbar main protection. 23 refs., 6 figs.
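
    As a concrete illustration of transient-based location, a classical single-ended travelling-wave estimate infers the fault distance from the delay between the first arriving wavefront and its reflection from the fault (a generic sketch with illustrative numbers, not the authors' algorithm):

```python
V_WAVE = 2.95e8  # travelling-wave propagation speed on the line, m/s (assumed)

def fault_distance(t_first, t_reflection, v=V_WAVE):
    """Single-ended estimate: the wave travels to the fault and back between
    the first arrival and its reflection, so d = v * (t2 - t1) / 2."""
    return v * (t_reflection - t_first) / 2.0

# illustrative timestamps from a transient recorder
d = fault_distance(t_first=102.0e-6, t_reflection=142.0e-6)
print(f"fault at about {d / 1000:.1f} km")  # fault at about 5.9 km
```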

  18. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  20. Fault structure and kinematics of the Long Valley Caldera region, California, revealed by high-accuracy earthquake hypocenters and focal mechanism stress inversions

    Science.gov (United States)

    Prejean, Stephanie; Ellsworth, William L.; Zoback, Mark; Waldhauser, Felix

    2002-01-01

    We have determined high-resolution hypocenters for 45,000+ earthquakes that occurred between 1980 and 2000 in the Long Valley caldera area using a double-difference earthquake location algorithm and routinely determined arrival times. The locations reveal numerous discrete fault planes in the southern caldera and adjacent Sierra Nevada block (SNB). Intracaldera faults include a series of east/west-striking right-lateral strike-slip faults beneath the caldera's south moat and a series of more northerly striking strike-slip/normal faults beneath the caldera's resurgent dome. Seismicity in the SNB south of the caldera is confined to a crustal block bounded on the west by an east-dipping oblique normal fault and on the east by the Hilton Creek fault. Two NE-striking left-lateral strike-slip faults are responsible for most seismicity within this block. To understand better the stresses driving seismicity, we performed stress inversions using focal mechanisms with 50 or more first motions. This analysis reveals that the least principal stress direction systematically rotates across the studied region, from NE to SW in the caldera's south moat to WNW-ESE in Round Valley, 25 km to the SE. Because WNW-ESE extension is characteristic of the western boundary of the Basin and Range province, caldera area stresses appear to be locally perturbed. This stress perturbation does not seem to result from magma chamber inflation but may be related to the significant (~20 km) left step in the locus of extension along the Sierra Nevada/Basin and Range province boundary. This implies that regional-scale tectonic processes are driving seismic deformation in the Long Valley caldera.

  1. CRISP. Fault detection, analysis and diagnostics in high-DG distribution systems

    International Nuclear Information System (INIS)

    Fontela, M.; Bacha, S.; Hadsjaid, N.; Andrieu, C.; Raison, B.; Penkov, D.

    2004-04-01

    The fault, in the electrotechnical sense, is defined in the document. Most faults on overhead lines are non-permanent, which obliges the network operator to maintain the existing techniques for clearing such faults as quickly as possible. When a permanent fault occurs, the operator has to detect it and limit the risks as soon as possible. Several approaches are followed: limiting the fault current, clearing the faulted feeder, and locating the fault by test-and-try under possible fault conditions. Fault detection, fault clearing and fault localization are thus important functions of an electric power system (EPS), allowing secure and safe operation. These functions may be improved by a better use of ICT components in the future, conveniently sharing the intelligence needed near the distributed devices with a defined centralized intelligence. This improvement becomes necessary in distribution EPS with a high penetration of distributed resources (DR). Transmission and sub-transmission protection systems are already installed to manage power flow in all directions, so the DR issue is less critical for this part of the power system in terms of fault clearing and diagnosis. Nevertheless, the massive introduction of renewable energy sources (RES) imposes other constraints on the transmission system, namely the bottlenecks caused by large, rapidly installed local production such as wind power plants. In the distribution power system, when facing a permanent fault, two main actions must be achieved: quickly identify the faulted elementary EPS area, and allow the field crew to locate and repair the fault as soon as possible. The introduction of DR in distribution EPS involves some changes in fault location methods or equipment. The different neutral grounding systems in use make it difficult to achieve a general method relevant for any distribution EPS in Europe. Some solutions are studied in the CRISP project in order to improve the

  2. High stresses stored in fault zones: example of the Nojima fault (Japan)

    Science.gov (United States)

    Boullier, Anne-Marie; Robach, Odile; Ildefonse, Benoît; Barou, Fabrice; Mainprice, David; Ohtani, Tomoyuki; Fujimoto, Koichiro

    2018-04-01

    During the last decade pulverized rocks have been described on outcrops along large active faults and attributed to damage related to a propagating seismic rupture front. Questions remain concerning the maximal lateral distance from the fault plane and maximal depth for dynamic damage to be imprinted in rocks. In order to document these questions, a representative core sample of granodiorite located 51.3 m from the Nojima fault (Japan) that was drilled after the Hyogo-ken Nanbu (Kobe) earthquake is studied by using electron backscattered diffraction (EBSD) and high-resolution X-ray Laue microdiffraction. Although located outside of the Nojima damage fault zone and macroscopically undeformed, the sample shows pervasive microfractures and local fragmentation. These features are attributed to the first stage of seismic activity along the Nojima fault characterized by laumontite as the main sealing mineral. EBSD mapping was used in order to characterize the crystallographic orientation and deformation microstructures in the sample, and X-ray microdiffraction was used to measure elastic strain and residual stresses on each point of the mapped quartz grain. Both methods give consistent results on the crystallographic orientation and show small and short wavelength misorientations associated with laumontite-sealed microfractures and alignments of tiny fluid inclusions. Deformation microstructures in quartz are symptomatic of the semi-brittle faulting regime, in which low-temperature brittle plastic deformation and stress-driven dissolution-deposition processes occur conjointly. This deformation occurred at a 3.7-11.1 km depth interval as indicated by the laumontite stability domain. Residual stresses are calculated from the deviatoric elastic strain tensor measured by X-ray Laue microdiffraction, using Hooke's law. The modal value of the von Mises stress distribution is at 100 MPa and the mean at 141 MPa. Such stress values are comparable to the peak strength of a
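
    The residual-stress calculation described above, a von Mises stress obtained from the deviatoric elastic strain tensor via Hooke's law, can be reproduced numerically (the shear modulus and strain values are illustrative, not the paper's measurements):

```python
import numpy as np

MU = 44e9  # shear modulus of quartz, Pa (approximate literature value)

def von_mises_from_deviatoric_strain(eps_dev, mu=MU):
    """Hooke's law on the deviatoric part, sigma_dev = 2*mu*eps_dev, then
    the von Mises stress sqrt(3/2 * sigma_dev : sigma_dev)."""
    sigma_dev = 2.0 * mu * eps_dev
    return float(np.sqrt(1.5 * np.tensordot(sigma_dev, sigma_dev)))

# illustrative trace-free (deviatoric) elastic strain tensor
eps_dev = np.array([[8e-4, 2e-4, 0.0],
                    [2e-4, -3e-4, 0.0],
                    [0.0, 0.0, -5e-4]])
print(f"{von_mises_from_deviatoric_strain(eps_dev) / 1e6:.0f} MPa")  # 111 MPa
```

    A strain of order 10^-3 thus maps to a residual stress of order 100 MPa, consistent with the magnitudes quoted in the abstract.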

  3. High stresses stored in fault zones: example of the Nojima fault (Japan)

    Directory of Open Access Journals (Sweden)

    A.-M. Boullier

    2018-04-01

    Full Text Available During the last decade pulverized rocks have been described on outcrops along large active faults and attributed to damage related to a propagating seismic rupture front. Questions remain concerning the maximal lateral distance from the fault plane and maximal depth for dynamic damage to be imprinted in rocks. In order to document these questions, a representative core sample of granodiorite located 51.3 m from the Nojima fault (Japan) that was drilled after the Hyogo-ken Nanbu (Kobe) earthquake is studied by using electron backscattered diffraction (EBSD) and high-resolution X-ray Laue microdiffraction. Although located outside of the Nojima damage fault zone and macroscopically undeformed, the sample shows pervasive microfractures and local fragmentation. These features are attributed to the first stage of seismic activity along the Nojima fault characterized by laumontite as the main sealing mineral. EBSD mapping was used in order to characterize the crystallographic orientation and deformation microstructures in the sample, and X-ray microdiffraction was used to measure elastic strain and residual stresses on each point of the mapped quartz grain. Both methods give consistent results on the crystallographic orientation and show small and short wavelength misorientations associated with laumontite-sealed microfractures and alignments of tiny fluid inclusions. Deformation microstructures in quartz are symptomatic of the semi-brittle faulting regime, in which low-temperature brittle plastic deformation and stress-driven dissolution-deposition processes occur conjointly. This deformation occurred at a 3.7–11.1 km depth interval as indicated by the laumontite stability domain. Residual stresses are calculated from the deviatoric elastic strain tensor measured by X-ray Laue microdiffraction, using Hooke's law. The modal value of the von Mises stress distribution is at 100 MPa and the mean at 141 MPa. Such stress values are comparable to

  4. Research on vibration signal analysis and extraction method of gear local fault

    Science.gov (United States)

    Yang, X. F.; Wang, D.; Ma, J. F.; Shao, W.

    2018-02-01

    Gears are the main connecting and power-transmitting parts in mechanical equipment. If a fault occurs, it directly affects the running state of the whole machine and may even endanger personal safety, so studying the extraction of gear fault signals and gear fault diagnosis has both theoretical significance and practical value. In this paper, taking the local gear fault as the research object, a vibration model of the faulty gear is established, the vibration mechanism of the local gear fault is derived, and the similarities and differences between the vibration signals of healthy gears and gears with local faults are analyzed. In the MATLAB environment, a wavelet transform algorithm is used to denoise the fault signal, and the Hilbert transform is used to demodulate the fault vibration signal. The results show that the method can denoise strong-noise mechanical vibration signals and extract local fault feature information from the fault vibration signal.
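
    The envelope-demodulation step can be illustrated with an FFT-based Hilbert transform applied to a simulated amplitude-modulated gear signal (a NumPy sketch with illustrative frequencies; the wavelet denoising stage is omitted):

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

fs = 5000.0
t = np.arange(0, 1.0, 1 / fs)
# gear-mesh vibration at 800 Hz, amplitude-modulated once per revolution
# (20 Hz) by a local tooth fault; all numbers are illustrative
x = (1 + 0.8 * np.cos(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 800 * t)
env = envelope(x)
spec = np.abs(np.fft.rfft(env - env.mean()))
f = np.fft.rfftfreq(len(env), 1 / fs)
print(f"fault frequency = {f[np.argmax(spec)]:.0f} Hz")  # 20 Hz
```

    The envelope spectrum recovers the 20 Hz modulation caused by the simulated local fault, which the raw spectrum would hide among the mesh sidebands.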

  5. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
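
    The routing strategy, detecting a defective link in the first network and sending traffic through the second, can be sketched with a breadth-first search (node IDs and topologies are made up):

```python
from collections import deque

def route(network, src, dst, bad_links=frozenset()):
    """Breadth-first route from src to dst that avoids defective links
    (links stored as frozensets so direction does not matter)."""
    q, seen = deque([[src]]), {src}
    while q:
        path = q.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nb in network.get(node, []):
            if nb not in seen and frozenset((node, nb)) not in bad_links:
                seen.add(nb)
                q.append(path + [nb])
    return None

# two independent networks over the same four compute nodes (illustrative)
net1 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # chain
net2 = {0: [2, 3], 1: [3], 2: [0], 3: [0, 1]}   # different wiring
bad = {frozenset((1, 2))}                        # defective link in net1
print(route(net1, 0, 3, bad))  # None: the chain is cut
print(route(net2, 0, 3))       # [0, 3]: reroute over the second network
```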

  6. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    Science.gov (United States)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  7. Subaru FATS (fault tracking system)

    Science.gov (United States)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  8. Soil radon levels across the Amer fault

    International Nuclear Information System (INIS)

    Font, Ll.; Baixeras, C.; Moreno, V.; Bach, J.

    2008-01-01

    Soil radon levels have been measured across the Amer fault, which is located near the volcanic region of La Garrotxa, Spain. Both passive (LR-115, time-integrating) and active (Clipperton II, time-resolved) detectors have been used in a survey in which 27 measurement points were selected in five lines perpendicular to the Amer fault in the village area of Amer. The averaged results show an influence of the distance to the fault on the mean soil radon values. The dynamic results show a very clear seasonal effect on the soil radon levels. The results obtained support the hypothesis that the fault is still active.

  9. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, a nonlinear method based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data is proposed for fault location in complex power systems. Resorting to the scaling factor, the derivative of a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the faulty component. Compared to previous Principal Component Analysis (PCA)-based methods, the new version can handle strongly nonlinear behavior and provides precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the faulty component and evaluating its propagation across the system.
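
    A sketch of the KPCA monitoring statistic with a polynomial kernel follows (only the T2 computation on synthetic data; the paper's per-variable contribution analysis via the kernel derivative is not reproduced):

```python
import numpy as np

def poly_kernel(A, B, d=2, c=1.0):
    return (A @ B.T + c) ** d

def kpca_t2(X_train, x_new, n_comp=2, d=2, c=1.0):
    """T2 statistic of a new sample in polynomial-KPCA space, fitted on
    normal operating data X_train."""
    n = len(X_train)
    K = poly_kernel(X_train, X_train, d, c)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J                 # double-centred Gram
    lam, V = np.linalg.eigh(Kc)
    lam, V = lam[::-1][:n_comp], V[:, ::-1][:, :n_comp]
    alphas = V / np.sqrt(lam)                          # unit-norm feature axes
    k = poly_kernel(x_new[None], X_train, d, c).ravel()
    kc = k - K.mean(axis=0) - k.mean() + K.mean()      # centre the new row
    t = alphas.T @ kc                                  # principal scores
    return float(np.sum(t ** 2 / (lam / n)))           # T2 with cov. eigenvalues

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (100, 4))                         # normal operating data
print(kpca_t2(X, X[0]) < kpca_t2(X, np.full(4, 6.0)))  # True: faults score higher
```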

  10. Data-Reconciliation Based Fault-Tolerant Model Predictive Control for a Biomass Boiler

    Directory of Open Access Journals (Sweden)

    Palash Sarkar

    2017-02-01

    Full Text Available This paper presents a novel, effective method to handle critical sensor faults affecting a control system devised to operate a biomass boiler. The proposed method integrates a data reconciliation algorithm in a model predictive control loop, so as to cancel the effects of faults occurring in the flue-gas oxygen concentration sensor by feeding the controller with the reconciled measurements. The oxygen content in flue gas is indeed a key variable in the control of biomass boilers, owing to its close connections with both combustion efficiency and polluting emissions. The main benefit of including the data reconciliation algorithm in the loop as a fault-tolerant component, with respect to applying standard fault-tolerant methods, is that controller reconfiguration is no longer required, since the original controller operates on the restored, reliable data. The integrated data reconciliation-model predictive control (MPC) strategy has been validated by running simulations on a specific type of biomass boiler, the KPA Unicon BioGrate boiler.
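
    The reconciliation step itself is classically a covariance-weighted projection of the raw measurements onto the balance constraints; a minimal sketch with an illustrative mass balance (generic linear data reconciliation, not the boiler model from the paper):

```python
import numpy as np

def reconcile(y, Sigma, A, b=None):
    """Classic linear data reconciliation: adjust measurements y (with
    covariance Sigma) as little as possible, in the Mahalanobis sense, so
    that the balances A x = b hold:
        x_hat = y - Sigma A^T (A Sigma A^T)^-1 (A y - b)."""
    if b is None:
        b = np.zeros(A.shape[0])
    r = A @ y - b
    return y - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, r)

# toy mass balance flow1 = flow2 + flow3 with noisy sensors (illustrative)
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([10.6, 6.1, 4.0])       # raw readings violate the balance
Sigma = np.diag([0.04, 0.01, 0.01])  # the first sensor is least trusted
x = reconcile(y, Sigma, A)
print(np.round(x, 3), abs(A @ x).max() < 1e-9)
```

    The least-trusted sensor absorbs most of the correction, which is the behaviour that lets the controller keep running on reconciled values when one sensor drifts.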

  11. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults

    OpenAIRE

    Rui Sun; Qi Cheng; Guanyu Wang; Washington Yotto Ochieng

    2017-01-01

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs’ flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in ...

  12. Actuator Location and Voltages Optimization for Shape Control of Smart Beams Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Georgios E. Stavroulakis

    2013-10-01

    Full Text Available This paper presents a numerical study on optimal voltages and optimal placement of piezoelectric actuators for shape control of beam structures. A finite element model, based on Timoshenko beam theory, is developed to characterize the behavior of the structure and the actuators. The model accounts for the electromechanical coupling in the entire beam structure, since the piezoelectric layers are treated as constituent parts of the entire structural system. A hybrid scheme based on the great deluge algorithm and a genetic algorithm is presented. The hybrid algorithm computes the optimal locations and optimal values of the voltages applied to the piezoelectric actuators glued to the structure, minimizing the error between the achieved and the desired shape. Results from numerical simulations of both clamped-free and clamped-clamped beam problems demonstrate the capabilities and efficiency of the developed optimization algorithm.
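
    The voltage-optimisation half of such a scheme can be sketched with a tiny real-coded genetic algorithm on a hypothetical linear actuator model (the influence matrix, bounds and GA settings are all made up; the placement search and the great-deluge hybridisation are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical linear actuator model: deflection w = B @ v for voltages v;
# B, the target voltages and all GA settings are illustrative
B = rng.normal(0.0, 1.0, (8, 3))
w_desired = B @ np.array([40.0, -25.0, 60.0])

def ga_voltages(pop=60, gens=300, lo=-100.0, hi=100.0, sigma=0.5):
    """Tiny real-coded GA: tournament selection, blend crossover, Gaussian
    mutation and elitism, minimising the shape error ||B v - w_desired||."""
    P = rng.uniform(lo, hi, (pop, 3))
    best, best_err = P[0].copy(), np.inf
    for _ in range(gens):
        err = np.linalg.norm(P @ B.T - w_desired, axis=1)
        k = int(np.argmin(err))
        if err[k] < best_err:
            best, best_err = P[k].copy(), float(err[k])
        i, j = rng.integers(0, pop, (2, pop))
        parents = P[np.where(err[i] < err[j], i, j)]   # tournament selection
        mates = parents[rng.permutation(pop)]
        a = rng.uniform(0.0, 1.0, (pop, 1))
        P = a * parents + (1 - a) * mates              # blend crossover
        P += rng.normal(0.0, sigma, (pop, 3))          # Gaussian mutation
        P = np.clip(P, lo, hi)
        P[0] = best                                    # elitism
    return best, best_err

v, e = ga_voltages()
print(f"best shape error: {e:.2f}")
```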

  13. Recent tectonic stress field, active faults and geothermal fields (hot-water type) in China

    Science.gov (United States)

    Wan, Tianfeng

    1984-10-01

    It is quite probable that geothermal fields of the hot-water type in China do not develop in the absence of recently active faults. Such active faults are all controlled by tectonic stress fields. Using data from earthquake fault-plane solutions, active faults, and surface thermal manifestations, a map showing the recent tectonic stress field and the locations of active faults and geothermal fields in China is presented. Data collected from 89 investigated prospects with geothermal manifestations indicate that the locations of geothermal fields are controlled by active faults and the recent tectonic stress field. About 68% of the prospects are controlled by tensional or tensional-shear faults. The angle between these faults and the direction of maximum compressive stress is less than 45°, and both tend to be parallel. About 15% of the prospects are controlled by conjugate faults. Another 14% are controlled by compressive-shear faults, where the angle between these faults and the direction of maximum compressive stress is greater than 45°.

  14. How is tectonic slip partitioned from the Alpine Fault to the Marlborough Fault System? : results from the Hope Fault

    International Nuclear Information System (INIS)

    Langridge, R.M.

    2004-01-01

    This report contains data from research undertaken by the author on the Hope Fault from 2000-2004. This report provides an opportunity to include data that was additional to or newer than work that was published in other places. New results from studies along the Hurunui section of the Hope Fault, additional to that published in Langridge and Berryman (2005) are presented here. This data includes tabulated data of fault location and description measurements, a graphical representation of this data in diagrammatic form along the length of the fault and new radiocarbon dates from the current EQC funded project. The new data show that the Hurunui section of the Hope Fault has the capability to yield further data on fault slip rate, earthquake displacements, and paleoseismicity. New results from studies at the Greenburn Stream paleoseismic site additional to that published in Langridge et al. (2003) are presented here. This includes a new log of the deepened west wall of Trench 2, a log of the west wall of Trench 1, and new radiocarbon dates from the second phase of dating undertaken at the Greenburn Stream site. The new data show that this site has the capability to yield further data on the paleoseismicity of the Conway segment of the Hope Fault. Through a detailed analysis of all three logged walls at the site and the new radiocarbon dates, it may, in combination with data from the nearby Clarence Reserve site of Pope (1994), be possible to develop a good record of the last 5 events on the Conway segment. (author). 12 refs., 12 figs

  15. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    Science.gov (United States)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the methods to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moments features and a modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, and then to make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing the fault feature information. Finally, the different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management; it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.

  16. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the location of microseismic events from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for typical 2D and 3D hydraulic fracturing scenarios. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
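The grid search baseline the record compares against can be sketched directly: pick the grid node whose predicted travel times best match the observed arrivals. The receiver geometry, velocity and homogeneous-medium assumption below are illustrative, not the paper's model.

```python
import math

# Grid-search sketch of microseismic event location: choose the grid node
# whose predicted P-wave travel times best fit the observed arrivals, under
# an assumed homogeneous velocity model. Geometry and velocity are made up.
VEL = 3000.0                                   # m/s, assumed P velocity
RECEIVERS = [(0, 0), (1000, 0), (0, 1000), (1000, 1000), (500, -200)]

def travel_times(src):
    return [math.hypot(src[0] - rx, src[1] - ry) / VEL for rx, ry in RECEIVERS]

def locate(observed, step=50):
    best, best_misfit = None, float("inf")
    for x in range(0, 1001, step):
        for y in range(0, 1001, step):
            pred = travel_times((x, y))
            misfit = sum((o - p) ** 2 for o, p in zip(observed, pred))
            if misfit < best_misfit:
                best, best_misfit = (x, y), misfit
    return best

true_source = (650, 300)                       # lies on the search grid
obs = travel_times(true_source)
print(locate(obs))   # -> (650, 300)
```

VFSA or PSO replace the exhaustive double loop with a guided sampling of the same misfit function, which is where the reported speed-ups come from.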

  17. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    Science.gov (United States)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault-finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, algorithms are exercised irregularly and insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare predicted peak ground acceleration (PGA) from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.

  18. Application of genetic neural network in steam generator fault diagnosing

    International Nuclear Information System (INIS)

    Lin Xiaogong; Jiang Xingwei; Liu Tao; Shi Xiaocheng

    2005-01-01

    In this paper, a new hybrid algorithm combining a neural network with a genetic algorithm is adopted to address the slow convergence rate and the tendency to fall into local minima in the training of traditional BP neural networks, and it is applied to the fault diagnosis of the steam generator. The results show that this algorithm can effectively solve the convergence problem in network training. (author)

  19. Single terminal fault location by natural frequencies of travelling wave considering multiple harmonics

    Institute of Scientific and Technical Information of China (English)

    李金泽; 李宝才; 翟学明

    2016-01-01

    In single-terminal fault location methods for transmission lines based on traveling wave natural frequencies, the accuracy of the extracted primary natural frequency is the key to pinpointing the fault location. Wavelet transform and the MUSIC method are currently the most common extraction tools, but wavelet analysis is strongly affected by the choice of wavelet basis and the parameter selection in MUSIC greatly impacts the spectral estimation, so neither solves this problem well. A new single-ended location method based on the natural frequencies of the faulted line is described. To extract the primary natural frequency, the traveling wave signal is first decomposed by EEMD and orthogonalized with the ICA method to suppress the cross-term problem inherent in the WVD; each component is then WVD-transformed and the results are superimposed to obtain an orthogonal natural frequency spectrum. The global primary natural frequency is then obtained by jointly considering the fundamental and multiple harmonics. Simulation experiments in EMTDC confirm the feasibility and accuracy of the proposed algorithm under different fault types, fault distances, transition resistances and noise conditions.
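The final conversion step in natural-frequency methods is simple arithmetic once the primary frequency is known. The sketch below is an illustration, not the record's EEMD/ICA/WVD pipeline: it estimates the dominant frequency of a synthetic transient by scanning DFT magnitudes, then applies d = v / (2·f1), where the factor 2 assumes same-polarity reflections at the fault and the terminal (real boundary conditions can change this factor), and the wave speed is an assumed value.

```python
import math, cmath

# Sketch: estimate the primary natural frequency f1 of a fault transient by
# scanning DFT magnitudes over candidate frequencies, then convert to
# distance with d = v / (2 * f1). FS, V and the decaying test signal are
# illustrative assumptions.
FS = 10000.0            # sampling rate, Hz
V = 2.94e8              # assumed traveling-wave speed, m/s

def dominant_frequency(signal, f_lo, f_hi):
    best_f, best_mag = f_lo, 0.0
    for f in range(f_lo, f_hi + 1):            # 1 Hz candidate spacing
        z = sum(s * cmath.exp(-2j * math.pi * f * k / FS)
                for k, s in enumerate(signal))
        if abs(z) > best_mag:
            best_f, best_mag = f, abs(z)
    return best_f

# Synthetic decaying oscillation standing in for the fault transient
sig = [math.exp(-3 * k / FS) * math.sin(2 * math.pi * 487 * k / FS + 0.1)
       for k in range(10000)]
f1 = dominant_frequency(sig, 450, 550)
d = V / (2 * f1)
print(f1, d)
```

The record's contribution is making the f1 estimate robust (cross-term suppression, harmonic weighting); the distance formula itself stays this simple.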

  20. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xiao Ran; Zhang, You Yun; Zhu, Yong Sheng [Xi' an Jiaotong Univ., Xi' an (China)

    2012-09-15

    Intelligent fault diagnosis benefits from efficient feature selection. Neighborhood rough sets are effective in feature selection. However, determining the neighborhood value accurately remains a challenge. The wrapper feature selection algorithm is designed by combining the kernel method and neighborhood rough sets to self-adaptively select sensitive features. The combination effectively solves the shortcomings in selecting the neighborhood value in the previous application process. The statistical features of time and frequency domains are used to describe the characteristic of the rolling bearing to make the intelligent fault diagnosis approach work. Three classification algorithms, namely, classification and regression tree (CART), commercial version 4.5 (C4.5), and radial basis function support vector machines (RBFSVM), are used to test UCI datasets and 10 fault datasets of rolling bearing. The results indicate that the diagnostic approach presented could effectively select the sensitive fault features and simultaneously identify the type and degree of the fault.

  1. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features

    International Nuclear Information System (INIS)

    Zhu, Xiao Ran; Zhang, You Yun; Zhu, Yong Sheng

    2012-01-01

    Intelligent fault diagnosis benefits from efficient feature selection. Neighborhood rough sets are effective in feature selection. However, determining the neighborhood value accurately remains a challenge. The wrapper feature selection algorithm is designed by combining the kernel method and neighborhood rough sets to self-adaptively select sensitive features. The combination effectively solves the shortcomings in selecting the neighborhood value in the previous application process. The statistical features of time and frequency domains are used to describe the characteristic of the rolling bearing to make the intelligent fault diagnosis approach work. Three classification algorithms, namely, classification and regression tree (CART), commercial version 4.5 (C4.5), and radial basis function support vector machines (RBFSVM), are used to test UCI datasets and 10 fault datasets of rolling bearing. The results indicate that the diagnostic approach presented could effectively select the sensitive fault features and simultaneously identify the type and degree of the fault
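The neighborhood-rough-set selection described in the two records above can be illustrated with a tiny greedy wrapper. The dataset, the neighborhood radius DELTA and the stopping rule are all invented for illustration: feature 0 separates the two classes, feature 1 is noise, and a sample counts toward the dependency score when every neighbor within DELTA (under the candidate feature subset) shares its label.

```python
# Greedy sketch of neighborhood-rough-set feature selection. DELTA and the
# toy dataset are assumptions: feature 0 is informative, feature 1 is noise.
DELTA = 0.15

DATA = [  # (features, label)
    ([0.05, 0.9], 0), ([0.10, 0.2], 0), ([0.15, 0.6], 0), ([0.20, 0.4], 0),
    ([0.80, 0.5], 1), ([0.85, 0.1], 1), ([0.90, 0.7], 1), ([0.95, 0.3], 1),
]

def dependency(feature_idx):
    """Fraction of samples whose DELTA-neighborhood is label-pure."""
    pure = 0
    for xi, yi in DATA:
        ok = True
        for xj, yj in DATA:
            dist = max(abs(xi[f] - xj[f]) for f in feature_idx)
            if dist <= DELTA and yj != yi:
                ok = False
                break
        pure += ok
    return pure / len(DATA)

def greedy_select(n_features):
    chosen = []
    while True:
        scored = [(dependency(chosen + [f]), f)
                  for f in range(n_features) if f not in chosen]
        best_gain, best_f = max(scored)
        if chosen and best_gain <= dependency(chosen):
            break                    # no improvement: stop
        chosen.append(best_f)
        if best_gain == 1.0:
            break                    # decision fully determined
    return chosen

print(greedy_select(2))   # -> [0]
```

The kernel trick in the records replaces the hard DELTA cut-off with a smooth similarity, which is what removes the need to pick the neighborhood value by hand.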

  2. Fethiye-Burdur Fault Zone (SW Turkey): a myth?

    Science.gov (United States)

    Kaymakci, Nuretdin; Langereis, Cornelis; Özkaptan, Murat; Özacar, Arda A.; Gülyüz, Erhan; Uzel, Bora; Sözbilir, Hasan

    2017-04-01

    Fethiye Burdur Fault Zone (FBFZ) was first proposed by Dumont et al. (1979) as a sinistral strike-slip fault zone forming the NE continuation of the Pliny-Strabo trench into the Anatolian Block. The fault zone is supposed to accommodate at least 100 km of sinistral displacement between the Menderes Massif and the Beydaǧları platform during the exhumation of the Menderes Massif, mainly during the late Miocene. Based on GPS velocities, Barka and Reilinger (1997) proposed that the fault zone is still active and accommodates sinistral displacement. In order to test its presence and unravel its kinematics, we have conducted a rigorous paleomagnetic study comprising more than 3000 paleomagnetic samples collected from 88 locations and 11700 fault slip data collected from 198 locations distributed evenly over SW Anatolia, spanning from the Middle Miocene to the Late Pliocene. The obtained rotation senses and amounts indicate slight (around 20°) counter-clockwise rotations distributed uniformly over almost the whole of SW Anatolia, with no change in the rotation senses and amounts on either side of the FBFZ, implying no differential rotation within the zone. Additionally, the slickenside pitches and constructed paleostress configurations, along the so-called FBFZ and also within a 300 km diameter of the proposed fault zone, indicate that almost all the faults oriented parallel to subparallel to the zone are normal in character. The fault slip measurements are also consistent with earthquake focal mechanisms suggesting active extension in the region. We have not encountered any significant strike-slip motion in the region to support the presence and transcurrent nature of the FBFZ. On the contrary, the region is dominated by extensional deformation, and strike-slip components are observed only on the NW-SE striking faults, which are transfer faults that accommodate extension and normal motion. Therefore, we claim that the sinistral Fethiye Burdur Fault (Zone) is a myth and there is no tangible

  3. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    Science.gov (United States)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
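The 3 × 3 scan and sub-grid interpolation described above can be sketched compactly. The gradient grid below is synthetic; the parabola-vertex formula follows from fitting a second-order polynomial through the trio of points at offsets -1, 0, +1, as in the record's description of the Blakely and Simpson scheme.

```python
# Sketch of the Blakely-Simpson style scan: inside each 3x3 window, test the
# four directions (row, column, two diagonals); where the centre exceeds both
# neighbours, fit a parabola through the trio to get the sub-grid offset and
# magnitude of the maximum. The gradient grid is synthetic.
def parabola_peak(gm, g0, gp):
    """Vertex offset (grid units, in -1..1) and value for points at -1,0,+1."""
    denom = gm - 2.0 * g0 + gp        # < 0 whenever g0 exceeds both ends
    off = 0.5 * (gm - gp) / denom
    val = g0 - 0.25 * (gm - gp) * off
    return off, val

def find_maxima(grid):
    hits = []
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid[0]) - 1):
            g0 = grid[i][j]
            for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
                gm, gp = grid[i - di][j - dj], grid[i + di][j + dj]
                if g0 > gm and g0 > gp:
                    off, val = parabola_peak(gm, g0, gp)
                    hits.append((i + off * di, j + off * dj, val))
    return hits

# Ridge of high horizontal gradient along column 2
GRID = [[0.1, 0.5, 1.0, 0.5, 0.1],
        [0.2, 0.6, 1.1, 0.6, 0.2],
        [0.1, 0.5, 1.0, 0.5, 0.1]]
for x, y, v in find_maxima(GRID):
    print(round(x, 2), round(y, 2), round(v, 3))
```

The improvement proposed in the record adds a further acceptance condition in this inner loop so that maxima missed by the strict greater-than test on complicated multi-source grids are still reported.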

  4. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. 
Risk reduction is addressed by working with other organizations such as S

  5. Calculation of critical fault recovery time for nonlinear systems based on region of attraction analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Blanke, Mogens

    2014-01-01

    of a system. It must be guaranteed that the trajectory of a system subject to fault remains in the region of attraction (ROA) of the post-fault system during this time. This paper proposes a new algorithm to compute the critical fault recovery time for nonlinear systems with polynomial vector fields using sum

  6. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    Science.gov (United States)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging; only capable of resolving features on a decametre scale horizontally and metres scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow for smaller faults to be lost in a resolution gap which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay where PFS were first identified and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high resolution 1m bathymetric imaging in combination with high resolution seismic imaging. Through use of these datasets surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12m and the largest at 612m in length. The displacements over these faults established from both bathymetric and seismic imaging ranges from 30cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector where it exists approximately 1km below the seafloor. These have typical PFS attributes with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicates that the smaller faults exist within typical PFS structure but are

  7. A Self-Learning Sensor Fault Detection Framework for Industry Monitoring IoT

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2013-01-01

    Full Text Available Many applications based on Internet of Things (IoT) technology have recently been developed in the industry monitoring area. Thousands of sensors of different types work together in an industry monitoring system. Sensors at different locations generate streaming data, which can be analyzed in the data center. In this paper, we propose a framework for online sensor fault detection. We motivate our technique in the context of data value fault detection and event detection. We use Statistics Sliding Windows (SSW) to contain the recent sensor data and fit each window with a Gaussian distribution. The regression result can be used to detect data value faults. Devices on a production line may work under different workloads, and the associated sensors will have different statuses. We divide the sensors into several status groups according to the different parts of the production flow chart. In this way, the status of a sensor is associated with the others in the same group. We fit the values in the Status Transform Window (STW) to get the slope and generate a group trend vector. By comparing the current trend vector with historical ones, we can detect a rational or irrational event. In order to determine the parameters for each status group, we build a self-learning worker thread in our framework which can edit the corresponding parameters according to user feedback. A group-based fault detection (GbFD) algorithm is proposed in this paper. We test the framework with a simulation dataset extracted from real data of an oil field. The test results show that GbFD detects 95% of sensor faults successfully.
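The data-value check described above reduces to a sliding-window Gaussian test. The sketch below is an illustration of that idea only (not the grouping or self-learning parts): window size, threshold and the sensor stream are assumptions.

```python
from collections import deque

# Sliding-window sketch of the data-value fault check: model recent readings
# as Gaussian and flag a reading whose deviation from the window mean exceeds
# k standard deviations. Window size, k and the stream are assumptions.
class WindowDetector:
    def __init__(self, size=20, k=3.0):
        self.window = deque(maxlen=size)
        self.k = k

    def update(self, x):
        faulty = False
        if len(self.window) == self.window.maxlen:
            n = len(self.window)
            mean = sum(self.window) / n
            var = sum((v - mean) ** 2 for v in self.window) / n
            std = var ** 0.5
            faulty = std > 0 and abs(x - mean) > self.k * std
        if not faulty:
            self.window.append(x)   # keep the model free of faulty samples
        return faulty

det = WindowDetector(size=10, k=3.0)
stream = [20.0, 20.2, 19.9, 20.1, 20.0, 19.8, 20.2, 20.1, 19.9, 20.0,
          20.1, 35.0, 20.0]       # 35.0 is an injected spike
flags = [det.update(x) for x in stream]
print(flags.index(True))   # -> 11 (the spike)
```

Keeping flagged samples out of the window is what lets the detector recover immediately on the next normal reading.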

  8. The relationship of near-surface active faulting to megathrust splay fault geometry in Prince William Sound, Alaska

    Science.gov (United States)

    Finn, S.; Liberty, L. M.; Haeussler, P. J.; Northrup, C.; Pratt, T. L.

    2010-12-01

    We interpret regionally extensive, active faults beneath Prince William Sound (PWS), Alaska, to be structurally linked to deeper megathrust splay faults, such as the one that ruptured in the 1964 M9.2 earthquake. Western PWS in particular is unique; the locations of active faulting offer insights into the transition at the southern terminus of the previously subducted Yakutat slab to Pacific plate subduction. Newly acquired high-resolution marine seismic data show three seismic facies related to Holocene and older Quaternary to Tertiary strata. These sediments are cut by numerous high-angle normal faults in the hanging wall of the megathrust splay. Crustal-scale seismic reflection profiles show splay faults emerging from 20 km depth between the Yakutat block and North American crust and surfacing as the Hanning Bay and Patton Bay faults. A distinct boundary beneath Hinchinbrook Entrance coincides with a systematic fault trend change from N30E in southwestern PWS to N70E in northeastern PWS. The fault trend change underneath Hinchinbrook Entrance may occur gradually or abruptly, and there is evidence for similar deformation near the Montague Strait Entrance. Landward of surface expressions of the splay fault, we observe subsidence, faulting, and landslides that record deformation associated with the 1964 and older megathrust earthquakes. Surface exposures of Tertiary rocks throughout PWS, along with new apatite-helium dates, suggest long-term and regional uplift with localized, fault-controlled subsidence.

  9. FAULT-TOLERANT DESIGN FOR ADVANCED DIVERSE PROTECTION SYSTEM

    Directory of Open Access Journals (Sweden)

    YANG GYUN OH

    2013-11-01

    Full Text Available For the improvement of the APR1400 Diverse Protection System (DPS) design, the Advanced DPS (ADPS) has recently been developed to enhance the fault tolerance capability of the system. The major fault masking features of the ADPS compared with the APR1400 DPS are the changes to the channel configuration and reactor trip actuation equipment. To minimize fault occurrences within the ADPS, and to mitigate the consequences of common-cause failures (CCF) within the safety I&C systems, several fault avoidance design features have been applied in the ADPS. The fault avoidance design features include changes to the system software classification, communication methods, equipment platform, MMI equipment, etc. In addition, fault detection, location, containment, and recovery processes have been incorporated in the ADPS design. Therefore, it is expected that the ADPS can provide an enhanced fault tolerance capability against possible faults within the system and its input/output equipment, and the CCF of safety systems.

  10. Radon anomalies along faults in North of Jordan

    International Nuclear Information System (INIS)

    Al-Tamimi, M.H.; Abumurad, K.M.

    2001-01-01

    Radon emanation was sampled in five locations in a limestone quarry area using CR-39 SSNTDs. Radon levels in the soil air at four different well-known traceable fault planes were measured along a traverse line perpendicular to each of these faults. Radon levels at the faults were higher by a factor of 3-10 than away from the faults; however, some sites have broader shoulders than others. The method was then applied along a fifth, inferred fault zone. The results show an anomalous radon level at the sampled station near the fault zone, three times higher than background. This study draws its importance from the fact that in Jordan many cities and villages have been established over intensively faulted land. Our study also has considerable implications for future radon mapping. Moreover, radon gas proves to be a good tool for the detection of fault zones.

  11. Ontology-Based Method for Fault Diagnosis of Loaders.

    Science.gov (United States)

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding the complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) an ontology-based fault diagnosis model is proposed to achieve the integration, sharing and reuse of fault diagnosis knowledge for loaders; (2) combined with the ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case retrieval, case matching and case updating); and (3) to cover the shortcomings of the CBR method when relevant cases are lacking, ontology-based RBR (rule-based reasoning) is put forward by building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through the analysis of a case study.
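The case-retrieval step of the CBR component can be sketched as a nearest-case lookup. The case base below (symptoms and fault cause/location pairs) is entirely invented for illustration; a real system would score similarity over ontology concepts rather than flat flags.

```python
# Case-retrieval sketch for the CBR step: score stored fault cases by the
# fraction of matching symptom flags and return the best match's diagnosis.
# All case-base entries are hypothetical.
CASE_BASE = [
    ({"no_start": 1, "black_smoke": 0, "low_pressure": 1},
     ("clogged fuel filter", "fuel line")),
    ({"no_start": 0, "black_smoke": 1, "low_pressure": 0},
     ("dirty air intake", "air filter housing")),
    ({"no_start": 1, "black_smoke": 1, "low_pressure": 1},
     ("worn injection pump", "injection system")),
]

def retrieve(query):
    def similarity(case_symptoms):
        return sum(case_symptoms[k] == query[k] for k in query) / len(query)
    best = max(CASE_BASE, key=lambda case: similarity(case[0]))
    return best[1]

cause, location = retrieve({"no_start": 1, "black_smoke": 0, "low_pressure": 1})
print(cause, "@", location)   # -> clogged fuel filter @ fuel line
```

When no stored case scores well, the record's RBR fallback fires SWRL rules instead of returning a weak match.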

  12. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest.

    Science.gov (United States)

    Ma, Suliang; Chen, Mingxuan; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-04-16

    Mechanical faults of high-voltage circuit breakers (HVCBs) always happen over long-term operation, so extracting the fault features and identifying the fault type have become a key issue for ensuring the security and reliability of power supply. Based on wavelet packet decomposition technology and random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description of Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variable and optimize the feature space. Finally, the approach was verified based on actual HVCB vibration signals by considering six typical fault classes. The comparative experiment results show that the classification accuracy of the proposed method with the origin feature space reached 93.33% and reached up to 95.56% with optimized input feature vector of classifier. This indicates that feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods.
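The wavelet packet energy-rate feature described above can be illustrated with a depth-2 Haar packet decomposition. This is a sketch under assumptions: orthonormal Haar filters stand in for whatever wavelet the paper uses, and the "vibration" signal (a slow sine plus a fast alternating component) is synthetic rather than HVCB data.

```python
import math

# Sketch of the wavelet packet time-frequency energy rate (WTFER) feature:
# orthonormal Haar packet decomposition to depth 2, reporting each of the
# four leaf nodes' share of the total signal energy.
def haar_split(x):
    """One level of orthonormal Haar analysis: (approximation, detail)."""
    s = 1 / math.sqrt(2)
    lo = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    hi = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return lo, hi

def wtfer(x):
    lo, hi = haar_split(x)
    leaves = [*haar_split(lo), *haar_split(hi)]    # depth-2 packet leaves
    total = sum(v * v for v in x)
    return [sum(v * v for v in leaf) / total for leaf in leaves]

# Synthetic signal: slow sine (dominant) plus a fast alternating component
sig = [math.sin(2 * math.pi * k / 32) + 0.5 * (-1) ** k for k in range(64)]
rates = wtfer(sig)
print([round(r, 3) for r in rates])
```

Because the Haar filters are orthonormal, the four rates sum to one, so the vector is a normalized energy distribution over frequency bands, which is exactly what makes it a stable classifier input.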

  13. Cause and effect analysis by fuzzy relational equations and a genetic algorithm

    International Nuclear Information System (INIS)

    Rotshtein, Alexander P.; Posner, Morton; Rakytyanska, Hanna B.

    2006-01-01

    This paper proposes using a genetic algorithm as a tool to solve the fault diagnosis problem. The fault diagnosis problem is based on a cause and effect analysis which is formally described by fuzzy relations. Fuzzy relations are formed on the basis of expert assessments. Application of expert fuzzy relations to restore and identify the causes through the observed effects requires the solution to a system of fuzzy relational equations. In this study this search for a solution amounts to solving a corresponding optimization problem. An optimization algorithm is based on the application of genetic operations of crossover, mutation and selection. The genetic algorithm suggested here represents an application in expert systems of fault diagnosis and quality control
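The inversion task described above can be sketched with a tiny genetic algorithm over a max-min composition. The relation matrix, the true cause vector and the GA settings below are invented for illustration; the paper's expert-derived relations and operators are not reproduced here.

```python
import random

# Tiny GA sketch for inverting a fuzzy relational equation b = x o R
# (max-min composition): recover cause activations x from observed effect
# activations b. R and the true cause vector are made-up numbers.
random.seed(7)

R = [[0.9, 0.2, 0.4],     # cause 1 -> effects
     [0.1, 0.8, 0.3],     # cause 2
     [0.5, 0.4, 0.9]]     # cause 3

def compose(x):
    return [max(min(xi, R[i][j]) for i, xi in enumerate(x))
            for j in range(len(R[0]))]

def fitness(x, b):
    return sum((p - o) ** 2 for p, o in zip(compose(x), b))

def solve(b, pop_size=40, gens=80):
    pop = [[random.random() for _ in R] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda x: fitness(x, b))
        elite = pop[:pop_size // 2]
        pop = elite[:]
        while len(pop) < pop_size:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(R))            # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = random.randrange(len(child))             # clamped mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            pop.append(child)
    return min(pop, key=lambda x: fitness(x, b))

b_obs = compose([0.7, 0.2, 0.0])     # effects of a known cause vector
x_hat = solve(b_obs)
print([round(v, 2) for v in x_hat], round(fitness(x_hat, b_obs), 4))
```

Note that max-min equations generally have many solutions, so the GA returns one admissible cause vector rather than a unique diagnosis; ranking candidate solutions is part of what the expert system layer adds.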

  14. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    Science.gov (United States)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

    Authigenic illite can form synkinematically during slip events along brittle faults. In addition, it can also crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, is indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest grain-size fractions in the middle Jurassic. At site one, high-resolution thin section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets, whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro-domains were physically separated by

  15. Architecture and Fault Identification of Wide-area Protection System

    Directory of Open Access Journals (Sweden)

    Yuxue Wang

    2012-09-01

    Full Text Available Wide-area protection system (WAPS) is widely studied for the purpose of improving the performance of conventional backup protection. In this paper, the system architecture of WAPS is proposed and its key technologies are discussed in view of engineering projects. A mixed centralized-distributed structure, which is more suitable for WAPS in a limited power grid region, is obtained based on the advantages of the centralized and distributed structures. Furthermore, a regional distance protection algorithm is taken as an example to illustrate the functions of the constituent units. Faulted components can be detected based on multi-source information fusion in the algorithm. The algorithm not only improves the selectivity, rapidity, and reliability of relaying protection but also has high fault-tolerant capability. A simulation of a 220 kV grid system in eastern Hubei province shows the effectiveness of the wide-area protection system presented in this paper.
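
    The multi-source fusion step can be illustrated with a deliberately simplified majority-vote sketch (the function name, report format, and voting rule are our illustrative assumptions, not the algorithm from the paper):

```python
from collections import Counter

def identify_faulted_component(reports):
    """Multi-source information fusion (majority-vote sketch): each relay/IED
    reports the component it suspects (or None if it saw nothing); the
    component implicated by the most independent sources is declared faulted.
    The returned confidence is the fraction of sources that agree."""
    counts = Counter(r for r in reports if r is not None)
    component, votes = counts.most_common(1)[0]
    confidence = votes / len(reports)
    return component, confidence

# demo: three of five relays implicate line L12; one report is missing
component, confidence = identify_faulted_component(
    ["L12", "L12", None, "L13", "L12"])
```

    Because the decision rests on agreement across sources, a single corrupted or missing report does not change the outcome, which is the fault-tolerance property the abstract highlights.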

  16. Apparatus including a plurality of spaced transformers for locating short circuits in cables

    Science.gov (United States)

    Cason, R. L.; Mcstay, J. J. (Inventor)

    1978-01-01

    A cable fault locator is described for sensing faults such as short circuits in power cables. The apparatus includes a plurality of current transformers strategically located along a cable. Trigger circuits connected to each of the current transformers place a resistor in series with a resistive element in response to an abnormally high current flowing through that portion of the cable. By measuring the voltage drop across the resistive element, the location of the fault can be determined.
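
    The locating principle lends itself to a one-line calculation: if each triggered transformer inserts a known resistance, the measured voltage drop counts how many transformers lie between the measuring end and the fault. The sketch below assumes uniform transformer spacing and a known test current; all names and parameters are ours, not the patent's.

```python
def locate_fault(v_drop, i_test, r_unit, spacing_m):
    """Estimate the distance to a short circuit from the voltage drop across
    the resistive element. Each current transformer between the measuring end
    and the fault has triggered one series resistor of r_unit ohms, so the
    drop at test current i_test counts the triggered transformers
    (simplified model of the patented scheme)."""
    n_triggered = round(v_drop / (i_test * r_unit))
    return n_triggered * spacing_m

# demo: 6 V drop at 2 A with 1.5-ohm resistors -> 2 transformers -> 200 m
d = locate_fault(v_drop=6.0, i_test=2.0, r_unit=1.5, spacing_m=100.0)
```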

  17. Satellite Fault Diagnosis Using Support Vector Machines Based on a Hybrid Voting Mechanism

    Science.gov (United States)

    Yang, Shuqiang; Zhu, Xiaoqian; Jin, Songchang; Wang, Xiang

    2014-01-01

    Satellite fault diagnosis plays an important role in enhancing the safety, reliability, and availability of the satellite system. However, the enormous number of parameters and the presence of multiple faults pose a challenge to satellite fault diagnosis. The interactions between parameters and misclassifications from multiple faults increase the false alarm rate and the false negative rate. On the other hand, for each satellite fault there is not enough fault data for training, which degrades the performance of most classification algorithms. In this paper, we propose an improved SVM based on a hybrid voting mechanism (HVM-SVM) to deal with the problems of enormous parameters, multiple faults, and small samples. Many experimental results show that the accuracy of fault diagnosis using HVM-SVM is improved. PMID:25215324

  18. Satellite Fault Diagnosis Using Support Vector Machines Based on a Hybrid Voting Mechanism

    Directory of Open Access Journals (Sweden)

    Hong Yin

    2014-01-01

    Full Text Available Satellite fault diagnosis plays an important role in enhancing the safety, reliability, and availability of the satellite system. However, the enormous number of parameters and the presence of multiple faults pose a challenge to satellite fault diagnosis. The interactions between parameters and misclassifications from multiple faults increase the false alarm rate and the false negative rate. On the other hand, for each satellite fault there is not enough fault data for training, which degrades the performance of most classification algorithms. In this paper, we propose an improved SVM based on a hybrid voting mechanism (HVM-SVM) to deal with the problems of enormous parameters, multiple faults, and small samples. Many experimental results show that the accuracy of fault diagnosis using HVM-SVM is improved.
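
    As a rough illustration of a hybrid voting mechanism (not the exact HVM-SVM scheme, whose details the abstract does not give), one can combine a majority vote over base classifiers with a confidence-based fallback:

```python
from collections import Counter

def hybrid_vote(predictions, confidences, threshold=0.6):
    """Hybrid voting sketch: accept the majority label if its average
    confidence clears a threshold, otherwise fall back to the single most
    confident base classifier. 'predictions' and 'confidences' are parallel
    lists, one entry per base SVM; the rule and threshold are illustrative."""
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    conf = [c for p, c in zip(predictions, confidences) if p == label]
    if sum(conf) / len(conf) >= threshold:
        return label
    # fall back: trust the single most confident vote
    best = max(range(len(predictions)), key=lambda i: confidences[i])
    return predictions[best]
```

    With confident agreement the majority wins; when the majority is unsure, the mechanism defers to the strongest individual classifier, which is one way small-sample classes can still be recognized.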

  19. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    Science.gov (United States)

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique in regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute errors below 0.6 m.
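
    The two smoothing stages described above can be sketched in a few lines; the scalar Kalman filter below uses a static-range process model, and the variances and weights are illustrative assumptions rather than the paper's tuning:

```python
class ScalarKalman:
    """1-D Kalman filter for smoothing noisy TDOA range measurements
    (process variance q and measurement variance r are illustrative)."""
    def __init__(self, x0, p0=1.0, q=1e-3, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        self.p += self.q                 # predict (static range model)
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= (1 - k)
        return self.x

def moving_weighted_average(points, weights):
    """Weighted average of the last len(weights) position fixes."""
    recent = points[-len(weights):]
    return sum(w * p for w, p in zip(weights, recent)) / sum(weights)

# demo: smooth three noisy range fixes, then weight recent position fixes
kf = ScalarKalman(10.0)
smoothed = [kf.update(z) for z in (10.4, 9.8, 10.2)]
avg = moving_weighted_average([1.0, 2.0, 3.0], [1, 2, 3])
```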

  20. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the earth crust occur across a large range of scales, from the microscale over mesoscopic to large basin-scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but rather combines with continuous near-field deformation in the wall rock, a phenomenon generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. On an outcrop-scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that for this normal fault a listric shape can be excluded, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large scale normal fault

  1. New active faults on Eurasian-Arabian collision zone: Tectonic activity of Özyurt and Gülsünler faults (Eastern Anatolian Plateau, Van-Turkey)

    Energy Technology Data Exchange (ETDEWEB)

    Dicle, S.; Üner, S.

    2017-11-01

    The Eastern Anatolian Plateau emerges from the continental collision between the Arabian and Eurasian plates; intense seismicity related to the ongoing convergence characterizes the southern part of the plateau. Active deformation in this zone is accommodated mainly by thrust and strike-slip faults. The Özyurt thrust fault and the Gülsünler sinistral strike-slip fault are newly identified fault zones located to the north of the Van city centre. Different types of faults, such as thrust, normal and strike-slip faults, are observed on the quarry wall excavated in Quaternary lacustrine deposits at the intersection zone of these two faults. Kinematic analysis of fault-slip data has revealed coeval activity of transtensional and compressional structures in the Lake Van Basin. Seismological and geomorphological characteristics of these faults demonstrate their potential to produce devastating earthquakes in the area.

  2. New active faults on Eurasian-Arabian collision zone: Tectonic activity of Özyurt and Gülsünler faults (Eastern Anatolian Plateau, Van-Turkey)

    International Nuclear Information System (INIS)

    Dicle, S.; Üner, S.

    2017-01-01

    The Eastern Anatolian Plateau emerges from the continental collision between the Arabian and Eurasian plates; intense seismicity related to the ongoing convergence characterizes the southern part of the plateau. Active deformation in this zone is accommodated mainly by thrust and strike-slip faults. The Özyurt thrust fault and the Gülsünler sinistral strike-slip fault are newly identified fault zones located to the north of the Van city centre. Different types of faults, such as thrust, normal and strike-slip faults, are observed on the quarry wall excavated in Quaternary lacustrine deposits at the intersection zone of these two faults. Kinematic analysis of fault-slip data has revealed coeval activity of transtensional and compressional structures in the Lake Van Basin. Seismological and geomorphological characteristics of these faults demonstrate their potential to produce devastating earthquakes in the area.

  3. Fault Detection of Aircraft Cable via Spread Spectrum Time Domain Reflectometry

    Directory of Open Access Journals (Sweden)

    Xudong SHI

    2014-03-01

    Full Text Available Airplane cable fault detection based on TDR (time domain reflectometry) is easily affected by various noise signals, which attenuate and distort the reflected signal heavily, causing fault location to fail. In order to solve these problems, a method of spread spectrum time domain reflectometry (SSTDR) is introduced in this paper, taking advantage of the sharp peak of the correlation function. The test signal is generated from an ML sequence (MLS) modulated by a sine wave of the same frequency. Theoretically, the test signal has very high noise immunity, so it can be applied with excellent precision to fault location on aircraft cable. In this paper, the SSTDR method was simulated in MATLAB. Then an experimental setup, based on LabVIEW, was organized to detect and locate faults on aircraft cable. It has been demonstrated that SSTDR has high noise immunity, effectively reducing detection errors.
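
    The core of SSTDR — correlating a PN-coded probe signal with its noisy, attenuated reflection and converting the correlation peak lag into a distance — can be sketched as follows. The sampling rate, propagation velocity, and the omission of the sine modulation are simplifying assumptions of ours:

```python
import numpy as np

def mls(n_bits=7, taps=(7, 6)):
    """+/-1 maximal-length sequence from a Fibonacci LFSR (x^7 + x^6 + 1,
    period 2**n_bits - 1)."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        out.append(1.0 if state[-1] else -1.0)
        state = [fb] + state[:-1]
    return np.array(out)

def sstdr_distance(reflected, probe, fs, velocity):
    """Cross-correlate the reflection with the probe; the sharp MLS
    correlation peak gives the round-trip delay, hence the fault distance."""
    corr = np.correlate(reflected, probe, mode="full")
    lag = int(np.argmax(corr)) - (len(probe) - 1)
    return velocity * (lag / fs) / 2.0

# demo: echo delayed 40 samples (40 ns at 1 GHz), 30 % amplitude, plus noise
probe = np.repeat(mls(), 4)                      # 4 samples per chip
rng = np.random.default_rng(1)
reflected = np.zeros(len(probe) + 100)
reflected[40:40 + len(probe)] += 0.3 * probe     # attenuated reflection
reflected += 0.05 * rng.standard_normal(len(reflected))
distance_m = sstdr_distance(reflected, probe, fs=1e9, velocity=2e8)
```

    Even with the echo buried in noise, the MLS autocorrelation peak stands well above the sidelobes, which is exactly the noise-immunity property the abstract claims for SSTDR over plain TDR.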

  4. A Wideband Magnetoresistive Sensor for Monitoring Dynamic Fault Slip in Laboratory Fault Friction Experiments.

    Science.gov (United States)

    Kilgore, Brian D

    2017-12-02

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare earth magnet, and user-built, application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of the magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position-sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of resonance signals commonly encountered with cantilever-type position-sensor mounting, the wideband response (DC to ≈ 100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample make them attractive options for capturing high-speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high-frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  5. ACO-Initialized Wavelet Neural Network for Vibration Fault Diagnosis of Hydroturbine Generating Unit

    Directory of Open Access Journals (Sweden)

    Zhihuai Xiao

    2015-01-01

    Full Text Available Considering the drawbacks of the traditional wavelet neural network, such as low convergence speed and high sensitivity to initial parameters, an ant colony optimization (ACO) initialized wavelet neural network is proposed in this paper for vibration fault diagnosis of a hydroturbine generating unit. In this method, the parameters of the wavelet neural network are initialized by the ACO algorithm, and then the wavelet neural network is trained by the gradient descent algorithm. Amplitudes of the frequency components of the hydroturbine generating unit vibration signals are used as feature vectors for wavelet neural network training to realize the mapping relationship from vibration features to fault types. A real vibration fault diagnosis case of a hydroturbine generating unit shows that the proposed method has faster convergence speed and stronger generalization ability than the traditional wavelet neural network and the ACO wavelet neural network. Thus it can provide an effective solution for online vibration fault diagnosis of a hydroturbine generating unit.

  6. Faults Diagnostics of Railway Axle Bearings Based on IMF’s Confidence Index Algorithm for Ensemble EMD

    Science.gov (United States)

    Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming

    2015-01-01

    As train loads and travel speeds have increased over time, railway axle bearings have become critical elements requiring more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel and adaptive procedure based on ensemble empirical mode decomposition (EEMD) and the Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the assumptions about the data and the computational burdens that restrict the application of many signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions (IMFs), which are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and the marginal spectrum. However, not all of the IMFs obtained by the decomposition should be included in the Hilbert marginal spectrum. The IMF confidence index algorithm proposed in this paper is fully autonomous, overcoming the major limitation of selection by an experienced user, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256
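
    The idea of an autonomous IMF confidence index can be illustrated with a simple score combining each IMF's correlation with the raw signal and its energy share; this is our illustrative stand-in, not the paper's exact arithmetic:

```python
import numpy as np

def imf_confidence_index(imfs, signal):
    """Score each IMF by (a) its normalized correlation with the raw signal
    and (b) its share of total energy, then keep IMFs whose combined score
    clears an adaptive threshold (the mean score). This mirrors the idea of
    autonomous IMF selection for the Hilbert marginal spectrum."""
    scores = []
    for imf in imfs:
        corr = abs(np.corrcoef(imf, signal)[0, 1])
        energy = np.sum(imf ** 2) / np.sum(signal ** 2)
        scores.append(0.5 * corr + 0.5 * energy)
    scores = np.array(scores)
    keep = scores >= scores.mean()
    return keep, scores

# demo: a dominant 10 Hz component plus a weak high-frequency residue
t = np.linspace(0.0, 1.0, 400)
imfs = [np.sin(2 * np.pi * 10 * t), 0.05 * np.cos(2 * np.pi * 90 * t)]
signal = imfs[0] + imfs[1]
keep, scores = imf_confidence_index(imfs, signal)
```

    No user intervention is needed: the threshold adapts to the score distribution itself, which is the "fully autonomous" property the abstract emphasizes.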

  7. Methods for Fault Diagnosability Analysis of a Class of Affine Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Xiafu Peng

    2015-01-01

    Full Text Available The fault diagnosability analysis for a given model, before developing a diagnosis algorithm, can be used to answer questions like "can the fault fi be detected from observed states?" and "can fault fi be separated from fault fj using observed states?" If not, the sensor placement should be redesigned. This paper deals with the problem of evaluating detectability and separability for the diagnosability analysis of affine nonlinear systems. First, we used differential geometry theory to analyze the nonlinear system and proposed new detectability and separability criteria. Second, the relation matrix between the faults and outputs of the system and the fault separability matrix are designed for quantitative fault detectability and separability calculation, respectively. Finally, we illustrate how to analyze diagnosability on a nonlinear system example, and the experimental results indicate the effectiveness of the fault evaluation methods.
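
    In the simplest structural reading, detectability and separability reduce to properties of a Boolean fault-to-output influence matrix; the sketch below illustrates that reduction, not the paper's differential-geometric criteria:

```python
def detectability(influence):
    """influence[i][j] is truthy when fault fi affects output yj.
    Fault fi is detectable if some output reacts to it; fi and fj are
    separable if their output signatures (rows) differ. This is a
    simplified structural criterion for illustration only."""
    detect = [any(row) for row in influence]
    n = len(influence)
    separable = [[influence[i] != influence[j] for j in range(n)]
                 for i in range(n)]
    return detect, separable

# demo: f0 and f1 share a signature (inseparable); f2 touches no output
detect, separable = detectability([[1, 0], [1, 0], [0, 0]])
```

    A fault with an all-zero row calls for redesigned sensor placement, which is exactly the conclusion the abstract draws for undetectable faults.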

  8. Breaking the fault tree circular logic

    International Nuclear Information System (INIS)

    Lankin, M.

    2000-01-01

    The event tree - fault tree approach to modeling failures of nuclear plants, as well as of other complex facilities, is noticeably dominant now. This approach implies modeling an object in the form of a unidirectional logical graph - a tree, i.e. a graph without circular logic. However, genuine nuclear plants intrinsically demonstrate quite a few logical loops (circular logic), especially where electrical systems are involved. This paper shows the incorrectness of the existing practice of breaking circular logic by eliminating part of the logical dependencies, and puts forward a formal algorithm which enables the analyst to correctly model the failure of a complex object involving logical dependencies between systems and components in the form of a fault tree. (author)
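
    Before any loop can be broken correctly, it must first be found. A standard depth-first search over the component dependency graph identifies the circular logic; this is generic graph code for illustration, not the paper's algorithm:

```python
def find_cycles(deps):
    """Detect circular logic in a support-system dependency graph
    (component -> list of components it depends on) via DFS with the
    classic white/gray/black coloring; a back edge to a gray node
    closes a loop."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in deps}
    cycles = []

    def dfs(n, path):
        color[n] = GRAY
        for m in deps.get(n, ()):
            c = color.get(m, BLACK)  # nodes with no dependencies are leaves
            if c == GRAY:            # back edge: a logical loop
                cycles.append(path[path.index(m):] + [m])
            elif c == WHITE:
                dfs(m, path + [m])
        color[n] = BLACK

    for n in list(deps):
        if color[n] == WHITE:
            dfs(n, [n])
    return cycles

# demo: a pump needs power, power needs a diesel, the diesel needs the pump
cycles = find_cycles({"pump": ["power"], "power": ["diesel"],
                      "diesel": ["pump"], "sensor": []})
```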

  9. Surveillance and fault diagnosis for power plants in the Netherlands: operational experience

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.; Hagen, T.H.J.J. van der

    1998-01-01

    Nuclear Power Plant (NPP) surveillance and fault diagnosis systems in the Dutch Borssele (PWR) and Dodewaard (BWR) power plants are summarized. Deterministic and stochastic models and artificial intelligence (AI) methodologies effectively process the information from the sensors. The processing is carried out by means of methods and algorithms that are collectively referred to as Power Reactor Noise Fault Diagnosis. The two main schemes used are failure detection and instrument fault detection. In addition to the conventional and advanced modern fault diagnosis methodologies involved, applications of emerging technologies in Dutch reactors are given and examples from operational experience are presented. (author)

  10. Fault Analysis for Protection Purposes in Maritime Applications

    DEFF Research Database (Denmark)

    Ciontea, Catalin-Iosif; Bak, Claus Leth; Blaabjerg, Frede

    2016-01-01

    in different locations of the network. The fault current is measured using the current transformers that are already present in the system as part of a time-inverse overcurrent protection. Simulation results show that the symmetrical components of the currents seen by these current transformers can be used...... to detect the electric fault. The method also provides improved fault detection over conventional overcurrent relays in some situations. All results are obtained using MATLAB/Simulink and are briefly discussed in this paper....
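
    The symmetrical components referred to above are obtained from the three phase currents by the Fortescue transform, which can be written directly:

```python
import cmath

def symmetrical_components(ia, ib, ic):
    """Zero-, positive- and negative-sequence components of three phase
    phasors (Fortescue transform), with a = 1 angle 120 degrees."""
    a = cmath.exp(2j * cmath.pi / 3)
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a * a * ic) / 3
    i2 = (ia + a * a * ib + a * ic) / 3
    return i0, i1, i2

# demo: a balanced positive-sequence set has i1 = 1 and i0 = i2 = 0
a = cmath.exp(2j * cmath.pi / 3)
i0, i1, i2 = symmetrical_components(1 + 0j, a ** 2, a)
```

    During an unbalanced fault the negative- and zero-sequence terms become nonzero, which is what makes them useful detection quantities in the scheme described above.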

  11. Inelastic response evaluation of steel frame structure subjected to near-fault ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Choi, In Kil; Kim, Hyung Kyu; Choun, Young Sun; Seo, Jeong Moon

    2004-04-01

    A survey of some of the Quaternary fault segments near the Korean nuclear power plants is ongoing, and it is likely that these faults will be identified as active. If the faults are confirmed as active, it will be necessary to reevaluate the seismic safety of nuclear power plants located near them. This study was performed to acquire overall knowledge of near-fault ground motions and to evaluate the inelastic response characteristics of near-fault ground motions. Although the Korean peninsula is not located in a strong earthquake region, it is necessary to evaluate the seismic safety of NPPs for earthquakes occurring in near-fault areas, whose characteristics differ from those of general far-fault earthquakes, in order to improve the seismic safety of existing NPP structures and equipment. As a result, for the seismic safety evaluation of NPP structures and equipment considering near-fault effects, this report provides much valuable information. In order to improve the seismic safety of NPP structures and equipment against near-fault ground motions, it is necessary to consider the inelastic response characteristics of near-fault ground motions in the current design code. Also, in Korea, where these studies are still immature, more work on near-fault earthquakes must be accomplished in the future.

  12. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    Science.gov (United States)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by biomedical engineering applications in early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for location searching of small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is, respectively, small and large in relation to the wavelength, and the structure of the left-singular vectors linked to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
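
    A minimal MUSIC sketch built on the MSR matrix: the SVD separates the signal and noise subspaces, and the pseudospectrum peaks where a test steering vector is orthogonal to the noise subspace. The geometry, wavenumber, and rank-1 Born-type MSR below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def music_spectrum(msr, steering, n_signals):
    """MUSIC pseudospectrum from a multi-static response (MSR) matrix:
    steering vectors of true anomaly locations are (numerically) orthogonal
    to the noise subspace, so the inverse projection norm peaks there."""
    _, _, vh = np.linalg.svd(msr)
    noise = vh[n_signals:]                  # rows span the noise subspace
    return np.array([
        1.0 / (np.linalg.norm(noise @ g) / np.linalg.norm(g) + 1e-12)
        for g in steering])

# demo: 8 antennas on a line, point anomaly at x = 0.3, one unit off-axis
k, y0 = 6.0, 1.0
antennas = np.linspace(-1.5, 1.5, 8)

def steer(x):
    d = np.hypot(antennas - x, y0)          # antenna-to-point distances
    return np.exp(1j * k * d)

grid = np.linspace(-1.0, 1.0, 201)
msr = np.outer(steer(0.3), steer(0.3).conj())   # rank-1 Born-type MSR
spec = music_spectrum(msr, [steer(x) for x in grid], n_signals=1)
x_est = grid[int(np.argmax(spec))]
```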

  13. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

    Deformation associated with normal-fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates the advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  14. Detection and Identification of Loss of Efficiency Faults of Flight Actuators

    Directory of Open Access Journals (Sweden)

    Ossmann Daniel

    2015-03-01

    Full Text Available We propose linear parameter-varying (LPV) model-based approaches to the synthesis of robust fault detection and diagnosis (FDD) systems for loss of efficiency (LOE) faults of flight actuators. The proposed methods are applicable to several types of parametric (or multiplicative) LOE faults such as actuator disconnection, surface damage, actuator power loss or stall loads. For the detection of these parametric faults, advanced LPV-model detection techniques are proposed, which implicitly provide fault identification information. Fast detection of intermittent stall loads (seen as nuisances, rather than faults) is important in enhancing the performance of various fault detection schemes dealing with large input signals. For this case, a dedicated fast identification algorithm is devised. The developed FDD systems are tested on a nonlinear actuator model which is implemented in a full nonlinear aircraft simulation model. This enables the validation of the FDD system's detection and identification characteristics under realistic conditions.

  15. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil); Pereira, Iraci Martinez; Silva, Antonio Teixeira e, E-mail: martinez@ipen.b, E-mail: teixeira@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first, dedicated to preprocessing information using the GMDH algorithm; and the second, to processing information using ANNs. The preprocessing stage was divided in two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part, the GMDH was used to study the best set of variables to be used to train the ANNs, resulting in the best monitoring variable estimate. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in the sensors was developed by simulating faults in the sensor database using values of +5%, +10%, +15% and +20%. The good results obtained with the present methodology show the viability of using the GMDH algorithm to study the best input variables for the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  16. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez; Silva, Antonio Teixeira e

    2011-01-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first, dedicated to preprocessing information using the GMDH algorithm; and the second, to processing information using ANNs. The preprocessing stage was divided in two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part, the GMDH was used to study the best set of variables to be used to train the ANNs, resulting in the best monitoring variable estimate. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in the sensors was developed by simulating faults in the sensor database using values of +5%, +10%, +15% and +20%. The good results obtained with the present methodology show the viability of using the GMDH algorithm to study the best input variables for the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
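
    The GMDH preprocessing stage can be illustrated by a single GMDH layer: every pair of candidate input variables is fitted with a quadratic polynomial and ranked by residual error, so the best pairs point to the most informative variables. This is a generic GMDH sketch, not the exact implementation used at IPEN:

```python
import numpy as np

def gmdh_layer(X, y, keep=3):
    """One GMDH layer: for every pair (xi, xj) of input columns, fit
    z = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2 by least squares,
    rank pairs by mean squared residual, and return the 'keep' best
    (error, pair, fitted_values) candidates."""
    n, m = X.shape
    candidates = []
    for i in range(m):
        for j in range(i + 1, m):
            xi, xj = X[:, i], X[:, j]
            A = np.column_stack(
                [np.ones(n), xi, xj, xi * xj, xi ** 2, xj ** 2])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.mean((A @ coef - y) ** 2)
            candidates.append((err, (i, j), A @ coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# demo: y depends only on x0*x1, so the layer should single out pair (0, 1)
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = X[:, 0] * X[:, 1]
best_err, best_pair, _ = gmdh_layer(X, y, keep=1)[0]
```

    Stacking such layers, with the surviving candidates as the next layer's inputs, is what lets GMDH select the best monitoring variables before the ANN training stage.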

  17. Wavelet-Based Feature Extraction in Fault Diagnosis for Biquad High-Pass Filter Circuit

    OpenAIRE

    Yuehai Wang; Yongzheng Yan; Qinyong Wang

    2016-01-01

    Fault diagnosis for analog circuits has become a prominent factor in improving the reliability of integrated circuits due to its irreplaceability in modern integrated circuits. In fact, fault diagnosis based on intelligent algorithms has become a popular research topic, as efficient feature extraction and selection are critical and intricate tasks in analog fault diagnosis. Further, it is extremely important to propose some general guidelines for optimal feature extraction and selection. In ...

  18. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed
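
    The top-down expansion at the heart of such algorithms can be sketched in a few lines; this illustrative version omits SETS's optimizations (independent-subtree processing, gate coalescing, staged stem equations) and simply expands and minimizes:

```python
def minimal_cut_sets(gates, top):
    """MOCUS-style expansion of a fault tree into cut sets, then reduction
    to the minimal ones. 'gates' maps a gate name to ('AND'|'OR', children);
    any name not in 'gates' is a basic event."""
    def expand(node):
        if node not in gates:
            return [{node}]
        op, kids = gates[node]
        if op == "OR":
            return [cs for k in kids for cs in expand(k)]
        # AND: cross-product union of the children's cut sets
        result = [set()]
        for k in kids:
            result = [r | cs for r in result for cs in expand(k)]
        return result

    sets_ = expand(top)
    minimal = [s for s in sets_ if not any(o < s for o in sets_)]
    out = []                    # drop duplicates while preserving order
    for s in minimal:
        if s not in out:
            out.append(s)
    return out

# demo: TOP = (A or B) and (B or C)  ->  minimal cut sets {B} and {A, C}
gates = {"TOP": ("AND", ["G1", "G2"]),
         "G1": ("OR", ["A", "B"]),
         "G2": ("OR", ["B", "C"])}
mcs = minimal_cut_sets(gates, "TOP")
```

    The naive cross-product grows combinatorially, which is exactly why the techniques described in the abstract (independent subtrees, coalesced gates, staged derivation) cut the runtime from thousands of seconds to seconds.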

  19. Logs of Paleoseismic Excavations Across the Central Range Fault, Trinidad

    Science.gov (United States)

    Crosby, Christopher J.; Prentice, Carol S.; Weber, John; Ragona, Daniel

    2009-01-01

    This publication makes available maps and trench logs associated with studies of the Central Range Fault, part of the South American-Caribbean plate boundary in Trinidad. Our studies were conducted in 2001 and 2002. We mapped geomorphic features indicative of active faulting along the right-lateral Central Range Fault and excavated trenches at two sites, the Samlalsingh and Tabaquite sites. At the Samlalsingh site, sediments deposited after the most recent fault movement bury the fault, and the exact location of the fault was unknown until we exposed it in our excavations. At this site, we excavated a total of eleven trenches, six of which exposed the fault. The trenches exposed fluvial sediments deposited over a strath terrace developed on Miocene bedrock units. We cleaned the walls of the excavations, gridded the walls with either a 1 m × 1 m or a 1 m × 0.5 m nail-and-string grid, and logged the walls in detail at a scale of 1:20. Additionally, we described the different sedimentary units in the field, incorporating these descriptions into our trench logs. We mapped the locations of the trenches using a tape and compass. Our field logs were scanned, and unit contacts were traced in Adobe Illustrator. The final drafted logs of all the trenches are presented here, along with photographs showing important relations among faults and Holocene sedimentary deposits. Logs of south walls were reversed in Illustrator, so that all logs are drafted with the view direction to the north. We collected samples of various materials exposed in the trench walls, including charcoal samples for radiocarbon dating from both faulted and unfaulted deposits. The locations of all samples collected are shown on the logs. The ages of seventeen of the charcoal samples submitted for radiocarbon analysis at the University of Arizona Accelerator Mass Spectrometry Laboratory in Tucson, Ariz., are given in Table 1. Samples found in

  20. Design & Evaluation of a Protection Algorithm for a Wind Turbine Generator based on the fault-generated Symmetrical Components

    DEFF Research Database (Denmark)

    Zheng, T. Y.; Cha, Seung-Tae; Lee, B. E.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on the fault-generated symmetrical components is proposed in the paper. At stage 1, the relay uses the magnitude of the positive-sequence component in the fault current to distinguish faults on a parallel WTG, connected to the same feeder......, or on an adjacent feeder from those on the connected feeder, on the collection bus, at an inter-tie or at a grid. For the former faults, the relay should remain stable and inoperative whilst the instantaneous or delayed tripping is required for the latter faults. At stage 2, the fault type is first evaluated using...... the relationships of the fault-generated symmetrical components. Then, the magnitude of the positive-sequence component in the fault current is used again to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using...
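The stage-1 discrimination described above rests on the symmetrical-component (Fortescue) transform of the phase currents. A minimal sketch of that transform (a generic illustration, not the authors' relay logic) is:

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # the operator a = 1 at an angle of 120 degrees

def symmetrical_components(ia, ib, ic):
    """Fortescue transform: (zero, positive, negative) sequence phasors."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + A * ib + A**2 * ic) / 3
    i2 = (ia + A**2 * ib + A * ic) / 3
    return i0, i1, i2

# a balanced three-phase set has only a positive-sequence component
ia = 1.0 + 0j
ib = A**2   # phase b lags by 120 degrees
ic = A      # phase c leads by 120 degrees
i0, i1, i2 = symmetrical_components(ia, ib, ic)
```

For a balanced set the zero- and negative-sequence magnitudes vanish and the positive-sequence magnitude equals the phase magnitude, which is the quantity the relay thresholds at stage 1.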

  1. Fault detection in multiply-redundant measurement systems via sequential testing

    International Nuclear Information System (INIS)

    Ray, A.

    1988-01-01

This paper presents the theory and application of a sequential test procedure for fault detection and isolation. The test procedure is suited for the development of intelligent instrumentation in strategic processes such as aircraft and nuclear plants, where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers, and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references
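The core of such a scheme, Wald's sequential probability ratio test, can be sketched as follows (a generic illustration with assumed Gaussian measurement noise and error rates, not the paper's modified algorithm):

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT: decide between H0 (healthy, mean mu0) and H1 (fault, mean mu1)
    for Gaussian observations with known standard deviation sigma."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1 (fault)
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0 (healthy)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # incremental log-likelihood ratio for one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "healthy", n
    return "undecided", len(samples)
```

The appeal for real-time use is that the test terminates as soon as the evidence is decisive, typically after only a handful of samples.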

  2. A Fast Kurtogram Demodulation Method in Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Li Li

    2016-01-01

Full Text Available Targeting the problem of finding the best demodulation band when applying envelope analysis in rolling bearing fault diagnosis, this paper proposes a novel Fast Kurtogram Demodulation Method (FKDM). FKDM is established on the theory of spectral kurtosis and the short-time Fourier transform. It first determines the best demodulation band, characterized by its central frequency and frequency resolution. The fault signals are then demodulated in the obtained frequency band using an envelope demodulation algorithm. FKDM ensures correct fault diagnosis by solving the demodulation-band selection problem. Applied to rolling bearing fault diagnosis and compared with conventional envelope analysis, FKDM achieves better performance.
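The envelope-demodulation step that follows band selection can be sketched as below: a Hilbert-transform envelope spectrum computed with FFTs only (a generic illustration on a synthetic bearing-like signal, not the kurtogram band selection itself; the carrier and defect frequencies are invented):

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope spectrum via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # frequency-domain weights for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)
    env = env - env.mean()          # drop the DC component of the envelope
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

# a 100 Hz defect frequency amplitude-modulating a 2 kHz resonance carrier
fs = 20000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 2000 * t)
freqs, spec = envelope_spectrum(x, fs)
# the envelope spectrum peaks at the 100 Hz fault frequency, not the carrier
```

In a real diagnosis the signal would first be band-pass filtered around the band the kurtogram selects; here the synthetic signal is already narrowband.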

  3. Early detection of incipient faults in power plants using accelerated neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Jayakumar, M.; Atiya, A.

    1992-01-01

An important aspect of power plant automation is the development of computer systems able to detect and isolate incipient (slowly developing) faults at the earliest possible stages of their occurrence. In this paper, the development and testing of such a fault detection scheme is presented, based on recognition of sensor signatures during various failure modes. An accelerated learning algorithm, namely adaptive backpropagation (ABP), has been developed that allows the training of a multilayer perceptron (MLP) network to a high degree of accuracy, with an order-of-magnitude improvement in convergence speed. An artificial neural network (ANN) has been successfully trained using the ABP algorithm, and it has been extensively tested with simulated data to detect and classify incipient faults of various types and severities, in the presence of varying sensor noise levels.

  4. Characteristics of earth faults in power systems with a compensated or an unearthed neutral

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S; Lehtonen, M [VTT Energy, Espoo (Finland); Antila, E [ABB Transmit Oy (Finland); Stroem, J [Espoo Electricity Co (Finland); Ingman, S [Vaasa Electricity Co (Finland)

    1998-08-01

The most common fault type in electrical distribution networks is the single-phase-to-earth fault. For instance, in the Nordic countries, about 80 % of all faults are of this type. To develop protection and fault location systems, it is important to obtain real case data on disturbances and faults occurring in the networks. Therefore, data on fault occurrences have been recorded and analyzed in the medium voltage distribution networks (20 kV) at two substations, of which one has an isolated and the other a compensated neutral. During the disturbances, the traces of phase currents and neutral currents at the beginning of two feeders, and the traces of phase voltages and the neutral voltage from the voltage measuring bay, were recorded. In addition to the measured data, other information about the fault occurrences was also collected (data on the line, the cause and location of permanent faults, and so on)

  5. Methodology for the location diagnosis of electrical faults in electric power systems; Metodologia para el diagnostico de ubicacion de fallas en sistema electricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Rosas Molina, Ricardo

    2008-08-15

The constant growth of electric power systems, driven by the increase in world energy demand, has brought as a consequence a greater complexity in the operation and control of power networks. One of the tasks most affected by this situation is the operation of electrical systems in the presence of faults, where the first task for the network operating personnel is the rapid location of the fault site within the system. In the present work, the problem of diagnosing the location of electrical faults in power systems is approached from the point of view of the operators of the energy control centers of an electric company. The objective of this thesis work is to describe a methodology for the operational analysis of protections, as a basis for the development of a fault-location diagnosis system that makes it possible to identify the possible fault sites within the system, as well as to justify the operation of protections in the face of a disturbance, as a support to the operators of the energy control centers. The methodology is designed to use different types of information: discrete, continuous, and controls. Nevertheless, at the present stage of the proposed methodology, use is made exclusively of the discrete information on the status of breakers and the operation of relays, as well as of the connectivity of the network elements. The analysis methodology consists in determining the network elements where the fault could have occurred, using the protection coverage areas associated with the operated circuit breakers. These fault alternatives are then ranked in descending order of possibility using classification indexes and analyses based on fuzzy logic.

  6. Distance to faults as a proxy for radon gas concentration in dwellings.

    Science.gov (United States)

    Drolet, Jean-Philippe; Martel, Richard

    2016-02-01

This research was done to demonstrate the usefulness of local structural geology characteristics for predicting indoor radon concentrations. The presence of geologic faults near dwellings increases their vulnerability to elevated indoor radon by providing favorable pathways from the source uranium-rich bedrock units to the surface. Kruskal-Wallis one-way analyses of variance by ranks were used to determine the distance within which faults have a statistically significant influence on indoor radon concentrations. The great-circle distance between the 640 spatially referenced basement radon concentration measurements and the nearest fault was calculated using the Haversine formula and the spherical law of cosines. It was shown that dwellings located less than 150 m from a major fault had a higher radon potential. The 150 m threshold was determined using the Kruskal-Wallis ANOVA on (1) the full basement radon measurement dataset and (2) only the basement radon measurements located on uranium-rich bedrock units. The results indicated that 22.8% of the dwellings located less than 150 m from a fault exceeded the Canadian radon guideline of 200 Bq/m³ when the full dataset was used. This percentage fell to 15.2% for dwellings located between 150 m and 700 m from a fault. When using only the basement radon measurements located on uranium-rich bedrock units, these percentages were 30.7% (0-150 m) and 17.5% (150-700 m). The assessment and management of risk can be improved, where structural geology base maps are available, by using this proxy indicator. Copyright © 2015 Elsevier Ltd. All rights reserved.
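The nearest-fault distance computation described above can be sketched as follows (the coordinates and the classification helper are illustrative, not the study's actual data pipeline):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two points via the Haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def radon_risk_class(dwelling, faults, threshold_m=150.0):
    """Classify a dwelling by distance to the nearest mapped fault,
    using the 150 m threshold identified in the study."""
    lat, lon = dwelling
    d = min(haversine_m(lat, lon, flat, flon) for flat, flon in faults)
    return ("<150 m" if d < threshold_m else ">=150 m"), d
```

With fault traces stored as (latitude, longitude) vertex lists, the same minimum-over-points pattern extends directly to whole datasets.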

  7. Optimized Neural Network for Fault Diagnosis and Classification

    International Nuclear Information System (INIS)

    Elaraby, S.M.

    2005-01-01

    This paper presents a developed and implemented toolbox for optimizing neural network structure of fault diagnosis and classification. Evolutionary algorithm based on hierarchical genetic algorithm structure is used for optimization. The simplest feed-forward neural network architecture is selected. Developed toolbox has friendly user interface. Multiple solutions are generated. The performance and applicability of the proposed toolbox is verified with benchmark data patterns and accident diagnosis of Egyptian Second research reactor (ETRR-2)

  8. Development and Evaluation of Fault-Tolerant Flight Control Systems

    Science.gov (United States)

    Song, Yong D.; Gupta, Kajal (Technical Monitor)

    2004-01-01

The research is concerned with developing a new approach to enhancing fault tolerance of flight control systems. The original motivation for fault-tolerant control comes from the need for safe operation of control elements (e.g. actuators) in the event of hardware failures in high reliability systems. One such example is a modern space vehicle subjected to actuator/sensor impairments. A major task in flight control is to revise the control policy to balance impairment detectability and to achieve sufficient robustness. This involves careful selection of the types and parameters of the controllers and the impairment detecting filters used. It also involves a decision, upon the identification of some failures, on whether and how a control reconfiguration should take place in order to maintain a certain system performance level. In this project, a new flight dynamic model under uncertain flight conditions is considered, in which the effects of both ramp and jump faults are reflected. Stabilization algorithms based on neural networks and adaptive methods are derived. The control algorithms are shown to be effective in dealing with uncertain dynamics due to external disturbances and unpredictable faults. The overall strategy is easy to set up and the computation involved is much less than that of other strategies. Computer simulation software is developed. A series of simulation studies has been conducted with varying flight conditions.

  9. Fault Activity Aware Service Delivery in Wireless Sensor Networks for Smart Cities

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2017-01-01

    Full Text Available Wireless sensor networks (WSNs are increasingly used in smart cities which involve multiple city services having quality of service (QoS requirements. When misbehaving devices exist, the performance of current delivery protocols degrades significantly. Nonetheless, the majority of existing schemes either ignore the faulty behaviors’ variability and time-variance in city environments or focus on homogeneous traffic for traditional data services (simple text messages rather than city services (health care units, traffic monitors, and video surveillance. We consider the problem of fault-aware multiservice delivery, in which the network performs secure routing and rate control in terms of fault activity dynamic metric. To this end, we first design a distributed framework to estimate the fault activity information based on the effects of nondeterministic faulty behaviors and to incorporate these estimates into the service delivery. Then we present a fault activity geographic opportunistic routing (FAGOR algorithm addressing a wide range of misbehaviors. We develop a leaky-hop model and design a fault activity rate-control algorithm for heterogeneous traffic to allocate resources, while guaranteeing utility fairness among multiple city services. Finally, we demonstrate the significant performance of our scheme in routing performance, effective utility, and utility fairness in the presence of misbehaving sensors through extensive simulations.

  10. Group method of data handling and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio

    2011-01-01

The increasing demands on the complexity, efficiency and reliability of modern industrial systems have stimulated studies on control theory applied to the development of monitoring and fault detection systems. In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and Artificial Neural Networks (ANNs), and was applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first dedicated to preprocessing information, using the GMDH algorithm; and the second to processing information, using ANNs. The GMDH algorithm was used in two different ways: first, to generate an improved estimated database, called matrix z, which was used to train the ANNs; and second, to select the best set of variables for training the ANNs, resulting in the best estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults of 5%, 10%, 15% and 20% in the sensor database. The results obtained using the GMDH algorithm to choose the best input variables for the ANNs were better than those obtained using only ANNs, making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  11. Heterogeneity in the Fault Damage Zone: a Field Study on the Borrego Fault, B.C., Mexico

    Science.gov (United States)

    Ostermeijer, G.; Mitchell, T. M.; Dorsey, M. T.; Browning, J.; Rockwell, T. K.; Aben, F. M.; Fletcher, J. M.; Brantut, N.

    2017-12-01

    The nature and distribution of damage around faults, and its impacts on fault zone properties has been a hot topic of research over the past decade. Understanding the mechanisms that control the formation of off fault damage can shed light on the processes during the seismic cycle, and the nature of fault zone development. Recent published work has identified three broad zones of damage around most faults based on the type, intensity, and extent of fracturing; Tip, Wall, and Linking damage. Although these zones are able to adequately characterise the general distribution of damage, little has been done to identify the nature of damage heterogeneity within those zones, often simplifying the distribution to fit log-normal linear decay trends. Here, we attempt to characterise the distribution of fractures that make up the wall damage around seismogenic faults. To do so, we investigate an extensive two dimensional fracture network exposed on a river cut platform along the Borrego Fault, BC, Mexico, 5m wide, and extending 20m from the fault core into the damage zone. High resolution fracture mapping of the outcrop, covering scales ranging three orders of magnitude (cm to m), has allowed for detailed observations of the 2D damage distribution within the fault damage zone. Damage profiles were obtained along several 1D transects perpendicular to the fault and micro-damage was examined from thin-sections at various locations around the outcrop for comparison. Analysis of the resulting fracture network indicates heterogeneities in damage intensity at decimetre scales resulting from a patchy distribution of high and low intensity corridors and clusters. Such patchiness may contribute to inconsistencies in damage zone widths defined along 1D transects and the observed variability of fracture densities around decay trends. How this distribution develops with fault maturity and the scaling of heterogeneities above and below the observed range will likely play a key role in

  12. Looking for Off-Fault Deformation and Measuring Strain Accumulation During the Past 70 years on a Portion of the Locked San Andreas Fault

    Science.gov (United States)

    Vadman, M.; Bemis, S. P.

    2017-12-01

    Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and variability in far-field strain accumulation requires high spatial resolution data and likely decades of measurements. Due to the influence that variability in interseismic deformation could have on the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location due to the high tectonic rates along the San Andreas fault and lack of vegetation, which may obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked fault assumption, point clouds can be co-registered (to one another and/or the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small scale variations within the lidar swath area that represent near-fault plastic deformation. 
With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of

  13. Synthesis of Fault-Tolerant Schedules with Transparency/Performance Trade-offs for Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2006-01-01

    of the application. We propose a novel algorithm for the synthesis of fault-tolerant schedules that can handle the transparency/performance trade-offs imposed by the designer, and makes use of the fault-occurrence information to reduce the overhead due to fault tolerance. We model the application as a conditional...... process graph, where the fault occurrence information is represented as conditional edges and the transparent recovery is captured using synchronization nodes....... such that the operation of other processes is not affected, we call it transparent recovery. Although transparent recovery has the advantages of fault containment, improved debugability and less memory needed to store the fault-tolerant schedules, it will introduce delays that can violate the timing constraints...

  14. Methods for locating ground faults and insulation degradation condition in energy conversion systems

    Science.gov (United States)

    Agamy, Mohamed; Elasser, Ahmed; Galbraith, Anthony William; Harfman Todorovic, Maja

    2015-08-11

    Methods for determining a ground fault or insulation degradation condition within energy conversion systems are described. A method for determining a ground fault within an energy conversion system may include, in part, a comparison of baseline waveform of differential current to a waveform of differential current during operation for a plurality of DC current carrying conductors in an energy conversion system. A method for determining insulation degradation within an energy conversion system may include, in part, a comparison of baseline frequency spectra of differential current to a frequency spectra of differential current transient at start-up for a plurality of DC current carrying conductors in an energy conversion system. In one embodiment, the energy conversion system may be a photovoltaic system.
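The baseline-versus-operating waveform comparison can be sketched as a simple residual test (a generic illustration; the RMS metric and tolerance are assumptions, not the patented method):

```python
import numpy as np

def ground_fault_suspected(baseline, measured, rel_tol=0.2):
    """Flag a possible ground fault when the operating differential-current
    waveform departs from the stored baseline by more than rel_tol (RMS ratio)."""
    baseline = np.asarray(baseline, dtype=float)
    measured = np.asarray(measured, dtype=float)
    resid_rms = np.sqrt(np.mean((measured - baseline) ** 2))
    base_rms = max(np.sqrt(np.mean(baseline ** 2)), 1e-12)  # guard against zero baseline
    return resid_rms / base_rms > rel_tol
```

In the photovoltaic case, `baseline` would be a per-conductor differential-current waveform recorded during healthy operation, and the insulation-degradation variant would compare start-up transient spectra instead of time-domain waveforms.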

  15. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    Science.gov (United States)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Since the battery cannot maintain the dc-link voltage within limits during a fault, a resistive dump-load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also protects the inverter from high fault currents. The existing maximum power point tracking (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.

  16. An adaptive algorithm for performance assessment of construction project management with respect to resilience engineering and job security

    Directory of Open Access Journals (Sweden)

    P. Hashemi

    2018-01-01

Full Text Available Construction sites are accident-prone locations and therefore safety management plays an important role in these workplaces. This study presents an adaptive algorithm for performance assessment of project management with respect to resilience engineering and job security in a large construction site. The required data are collected using questionnaires in a large construction site. The presented algorithm is composed of radial basis function (RBF), artificial neural network multi-layer perceptron (ANN-MLP), and statistical tests. The results indicate that preparedness, fault-tolerance, and flexibility are the most effective factors on overall efficiency. Moreover, job security and resilience engineering have similar statistical impacts on overall system efficiency. The results are verified and validated by the proposed algorithm.

  17. Design of passive fault-tolerant controllers of a quadrotor based on sliding mode theory

    Directory of Open Access Journals (Sweden)

    Merheb Abdel-Razzak

    2015-09-01

Full Text Available In this paper, sliding mode control is used to develop two passive fault-tolerant controllers for an AscTec Pelican UAV quadrotor. In the first approach, a regular sliding mode controller (SMC) augmented with an integrator uses the robustness property of variable structure control to tolerate partial actuator faults. The second approach is a cascaded sliding mode controller with inner and outer SMC loops. In this configuration, faults are tolerated in the fast inner loop controlling the velocity system. Tuning the controllers to find the optimal values of the sliding mode controller gains is done using the ecological systems algorithm (ESA), a biologically inspired stochastic search algorithm based on the natural equilibrium of animal species. The controllers are tested using SIMULINK in the presence of two different types of actuator faults: partial loss of motor power affecting all the motors at once, and partial loss of motor speed. Results of the quadrotor following a continuous path demonstrate the effectiveness of the controllers, which are able to tolerate a significant number of actuator faults despite the lack of hardware redundancy in the quadrotor system. Tuning the controllers on a faulty system further improves their ability to tolerate more severe faults. Simulation results show that passive schemes retain their important role in fault-tolerant control and are complementary to active techniques.

  18. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

In order to determine the key components of CNC machine tools under fault-rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are organized hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault-rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
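The PageRank step over the fault-influence adjacency matrix might look like the following power-iteration sketch (the damping factor and the toy three-component example are assumptions, not values from the paper):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power-iteration PageRank over a fault-influence adjacency matrix.
    adj[i, j] = 1 means a fault in component i propagates to component j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # components with no outgoing influence distribute uniformly (dangling nodes)
    M = np.where(out > 0, adj / np.where(out == 0, 1.0, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1.0 - d) / n + d * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# toy example: spindle (0) and feed-axis (1) faults both propagate to the controller (2)
adj = np.array([[0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])
ranks = pagerank(adj)
```

The component receiving the most fault influence accumulates the highest rank, which is the "relative influence value" the method then combines with fault rates.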

  19. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B [Instrument Department, College of Mechatronics Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)

    2006-10-15

In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guided Vehicle (AGV) and a single base-station, and then establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on this model, an envelope-correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that of the reference signal, a sharp peak appears in their envelope correlation function after processing by the above algorithm; otherwise, no prominent correlation peak appears. Thus, the AGV can identify each base-station easily.
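The FFT-based correlation at the heart of such an algorithm can be sketched as below (plain cross-correlation rather than the paper's envelope-correlation variant; the code length, delay, and sampling rate are invented for illustration):

```python
import numpy as np

def tde_fft(received, reference, fs):
    """Estimate the delay (seconds) of `received` relative to `reference`
    via FFT-based cross-correlation (O(n log n) instead of O(n^2))."""
    n = len(received) + len(reference) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two for the FFT
    spec = np.fft.rfft(received, nfft) * np.conj(np.fft.rfft(reference, nfft))
    xcorr = np.fft.irfft(spec, nfft)[:n]
    lag = int(np.argmax(np.abs(xcorr)))
    if lag > nfft // 2:                       # wrap circular lags to negative delays
        lag -= nfft
    return lag / fs

# a pseudo-random +/-1 code standing in for the base-station's m-sequence
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=127)
received = np.concatenate([np.zeros(30), code])  # echo arrives 30 samples late
delay = tde_fft(received, code, fs=1000.0)
```

With a mismatched code the correlation peak collapses, which is how the AGV distinguishes base-stations before ranging.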

  20. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    International Nuclear Information System (INIS)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B

    2006-01-01

In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guided Vehicle (AGV) and a single base-station, and then establishes the ultrasonic spread-spectrum distance measurement Time Delay Estimation (TDE) model. Based on this model, an envelope-correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence used in the received signal is the same as that of the reference signal, a sharp peak appears in their envelope correlation function after processing by the above algorithm; otherwise, no prominent correlation peak appears. Thus, the AGV can identify each base-station easily.

  1. Application of dynamic uncertain causality graph in spacecraft fault diagnosis: Logic cycle

    Science.gov (United States)

    Yao, Quanying; Zhang, Qin; Liu, Peng; Yang, Ping; Zhu, Ma; Wang, Xiaochen

    2017-04-01

Intelligent diagnosis systems are applied to fault diagnosis in spacecraft. The Dynamic Uncertain Causality Graph (DUCG) is a new probabilistic graphical model with many advantages. In the knowledge representation of spacecraft fault diagnosis, feedback among variables is frequently encountered, which may cause directed cyclic graphs (DCGs). Probabilistic graphical models (PGMs) such as the Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning, but BNs do not allow DCGs. In this paper, DUCG is applied to fault diagnosis in spacecraft, introducing an inference algorithm for the DUCG that deals with feedback. DUCG has now been tested on 16 typical faults with 100% diagnosis accuracy.

  2. An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-10-01

Full Text Available The application of a stationary ultra-capacitor energy storage system (ESS) in urban rail transit allows for the recuperation of vehicle braking energy, increasing energy savings as well as improving the vehicle voltage profile. This paper aims to obtain the best energy savings and voltage profile by optimizing the location and size of the ultra-capacitors. The paper first formulates the optimization objective functions from the perspectives of energy savings, regenerative braking cancellation and installation cost, respectively. Then, proper mathematical models of the DC (direct current) traction power supply system are established to simulate the electrical load flow of the traction supply network, and the optimization objectives are evaluated in the example of a Chinese metro line. Ultimately, a methodology for optimal ultra-capacitor energy storage system locating and sizing is put forward based on an improved genetic algorithm. The optimized result shows that preferable and compromise schemes for the ESSs' location and size can be obtained, acting as a trade-off among better energy savings, a better voltage profile and lower installation cost.
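An elitist genetic algorithm for jointly choosing ESS sites and sizes might be sketched as follows. The genome is one integer size per candidate substation (0 = no ESS installed); the fitness function here is a toy stand-in for the paper's energy-saving and cost objectives, and the site count, size cap, and GA parameters are all invented:

```python
import random

random.seed(1)

N_SITES, MAX_SIZE = 10, 5          # 10 hypothetical substations, sizes in bank units

def fitness(genome):
    """Toy objective: diminishing energy-saving returns minus linear installation cost."""
    saving = sum((site + 1) * (size ** 0.5) for site, size in enumerate(genome))
    cost = sum(2.0 * size for size in genome)
    return saving - cost

def evolve(pop_size=30, gens=60, pm=0.1):
    pop = [[random.randint(0, MAX_SIZE) for _ in range(N_SITES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_SITES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:            # mutation: resize one ESS
                child[random.randrange(N_SITES)] = random.randint(0, MAX_SIZE)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting, `fitness` would instead call a DC load-flow simulation of the traction network, which is what makes the GA's black-box search attractive.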

  3. Latest Progress of Fault Detection and Localization in Complex Electrical Engineering

    Science.gov (United States)

    Zhao, Zheng; Wang, Can; Zhang, Yagang; Sun, Yi

    2014-01-01

In research on complex electrical engineering, efficient fault detection and localization schemes are essential to quickly detect and locate faults so that appropriate and timely corrective, mitigating and maintenance actions can be taken. In this paper, under the current measurement precision of PMUs, we put forward a new type of fault detection and localization technology based on fault factor feature extraction. Extensive simulation experiments indicate that, despite disturbances of white Gaussian stochastic noise, the fault detection and localization results based on the fault factor feature extraction principle remain accurate and reliable, which also shows that the technology has strong anti-interference ability and great redundancy.

  4. Orion GN&C Fault Management System Verification: Scope And Methodology

    Science.gov (United States)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
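
    The Monte Carlo side of such verification can be shown in miniature: sample a fault-response-time model many times and check that the estimated probability of missing a safing deadline stays below a requirement. The exponential response model, deadline and bound below are assumptions for illustration, and this is plain Monte Carlo rather than the rare-event sequential variant the abstract describes.

```python
import random

random.seed(1)

DEADLINE = 5.0    # seconds; assumed FDIR response requirement
MEAN_RESP = 1.0   # assumed mean fault-response time

def trial():
    """One simulated fault: does the response exceed the deadline?"""
    return random.expovariate(1.0 / MEAN_RESP) > DEADLINE

N = 200_000
misses = sum(trial() for _ in range(N))
p_hat = misses / N                 # estimated miss probability (~exp(-5))
print(p_hat < 0.02)                # → True
```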

  5. Actuator Fault Diagnosis in a Boeing 747 Model via Adaptive Modified Two-Stage Kalman Filter

    Directory of Open Access Journals (Sweden)

    Fikret Caliskan

    2014-01-01

    Full Text Available An adaptive modified two-stage linear Kalman filtering algorithm is utilized to identify the loss of control effectiveness and the magnitude of low-degree stuck faults in a closed-loop nonlinear B747 aircraft model. Control effectiveness factors and stuck magnitudes are used to quantify faults entering control systems through actuators. Pseudorandom excitation inputs are used to help distinguish partial-loss and stuck faults. The partial-loss and stuck faults in the stabilizer are isolated and identified successfully.
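
    The effectiveness-factor estimation can be illustrated with a scalar Kalman filter, a heavily simplified single-state stand-in for the modified two-stage filter: with measurements y_k = γ·u_k + v_k, a persistently exciting input u_k (as the abstract notes) makes the effectiveness factor γ identifiable. All numerical values are assumed.

```python
import random

random.seed(0)

TRUE_GAMMA = 0.6   # actuator assumed to retain 60% effectiveness after fault
R = 0.04           # measurement noise variance (assumed)

gamma_hat, P = 1.0, 1.0   # start from the "healthy" assumption, high uncertainty
for k in range(200):
    u = 1.0 + 0.5 * random.random()                  # excitation input
    y = TRUE_GAMMA * u + random.gauss(0, R ** 0.5)   # noisy actuator response
    # measurement update (H = u; constant state, so no process noise)
    K = P * u / (u * P * u + R)
    gamma_hat += K * (y - u * gamma_hat)
    P = (1 - K * u) * P
print(round(gamma_hat, 2))
```

    The estimate converges to the true factor; detecting that γ has dropped well below 1 is then a fault indication.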

  6. Imaging Shear Strength Along Subduction Faults

    Science.gov (United States)

    Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.

    2017-11-01

    Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.
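
    The dependence of shear strength on fault geometry and the stress field can be illustrated with the planar 2-D Mohr-Coulomb case (a simplification, not the paper's 3-D analytical expression): resolve the principal stresses onto the fault plane and apply τ_s = μ(σ_n − p). The stress magnitudes, dip, friction coefficient and pore pressure below are illustrative.

```python
import math

def shear_strength(sigma1, sigma3, dip_deg, mu=0.6, pore_pressure=0.0):
    """Static shear strength on a plane of given dip (2-D Mohr-Coulomb sketch)."""
    d = math.radians(dip_deg)
    # normal stress resolved from the principal stresses onto the plane
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2 * d)
    return mu * (sigma_n - pore_pressure)

# strength drops as pore pressure rises toward the normal stress
dry = shear_strength(100.0, 60.0, 30.0)                      # 54.0 (MPa, say)
wet = shear_strength(100.0, 60.0, 30.0, pore_pressure=20.0)  # 42.0
print(dry > wet)   # → True
```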

  7. Imaging shear strength along subduction faults

    Science.gov (United States)

    Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.

    2017-01-01

    Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.

  8. A Rolling Element Bearing Fault Diagnosis Approach Based on Multifractal Theory and Gray Relation Theory.

    Science.gov (United States)

    Li, Jingchao; Cao, Yunpeng; Ying, Yulong; Li, Shuying

    2016-01-01

    Bearing failure is one of the dominant causes of failure and breakdowns in rotating machinery, leading to huge economic loss. Aiming at the nonstationary and nonlinear characteristics of bearing vibration signals as well as the complexity of condition-indicating information distribution in the signals, a novel rolling element bearing fault diagnosis method based on multifractal theory and gray relation theory was proposed in the paper. Firstly, a generalized multifractal dimension algorithm was developed to extract the characteristic vectors of fault features from the bearing vibration signals, which can offer more meaningful and distinguishing information reflecting different bearing health status in comparison with conventional single fractal dimension. After feature extraction by multifractal dimensions, an adaptive gray relation algorithm was applied to implement an automated bearing fault pattern recognition. The experimental results show that the proposed method can identify various bearing fault types as well as severities effectively and accurately.
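
    The generalized dimension step can be sketched with a box-counting estimate D_q = log(Σ p_i^q) / ((q−1)·log ε) over the mass distribution of a 1-D signal. The impulsive "vibration" signal below is a hypothetical stand-in for real bearing data, and the single-scale estimate is a simplification of a proper multi-scale fit.

```python
import math
import random

random.seed(3)

def generalized_dimension(signal, q, n_boxes):
    """Single-scale D_q from the box-probability distribution of |signal| mass."""
    total = sum(abs(x) for x in signal)
    size = len(signal) // n_boxes
    probs = [sum(abs(x) for x in signal[i * size:(i + 1) * size]) / total
             for i in range(n_boxes)]
    probs = [p for p in probs if p > 0]
    eps = 1.0 / n_boxes
    if q == 1:   # limit case: information dimension
        return sum(p * math.log(p) for p in probs) / math.log(eps)
    return (1.0 / (q - 1)) * math.log(sum(p ** q for p in probs)) / math.log(eps)

# toy "vibration" signal: Gaussian noise plus an intermittent impulsive component
signal = [random.gauss(0, 1) + (5.0 if i % 64 == 0 else 0.0) for i in range(1024)]
d0 = generalized_dimension(signal, 0, 32)
d2 = generalized_dimension(signal, 2, 32)
print(d0 >= d2)   # → True: D_q is non-increasing in q for a multifractal measure
```

    A vector of D_q values over several q (rather than a single fractal dimension) is what gives the richer, more distinguishing feature the abstract refers to.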

  9. Fault Detection, Isolation, and Accommodation for LTI Systems Based on GIMC Structure

    Directory of Open Access Journals (Sweden)

    D. U. Campos-Delgado

    2008-01-01

    Full Text Available In this contribution, an active fault-tolerant scheme that achieves fault detection, isolation, and accommodation is developed for LTI systems. Faults and perturbations are considered as additive signals that modify the state or output equations. The accommodation scheme is based on the generalized internal model control (GIMC) architecture recently proposed for fault-tolerant control. In order to improve performance after a fault, the compensation is carried out in two steps, in accordance with a fault detection and isolation algorithm: after a fault scenario is detected, a general fault compensator is activated; once the fault is isolated, a specific compensator is introduced. In this setup, multiple faults can be treated simultaneously since their effects are additive. Design strategies for the nominal condition and under model uncertainty are presented in the paper. In addition, performance indices are introduced to evaluate the resulting fault-tolerant scheme for detection, isolation, and accommodation. Hard thresholds are suggested for detection and isolation purposes, while adaptive ones are considered under model uncertainty to reduce conservativeness. A complete simulation evaluation is carried out for a DC motor setup.
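
    The hard-threshold detection step can be sketched as residual evaluation against a fixed bound: the residual r = y − y_model stays near zero for the healthy plant and jumps when an additive fault enters. The plant gain, noise level and threshold below are assumptions, not the paper's DC-motor model.

```python
import random

random.seed(5)

THRESHOLD = 0.3   # hard threshold, assumed set well above the noise 3-sigma

def model_output(u):
    return 2.0 * u   # nominal plant gain (illustrative)

def detect(u, y):
    """Flag a fault when the output residual exceeds the hard threshold."""
    return abs(y - model_output(u)) > THRESHOLD

u = 1.0
y_healthy = model_output(u) + random.gauss(0, 0.05)
y_faulty = model_output(u) + 0.8 + random.gauss(0, 0.05)  # additive actuator fault
print(detect(u, y_healthy), detect(u, y_faulty))
```

    In the scheme above, this detection outcome would trigger the general compensator; isolation (attributing the residual to a specific actuator or sensor) then selects the specific one.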

  10. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    International Nuclear Information System (INIS)

    Jiang Li; Shi Tielin; Xuan Jianping

    2012-01-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously reducing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, combining traditional manifold learning with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.
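
    The final KNN stage is standard and easy to sketch: classify a sample by majority vote among its k nearest training points. The 2-D feature vectors and class labels below are hypothetical stand-ins for the low-dimensional RKMFA outputs.

```python
import math
from collections import Counter

def knn(train, query, k=3):
    """Majority vote among the k training points closest to `query`."""
    ranked = sorted(train, key=lambda fl: math.dist(fl[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# hypothetical (feature vector, fault label) pairs after dimensionality reduction
train = [((0.10, 0.20), "normal"), ((0.20, 0.10), "normal"),
         ((0.15, 0.25), "normal"), ((0.90, 0.80), "inner-race"),
         ((0.80, 0.90), "inner-race"), ((0.85, 0.95), "inner-race")]

print(knn(train, (0.88, 0.85)))   # → inner-race
```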

  11. Subsidence and Fault Displacement Along the Long Point Fault Derived from Continuous GPS Observations (2012-2017)

    Science.gov (United States)

    Tsibanos, V.; Wang, G.

    2017-12-01

    The Long Point Fault, located in Houston, Texas, is a complex system of normal faults that causes significant damage to urban infrastructure on both private and public property. This case study focuses on the 20-km-long fault, using high-accuracy, continuously operating Global Positioning System (GPS) stations to delineate fault movement over five years (2012 - 2017). The Long Point Fault is the longest active fault in the greater Houston area; it damages roads, buried pipes, concrete structures and buildings, and creates a financial burden for the city of Houston and the residents who live in close vicinity to the fault trace. In order to monitor fault displacement along the surface, 11 permanent and continuously operating GPS stations were installed: 6 on the hanging wall and 5 on the footwall. This study is an overview of the GPS observations from 2013 to 2017. GPS positions were processed with both relative (double differencing) and absolute Precise Point Positioning (PPP) techniques. The PPP solutions, referenced to the IGS08 reference frame, were transformed to the Stable Houston Reference Frame (SHRF16). Our results show no considerable horizontal displacements across the fault, but do show uneven vertical displacement attributed to regional subsidence in the range of 5 to 10 mm/yr. This subsidence can be attributed to compaction of silty clays in the Chicot and Evangeline aquifers, whose water depths are approximately 50 m and 80 m below the land surface (bls). These levels are below the regional pre-consolidation head, which is about 30 to 40 m bls. Recent research indicates subsidence will continue to occur until the aquifer levels reach the pre-consolidation head. With further GPS observations, both the Long Point Fault and regional land subsidence can be monitored, providing important geological data to the Houston community.

  12. Fault Activity in the Terrebonne Trough, Southeastern Louisiana: A Continuation of Salt-Withdrawal Fault Activity from the Miocene into the late Quaternary and Implication for Subsidence Hot-Spots

    Science.gov (United States)

    Akintomide, A. O.; Dawers, N. H.

    2017-12-01

    The observed displacement along faults in southeastern Louisiana has raised questions about the kinematic history of faults during the Quaternary. The Terrebonne Trough, a Miocene salt withdrawal basin, is bounded by the Golden Meadow fault zone on its northern boundary; north-dipping, so-called counter-regional faults, together with a subsurface salt ridge, define its southern boundary. To date, there are relatively few published studies on fault architecture and kinematics in the onshore area of southeastern Louisiana. The only publicly accessible studies, based on 2-D seismic reflection profiles, interpreted faults as mainly striking east-west. Our interpretation of a 3-D seismic reflection volume located in the northwestern Terrebonne Trough, together with industry well log correlations, defines a more complex and highly segmented fault architecture. The northwest-striking Lake Boudreaux fault bounds a marsh on the upthrown block from Lake Boudreaux on the downthrown block. To the east, east-west striking faults are located at the Montegut marsh break and north of Isle de Jean Charles. Portions of the Lake Boudreaux and Isle de Jean Charles faults serve as the northern boundary of the Madison Bay subsidence hot-spot. All three major faults extend to the top of the 3-D seismic volume, which is inferred to image latest Pleistocene stratigraphy. Well log correlation using 11+ shallow markers across these faults and kinematic techniques such as stratigraphic expansion indices indicate that all three faults were active in the middle(?) and late Pleistocene. Based on expansion indices, both the Montegut and Isle de Jean Charles faults were active simultaneously at various times, but with different slip rates. There are also time intervals when the Lake Boudreaux fault was slipping at a faster rate compared to the east-west striking faults. Smaller faults near the margins of the 3-D volume appear to relate to nearby salt stocks, Bully Camp and Lake Barre. Our work to date

  13. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team to address fault management early in the development lifecycle of the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  14. Efficient reduction and modularization for large fault trees stored by pages

    International Nuclear Information System (INIS)

    Chen, Shanqi; Wang, Jin; Wang, Jiaqun; Wang, Fang; Hu, Liqin

    2016-01-01

    Highlights: • New fault tree pre-processing methods used in RiskA are presented. • Including fault tree paging storage, simplification and modularization. • For obtaining MCS for fault trees containing more than 10,000 gates and events. • Reduces computer resource needs (RAM) and improves computation speed. - Abstract: Fault Tree Analysis (FTA), an indispensable tool used in Probabilistic Risk Assessment (PRA), has been used throughout the commercial nuclear power industry for safety and reliability analyses. However, large fault tree analysis, such as that used for nuclear power plants, requires significant computer resources, which makes analysis of a PRA model inefficient and time consuming. This paper describes a fault tree pre-processing method used in the reliability and probabilistic safety assessment program RiskA that is capable of generating minimal cut sets for fault trees containing more than 10,000 gates and basic events. The novel feature of this method is not only that Boolean reduction rules are used but also that a new objective of simplification is proposed. Moreover, since the method aims to find more fault tree modules with a linear-time algorithm, it can optimize fault tree modularization, which further reduces the computational time of large fault tree analysis.
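
    The cut-set generation that such pre-processing accelerates can be shown on a toy tree: top-down expansion turns OR gates into alternative cut sets and AND gates into combined ones, followed by minimization (a miniature of the classic MOCUS-style approach; the tree itself is hypothetical).

```python
# Hypothetical fault tree: TOP = OR(G1, e3), G1 = AND(e1, e2).
# Names not in TREE are basic events.
TREE = {
    "TOP": ("OR", ["G1", "e3"]),
    "G1": ("AND", ["e1", "e2"]),
}

def cut_sets(node):
    if node not in TREE:                      # basic event
        return [{node}]
    kind, children = TREE[node]
    if kind == "OR":                          # union of the children's cut sets
        return [cs for c in children for cs in cut_sets(c)]
    sets = [set()]                            # AND: cross-product of cut sets
    for c in children:
        sets = [s | cs for s in sets for cs in cut_sets(c)]
    return sets

def minimize(sets):
    """Drop any cut set that strictly contains another (non-minimal)."""
    return [s for s in sets if not any(o < s for o in sets)]

mcs = minimize(cut_sets("TOP"))
print(sorted(sorted(s) for s in mcs))   # → [['e1', 'e2'], ['e3']]
```

    On trees with tens of thousands of gates this naive expansion explodes, which is exactly why the paging storage, Boolean reduction and modularization described above matter.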

  15. Fault Severity Estimation of Rotating Machinery Based on Residual Signals

    Directory of Open Access Journals (Sweden)

    Fan Jiang

    2012-01-01

    Full Text Available Fault severity estimation is an important part of a condition-based maintenance system, which can monitor the performance of an operating machine and enhance its level of safety. In this paper, a novel method based on statistical properties and residual signals is developed for estimating the fault severity of rotating machinery. In the first stage, the fast Fourier transform (FFT) is applied to extract the so-called multifrequency-band energy (MFBE) from the vibration signals of rotating machinery with different fault severity levels. The sensitivities of these features usually differ across working conditions with different fault severities; therefore, in the second stage, a sensitive-feature selection algorithm is defined to construct the feature matrix and calculate the statistical parameter (mean). In the last stage, the residual signals computed by the zero space vector are used to estimate the fault severity. Simulation and experimental results reveal that the proposed method based on statistics and residual signals is effective and feasible for estimating the severity of a rotating machine fault.
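
    The first-stage MFBE extraction can be sketched as: transform the signal to the frequency domain, then sum spectral energy in equal-width bands. A naive O(n²) DFT keeps the sketch dependency-free (in practice an FFT routine such as numpy.fft would be used); the tone frequency and band count are arbitrary choices.

```python
import cmath
import math

def dft_mag2(x):
    """Squared magnitudes of the first half of the DFT (naive O(n^2))."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2)]

def band_energies(x, n_bands):
    """Multifrequency-band energy: total spectral energy per equal-width band."""
    spec = dft_mag2(x)
    width = len(spec) // n_bands
    return [sum(spec[b * width:(b + 1) * width]) for b in range(n_bands)]

fs, n = 256, 256
# a 20 Hz tone: its energy should land in the first of four bands (0-32 Hz bins)
x = [math.sin(2 * math.pi * 20 * t / fs) for t in range(n)]
e = band_energies(x, 4)
print(e.index(max(e)))   # → 0
```

    A vector of such band energies per vibration record is the feature from which the sensitive-feature selection then builds its matrix.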

  16. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    Science.gov (United States)

    White, Allan L.

    2012-01-01

    This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of those efforts, but they have focused on realism and efficiency. Fault injection has been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field data, arguments-from-design, and fault injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault injection with field data and arguments-from-design.

  17. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    Full Text Available Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones collected by long-term detailed geodetic surveys using field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. 5 to 7·10⁻⁵ per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account when defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. Instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones show that deformations ‘paradoxically’ deviate from the inherited movements of past geological periods. In terms of recent geodynamics, the ‘paradoxes’ of high and low strain velocities relate to the reliable empirical fact of extremely high local velocities of deformation in fault zones (about 10⁻⁵ per year and above), which occur against a background of slow regional deformations whose velocities are lower by 2 to 3 orders of magnitude. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the earth tidal deformations per year. A ‘fault

  18. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of Uav Thermography

    Science.gov (United States)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, using the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from an individual array automatically. The performance of the proposed algorithm was tested on three sample images, verifying a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.
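
    The local (per-array) detection rule can be sketched directly from the description above: flag a module whose mean thermal intensity sits more than k standard deviations above its own array's mean, so distance-dependent temperature offsets cancel within the array. The intensity values and the sensitivity k below are assumed.

```python
import statistics

def detect_defective(panel_means, k=2.0):
    """Indices of modules whose mean intensity exceeds the array mean by k sigma."""
    mu = statistics.mean(panel_means)
    sigma = statistics.stdev(panel_means)
    return [i for i, m in enumerate(panel_means) if m > mu + k * sigma]

# hypothetical mean intensities for 10 modules in one array; module 7 runs hot
array = [31.2, 30.8, 31.0, 31.1, 30.9, 31.3, 31.0, 38.5, 31.2, 30.9]
print(detect_defective(array))   # → [7]
```

    Because the rule is evaluated per array, a camera flying slightly higher over one array shifts that array's mean and threshold together, which is why it outperforms a single global threshold.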

  19. AUTOMATIC FAULT RECOGNITION OF PHOTOVOLTAIC MODULES BASED ON STATISTICAL ANALYSIS OF UAV THERMOGRAPHY

    Directory of Open Access Journals (Sweden)

    D. Kim

    2017-08-01

    Full Text Available As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, using the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from an individual array automatically. The performance of the proposed algorithm was tested on three sample images, verifying a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.

  20. dc Arc Fault Effect on Hybrid ac/dc Microgrid

    Science.gov (United States)

    Fatima, Zahra

    The advent of distributed energy resources (DER) and the reliability and stability problems of the conventional grid system have given rise to the widespread deployment of microgrids. Microgrids provide many advantages by incorporating renewable energy sources and increasing the reliability of the grid by isolating from the main grid in case of an outage. AC microgrids have been installed all over the world, but dc microgrids have been gaining interest due to the advantages they provide over ac microgrids. However, the entire power network backbone is still ac, and dc microgrids require expensive converters to connect to the ac power network. As a result, hybrid ac/dc microgrids are gaining more attention, as they combine the advantages of both ac and dc microgrids, such as direct integration of ac and dc systems with a minimum number of conversions, which increases efficiency by reducing energy losses. Although dc electric systems offer many advantages, such as no synchronization and no reactive power, successful implementation of dc systems requires appropriate protection strategies. One unique protection challenge brought by dc systems is dc arc faults. A dc arc fault is generated when there is a gap in the conductor due to insulation degradation and current bridges the gap, resulting in an arc with a very high temperature. Such a fault, if it goes undetected and is not extinguished, can damage the entire system and cause fires. The purpose of the research is to study the effect of dc arc faults at different locations in the hybrid ac/dc microgrid and provide insight into the reliability of the grid components when impacted by arc faults at various locations in the grid. The impact of dc arc faults at different locations on the performance of the PV array, wind generation, and constant power loads (CPL) interfaced with dc/dc converters is studied. MATLAB/Simulink is used to model the hybrid ac/dc microgrid and the arc fault.