WorldWideScience

Sample records for fault location algorithm

  1. Fault location algorithms for optical networks

    OpenAIRE

    Mas Machuca, Carmen; Thiran, Patrick

    2005-01-01

    Today, there is no doubt that optical networks are the solution to the explosion of Internet traffic that two decades ago we only dreamed about. They offer high capacity through the use of Wavelength Division Multiplexing (WDM) techniques, among others. However, this increase in available capacity can be undermined by the large amount of information that can be lost when a failure occurs, because not one but several channels will then be interrupted. Efficient fault detection and location mec...

  2. Modern optimization algorithms for fault location estimation in power systems

    Directory of Open Access Journals (Sweden)

    A. Sanad Ahmed

    2017-10-01

    Full Text Available This paper presents a fault location estimation approach for two-terminal transmission lines using the Teaching Learning Based Optimization (TLBO) technique and the Harmony Search (HS) technique. Previous methods such as the Genetic Algorithm (GA), Artificial Bee Colony (ABC), Artificial Neural Networks (ANN) and Cause & Effect (C&E) are also discussed, along with the advantages and disadvantages of each. Initial data for the proposed techniques are post-fault measured voltages and currents from both ends, along with the line parameters. This paper deals with several types of faults: L-L-L, L-L-L-G, L-L-G and L-G. The model was simulated in SIMULINK, with initial inputs extracted from SIMULINK to MATLAB, where the objective function specifies the fault location with very high accuracy and precision within a very short time. Future work on the benefit of using the Differential Learning TLBO (DLTLBO) is discussed as well.

  3. A Hybrid Algorithm for Fault Locating in Looped Microgrids

    DEFF Research Database (Denmark)

    Beheshtaein, Siavash; Savaghebi, Mehdi; Quintero, Juan Carlos Vasquez

    2016-01-01

    Protection is the last obstacle to realizing the idea of the microgrid. Some of the main challenges in microgrid protection include topology changes of the microgrid, weak-infeed faults, bidirectional power flow effects, blinding of the protection, sympathetic tripping, high impedance faults, and low voltage ride through (LVRT). Besides these challenges, it is desired to eliminate the relays for distribution lines and locate faults based on distributed generations' (DGs) voltage or current. On the other hand, increasing the number of DGs and lines would result in a high computation burden and degradation...

  4. A new and accurate fault location algorithm for combined transmission lines using Adaptive Network-Based Fuzzy Inference System

    Energy Technology Data Exchange (ETDEWEB)

    Sadeh, Javad; Afradi, Hamid [Electrical Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad, P.O. Box: 91775-1111, Mashhad (Iran)

    2009-11-15

    This paper presents a new and accurate algorithm for locating faults on a combined overhead transmission line with an underground power cable using an Adaptive Network-Based Fuzzy Inference System (ANFIS). The proposed method uses 10 ANFIS networks and consists of three stages: fault type classification, faulty section detection and exact fault location. In the first part, an ANFIS is used to determine the fault type, applying four inputs, i.e., the fundamental components of the three phase currents and the zero sequence current. Another ANFIS network is used to detect the faulty section, i.e., whether the fault is on the overhead line or on the underground cable. The other eight ANFIS networks are utilized to pinpoint the faults (two for each fault type). Four inputs, i.e., the dc component of the current, the fundamental-frequency components of the voltage and current and the angle between them, are used to train the neuro-fuzzy inference systems in order to accurately locate faults on each part of the combined line. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances. Simulation results confirm that the proposed method can be used as an efficient means of accurate fault location on combined transmission lines. (author)

  5. Accurate fault location algorithm on power transmission lines with use of two-end unsynchronized measurements

    Directory of Open Access Journals (Sweden)

    Mohamed Dine

    2012-01-01

    Full Text Available This paper presents a new approach to fault location on power transmission lines. This approach uses two-end unsynchronised measurements of the line and benefits from the advantages of digital technology and numerical relaying, which are available today and can easily be applied for off-line analysis. The approach modifies the apparent impedance method using a very simple first-order formula. The new method is independent of fault resistance, source impedances and pre-fault currents. In addition, the data volume communicated between relays is small enough to be transmitted easily over a digital protection channel. The proposed approach is tested via digital simulation using MATLAB, and the test results corroborate the superior performance of the proposed approach.
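    For context, a sketch of the classic two-end impedance-based calculation that such approaches build on, shown here in its synchronized form (all phasor values below are illustrative, not from the paper):

    ```python
    # Two-end impedance-based fault location (synchronized baseline).
    # With V_s, V_r, I_s, I_r the phasors at the sending/receiving ends and
    # Z the total series impedance of the line, the voltage at the fault
    # point must match when computed from either end:
    #   V_s - d*Z*I_s = V_r - (1 - d)*Z*I_r
    # Solving for the per-unit fault distance d from the sending end:
    def two_end_fault_distance(V_s, I_s, V_r, I_r, Z):
        d = (V_s - V_r + Z * I_r) / (Z * (I_s + I_r))
        return d.real  # imaginary part is ~0 for consistent phasors

    # Illustrative phasors for a fault at 30% of the line (values made up).
    Z = complex(2.0, 20.0)           # total line impedance, ohms
    d_true = 0.3
    V_f = complex(30.0, 5.0)         # voltage at the fault point
    I_s = complex(800.0, -300.0)     # sending-end current, A
    I_r = complex(500.0, -200.0)     # receiving-end current, A
    V_s = V_f + d_true * Z * I_s
    V_r = V_f + (1 - d_true) * Z * I_r
    print(round(two_end_fault_distance(V_s, I_s, V_r, I_r, Z), 3))  # → 0.3
    ```

    The unsynchronized methods surveyed in this record eliminate the synchronization assumption baked into this baseline.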

  6. k-means algorithm and mixture distributions for locating faults in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Mora-Florez, J. [The Technological University of Pereira, La Julita, Ciudad Universitaria, Pereira, Risaralda (Colombia); Cormane-Angarita, J.; Ordonez-Plata, G. [The Industrial University of Santander (Colombia)

    2009-05-15

    Enhancement of power distribution system reliability requires a considerable investment in studies and equipment; however, not all utilities can afford the time and money involved. Therefore, any strategy that improves reliability should be reflected directly in the reduction of the interruption duration and frequency indexes (SAIDI and SAIFI). In this paper, an alternative solution to the problem of power service continuity associated with fault location is presented. A methodology of a statistical nature based on finite mixtures is proposed. A statistical model is obtained from the magnitude of the voltage sag registered during a fault event, along with the network parameters and topology. The objective is to offer an economical, easily implemented alternative for the development of strategies oriented to improving reliability by reducing restoration times in power distribution systems. In an application example in a power distribution system, the faulted zones were identified with low error rates. (author)
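    A minimal 1-D k-means sketch of the clustering step such methods apply to voltage-sag magnitudes (data and initial centers below are made up):

    ```python
    # Minimal 1-D k-means: cluster voltage-sag magnitudes (per unit) so
    # each cluster can later be associated with a faulted zone.
    def kmeans_1d(xs, centers, iters=20):
        for _ in range(iters):
            # assignment step: each sample goes to its nearest center
            groups = [[] for _ in centers]
            for x in xs:
                j = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
                groups[j].append(x)
            # update step: mean of each group (keep old center if empty)
            centers = [sum(g) / len(g) if g else c
                       for g, c in zip(groups, centers)]
        return centers

    sags = [0.18, 0.22, 0.21, 0.55, 0.60, 0.58, 0.83, 0.86]  # illustrative
    print([round(c, 2) for c in kmeans_1d(sags, [0.1, 0.5, 0.9])])
    ```

    The record's method goes further by fitting finite mixture distributions rather than hard clusters, but the assignment/update loop is the same basic idea.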

  7. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Science.gov (United States)

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research. PMID:24701191

  8. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    A. H. Al-Mohammed

    2014-01-01

    Full Text Available This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research.

  9. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available As living standards rise, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in daily life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction for research institutions and scholars. RFID positioning systems offer stability, small error and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, location methods with higher accuracy are discussed; finally, the LANDMARC algorithm is described. This shows that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, along with a vision of better future RFID positioning technology.
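    A sketch of the LANDMARC idea mentioned above: estimate a tag's position by weighting the k reference tags whose received signal strengths (RSS) look most similar (all positions and RSS values below are illustrative):

    ```python
    # LANDMARC-style k-nearest-neighbour location: the tag's position is a
    # weighted average of the k reference tags closest in RSS space,
    # weighted by 1/E^2 where E is the RSS-space Euclidean distance.
    def landmarc_locate(tag_rss, ref_tags, k=3):
        # ref_tags: list of ((x, y), rss_vector) for the reference tags
        scored = []
        for pos, rss in ref_tags:
            e = sum((a - b) ** 2 for a, b in zip(tag_rss, rss)) ** 0.5
            scored.append((e, pos))
        scored.sort(key=lambda t: t[0])
        nearest = scored[:k]
        weights = [1.0 / (e * e + 1e-9) for e, _ in nearest]
        total = sum(weights)
        x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
        y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
        return (x, y)

    refs = [((0, 0), (10, 20)), ((1, 0), (12, 22)),
            ((0, 1), (11, 25)), ((1, 1), (13, 27))]
    print(landmarc_locate((10.1, 20.1), refs))   # lands near (0, 0)
    ```

    A tag whose readings nearly match one reference tag is pulled strongly toward that tag's known position, which is what makes the scheme robust to absolute RSS calibration.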

  10. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary-based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  11. Application of ant colony optimization in NPP classification fault location

    International Nuclear Information System (INIS)

    Xie Chunli; Liu Yongkuo; Xia Hong

    2009-01-01

    A nuclear power plant is a highly complex structural system with high safety requirements, so fault location is particularly important for enhancing its safety. Ant Colony Optimization is a new type of optimization algorithm, which is used in this paper for fault location and classification in nuclear power plants. Taking the main coolant system of the first loop as the study object and using VB6.0 programming technology, the NPP fault location system is designed and tested against related data in the literature. Test results show that ant colony optimization can be used for accurate classification-based fault location in nuclear power plants. (authors)

  12. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy, Helsinki (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1996-12-31

    In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First, the distribution data management systems and their interface with the substation telecontrol, or SCADA, systems are studied. Then the integration of the substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  13. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1998-08-01

    In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First, the distribution data management systems and their interface with the substation telecontrol, or SCADA, systems are studied. Then the integration of the substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  14. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy, Helsinki (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1997-12-31

    In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First, the distribution data management systems and their interface with the substation telecontrol, or SCADA, systems are studied. Then the integration of the substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  15. Fault location in underground cables using ANFIS nets and discrete wavelet transform

    Directory of Open Access Journals (Sweden)

    Shimaa Barakat

    2014-12-01

    Full Text Available This paper presents an accurate algorithm for locating faults in a medium voltage underground power cable using a combination of an Adaptive Network-Based Fuzzy Inference System (ANFIS) and the discrete wavelet transform (DWT). The proposed method uses five ANFIS networks and consists of two stages: fault type classification and exact fault location. In the first part, an ANFIS is used to determine the fault type, applying four inputs, i.e., the maximum detail energy of the three phase and zero sequence currents. The other four ANFIS networks are utilized to pinpoint the faults (one for each fault type). The same four inputs are used to train the neuro-fuzzy inference systems in order to accurately locate faults on the cable. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances.
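    The "maximum detail energy" features above come from a wavelet decomposition of the current signals. A minimal sketch using a one-level Haar DWT (the paper's mother wavelet and decomposition level may differ):

    ```python
    # One-level Haar DWT detail coefficients and their energy: a sketch of
    # the detail-energy features that feed the ANFIS stage. The detail
    # coefficient of each sample pair is a scaled difference, so the
    # energy measures high-frequency content such as a fault transient.
    def haar_detail(signal):
        return [(signal[i] - signal[i + 1]) / 2 ** 0.5
                for i in range(0, len(signal) - 1, 2)]

    def detail_energy(signal):
        return sum(c * c for c in haar_detail(signal))

    smooth = [1.0, 1.0, 1.0, 1.0]       # no high-frequency content
    step = [1.0, -1.0, 1.0, -1.0]       # strong high-frequency content
    print(detail_energy(smooth), detail_energy(step))
    ```

    The smooth signal yields zero detail energy while the oscillating one yields a large value, which is exactly the contrast the classifier exploits.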

  16. Comparing Different Fault Identification Algorithms in Distributed Power System

    Science.gov (United States)

    Alkaabi, Salim

    A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increased, distributed power generation was introduced to the power system. Faults may occur in the power system at any time and in different locations. These faults cause huge damage to the system, as they might lead to full failure of the power system. Using distributed generation in the power system makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms have been tested and compared under varying amounts of injected distributed generation. The algorithms were tested on the IEEE 34 node test feeder using MATLAB, and the results were compared to find when these algorithms might fail and how reliable these methods are.

  17. Fault locator of an allyl chloride plant

    Directory of Open Access Journals (Sweden)

    Savković-Stevanović Jelenka B.

    2004-01-01

    Full Text Available Process safety analysis, which includes qualitative fault event identification, relative frequency and event probability functions, as well as consequence analysis, was performed on an allyl chloride plant. An event tree for fault diagnosis and cognitive reliability analysis, as well as a troubleshooting system, were developed. Fuzzy inductive reasoning showed advantages compared to crisp inductive reasoning. A qualitative model forecast the future behavior of the system in the case of accident detection and then compared it with the actual measured data. A cognitive model combining qualitative and quantitative information through the fuzzy logic of the incident scenario was derived as a fault locator for the allyl chloride plant. The obtained results showed the successful application of cognitive dispersion modeling to process safety analysis. A fuzzy inductive reasoner performed well in discriminating between different types of malfunctions. This fault locator allowed risk analysis and the construction of a fault tolerant system. This study is the first report in the literature of the cognitive reliability analysis method.

  18. Fault Locating, Prediction and Protection (FLPPS)

    Energy Technology Data Exchange (ETDEWEB)

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

    One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently with deregulation and competition that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned with three Phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

  19. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

    Full Text Available This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for locating faults in non-intrusive single-line-to-ground fault (SLGF) detection of low voltage distribution systems. The input features of the AI algorithms are extracted using a statistical moment transformation to reduce the dimensions of the power signature inputs measured by non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGFs using the Electromagnetic Transients Program (EMTP) in a test system. To enhance identification accuracy, these features are normalized and given to the AI algorithms for evaluation. Different AI techniques are then compared to determine which identification algorithms are suitable for diagnosing SLGFs from various power signatures in an NIFM system. The simulation results show that the proposed method is effective and can identify fault locations using non-intrusive monitoring techniques for low voltage distribution systems.
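    The statistical moment transformation above reduces a sampled power signature to a handful of scalar features. A minimal sketch (illustrative, not the paper's exact transform):

    ```python
    # First four statistical moments of a sampled signal: mean, variance,
    # skewness and kurtosis. Together they compress a waveform into four
    # scalars usable as classifier inputs.
    def moment_features(xs):
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        std = var ** 0.5
        skew = sum(((x - mean) / std) ** 3 for x in xs) / n if std else 0.0
        kurt = sum(((x - mean) / std) ** 4 for x in xs) / n if std else 0.0
        return mean, var, skew, kurt

    # A symmetric ramp has zero skewness.
    print(moment_features([1.0, 2.0, 3.0, 4.0, 5.0]))
    ```

    Dimensionality reduction like this is what lets the AI stage train on a few numbers per signature instead of thousands of raw samples.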

  20. An Algorithm for Fault-Tree Construction

    DEFF Research Database (Denmark)

    Taylor, J. R.

    1982-01-01

    An algorithm for performing certain parts of the fault tree construction process is described. Its input is a flow sheet of the plant, a piping and instrumentation diagram, or a wiring diagram of the circuits, to be analysed, together with a standard library of component functional and failure...

  1. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines.

    Science.gov (United States)

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but fault location estimation for series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect the open conductor (series) fault and indicate the faulty phase(s), but they are unable to locate the series fault, so the repair crew must patrol the complete line to find it. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals at both ends of the line are used as input to the proposed fuzzy-based fault location scheme. The percentage error in locating series faults is within 1%, and within 5% for shunt faults, for all tested fault cases. The percentage error in location estimation is validated using the Chi-square test at both the 1% and 5% levels of significance.

  2. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  3. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi

    2018-02-12

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a statistical approach. Specifically, a simulation model that mimics the theoretical performances of the inspected PV system is designed. Residuals, which are the difference between the measured and estimated output data, are used as a fault indicator. Indeed, residuals are used as the input for the Multivariate CUmulative SUM (MCUSUM) algorithm to detect potential faults. We evaluated the proposed method by using data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
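    A univariate sketch of the residual-based CUSUM idea described above (the paper uses the multivariate MCUSUM variant; the slack parameter k and threshold h below are illustrative):

    ```python
    # One-sided CUSUM on model residuals: accumulate (residual - k) and
    # alarm when the running sum exceeds h. Small zero-mean residuals keep
    # the sum pinned at zero; a persistent positive shift makes it grow.
    def cusum_alarm(residuals, k=0.5, h=5.0):
        s = 0.0
        for i, r in enumerate(residuals):
            s = max(0.0, s + r - k)
            if s > h:
                return i  # index of the first alarm
        return None

    # healthy residuals hover near zero; a fault shifts them upward
    healthy = [0.1, -0.2, 0.0, 0.15, -0.1] * 4
    faulty = healthy + [1.5] * 10      # persistent shift from sample 20 on
    print(cusum_alarm(healthy), cusum_alarm(faulty))  # → None 25
    ```

    The slack k sets how large a shift is worth accumulating, and h trades detection delay against false alarms.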

  4. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    Science.gov (United States)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in an underground distribution system is presented. The Discrete Wavelet Transform (DWT), based on travelling waves, is employed in order to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
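    For illustration, when first-peak arrival times are available at both ends of a line of length L, the arrival-time difference gives the fault distance directly. A sketch under assumed values (the propagation speed v and line data below are made up; the paper's single-bus formulation differs in detail):

    ```python
    # Double-end travelling-wave fault location: the surge launched at the
    # fault reaches each end after travelling d and (L - d) respectively,
    # so d = (L + v * (t_s - t_r)) / 2 from the sending end.
    def tw_fault_distance(L, v, t_s, t_r):
        return (L + v * (t_s - t_r)) / 2.0

    L = 10_000.0        # cable length, m (assumed)
    v = 1.8e8           # wave propagation speed in cable, m/s (assumed)
    d_true = 3_000.0
    t_s = d_true / v            # first-peak arrival at the sending end
    t_r = (L - d_true) / v      # first-peak arrival at the receiving end
    print(round(tw_fault_distance(L, v, t_s, t_r), 6))
    ```

    The accuracy of any such scheme rests on timestamp resolution and on knowing v for the particular cable, which is why the wavelet stage is used to pick the first peak sharply.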

  5. Smart intimation and location of faults in distribution system

    Science.gov (United States)

    Hari Krishna, K.; Srinivasa Rao, B.

    2018-04-01

    Location of faults in the distribution system is one of the most complicated problems we face today. Identifying the location and severity of a fault within a short time is required to provide a continuous power supply, but fault identification and transfer of that information to the operator is the biggest challenge in the distribution network. This paper proposes a fault location method for the distribution system based on an Arduino Nano and a GSM module with a flame sensor. The main idea is to locate a fault in the distribution transformer by sensing the arc coming from the fuse element. Well-operated transmission and distribution systems play a key role in uninterrupted power supply, so whenever a fault occurs in the distribution system, the time taken to locate and eliminate it has to be reduced. The proposed design was achieved with a flame sensor and a GSM module. Under a faulty condition, the system automatically sends an alert message to the operator in the distribution system about the abnormal conditions near the transformer, the site code and its exact location for possible power restoration.

  6. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

    Full Text Available Accurate fault location in an Electric Power Distribution System (EPDS) is important for maintaining system reliability. Several methods have been proposed in the past; however, these methods either prove inefficient or depend on the fault type (fault classification), because they require a separate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based Fault Location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line recognized in the literature.

  7. Fault Classification and Location in Transmission Lines Using Traveling Waves Modal Components and Continuous Wavelet Transform (CWT

    Directory of Open Access Journals (Sweden)

    Farhad Namdari

    2016-06-01

    Full Text Available Accurate fault classification and localization are the basis of protection for transmission systems. This paper presents a new method for classifying and locating faults using travelling waves and modal analysis. In the proposed method, the characteristics of different faults are investigated using the Clarke transformation and the initial current travelling wave; then, appropriate indices are introduced to identify different types of faults. The continuous wavelet transform (CWT) is employed to extract information from the current and voltage travelling waves. The fault location and classification algorithm is designed according to the wavelet transform coefficients of the current and voltage modal components. The performance of the proposed method is tested for different fault conditions (different fault distances, fault resistances, and fault inception angles) using PSCAD and MATLAB, with satisfactory results.
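    For background, the Clarke transformation used above maps the three phase quantities to modal (αβ0) components; the zero mode is nonzero only when the phases do not sum to zero, which is one cue such schemes use for ground faults. A sketch in the amplitude-invariant form:

    ```python
    # Clarke (alpha-beta-zero) transformation, amplitude-invariant form:
    #   alpha = (2a - b - c) / 3
    #   beta  = (b - c) / sqrt(3)
    #   zero  = (a + b + c) / 3
    import math

    def clarke(a, b, c):
        alpha = (2.0 * a - b - c) / 3.0
        beta = (b - c) / math.sqrt(3.0)
        zero = (a + b + c) / 3.0
        return alpha, beta, zero

    # A balanced set sums to zero, so the zero-mode component vanishes.
    print(clarke(1.0, -0.5, -0.5))  # → (1.0, 0.0, 0.0)
    ```

    Applying this sample-by-sample to the three phase currents yields the modal signals whose wavelet coefficients the classification indices are built from.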

  8. Single-phased Fault Location on Transmission Lines Using Unsynchronized Voltages

    Directory of Open Access Journals (Sweden)

    ISTRATE, M.

    2009-10-01

    Full Text Available Increased accuracy in fault detection and location simplifies maintenance, which motivates the development of new possibilities for precise estimation of the fault location. In the literature, many methods for fault location using voltage and current measurements at one or both terminals of power grid lines are presented. Double-end synchronized data algorithms are very precise, but current transformers can limit the accuracy of these estimations. This paper presents an algorithm to estimate the location of single-phased faults which uses only voltage measurements at both terminals of the transmission line, eliminating the error due to current transformers and without imposing the restriction of perfect data synchronization. Under these conditions, the algorithm can be used with the existing equipment of most power grids; installing phasor measurement units with GPS-synchronized timers is not compulsory. Only the positive sequence of the line parameters and sources is used, thus eliminating the uncertainty in zero-sequence parameter estimation. The algorithm is tested using the results of EMTP-ATP simulations, after validation of the ATP models on the basis of results registered in a real power grid.

  9. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics is extracted from the fault signals recorded in the relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly represented so that the neural network does not face any issues in identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters...
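A minimal sketch of the kind of statistical feature extraction the record describes: a recorded waveform is collapsed into a handful of statistics that can serve as neural-network inputs. The particular statistics chosen here are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def statistical_features(signal):
    """Collapse a recorded fault waveform into a few statistics usable as
    ANN inputs (illustrative feature set: mean, std, peak, RMS)."""
    s = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(s ** 2))
    return np.array([s.mean(), s.std(), np.abs(s).max(), rms])
```

In practice one such vector would be computed per sequence component or phase signal and concatenated into the network's input.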

  10. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach to algorithmic fault tree generation is presented. The method consists mainly of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided an inclusive component database is developed. (author)

  11. Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meenakshi Panda

    2014-01-01

    Full Text Available A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common Byzantine faults such as stuck-at-zero, stuck-at-one, and random data. In the proposed approach, each sensor node gathers the observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node detects the presence of a faulty sensor node, it compares its observed data with the data of its neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensor data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency and message complexity.
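The neighbour-comparison step can be sketched roughly as follows; the function names, the deviation threshold, and the majority-vote diffusion rule are assumptions for illustration, not the paper's exact formulation.

```python
import statistics

def probable_fault_status(own_reading, neighbor_readings, threshold):
    """A node compares its reading with the neighbourhood mean; a large
    deviation marks it as probably faulty (covers soft/stuck-at faults)."""
    mean = statistics.mean(neighbor_readings)
    return "probably_faulty" if abs(own_reading - mean) > threshold else "probably_good"

def final_status(own_status, neighbor_statuses):
    """Diffuse fault information: settle the final status by a simple
    majority vote over the node's own and its neighbours' verdicts."""
    votes = neighbor_statuses + [own_status]
    return max(set(votes), key=votes.count)
```

A stuck-at node reporting 100 among neighbours reading around 20 would be flagged in the first step, and the vote in the second step suppresses isolated wrong verdicts.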

  12. Automatic identification of otological drilling faults: an intelligent recognition algorithm.

    Science.gov (United States)

    Cao, Tianyang; Li, Xisheng; Gao, Zhiqiang; Feng, Guodong; Shen, Peng

    2010-06-01

    This article presents an intelligent recognition algorithm that can recognize milling states of the otological drill by fusing multi-sensor information. An otological drill was modified by the addition of sensors. The algorithm was designed according to features of the milling process and is composed of a characteristic curve, an adaptive filter and a rule base. The characteristic curve can weaken the impact of the unstable normal milling process and reserve the features of drilling faults. The adaptive filter is capable of suppressing interference in the characteristic curve by fusing multi-sensor information. The rule base can identify drilling faults through the filtering result data. The experiments were repeated on fresh porcine scapulas, including normal milling and two drilling faults. The algorithm has high rates of identification. This study shows that the intelligent recognition algorithm can identify drilling faults under interference conditions. (c) 2010 John Wiley & Sons, Ltd.

  13. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

    Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user-interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.

  14. Fault Identification Algorithm Based on Zone-Division Wide Area Protection System

    Directory of Open Access Journals (Sweden)

    Xiaojun Liu

    2014-04-01

    Full Text Available As the power grid becomes larger and more complicated, wide-area protection systems in practical engineering applications are increasingly restricted by the capability of the communication system. Based on the concept of the limitedness of wide-area protection systems, this paper divides a grid with a complex structure into ordered zones, and fault identification and protection actions are executed within each zone to reduce the load on the communication system. Within a protection zone, a new wide-area protection algorithm based on the directional comparison of positive-sequence fault components is proposed. Special associated intelligent electronic device (IED) zones containing buses and transmission lines are created according to the installation locations of the IEDs. When a fault occurs, by collecting and sharing fault information from the associated zones and applying the fault discrimination principle defined in this paper, the IEDs can identify the fault location and remove the fault according to a predetermined action strategy. The algorithm is not affected by load changes or transition resistance and also adapts well to open-phase operation of the power system. It can be used as a main protection, and it can also serve a back-up protection function. Case study results show that the division method for the wide-area protection system and the proposed algorithm are effective.

  15. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is usually identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data themselves. A fault, however, has better spatial continuity in a certain direction, which makes it quite different from noise. Exploiting this characteristic, a modified ant-colony algorithm is introduced for automatic fault identification and tracking, with the gradient direction and direction consistency used as constraints. Numerical model tests show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  16. Fiber fault location utilizing traffic signal in optical network.

    Science.gov (United States)

    Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi

    2013-10-07

    We propose and experimentally demonstrate a method for fault location in optical communication networks. The method utilizes the traffic signal transmitted across the network as the probe signal, and then locates the fault by a correlation technique. Compared with conventional techniques, our method has a simple structure and low operating expenditure, because no additional devices such as a light source, modulator or signal generator are used. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse-ranging techniques. Moreover, the signal extraction process improves the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and a detection range of over 23 km with -8-dBm mean launched power in an optical network based on synchronous digital hierarchy protocols.
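The correlation step can be sketched as below: the lag that maximises the cross-correlation between the launched traffic signal and its reflected echo gives the round-trip delay, which converts to distance via the fibre's group velocity. The sampling rate, group index and signal shapes are assumed for illustration; the experiment's actual hardware details are in the paper.

```python
import numpy as np

C = 3e8          # speed of light in vacuum, m/s
N_GROUP = 1.468  # typical group index of single-mode fibre (assumed)

def fault_distance(probe, echo, fs):
    """Estimate the fault distance from the lag that maximises the
    cross-correlation between the launched signal and its echo.
    fs is the sampling rate in Hz."""
    corr = np.correlate(echo, probe, mode="full")
    lag = corr.argmax() - (len(probe) - 1)  # round-trip delay in samples
    delay = lag / fs                        # round-trip delay in seconds
    return C / N_GROUP * delay / 2.0        # one-way distance in metres
```

Because the probe is broadband traffic rather than a short pulse, the correlation peak stays sharp without the pulse-ranging tradeoff between resolution and range.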

  17. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets using an auxiliary dynamic structure. The presented algorithm allows the minimal cut sets to be found according to defined requirements - by the order of the minimal cut sets, by their number, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm
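The underlying idea is easiest to see in a textbook top-down expansion with subsumption (in the spirit of MOCUS); this sketch is not the CARA algorithm itself and omits its auxiliary dynamic structure and order/number cut-offs.

```python
def minimal_cut_sets(tree):
    """Top-down expansion of a fault tree given as nested tuples:
    ('AND', ...), ('OR', ...) or a basic-event name (string)."""
    if isinstance(tree, str):
        return [{tree}]
    op, *children = tree
    child_sets = [minimal_cut_sets(c) for c in children]
    if op == 'OR':
        cuts = [cs for sets in child_sets for cs in sets]
    else:  # 'AND': cartesian union of the children's cut sets
        cuts = [set()]
        for sets in child_sets:
            cuts = [c | s for c in cuts for s in sets]
    # Subsume: drop any cut set that strictly contains another one
    minimal = [c for c in cuts if not any(o < c for o in cuts)]
    out = []  # deduplicate while preserving order
    for c in minimal:
        if c not in out:
            out.append(c)
    return out
```

For TOP = OR(AND(A, B), A), the cut set {A, B} is subsumed by {A}, so only {A} survives.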

  18. A New Method Presentation for Fault Location in Power Transformers

    OpenAIRE

    Hossein Mohammadpour; Rahman Dashti

    2011-01-01

    Power transformers are among the most important and expensive pieces of equipment in electric power systems. Consequently, transformer protection is an essential part of system protection. This paper presents a new method for locating transformer winding faults such as turn-to-turn, turn-to-core, turn-to-transformer-body, turn-to-earth, and high-voltage winding to low-voltage winding faults. In this study, the current and voltage signals of the input and output terminals of the tran...

  19. A New Fault Diagnosis Algorithm for PMSG Wind Turbine Power Converters under Variable Wind Speed Conditions

    Directory of Open Access Journals (Sweden)

    Yingning Qiu

    2016-07-01

    Full Text Available Although Permanent Magnet Synchronous Generator (PMSG) wind turbines (WTs) mitigate gearbox impacts, they require high reliability from generators and converters. Statistical analysis shows that the failure rates of direct-drive PMSG wind turbines' generators and inverters are high. Intelligent fault diagnosis algorithms to detect inverter faults are a prerequisite for condition monitoring systems aimed at improving wind turbine reliability and availability. The influence of random wind speed and diversified control strategies makes developing intelligent fault diagnosis algorithms for converters challenging. This paper studies the open-circuit fault features of wind turbine converters under variable wind speed conditions through systematic simulation and experiment. A new fault diagnosis algorithm named Wind Speed Based Normalized Current Trajectory is proposed and used to accurately detect and locate the faulted IGBT in the circuit arms. It is compared to direct current monitoring and current vector trajectory pattern approaches. The results show that the proposed method is more accurate in fault diagnosis and has superior anti-noise capability under variable wind speed conditions. The impact of the control strategy is also identified. Experimental results demonstrate its applicability to practical WT condition monitoring systems used to improve wind turbine reliability and reduce maintenance cost.

  20. Algorithms and programs for consequence diagram and fault tree construction

    International Nuclear Information System (INIS)

    Hollo, E.; Taylor, J.R.

    1976-12-01

    A presentation of algorithms and programs for consequence diagram and sequential fault tree construction, intended for reliability and disturbance analysis of large systems. The system to be analyzed must be given as a block diagram formed from the mini fault trees of individual system components. The programs were written in the LISP programming language and run on a PDP-8 computer with 8k words of storage. A description is given of the methods used and of the programs' construction and operation. (author)

  1. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  2. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Science.gov (United States)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. First, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (the dynamic case). Here, we implement a mixed-memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate compared to a formulation that assumes independent fault states. Second, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix, which couples the failed tests and faults via the constraint matrix, and due to the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of dynamic multiple fault diagnosis, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a
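The static set-covering subproblem at the heart of DSC is easiest to see in its classic greedy form. This sketch (names and data shapes assumed) ignores the temporal coupling, probabilities and costs that make DSC hard: it just picks, at each step, the failure source that explains the most still-unexplained failed tests.

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for set covering: universe is the set of
    failed tests; subsets maps each candidate failure source to the
    set of tests it can explain."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the source explaining the most still-uncovered tests
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

With failed tests {1, 2, 3, 4} and sources f1 → {1, 2, 3}, f2 → {3, 4}, f3 → {4}, the greedy choice is [f1, f2]: a parsimonious explanation of all observed failures.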

  3. Using tensorial electrical resistivity survey to locate fault systems

    International Nuclear Information System (INIS)

    Monteiro Santos, Fernando A; Plancha, João P; Marques, Jorge; Perea, Hector; Cabral, João; Massoud, Usama

    2009-01-01

    This paper deals with the use of the tensorial resistivity method for fault orientation and macroanisotropy characterization. The rotational properties of the apparent resistivity tensor are presented using 3D synthetic models representing structures with a dominant direction of low resistivity and vertical discontinuities. It is demonstrated that polar diagrams of the elements of the tensor are effective in delineating those structures. Since the apparent resistivity tensor is largely ineffective at investigating the depth of the structures, it is advisable to complement tensorial surveys with other geophysical methods. An experimental example, including tensorial, dipole–dipole and time domain surveys, is presented to illustrate the potential of the method. The dipole–dipole model shows high resistivity contrasts, which were interpreted as faults crossing the area. The results from the time domain electromagnetic (TEM) soundings show high resistivity values down to depths of 40–60 m in the northern part of the area. In the southern part of the survey area the soundings show an upper layer with low resistivity values (around 30 Ω m) followed by a more resistive bedrock (resistivity >100 Ω m) at a depth ranging from 15 to 30 m. The soundings in the central part of the survey area show more variability: a thin conductive overburden is followed by a more resistive layer with resistivity in the range of 80–1800 Ω m. The north and south limits of the central part of the area as revealed by the TEM survey are roughly E–W oriented and coincide with the northern fault scarp and the southernmost fault detected by the dipole–dipole survey. The pattern of the polar diagrams calculated from the tensorial resistivity data clearly indicates the presence of a contact between two blocks in the south of the survey area, with the low-resistivity block located southwards. The presence of the other two faults is not so clear from the polar diagram patterns, but

  4. Development of an accurate transmission line fault locator using the global positioning system satellites

    Science.gov (United States)

    Lee, Harry

    1994-01-01

    A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. The fault locator system consists of traveling-wave detectors located at key substations which detect and time-tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element in the success of this system, and the fault locator derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
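The two-ended travelling-wave principle reduces to one line of arithmetic: with GPS-synchronised arrival times t_A and t_B at the two terminals and propagation speed v, the fault sits at d = (L + v(t_A - t_B)) / 2 from terminal A. A sketch, with illustrative units:

```python
def fault_location_km(line_length_km, wave_speed_km_per_us, t_a_us, t_b_us):
    """Two-end travelling-wave fault location: the synchronised
    arrival-time difference of the wavefront at terminals A and B
    fixes the fault point, measured from terminal A."""
    return (line_length_km + wave_speed_km_per_us * (t_a_us - t_b_us)) / 2.0
```

For a 300 km line with a wave speed near the speed of light, a fault 100 km from A arrives 100/v microseconds earlier at A than at B, recovering d = 100 km. Timing jitter of about 1 microsecond maps to roughly 150 m of location error, which is why GPS-grade time tagging matters for the +/-300 m objective.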

  5. A method for detection and location of high resistance earth faults

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S; Lehtonen, M [VTT Energy, Espoo (Finland); Antila, E [ABB Transmit Oy (Finland)

    1998-08-01

    In the first part of this presentation, the theory of earth faults in unearthed and compensated power systems is briefly presented. The main factors affecting high resistance fault detection are outlined and common practices for earth fault protection in present systems are summarized. The algorithms of the new method for high resistance fault detection and location are then presented. These are based on the change of neutral voltage and zero sequence currents, measured at the high voltage / medium voltage substation and also at the distribution line locations. The performance of the method is analyzed and the possible error sources discussed, among them switching actions, thunderstorms and heavy snowfall. The feasibility of the method is then verified by an analysis based both on simulated data, derived using an EMTP-ATP simulator, and on real system data recorded during field tests at three substations. For the error source analysis, real case data recorded during natural power system events is also used.

  6. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  7. A fast BDD algorithm for large coherent fault trees analysis

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo

    2004-01-01

    Although binary decision diagram (BDD) algorithms have recently been applied to large fault trees, such trees are not solved efficiently, since the size of a BDD structure grows exponentially with the number of variables. Furthermore, truncation of If-Then-Else (ITE) connectives by a probability or size limit, and subsuming to delete subsets, could not be applied directly to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm) in which truncation and subsuming are performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND or OR operation between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of truncation and subsuming at every step of the calculation, large fault trees for coherent systems (coherent fault trees) are solved efficiently in a short time using less memory. Furthermore, with respect to the size of the BDD structure, the coherent BDD algorithm is much less sensitive to variable ordering than the conventional BDD algorithm.

  8. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Science.gov (United States)

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes and determines the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experiment results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972
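The trust-index mechanism can be caricatured as a bounded reward/penalty rule feeding a trust-weighted vote; the increment, bounds and function names below are illustrative assumptions, since the paper defines its own model.

```python
def update_trust(trust, report, consensus, delta=0.1):
    """Raise a node's trust index when its binary report agrees with the
    consensus decision, lower it otherwise, clamped to [0, 1]
    (illustrative 'add on positive, subtract on negative' rule)."""
    t = trust + delta if report == consensus else trust - delta
    return min(1.0, max(0.0, t))

def weighted_alarm(reports, trusts):
    """Trust-weighted majority of binary alarm reports (1 = event seen):
    low-trust (likely faulty) nodes contribute little to the decision."""
    score = sum(r * t for r, t in zip(reports, trusts))
    return score > 0.5 * sum(trusts)
```

Repeatedly wrong nodes are driven toward zero trust, so their reports stop perturbing the event count and location estimates.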

  9. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Directory of Open Access Journals (Sweden)

    Jian Wan

    2011-06-01

    Full Text Available This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes and determines the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experiment results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.

  10. Stochastic Resonance algorithms to enhance damage detection in bearing faults

    Directory of Open Access Journals (Sweden)

    Castiglione Roberto

    2015-01-01

    Full Text Available Stochastic resonance is a phenomenon, studied and mainly exploited in telecommunications, that permits the amplification and detection of weak signals with the assistance of noise. The first papers on this technique date from the early 1980s and were developed to explain the periodically recurrent ice ages. Other applications mainly concern neuroscience, biology, medicine and, obviously, signal analysis and processing. Recently, some researchers have applied the technique to detect faults in mechanical systems and bearings. In this paper, we try to better understand the conditions of applicability and which algorithm is best adopted for these purposes. In fact, for the methodology to be profitable and efficient at enhancing the signal spikes due to faults in the rings and balls/rollers of bearings, some parameters have to be properly selected. This is a problem, since in system identification this procedure should be as blind as possible. Two algorithms are analysed: the first exploits classical SR with three mutually dependent parameters, while the other uses the Woods-Saxon potential, again with three parameters but with a different meaning. Comparing the performance of the two algorithms and optimally choosing their parameters are the aims of this paper. The algorithms are tested on simulated and experimental data, showing an evident capacity to increase the signal-to-noise ratio.
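The classical SR scheme referred to here passes the measured signal through an overdamped bistable system; a minimal Euler-discretised sketch follows. The parameters a, b and the step size are illustrative (they are exactly the quantities the paper says must be tuned), and the Woods-Saxon variant is not shown.

```python
def bistable_sr(signal, a=1.0, b=1.0, dt=0.01):
    """Euler integration of x' = a*x - b*x**3 + s(t), the classical
    bistable stochastic-resonance system. Noise in s(t) helps weak
    fault-induced spikes drive transitions between the two wells,
    which enhances them relative to the background."""
    x, out = 0.0, []
    for s in signal:
        x += dt * (a * x - b * x ** 3 + s)
        out.append(x)
    return out
```

With no input the state stays at the unstable equilibrium, while a sustained forcing pushes it into one of the wells near x = +/-sqrt(a/b); it is this well-hopping, assisted by noise, that SR exploits.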

  11. Characterizing the structural maturity of fault zones using high-resolution earthquake locations.

    Science.gov (United States)

    Perrin, C.; Waldhauser, F.; Scholz, C. H.

    2017-12-01

    We use high-resolution earthquake locations to characterize the three-dimensional structure of active faults in California and how it evolves with fault structural maturity. We investigate the distribution of aftershocks of several recent large earthquakes that occurred on immature faults (i.e., slow-moving, with small cumulative displacement), such as the 1992 (Mw7.3) Landers and 1999 (Mw7.1) Hector Mine events, and of earthquakes that occurred on mature faults, such as the 1984 (Mw6.2) Morgan Hill and 2004 (Mw6.0) Parkfield events. Unlike previous studies, which typically estimated the width of fault zones from the distribution of earthquakes perpendicular to the surface fault trace, we resolve fault zone widths with respect to the 3D fault surface estimated from principal component analysis of local seismicity. We find that the zone of brittle deformation around the fault core is narrower along mature faults than along immature faults. We observe a rapid fall-off in the number of events at a distance of 70-100 m from the main fault surface of mature faults (140-200 m fault zone width), and 200-300 m from the fault surface of immature faults (400-600 m fault zone width). These observations are in good agreement with fault zone widths estimated from guided waves trapped in low-velocity damage zones. The total width of the active zone of deformation surrounding the main fault plane reaches 1.2 km for mature faults and 2-4 km for immature faults. The wider zone of deformation presumably reflects the increased heterogeneity of the stress field along the complex and discontinuous strands that make up immature faults. In contrast, narrower deformation zones tend to align with the well-defined fault planes of mature faults, where most of the deformation is concentrated. Our results are in line with previous studies suggesting that surface fault traces become smoother, and thus fault zones simpler, as cumulative fault slip increases.

  12. Reconnaissance geophysics to locate major faults in clays

    International Nuclear Information System (INIS)

    Jackson, P.D.; Hallam, J.R.; Raines, M.G.; Rainsbury, M.P.; Greenwood, P.G.; Busby, J.P.

    1991-01-01

    Trial surveys using resistivity, seismic refraction and electromagnetic techniques have been carried out at two potential research sites on Jurassic clays. A previously unknown major fault has been detected at Down Ampney and mapped during the main survey to a precision of 5 to 10 metres by resistivity profiling using a Schlumberger array, optimized from pre-existing data and model studies. The fault is identified by a strong characteristic signature which results from the fault displacement and a zone of disturbance within the clay. This method is rapid, provides high resolution and permits immediate field interpretation

  13. Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains

    Directory of Open Access Journals (Sweden)

    Volker Turau

    2018-05-01

    Full Text Available The analysis of self-stabilizing algorithms is often limited to the worst-case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault. This event has a much higher probability than multiple concurrent faults. Therefore, the worst-case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms, based on Markov chains in combination with lumping. To illustrate the applicability of the techniques, they are applied to a new self-stabilizing coloring algorithm.
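
The mean-time-to-recover computation behind this kind of analysis can be sketched with an absorbing Markov chain: after lumping, the transient states are the (classes of) faulty configurations and the legitimate state is absorbing, and the fundamental matrix N = (I − Q)⁻¹ gives expected steps to absorption. The transition probabilities below are hypothetical:

```python
import numpy as np

# Hypothetical lumped chain: states 0 and 1 are lumped "faulty" classes,
# state 2 is the absorbing legitimate state. P[i, j] = per-step probability.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.4, 0.5],
    [0.0, 0.0, 1.0],   # legitimate state is absorbing
])

Q = P[:2, :2]                      # transitions among transient (faulty) states
N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
t = N @ np.ones(2)                 # expected steps to absorption per start state
```

Here `t[i]` is the mean recovery time (in steps) when the single fault throws the system into lumped state `i`; for these numbers t = [10/3, 20/9].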

  14. Precise tremor source locations and amplitude variations along the lower-crustal central San Andreas Fault

    Science.gov (United States)

    Shelly, David R.; Hardebeck, Jeanne L.

    2010-01-01

    We precisely locate 88 tremor families along the central San Andreas Fault using a 3D velocity model and numerous P and S wave arrival times estimated from seismogram stacks of up to 400 events per tremor family. Maximum tremor amplitudes vary along the fault by at least a factor of 7, with by far the strongest sources along a 25 km section of the fault southeast of Parkfield. We also identify many weaker tremor families, which have largely escaped prior detection. Together, these sources extend 150 km along the fault, beneath creeping, transitional, and locked sections of the upper crustal fault. Depths are mostly between 18 and 28 km, in the lower crust. Epicenters are concentrated within 3 km of the surface trace, implying a nearly vertical fault. A prominent gap in detectable activity is located directly beneath the region of maximum slip in the 2004 magnitude 6.0 Parkfield earthquake.

  15. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

    Full Text Available The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that the CSA works much better than the previous algorithm on the DFLP and offers a reasonable new alternative solution method for it.

  16. Online Location of Faults on AC Cables in Underground Transmission Systems

    DEFF Research Database (Denmark)

    Jensen, Christian Flytkjær

    A transmission grid is normally laid out as an almost pure overhead line (OHL) network. The introduction of transmission voltage level XLPE cables and the increasing interest in the environmental impact of OHL has resulted in an increasing interest in the use of underground cables on transmission … under fault conditions well, but the accuracy of the calculated impedance is low for fault location purposes. The neural networks can therefore not be trained and no impedance-based fault location method can be used for crossbonded cables or hybrid lines. The use of travelling wave-based methods … connection to verify the proposed method. Faults at a reduced voltage are artificially applied in the cable system and the transient response is measured at two terminals at the cable's ends. The measurements are time-synchronised and it is found that a very accurate estimation of the fault location can …

  17. New algorithm to detect modules in a fault tree for a PSA

    International Nuclear Information System (INIS)

    Jung, Woo Sik

    2015-01-01

    A module or independent subtree is a part of a fault tree whose child gates or basic events are not repeated in the remaining part of the fault tree. Modules are necessarily employed in order to reduce the computational costs of fault tree quantification. This paper presents a new linear time algorithm to detect modules of large fault trees. The size of cut sets can be substantially reduced by replacing independent subtrees in a fault tree with super-components. Chatterjee and Birnbaum developed properties of modules, and demonstrated their use in fault tree analysis. Locks expanded the concept of modules to non-coherent fault trees. Independent subtrees were once identified manually while coding a fault tree for computer analysis; nowadays, they are identified automatically by the fault tree solver. A Dutuit and Rauzy (DR) algorithm to detect modules of coherent or non-coherent fault trees was proposed in 1996. It is well known that this algorithm detects modules quickly since it is a linear time algorithm. The new algorithm minimizes computational memory and quickly detects modules. Furthermore, it can be easily implemented in industry fault tree solvers that are based on traditional Boolean algebra, binary decision diagrams (BDDs), or zero-suppressed BDDs. The new algorithm employs only two scalar variables, which hold volatile information. After finishing the traversal and module detection of each node, the volatile information is destroyed. Thus, the new algorithm does not employ any other additional computational memory and operations. It is recommended that this method be implemented in fault tree solvers for efficient probabilistic safety assessment (PSA) of nuclear power plants.
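
The visit-date idea behind DR-style module detection can be sketched as follows: a single DFS stamps every visit of every node, and a gate is a module exactly when all visits to all of its descendants fall strictly between the gate's first and last visit (a descendant visited outside that window is shared with another part of the tree). For clarity this sketch recomputes descendant sets, so unlike the paper's algorithm it is not linear-time; the example tree is hypothetical:

```python
import itertools

# Hypothetical fault tree: gate -> children (gates or basic events).
# G1 is shared between TOP and G2, so G2 is not a module, but G1 is.
tree = {
    "TOP": ["G1", "G2"],
    "G1": ["E1", "E2"],
    "G2": ["G1", "E3"],
}

def find_modules(tree, top):
    clock = itertools.count(1)
    visits = {}                      # node -> all DFS visit dates
    first, last = {}, {}

    def dfs(node):
        date = next(clock)
        visits.setdefault(node, []).append(date)
        if node in tree and node not in first:   # expand each gate only once
            first[node] = date
            for child in tree[node]:
                dfs(child)
            last[node] = next(clock)

    dfs(top)

    def descendants(gate):
        out, stack = set(), list(tree[gate])
        while stack:
            n = stack.pop()
            if n not in out:
                out.add(n)
                stack.extend(tree.get(n, []))
        return out

    modules = []
    for gate in tree:
        lo, hi = first[gate], last[gate]
        if all(lo < d < hi for n in descendants(gate) for d in visits[n]):
            modules.append(gate)
    return modules

modules = find_modules(tree, "TOP")
```

Running this on the example flags TOP and G1 as modules and rejects G2, whose subtree contains the shared gate G1.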

  18. New algorithm to detect modules in a fault tree for a PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong University, Seoul (Korea, Republic of)

    2015-05-15

    A module or independent subtree is a part of a fault tree whose child gates or basic events are not repeated in the remaining part of the fault tree. Modules are necessarily employed in order to reduce the computational costs of fault tree quantification. This paper presents a new linear time algorithm to detect modules of large fault trees. The size of cut sets can be substantially reduced by replacing independent subtrees in a fault tree with super-components. Chatterjee and Birnbaum developed properties of modules, and demonstrated their use in fault tree analysis. Locks expanded the concept of modules to non-coherent fault trees. Independent subtrees were once identified manually while coding a fault tree for computer analysis; nowadays, they are identified automatically by the fault tree solver. A Dutuit and Rauzy (DR) algorithm to detect modules of coherent or non-coherent fault trees was proposed in 1996. It is well known that this algorithm detects modules quickly since it is a linear time algorithm. The new algorithm minimizes computational memory and quickly detects modules. Furthermore, it can be easily implemented in industry fault tree solvers that are based on traditional Boolean algebra, binary decision diagrams (BDDs), or zero-suppressed BDDs. The new algorithm employs only two scalar variables, which hold volatile information. After finishing the traversal and module detection of each node, the volatile information is destroyed. Thus, the new algorithm does not employ any other additional computational memory and operations. It is recommended that this method be implemented in fault tree solvers for efficient probabilistic safety assessment (PSA) of nuclear power plants.

  19. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    Full Text Available In actual applications, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method first uses a criterion of maximum energy-to-Shannon-entropy ratio to select the appropriate wavelet base for signal analysis. Then multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can achieve accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
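
One flavour of the underlying idea, Shannon entropy of the energy distribution across multi-level wavelet subbands, can be sketched with a Haar decomposition. The paper's two entropies (time and time-energy variants) are defined differently; this is only the common energy-entropy building block, on synthetic signals:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_energy_entropy(x, levels=3):
    """Shannon entropy (bits) of the relative energy across the detail
    subbands of a multi-level Haar decomposition plus the final
    approximation band."""
    energies = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    energies.append(float(np.sum(a ** 2)))
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A clean sine concentrates energy in few subbands (low entropy);
# white noise spreads energy across all subbands (higher entropy).
t = np.arange(1024)
rng = np.random.default_rng(0)
h_sine = wavelet_energy_entropy(np.sin(2 * np.pi * t / 64))
h_noise = wavelet_energy_entropy(rng.standard_normal(1024))
```

With 4 bands the entropy is bounded by log2(4) = 2 bits, and the noisy signal sits well above the sine.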

  20. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    International Nuclear Information System (INIS)

    Wang, Y.

    2013-01-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes: A deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based in pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists in fitting a given set of hypocenters with an increasing amount of finite planes until the residuals of the fit compare with location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  1. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

    Full Text Available As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign agents) can be allocated to a network in order to improve performance and availability. Previous fault-tolerant schemes (denoted PRT schemes) to mask failures of the mobility agents use passive replication techniques. However, they result in high failure-free latency during the registration process if the number of mobility agents in the same network increases, and force each mobility agent to manage the bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent to recover the bindings of the mobile nodes registering with it when it is repaired, even if all the other mobility agents in the same network fail concurrently.
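
The checkpointing-plus-message-logging idea can be sketched as a toy recovery loop: registration messages received after the last checkpoint are logged, so a restarted agent rebuilds its bindings from local state alone. This assumes the checkpoint and log survive the crash on stable storage, and it models none of the scheme's protocol details or message formats:

```python
import copy

class MobilityAgent:
    """Toy sketch of checkpointing + message logging (CML-style recovery)."""

    def __init__(self):
        self.bindings = {}    # mobile node -> care-of address
        self.checkpoint = {}  # assumed to live on stable storage
        self.log = []         # registrations received since the checkpoint

    def register(self, mobile_node, care_of_addr):
        self.log.append((mobile_node, care_of_addr))  # log before applying
        self.bindings[mobile_node] = care_of_addr

    def take_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.bindings)
        self.log.clear()      # only post-checkpoint messages are needed

    def crash_and_recover(self):
        self.bindings = copy.deepcopy(self.checkpoint)
        for mn, coa in self.log:          # replay logged registrations
            self.bindings[mn] = coa

agent = MobilityAgent()
agent.register("mn1", "net-a")
agent.take_checkpoint()
agent.register("mn2", "net-b")
agent.crash_and_recover()
```

After recovery the agent holds both bindings without contacting any peer agent, which is the property that removes the PRT schemes' replication latency from the registration path.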

  2. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2018-01-01

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a

  3. Groundwater penetrating radar and high resolution seismic for locating shallow faults in unconsolidated sediments

    International Nuclear Information System (INIS)

    Wyatt, D.E.

    1993-01-01

    Faults in shallow, unconsolidated sediments, particularly in coastal plain settings, are very difficult to discern during subsurface exploration, yet have a critical impact on groundwater flow, contaminant transport and geotechnical evaluations. This paper presents a case study using cross-over geophysical technologies in an area where shallow faulting is probable and known contamination exists. A comparison is made between Wenner and dipole-dipole resistivity data, ground penetrating radar, and high resolution seismic data. Data from these methods were verified with a cone penetrometer investigation for subsurface lithology and compared to existing monitoring well data. Interpretations from these techniques are compared with actual and theoretical shallow faulting found in the literature. The results of this study suggest that (1) the CPT study, combined with the monitoring well data, may indicate that discontinuities in correlatable zones reflect the presence of faulting; (2) the addition of the Wenner and dipole-dipole data may further suggest that offset zones exist in the shallow subsurface, but does not allow specific fault planes or fault strands to be mapped; (3) the high resolution seismic data will image faults to within a few feet of the surface but lack the resolution to identify faulting on the scale of our models, although they will suggest locations for upward continuation of faulted zones; (4) offset 100 MHz and 200 MHz CMP GPR will image zones and features that may be fault planes and strands similar to our models; (5) 300 MHz GPR will image higher-resolution features that may suggest the presence of deeper faults and strands; and (6) the combination of all of the tools in this study, particularly the GPR and seismic, may allow the mapping of small-scale, shallow faulting in unconsolidated sediments.

  4. SENSORS FAULT DIAGNOSIS ALGORITHM DESIGN OF A HYDRAULIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Matej ORAVEC

    2017-06-01

    Full Text Available This article presents the design of a sensor fault diagnosis system for a hydraulic system, based on a group of three fault estimation filters. These filters are used to estimate the system states and the sensor fault magnitudes. The article also briefly states the hydraulic system state control design with an integrator, which is an important assumption for the fault diagnosis system design. The sensor fault diagnosis system is implemented in the Matlab/Simulink environment and verified using a simulation model of the controlled hydraulic system. Verification of the designed fault diagnosis system is realized by a series of experiments simulating sensor faults. The results of the experiments are briefly presented in the last part of this article.

  5. Empirical Relationships Among Magnitude and Surface Rupture Characteristics of Strike-Slip Faults: Effect of Fault (System) Geometry and Observation Location, Derived From Numerical Modeling

    Science.gov (United States)

    Zielke, O.; Arrowsmith, J.

    2007-12-01

    In order to determine the magnitude of pre-historic earthquakes, surface rupture length and average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will cause surface features of correlated size. The well-known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations and are based on only a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This, however, cannot be done based on natural seismicity because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992), to create synthetic seismic records for a given fault or fault system under the influence of either slip or stress boundary conditions. Our model features (A) the introduction of an upper and lower aseismic zone, (B) a simple Coulomb friction law, (C) bulk parameters simulating fault heterogeneity, and (D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well-behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics which are well within the error of the results by Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magnitude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics, by incorporating the
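
The first-order regressions the abstract refers to take the form M = a + b·log10(SRL). A minimal sketch using the strike-slip surface-rupture-length coefficients as commonly quoted from Wells and Coppersmith (1994) follows; treat the exact numbers as illustrative and note the regression's sizeable scatter:

```python
import math

def magnitude_from_srl(srl_km, a=5.16, b=1.12):
    """Wells & Coppersmith (1994)-style regression for strike-slip faults:
    M = a + b * log10(surface rupture length in km).  The published standard
    deviation of roughly 0.28 magnitude units is the scatter the abstract
    seeks to reduce by accounting for observation location."""
    return a + b * math.log10(srl_km)

m = magnitude_from_srl(70.0)   # e.g. a ~70 km surface rupture
```

A 70 km strike-slip rupture maps to roughly M 7.2 under these coefficients.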

  6. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431
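
The parity-equation idea for the low-level (sensor) layer can be sketched with a stand-in analytic redundancy relation: with no neutral connection, the three phase currents of the drive motor must sum to zero, so a persistent nonzero sum flags a current-sensor fault. This is a simplification of the paper's parity equations, which are built from the full IPMSM state model; signals and the threshold are synthetic:

```python
import numpy as np

def current_sensor_residual(ia, ib, ic):
    """Parity-style residual: ia + ib + ic should be ~0 for healthy sensors."""
    return ia + ib + ic

t = np.linspace(0, 0.1, 1000)
ia = 10 * np.sin(2 * np.pi * 50 * t)
ib = 10 * np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = 10 * np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)

r_healthy = current_sensor_residual(ia, ib, ic)
r_faulty = current_sensor_residual(ia * 1.2, ib, ic)  # 20% gain fault, phase a

threshold = 0.5  # hypothetical, set above the noise floor in practice
healthy_flag = np.max(np.abs(r_healthy)) > threshold
faulty_flag = np.max(np.abs(r_faulty)) > threshold
```

The residual stays at numerical zero for healthy sensors and peaks at 2 A under the injected gain fault, tripping the flag.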

  7. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.

  8. Line-to-Line Fault Analysis and Location in a VSC-Based Low-Voltage DC Distribution Network

    Directory of Open Access Journals (Sweden)

    Shi-Min Xue

    2018-03-01

    Full Text Available A DC cable short-circuit fault is the most severe fault type that occurs in DC distribution networks, having a negative impact on transmission equipment and the stability of system operation. When a short-circuit fault occurs in a DC distribution network based on a voltage source converter (VSC, an in-depth analysis and characterization of the fault is of great significance to establish relay protection, devise fault current limiters and realize fault location. However, research on short-circuit faults in VSC-based low-voltage DC (LVDC systems, which are greatly different from high-voltage DC (HVDC systems, is currently stagnant. The existing research in this area is not conclusive, with further study required to explain findings in HVDC systems that do not fit with simulated results or lack thorough theoretical analyses. In this paper, faults are divided into transient- and steady-state faults, and detailed formulas are provided. A more thorough and practical theoretical analysis with fewer errors can be used to develop protection schemes and short-circuit fault locations based on transient- and steady-state analytic formulas. Compared to the classical methods, the fault analyses in this paper provide more accurate computed results of fault current. Thus, the fault location method can rapidly evaluate the distance between the fault and converter. The analyses of error increase and an improved handshaking method coordinating with the proposed location method are presented.
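
The link between the transient analysis and fault location can be illustrated with the simplest first-instant approximation: for a pole-to-pole fault, the DC-link voltage initially appears across the loop inductance, so di/dt ≈ Vdc / (2·L′·d) and the measured rate of rise yields the distance. This is only the leading-order idea, not the paper's full transient/steady-state formulas, and all parameter values are hypothetical:

```python
def fault_distance_km(v_dc, di_dt, l_per_km):
    """Estimate pole-to-pole fault distance on a DC cable from the initial
    current rise, assuming the loop inductance (go and return conductors)
    dominates at t = 0: di/dt = v_dc / (2 * l_per_km * d)."""
    return v_dc / (2.0 * l_per_km * di_dt)

v_dc = 1500.0          # V, hypothetical LVDC link voltage
l_per_km = 0.2e-3      # H/km per conductor, hypothetical cable parameter
d_true = 2.5           # km, the "unknown" fault distance

di_dt = v_dc / (2 * l_per_km * d_true)   # what a relay would measure
d_est = fault_distance_km(v_dc, di_dt, l_per_km)
```

In practice resistance and the capacitor-discharge dynamics bend the current waveform, which is why the paper's fuller analytic expressions reduce the location error relative to this first-order estimate.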

  9. Fault Detection and Location of IGBT Short-Circuit Failure in Modular Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Bin Jiang

    2018-06-01

    Full Text Available Detection and location of a single fault is of great significance for the Modular Multilevel Converter (MMC), as large numbers of sub-modules (SMs) in an MMC are connected in series. In this paper, a novel fault detection and location method is proposed for the MMC in terms of Insulated Gate Bipolar Transistor (IGBT) short-circuit failure in an SM. The characteristics of IGBT short-circuit failures are analyzed, based on which a Differential Comparison Low-Voltage Detection Method (DCLVDM) is proposed to detect the short-circuit fault. Lastly, the faulty IGBT is located based on the capacitor voltage of the faulty SM by Continuous Wavelet Transform (CWT). Simulations have been done in the simulation software PSCAD/EMTDC and the results confirm the validity and reliability of the proposed method.
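
A differential low-voltage comparison, loosely in the spirit of the detection step, can be sketched as follows: the arm voltage expected from the gating pattern and capacitor voltages is compared with the measurement, and a deficit of roughly one capacitor voltage suggests an SM bypassed by a shorted IGBT. This is a simplification, not the paper's DCLVDM as specified, and the paper localizes the faulty device with a wavelet transform of the SM capacitor voltage rather than this threshold test; all numbers are hypothetical:

```python
def suspect_shorted_sm(gate_states, cap_voltages, measured_arm_voltage,
                       tol=0.05):
    """Return True if the arm-voltage deficit is about one SM capacitor
    voltage, hinting that an inserted SM is bypassed by a shorted IGBT."""
    expected = sum(g * v for g, v in zip(gate_states, cap_voltages))
    deficit = expected - measured_arm_voltage
    v_nom = sum(cap_voltages) / len(cap_voltages)
    return deficit > (1 - tol) * v_nom

# Four SMs with nominal 2 kV capacitors; SMs 0 and 1 are inserted.
gates = [1, 1, 0, 0]
caps = [2000.0, 2000.0, 2000.0, 2000.0]

healthy = suspect_shorted_sm(gates, caps, measured_arm_voltage=4000.0)
faulty = suspect_shorted_sm(gates, caps, measured_arm_voltage=2000.0)
```

The healthy arm matches its expected 4 kV; the arm missing one inserted SM's contribution trips the check.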

  10. The Study of Fault Location for Front-End Electronics System

    International Nuclear Information System (INIS)

    Zhang Fan; Wang Dong; Huang Guangming; Zhou Daicui

    2009-01-01

    Some devices on the 250 recently developed ALICE/PHOS front-end electronics (FEE) cards were partly or completely damaged during lead-free soldering. To alleviate the influence on the performance of the FEE system and to accurately locate FPGA-related faults, a fault location method for the FEE system was needed, based on a detailed study of the FPGA configuration scheme. The work emphasizes problems such as JTAG configuration of multiple devices, PS configuration based on EPC series configuration devices, and automatic re-configuration of the FPGA. The results of testing and repairing the FEE cards in quantity show that the location method can accurately and quickly target FPGA-related fault points on the FEE cards. (authors)

  11. Mixed-fault diagnosis in induction motors considering varying load and broken bars location

    International Nuclear Information System (INIS)

    Faiz, Jawad; Ebrahimi, Bashir Mahdi; Toliyat, H.A.; Abu-Elhaija, W.S.

    2010-01-01

    Simultaneous static eccentricity and broken rotor bar faults, called a mixed fault, in a three-phase squirrel-cage induction motor are analyzed by a time-stepping finite element method using the fast Fourier transform. Generally, there is an inherent static eccentricity (below 10%) in a broken-rotor-bar induction motor, and therefore the mixed-fault case can be considered a realistic one. The stator current frequency spectrum over low, medium and high frequencies is analyzed; diagnosis of static eccentricity and its distinction from rotor bar breakage in the mixed-fault case are described. The contributions of the static eccentricity and broken rotor bar faults are precisely determined. The influence of the broken bars' location upon the amplitudes of the harmonics due to the mixed fault is also investigated. It is shown that the amplitudes of harmonics due to broken bars placed on one pole are larger than when the broken bars are distributed over different poles. In addition, the influence of varying load on the amplitudes of the harmonics due to the mixed fault is studied, indicating that higher load increases the amplitudes of the harmonic components due to the broken bars while the static eccentricity degree decreases. Simulation results are confirmed by the experimental results.
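
The broken-rotor-bar part of the spectral analysis rests on the classical sideband signature at (1 ± 2s)·f around the supply frequency in the stator current. A sketch on a synthetic current (amplitudes and slip are hypothetical) shows how the sideband-to-fundamental ratio is read off an FFT:

```python
import numpy as np

f_supply, slip = 50.0, 0.03
# Classical broken-bar signature: sidebands at (1 +/- 2s) * f_supply.
sidebands = [(1 - 2 * slip) * f_supply, (1 + 2 * slip) * f_supply]

fs, n = 1000.0, 10000
t = np.arange(n) / fs
# Hypothetical stator current: fundamental plus weak fault sidebands.
i_s = (np.sin(2 * np.pi * f_supply * t)
       + 0.02 * np.sin(2 * np.pi * sidebands[0] * t)
       + 0.02 * np.sin(2 * np.pi * sidebands[1] * t))

spec = np.abs(np.fft.rfft(i_s)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

ratio_db = 20 * np.log10(amp_at(sidebands[0]) / amp_at(f_supply))
```

With a 10 s window the 47 Hz and 53 Hz sidebands fall on exact bins, and the injected 2% sidebands appear about 34 dB below the fundamental; in diagnosis, this ratio is tracked against a severity threshold.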

  12. SLG(Single-Line-to-Ground Fault Location in NUGS(Neutral Un-effectively Grounded System

    Directory of Open Access Journals (Sweden)

    Zhang Wenhai

    2018-01-01

    Full Text Available This paper reviews SLG (single-line-to-ground) fault location methods in NUGS (neutral un-effectively grounded systems), including ungrounded systems, resonant grounded systems and high-resistance grounded systems, which are widely used in Northern Europe and China. This type of fault is hard to detect and locate because the fault current is the sum of the capacitive currents of the system, which is always small (about tens of amperes). The characteristics of SLG faults in NUGS and the fault location methods are introduced in the paper.
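
The "tens of amperes" scale follows from the standard expression for the steady-state SLG fault current in an isolated-neutral network, If = 3·ω·C0·Uph, where C0 is the per-phase capacitance to ground: the fault collects only the capacitive charging currents of the healthy phases. A quick check with illustrative values:

```python
import math

def slg_capacitive_fault_current(u_line_kv, c0_uf, f_hz=50.0):
    """Steady-state SLG fault current (A) in an ungrounded network:
    If = 3 * w * C0 * Uph, with C0 the per-phase capacitance to ground."""
    u_ph = u_line_kv * 1e3 / math.sqrt(3)   # phase voltage in volts
    w = 2 * math.pi * f_hz
    return 3 * w * c0_uf * 1e-6 * u_ph

i_f = slg_capacitive_fault_current(10.0, 2.0)  # 10 kV feeder, 2 uF to ground
```

For a 10 kV feeder with 2 μF per phase to ground this gives roughly 11 A, small enough to be swamped by load current, which is exactly why the reviewed location methods are needed.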

  13. Methodology for selection of attributes and operating conditions for SVM-Based fault locator's

    Directory of Open Access Journals (Sweden)

    Debbie Johan Arredondo Arteaga

    2017-01-01

    Full Text Available Context: Energy distribution companies must employ strategies to provide timely, high-quality service, and fault-locating techniques represent an agile alternative for restoring electric service in power distribution, given the size of distribution systems (generally large) and the usual interruptions in service. However, these techniques are not robust enough and present some limitations in both computational cost and the mathematical description of the models they use. Method: This paper performs an analysis based on a Support Vector Machine to evaluate the proper conditions to adjust and validate a fault locator for distribution systems, so that it is possible to determine the minimum number of operating conditions that achieve good performance with low computational effort. Results: We tested the proposed methodology on a prototypical distribution circuit located in a rural area of Colombia. This circuit has a voltage of 34.5 kV and is subdivided into 20 zones. Additionally, the characteristics of the circuit allowed us to obtain a database of 630,000 records of single-phase faults under different operating conditions. As a result, we could determine that the locator showed a performance above 98% with 200 suitably selected operating conditions. Conclusions: It is possible to improve the performance of fault locators based on Support Vector Machines. Specifically, these improvements are achieved by properly selecting optimal operating conditions and attributes, since they directly affect performance in terms of efficiency and computational cost.

  14. Hydraulic Pump Fault Diagnosis Control Research Based on PARD-BP Algorithm

    Directory of Open Access Journals (Sweden)

    LV Dongmei

    2014-12-01

    Full Text Available Combining the working principle and failure mechanism of the RZU2000HM hydraulic press, and with its known fault cases collected, the working principle of the oil pressure and the fault phenomena of the hydraulic power unit (a swash-plate axial piston pump) were studied with particular emphasis, since its faults directly affect the dynamic performance of the oil pressure and flow. In order to make the hydraulic power unit work reliably, the PARD-BP (Pruning Algorithm based on Random Degree) neural network fault algorithm was introduced, with the swash-plate axial piston pump's vibration fault sample data regarded as input and the fault mode matrix regarded as target output, so that the PARD-BP algorithm could be trained. In the end, the vibration results were verified by a vibration modal test, which showed that the biggest upward peaks of the vacuum pump in the X-, Y- and Z-directions fell by 30.49 %, 21.13 % and 18.73 % respectively, verifying that the PARD-BP algorithm can be used for online fault detection and diagnosis of the hydraulic pump.
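
The ingredients, a backpropagation network mapping vibration features to a fault-mode matrix, followed by pruning, can be sketched as below. The abstract does not specify the random-degree pruning criterion, so a plain magnitude criterion is used as a stand-in, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: 8-dim "vibration feature" vectors -> 3 fault classes
# (one-hot rows playing the role of the paper's fault mode matrix).
X = rng.standard_normal((60, 8))
y = np.eye(3)[rng.integers(0, 3, 60)]

W1 = rng.standard_normal((8, 12)) * 0.3   # input -> hidden weights
W2 = rng.standard_normal((12, 3)) * 0.3   # hidden -> output weights

def forward(X, W1, W2):
    h = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(h @ W2)))     # sigmoid output layer
    return h, out

def loss(out, y):
    return float(np.mean((out - y) ** 2))

lr = 0.3
_, out0 = forward(X, W1, W2)
initial = loss(out0, y)
for _ in range(300):                      # plain backpropagation (MSE)
    h, out = forward(X, W1, W2)
    d_out = (out - y) * out * (1 - out)
    W2 -= lr * h.T @ d_out / len(X)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W1 -= lr * X.T @ d_h / len(X)
_, out1 = forward(X, W1, W2)
trained = loss(out1, y)

# Pruning step: zero out low-magnitude input-layer weights (a magnitude
# criterion standing in for PARD-BP's random-degree pruning).
mask = np.abs(W1) < 0.05
pruned = int(mask.sum())
W1[mask] = 0.0
```

Pruning trades a little accuracy for a sparser network, which is what makes the trained diagnoser cheap enough for online use.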

  15. Locating Very-Low-Frequency Earthquakes in the San Andreas Fault.

    Science.gov (United States)

    Peña-Castro, A. F.; Harrington, R. M.; Cochran, E. S.

    2016-12-01

    The portion of a tectonic fault where rheological properties transition from brittle to ductile hosts a variety of seismic signals suggesting a range of slip velocities. In subduction zones, the two dominantly observed seismic signals are very-low-frequency earthquakes (VLFEs) and low-frequency earthquakes (LFEs) or tectonic tremor. Tremor and LFEs are also commonly observed on transform faults; however, VLFEs have been reported dominantly in subduction zone environments. Here we show some of the first known observations of VLFEs occurring on a plate boundary transform fault, the San Andreas Fault (SAF), on the Cholame-Parkfield segment in California. We detect VLFEs using both permanent and temporary stations in 2010-2011 within approximately 70 km of Cholame, California. We search continuous waveforms filtered from 0.02-0.05 Hz, and remove time windows containing teleseismic events and local earthquakes, as identified in the global Centroid Moment Tensor (CMT) and Northern California Seismic Network (NCSN) catalogs. We estimate the VLFE locations by converting the signal into envelopes and cross-correlating them for phase-picking, similar to procedures used for locating tectonic tremor. We first perform epicentral location using a grid-search method and estimate a hypocenter location using Hypoinverse and a shear-wave velocity model when the epicenter is located close to the SAF trace. We account for the velocity contrast across the fault using separate 1D velocity models for stations on each side. Estimated hypocentral VLFE depths are similar to tremor catalog depths (15-30 km). Only a few VLFEs produced robust hypocentral locations, presumably due to the difficulty of picking accurate phase arrivals in such a low-frequency signal. However, for events for which no location could be obtained, the moveout of phase arrivals across the stations was similar in character, suggesting that the other observed VLFEs occurred in close proximity.
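
The envelope-and-cross-correlation step used for phase-picking can be sketched as follows: each station's waveform is converted to its envelope via the analytic signal, and the inter-station delay is the lag maximizing the envelope cross-correlation. The wave packet, noise level and 15-sample delay below are synthetic:

```python
import numpy as np

def envelope(x):
    """Signal envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.abs(np.fft.ifft(X * h))

def envelope_lag(reference, delayed):
    """Lag (in samples) by which `delayed` trails `reference`, from the
    peak of their demeaned cross-correlation."""
    n = len(reference)
    c = np.correlate(delayed - delayed.mean(),
                     reference - reference.mean(), mode="full")
    return int(np.argmax(c)) - (n - 1)

# Synthetic low-frequency wave packet; station B records it 15 samples
# later than station A, both with additive noise.
rng = np.random.default_rng(1)
t = np.arange(600)
sig = np.exp(-((t - 200) ** 2) / (2 * 30 ** 2)) * np.sin(2 * np.pi * t / 40)
sta_a = sig + 0.05 * rng.standard_normal(600)
sta_b = np.roll(sig, 15) + 0.05 * rng.standard_normal(600)

lag = envelope_lag(envelope(sta_a), envelope(sta_b))
```

The recovered lag lands on (or within a sample or two of) the true 15-sample delay; in the study such delays feed a grid search for the epicenter.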

  16. Decision making algorithms for hydro-power plant location

    CERN Document Server

    Majumder, Mrinmoy

    2013-01-01

    The present study has attempted to apply the advantage of neuro-genetic algorithms for optimal decision making in maximum utilization of natural resources. Hydro-power is one of the inexpensive, but a reliable source of alternative energy which is foreseen as the possible answer to the present crisis in the energy sector. However, the major problem related to hydro-energy is its dependency on location. An ideal location can produce maximum energy with minimum loss. Besides, such power-plant also requires substantial amount of land which is a precious resource nowadays due to the rapid and unco

  17. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.

    2013-07-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high-resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists of fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit compare with the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  18. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    Science.gov (United States)

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  19. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2017-09-01

    Full Text Available The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs’ flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  20. A Novel Dual Separate Paths (DSP) Algorithm Providing Fault-Tolerant Communication for Wireless Sensor Networks.

    Science.gov (United States)

    Tien, Nguyen Xuan; Kim, Semog; Rhee, Jong Myung; Park, Sang Yoon

    2017-07-25

    Fault tolerance has long been a major concern for sensor communications in fault-tolerant cyber-physical systems (CPSs). Network failure problems often occur in wireless sensor networks (WSNs) due to various factors such as the insufficient power of sensor nodes, the dislocation of sensor nodes, the unstable state of wireless links, and unpredictable environmental interference. Fault tolerance is thus one of the key requirements for data communications in WSN applications. This paper proposes a novel path-redundancy-based algorithm, called dual separate paths (DSP), that provides fault-tolerant communication while improving network traffic performance for WSN applications, such as fault-tolerant CPSs. The proposed DSP algorithm establishes two separate paths between a source and a destination in a network based on the network topology information. These paths are node-disjoint and have optimal path distances. Unicast frames are delivered from the source to the destination through the dual paths, providing fault-tolerant communication and reducing redundant unicast traffic in the network. The DSP algorithm can be applied to wired and wireless networks, such as WSNs, to provide seamless fault-tolerant communication for mission-critical and life-critical applications such as fault-tolerant CPSs. The analytical and simulation results show that the DSP-based approach not only provides fault-tolerant communication, but also improves network traffic performance. For the case study in this paper, when the DSP algorithm was applied to high-availability seamless redundancy (HSR) networks, the proposed DSP-based approach reduced the network traffic by 80% to 88% compared with the standard HSR protocol, thus improving network traffic performance.
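
    The core routing step, finding two node-disjoint paths between a source and a destination, can be sketched with a greedy two-pass BFS: find one shortest path, then search again with its intermediate nodes removed. This is an illustration only; the paper's DSP algorithm guarantees optimal path distances for the pair, which a greedy second pass does not.

    ```python
    from collections import deque

    def bfs_path(adj, src, dst, banned=frozenset()):
        """Shortest path from src to dst avoiding 'banned' intermediate nodes."""
        prev = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in adj[u]:
                if v not in prev and (v == dst or v not in banned):
                    prev[v] = u
                    queue.append(v)
        return None

    def dual_separate_paths(adj, src, dst):
        """Two node-disjoint src->dst paths: one shortest path, then a
        second path that avoids the first path's intermediate nodes."""
        p1 = bfs_path(adj, src, dst)
        if p1 is None:
            return None
        p2 = bfs_path(adj, src, dst, banned=set(p1[1:-1]))
        return (p1, p2) if p2 else None
    ```

    On a ring of sensor nodes this yields the clockwise and counter-clockwise routes; frames sent down both survive any single intermediate-node failure.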

  1. A Location-Based Business Information Recommendation Algorithm

    Directory of Open Access Journals (Sweden)

    Shudong Liu

    2015-01-01

    Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both academia and industry. In this paper, we first construct a region-based location graph (RLG), in which each region node connects with user nodes and business information nodes, and then we propose a location-based recommendation algorithm over the RLG, which can combine users' short-range mobility formed by daily activity with their long-distance mobility formed by social network ties, and can thus recommend both local and long-distance business information to users. Moreover, it combines user-based collaborative filtering with item-based collaborative filtering, and it alleviates the cold-start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in terms of recommendation accuracy.

  2. Research on fault diagnosis of nuclear power plants based on genetic algorithms and fuzzy logic

    International Nuclear Information System (INIS)

    Zhou Yangping; Zhao Bingquan

    2001-01-01

    Based on genetic algorithms and fuzzy logic, and using expert knowledge, a mini-knowledge-tree model and standard signals from a simulator, a new fuzzy-genetic method is developed for fault diagnosis in nuclear power plants. A new replacement method for the genetic algorithm is adopted, and fuzzy logic is used to calculate the fitness of the strings in the genetic algorithm. Experiments on the simulator show the method can deal with uncertainty and fuzzy factors.

  3. Fault diagnosis for wind turbine planetary ring gear via a meshing resonance based filtering algorithm.

    Science.gov (United States)

    Wang, Tianyang; Chu, Fulei; Han, Qinkai

    2017-03-01

    Identifying the differences between the spectra or envelope spectra of a faulty signal and a healthy baseline signal is an efficient planetary gearbox local fault detection strategy. However, causes other than local faults can also generate the characteristic frequency of a ring gear fault, which may further affect the detection of a local fault. To address this issue, a new filtering algorithm based on the meshing resonance phenomenon is proposed. In detail, the raw signal is first decomposed into different frequency bands and levels. Then, a new meshing index and an MRgram are constructed to determine which bands belong to the meshing resonance frequency band. Furthermore, an optimal filter band is selected from this MRgram. Finally, the ring gear fault can be detected according to the envelope spectrum of the band-pass filtering result. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
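
    The final detection step, reading a fault frequency off the envelope spectrum of the band-pass-filtered signal, can be illustrated with a crude stdlib-only sketch: full-wave rectification as the envelope proxy and a direct DFT for the spectrum. The MRgram band selection that precedes this step in the paper is not reproduced here.

    ```python
    import cmath
    import math

    def envelope_spectrum(x, fs):
        """Crude envelope spectrum: full-wave rectify the signal (a simple
        envelope proxy), remove the mean, then take DFT magnitudes."""
        env = [abs(v) for v in x]
        mean = sum(env) / len(env)
        env = [v - mean for v in env]
        n = len(env)
        mags = [abs(sum(env[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
        freqs = [k * fs / n for k in range(n // 2)]
        return freqs, mags
    ```

    For an amplitude-modulated tone, the modulation (fault) frequency appears as the dominant low-frequency peak of this spectrum, which is exactly what the gear-fault detector looks for.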

  4. The 2009 MW 6.1 L'Aquila fault system imaged by 64k earthquake locations

    International Nuclear Information System (INIS)

    Valoroso, Luisa

    2016-01-01

    On April 6 2009, a MW 6.1 normal-faulting earthquake struck the axial area of the Abruzzo region in central Italy. We investigate the complex architecture and mechanics of the activated fault system by using 64k high-resolution foreshock and aftershock locations. The fault system is composed of two major SW-dipping segments forming an en-echelon NW-trending system about 50 km long: the high-angle L’Aquila fault and the listric Campotosto fault, located within the first 10 km of depth. From the beginning of 2009, foreshocks activated the deepest portion of the main shock fault. A week before the MW 6.1 event, the largest (MW 4.0) foreshock triggered seismicity migration along a minor off-fault segment. Seismicity jumped back to the main plane a few hours before the main shock. High-precision locations allowed us to peer into the fault zone, showing complex geological structures from the metre to the kilometre scale, analogous to those observed by field studies and seismic profiles. Also, we were able to investigate important aspects of earthquake nucleation and propagation through the upper crust in carbonate-bearing rocks, such as the role of fluids in normal-faulting earthquakes, how crustal faults terminate at depth, and the key role of fault zone structure in the earthquake rupture evolution processes.

  5. Method of fault diagnosis in nuclear power plant base on genetic algorithm and knowledge base

    International Nuclear Information System (INIS)

    Zhou Yangping; Zhao Bingquan

    2000-01-01

    Using a knowledge base, combining genetic algorithms with classical probability, and taking account of the characteristics of fault diagnosis in NPPs, the authors put forward a fault diagnosis method. In the process of diagnosis, the method maps the state of the NPP to the population in the GA and evolves the population to obtain the individual that best fits the condition. Experiments on the 950 MW full-size simulator in the Beijing NPP simulation training center show that it is comparatively robust to imperfect expert knowledge, spurious signals and other disturbances.

  6. Stochastic Resonance algorithms to enhance damage detection in bearing faults

    OpenAIRE

    Castiglione Roberto; Garibaldi Luigi; Marchesiello Stefano

    2015-01-01

    Stochastic Resonance is a phenomenon, studied and mainly exploited in telecommunications, which permits the amplification and detection of weak signals with the assistance of noise. The first papers on this technique date from the early 1980s and were developed to explain the periodically recurrent ice ages. Other applications mainly concern neuroscience, biology, medicine and, obviously, signal analysis and processing. Recently, some researchers have applied the technique for detecting faults in mecha...

  7. A hybrid nested partitions algorithm for banking facility location problems

    KAUST Repository

    Xia, Li

    2010-07-01

    The facility location problem has been studied in many industries including banking network, chain stores, and wireless network. Maximal covering location problem (MCLP) is a general model for this type of problems. Motivated by a real-world banking facility optimization project, we propose an enhanced MCLP model which captures the important features of this practical problem, namely, varied costs and revenues, multitype facilities, and flexible coverage functions. To solve this practical problem, we apply an existing hybrid nested partitions algorithm to the large-scale situation. We further use heuristic-based extensions to generate feasible solutions more efficiently. In addition, the upper bound of this problem is introduced to study the quality of solutions. Numerical results demonstrate the effectiveness and efficiency of our approach. © 2010 IEEE.

  8. Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm

    Directory of Open Access Journals (Sweden)

    Guangbin Wang

    2015-01-01

    Full Text Available In view of the problems that real fault samples are unevenly distributed and that the dimension-reduction effect of the locally linear embedding (LLE) algorithm is easily affected by the choice of neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using homogenization distance instead of the traditional Euclidean distance. This helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the fault-recognition improvement of HLLE is limited and unstable, the paper further proposes a new local linear embedding algorithm of supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of homogenization distance, supervised learning adds the category information of sample points so that sample points of the same category are gathered and sample points of heterogeneous categories are scattered. It effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by simulation experiments on rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.

  9. Location and Position Determination Algorithm For Humanoid Soccer Robot

    Directory of Open Access Journals (Sweden)

    Oei Kurniawan Utomo

    2016-03-01

    Full Text Available An algorithm for location and position determination was designed for humanoid soccer robots. The robots have to be able to control the ball effectively on the field of the Indonesian Robot Soccer Competition, which measures 900 cm x 600 cm. The algorithm uses parameters such as the goalpost’s thickness, the compass value, and the robot’s head servo value. The goalpost’s thickness is detected using the Centre of Gravity method. The detected width of the goalpost is analyzed using the principles of camera geometry to determine the distance between the robot and the goalpost. The tangent value of the head servo’s tilt angle is used to determine the distance between the robot and the ball. The robot-goalpost and robot-ball distances are then processed, together with the difference between the head servo’s pan angle and the compass value, using trigonometric formulas to determine the coordinates of the robot and the ball in Cartesian coordinates.
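
    The trigonometric core of the abstract, tilt angle to ground distance and compass-plus-pan to bearing, can be sketched as follows. The frame conventions (x east, y north, tilt measured down from the horizon) and the flat-ground camera geometry are assumptions made for illustration, not taken from the paper.

    ```python
    import math

    def ball_position(robot_xy, camera_height, compass_deg, pan_deg, tilt_deg):
        """Ball coordinates from head-servo angles: tan(tilt) gives the
        ground distance, compass + pan gives the bearing to the ball."""
        dist = camera_height / math.tan(math.radians(tilt_deg))
        bearing = math.radians(compass_deg + pan_deg)
        x = robot_xy[0] + dist * math.sin(bearing)
        y = robot_xy[1] + dist * math.cos(bearing)
        return x, y
    ```

    The same construction with the goalpost distance (from its apparent width) gives the robot's own field coordinates.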

  10. An inverse source location algorithm for radiation portal monitor applications

    International Nuclear Information System (INIS)

    Miller, Karen A.; Charlton, William S.

    2010-01-01

    Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
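
    A toy version of the optimization loop described above: a 2-D inverse-square detector response stands in for the 3-D deterministic transport model (purely an assumption for illustration), and the source estimate is updated by steepest descent on the least-squares objective, with the gradient written analytically rather than obtained from adjoint calculations.

    ```python
    def predicted(src, det, strength):
        """Toy detector response: counts fall off as inverse squared distance."""
        dx, dy = src[0] - det[0], src[1] - det[1]
        return strength / (dx * dx + dy * dy)

    def locate_source(dets, meas, strength, x0, lr=5e-4, steps=40000):
        """Steepest descent on F = sum_i (predicted_i - measured_i)^2."""
        x, y = x0
        for _ in range(steps):
            gx = gy = 0.0
            for (ax, ay), m in zip(dets, meas):
                r2 = (x - ax) ** 2 + (y - ay) ** 2
                resid = strength / r2 - m
                # d(pred)/dx = -2 * strength * (x - ax) / r2^2
                gx += 2 * resid * (-2 * strength * (x - ax) / r2 ** 2)
                gy += 2 * resid * (-2 * strength * (y - ay) / r2 ** 2)
            x -= lr * gx
            y -= lr * gy
        return x, y
    ```

    Convergence (the objective dropping below the criterion) then identifies the source position, mirroring the stopping rule in the abstract.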

  11. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Location of hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
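
    The search described, running the same test suite on the parent's tree and on each child subtree and blaming a parent-to-child link when only the parent's tree fails, can be sketched as follows. Here `test(node)` is a stand-in for running the suite on the tree rooted at that node.

    ```python
    def find_suspect_parents(tree, test):
        """Return parent nodes whose own test tree fails while every
        child test tree passes: the defective link is then one of the
        links from that parent to its children."""
        suspects = []
        for parent, children in tree.items():
            if not children:
                continue
            if not test(parent) and all(test(child) for child in children):
                suspects.append(parent)
        return suspects
    ```

    Because a broken link makes every enclosing test tree fail while leaving the child subtrees intact, the failing/passing boundary isolates the fault to a single parent's links.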

  12. Power flow analysis and optimal locations of resistive type superconducting fault current limiters.

    Science.gov (United States)

    Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A

    2016-01-01

    Based on conventional approaches for the integration of resistive-type superconducting fault current limiters (SFCLs) in electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the temperature dynamics and the power-law dependence between the electric field and the current density that is characteristic of high-temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built on the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the figures for the fault current reduction predicted by both fault-current-limiting models have been compared in terms of multiple current measuring points and allocation strategies. Consequently, we have shown that incorporating the E-J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimating reliability and determining the optimal locations of resistive-type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment in and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.

  13. A Test Generation Framework for Distributed Fault-Tolerant Algorithms

    Science.gov (United States)

    Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.

    2009-01-01

    Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.

  14. Radar Determination of Fault Slip and Location in Partially Decorrelated Images

    Science.gov (United States)

    Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Stough, Timothy; Pierce, Marlon; Wang, Jun

    2017-06-01

    Faced with the challenge of thousands of frames of radar interferometric images, automated feature extraction promises to spur data understanding and highlight geophysically active land regions for further study. We have developed techniques for automatically determining surface fault slip and location using deformation images from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), which is similar to satellite-based SAR but has more mission flexibility and higher resolution (pixels are approximately 7 m). This radar interferometry provides a highly sensitive method, clearly indicating faults slipping at levels of 10 mm or less. But interferometric images are subject to decorrelation between revisit times, creating spots of bad data in the image. Our method begins with freely available data products from the UAVSAR mission, chiefly unwrapped interferograms, coherence images, and flight metadata. The computer vision techniques we use assume no data gaps or holes; so a preliminary step detects and removes spots of bad data and fills these holes by interpolation and blurring. Detected and partially validated surface fractures from earthquake main shocks, aftershocks, and aseismic-induced slip are shown for faults in California, including El Mayor-Cucapah (M7.2, 2010), the Ocotillo aftershock (M5.7, 2010), and South Napa (M6.0, 2014). Aseismic slip is detected on the San Andreas Fault from the El Mayor-Cucapah earthquake, in regions of highly patterned partial decorrelation. Validation is performed by comparing slip estimates from two interferograms with published ground truth measurements.

  15. Location of Faults in Power Transmission Lines Using the ARIMA Method

    Directory of Open Access Journals (Sweden)

    Danilo Pinto Moreira de Souza

    2017-10-01

    Full Text Available One of the major problems in transmission lines is the occurrence of failures that affect the quality of the electric power supplied, as the exact location of the fault must be known for correction. In order to streamline the work of maintenance teams and standardize services, this paper proposes a method of locating faults in power transmission lines by analyzing the voltage oscillographic signals extracted at the line monitoring terminals. The developed method relates time-series models obtained specifically for each failure pattern. The parameters of the autoregressive integrated moving average (ARIMA) model are estimated in order to adjust the voltage curves and calculate the distance from the initial fault location to the terminals. Simulations of the failures are performed with the ATPDraw® (5.5) software and the analyses were completed using the RStudio® (1.0.143) software. The results obtained for the failures that did not involve earth return were satisfactory when compared with techniques widely used in the literature, particularly as the fault distance became larger in relation to the beginning of the transmission line.
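
    As a minimal illustration of the model-fitting step (not the paper's full ARIMA estimation), the autoregressive part of such a time-series model can be fitted by ordinary least squares; here an AR(2) fit via the normal equations, solved by Cramer's rule:

    ```python
    def fit_ar2(x):
        """Least-squares fit of an AR(2) model x[t] ~ a1*x[t-1] + a2*x[t-2]
        via the 2x2 normal equations."""
        s11 = s12 = s22 = b1 = b2 = 0.0
        for t in range(2, len(x)):
            s11 += x[t - 1] * x[t - 1]
            s12 += x[t - 1] * x[t - 2]
            s22 += x[t - 2] * x[t - 2]
            b1 += x[t] * x[t - 1]
            b2 += x[t] * x[t - 2]
        det = s11 * s22 - s12 * s12
        return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)
    ```

    In the paper's scheme, the coefficients fitted to each fault pattern's voltage curve are what relate the observed oscillography back to a fault distance.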

  16. Identification tibia and fibula bone fracture location using scanline algorithm

    Science.gov (United States)

    Muchtar, M. A.; Simanjuntak, S. E.; Rahmat, R. F.; Mawengkang, H.; Zarlis, M.; Sitompul, O. S.; Winanto, I. D.; Andayani, U.; Syahputra, M. F.; Siregar, I.; Nasution, T. H.

    2018-03-01

    Fracture is a condition in which the continuity of the bone is damaged, usually caused by stress, trauma or weak bones. The tibia and fibula are two separate long bones in the lower leg, closely linked at the knee and ankle. A tibia/fibula fracture often happens when more force is applied to the bone than it can withstand. One of the ways to identify the location of a tibia/fibula fracture is to read the X-ray image manually. Visual examination requires more time and allows errors in identification due to noise in the image. In addition, reading an X-ray requires background highlighting to make the objects in the image appear more clearly. Therefore, a method is required to help the radiologist identify the location of a tibia/fibula fracture. We propose image-processing techniques for processing the cruris image and a scan-line algorithm for identifying the fracture location. The results show that our proposed method is able to identify the fracture location with up to 87.5% accuracy.
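
    A stdlib-only sketch of the scan-line idea on a binary bone mask (the paper works on preprocessed cruris X-ray images; this toy version just counts runs of bone pixels per row and flags rows whose cross-section is not a single continuous run):

    ```python
    def scanline_fracture_rows(mask):
        """Scan each row of a binary bone mask and count runs of bone
        pixels; rows where the bone splits into extra runs (or vanishes)
        are flagged as candidate fracture locations."""
        flagged = []
        for r, row in enumerate(mask):
            runs, prev = 0, 0
            for px in row:
                if px and not prev:
                    runs += 1
                prev = px
            if runs != 1:  # an intact cross-section is one run per bone
                flagged.append(r)
        return flagged
    ```

    A real pipeline would segment the tibia and fibula separately first, so that "one run per bone" holds for each scan line of a healthy image.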

  17. Performance Estimation and Fault Diagnosis Based on Levenberg–Marquardt Algorithm for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Junjie Lu

    2018-01-01

    Full Text Available Establishing schemes for accurate and computationally efficient performance estimation and fault diagnosis of turbofan engines has become a new research focus and challenge, as such schemes can increase the reliability and stability of a turbofan engine and reduce life-cycle costs. Accurate estimation of turbofan engine performance depends on a thorough understanding of the components’ performance, which is described by component characteristic maps; the fault of each component can be regarded as a change of its characteristic map. In this paper, a novel method based on the Levenberg–Marquardt (LM) algorithm is proposed to enhance the fidelity of performance estimation and the credibility of fault diagnosis for the turbofan engine. The presented method utilizes the LM algorithm to figure out the operating point in the characteristic maps, preparing for performance estimation and fault diagnosis. The accuracy of the proposed method is evaluated for estimating performance parameters in the transient case with Rayleigh process noise and Gaussian measurement noise. The comparison among the extended Kalman filter (EKF) method, the particle filter (PF) method and the proposed method is implemented in the abrupt-fault case and the gradual-degeneration case, and it has been shown that the proposed method leads to more accurate results for performance estimation and fault diagnosis of turbofan engines than the currently popular EKF and PF diagnosis methods.

  18. Induced Voltages Ratio-Based Algorithm for Fault Detection, and Faulted Phase and Winding Identification of a Three-Winding Power Transformer

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-09-01

    Full Text Available This paper proposes an algorithm for fault detection, and faulted phase and winding identification, of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary and secondary-tertiary windings is the same as the corresponding turns ratio during normal operating conditions, magnetic inrush, and over-excitation. It differs from the turns ratio during an internal fault. For a single-phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated using the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection. An additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault, but also identify the faulted phase and winding of a three-winding power transformer. The various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions including magnetic inrush and over-excitation. This paper concludes by implementing the algorithm into a prototype relay based on a digital signal processor.
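
    The detection principle, that the induced-voltage ratio equals the turns ratio except during an internal fault, reduces to a simple per-sample comparison. The tolerance and the near-zero guard below are illustrative assumptions, not settings from the paper:

    ```python
    def winding_fault(v_primary, v_secondary, turns_ratio, tol=0.05):
        """Flag an internal fault when the ratio of estimated induced
        voltages deviates from the winding turns ratio by more than a
        relative tolerance on any sample."""
        for vp, vs in zip(v_primary, v_secondary):
            if abs(vs) < 1e-9:
                continue  # skip zero crossings where the ratio is ill-defined
            if abs(vp / vs - turns_ratio) > tol * turns_ratio:
                return True
        return False
    ```

    Magnetic inrush and over-excitation change both induced voltages in proportion, so this ratio test stays quiet in those conditions, which is the basis of the paper's discrimination.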

  19. Locating Leaks with TrustRank Algorithm Support

    Directory of Open Access Journals (Sweden)

    Luísa Ribeiro

    2015-03-01

    Full Text Available This paper presents a methodology to quantify and locate leaks. The original contribution is the use of a tool based on the TrustRank algorithm for the selection of nodes for pressure monitoring. The results of the methodologies presented here are: (I) a sensitivity analysis of the effect of the number of pressure transducers on the quality of the final solution; (II) a reduction of the number of pipes to be inspected; and (III) a focus on the problematic pipes, which allows better planning of the inspection work to be performed in the field. To obtain these results, a methodology for the identification of probable leaky pipes and an estimate of their leakage flows is also presented. The potential of the methodology is illustrated with several case studies, considering different levels of water losses and different sets of pressure monitoring nodes. The results are discussed, and the solutions obtained show the benefits of the developed methodologies.
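
    TrustRank, the tool the node-selection step is built on, is PageRank with the teleport (restart) distribution concentrated on trusted seed nodes. A minimal power-iteration sketch in plain Python (not the paper's implementation, and with the standard 0.85 damping as an assumed default):

    ```python
    def trustrank(adj, seeds, beta=0.85, iters=50):
        """PageRank-style power iteration whose teleport vector is
        concentrated on the trusted seed nodes."""
        teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
        score = dict(teleport)
        for _ in range(iters):
            nxt = {n: (1 - beta) * teleport[n] for n in adj}
            for u, out in adj.items():
                if not out:
                    continue
                share = beta * score[u] / len(out)
                for v in out:
                    nxt[v] += share
            score = nxt
        return score
    ```

    Ranking network nodes by such a trust score is one way to prioritize where pressure transducers earn the most information.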

  20. Objective Function and Learning Algorithm for the General Node Fault Situation.

    Science.gov (United States)

    Xiao, Yi; Feng, Rui-Bin; Leung, Chi-Sing; Sum, John

    2016-04-01

    Fault tolerance is one interesting property of artificial neural networks. However, the existing fault models are able to describe only limited node fault situations, such as stuck-at-zero and stuck-at-one. There is no general model that is able to describe a large class of node fault situations. This paper studies the performance of faulty radial basis function (RBF) networks for the general node fault situation. We first propose a general node fault model that is able to describe a large class of node fault situations, such as stuck-at-zero, stuck-at-one, and a stuck-at level with arbitrary distribution. Afterward, we derive an expression to describe the performance of faulty RBF networks. An objective function is then identified from the formula. With the objective function, a training algorithm for the general node fault situation is developed. Finally, a mean prediction error (MPE) formula that is able to estimate the test set error of faulty networks is derived. The application of the MPE formula in the selection of basis width is elucidated. Simulation experiments are then performed to demonstrate the effectiveness of the proposed method.
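The general flavor of such fault-aware objectives can be illustrated with a simpler, well-known case: under a multiplicative node-noise model, the expected training error of an RBF network adds a ridge-like penalty to the ordinary least-squares objective, giving a closed-form weight estimate. The sketch below uses an assumed fault variance and is a generic stand-in, not the paper's exact objective:

```python
import numpy as np

# Toy 1-D regression data
x = np.linspace(-1, 1, 60)
y = np.sin(np.pi * x)

# RBF design matrix with fixed centres and width
centres = np.linspace(-1, 1, 10)
width = 0.3
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width**2))

# Assumed fault model: each hidden node's output is scaled by (1 + b)
# with E[b] = 0 and Var[b] = sigma2.  The expected training error then
# adds the penalty sigma2 * w^T diag(Phi^T Phi) w, so the minimizer is:
sigma2 = 0.05
G = np.diag(np.diag(Phi.T @ Phi))
w = np.linalg.solve(Phi.T @ Phi + sigma2 * G, Phi.T @ y)
pred = Phi @ w
mse = np.mean((pred - y) ** 2)
```

Training against the expected faulty error trades a little clean-network accuracy for robustness when nodes misbehave.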

  1. Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks

    National Research Council Canada - National Science Library

    Sorgaard, Duane

    2004-01-01

    .... A time-to-location algorithm can successfully resolve a geographic location of a computer node using only latency information from known sites and mathematically calculating the Euclidean distance...

  2. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    Science.gov (United States)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in many pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcomings of existing DT algorithms, which create a node for each value of a given attribute, and has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  3. AN ISOMETRIC MAPPING BASED CO-LOCATION DECISION TREE ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Zhou

    2018-05-01

    Full Text Available Decision tree (DT) induction has been widely used in many pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcomings of existing DT algorithms, which create a node for each value of a given attribute, and has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
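The step that distinguishes isometric mapping from Euclidean methods, approximating geodesic distances by shortest paths over a k-nearest-neighbour graph, can be sketched as follows. This is a minimal illustration of that one step, not the full Isomap-based Cl-DT pipeline:

```python
import numpy as np

def geodesic_distances(X, k=2):
    """Approximate geodesic distances: build a k-nearest-neighbour graph
    with Euclidean edge weights, then run Floyd-Warshall shortest paths."""
    n = len(X)
    eucl = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    for i in range(n):
        for j in np.argsort(eucl[i])[1:k + 1]:   # k nearest neighbours of i
            D[i, j] = D[j, i] = eucl[i, j]
    for m in range(n):                            # Floyd-Warshall relaxation
        D = np.minimum(D, D[:, m:m + 1] + D[m:m + 1, :])
    return D

# Points on a quarter circle: the geodesic between the endpoints follows
# the arc, so it is longer than the straight-line (Euclidean) chord
theta = np.linspace(0, np.pi / 2, 20)
X = np.c_[np.cos(theta), np.sin(theta)]
G = geodesic_distances(X, k=2)
chord = np.linalg.norm(X[0] - X[-1])
```

On curved (non-linearly distributed) data the graph distance follows the manifold, which is exactly why it reflects the "true" positional relationship the abstract refers to.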

  4. Training the Recurrent neural network by the Fuzzy Min-Max algorithm for fault prediction

    International Nuclear Information System (INIS)

    Zemouri, Ryad; Racoceanu, Daniel; Zerhouni, Noureddine; Minca, Eugenia; Filip, Florin

    2009-01-01

    In this paper, we present a training technique for a Recurrent Radial Basis Function (RRBF) neural network for fault prediction. We use the Fuzzy Min-Max technique to initialize the k centers of the RRBF neural network. The k-means algorithm is then applied to calculate the centers that minimize the mean square error of the prediction task. The performance of the k-means algorithm is thus boosted by the Fuzzy Min-Max initialization.
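The centre-placement step, plain k-means (Lloyd's algorithm), can be sketched as follows. The Fuzzy Min-Max initialization itself is omitted; the random initialization here is only a stand-in for it, on toy 1-D data:

```python
import numpy as np

def kmeans_centres(X, k, iters=20, seed=0):
    """Plain k-means (Lloyd's algorithm) used to place RBF centres."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]   # stand-in initialization
    for _ in range(iters):
        labels = np.argmin(np.abs(X[:, None] - C[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):               # re-centre non-empty clusters
                C[j] = X[labels == j].mean()
    return np.sort(C)

# Two well-separated 1-D clusters: centres should land near 0 and 5
X = np.r_[np.random.default_rng(1).normal(0.0, 0.1, 50),
          np.random.default_rng(2).normal(5.0, 0.1, 50)]
C = kmeans_centres(X, k=2)
```

A better initialization (as the Fuzzy Min-Max hyperboxes provide) mainly buys fewer iterations and less sensitivity to local minima; the update loop is unchanged.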

  5. Evaluation of the location and recency of faulting near prospective surface facilities in Midway Valley, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2002-01-17

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  6. Evaluation of the location and recency of faulting near prospective surface facilities in Midway Valley, Nye County, Nevada

    International Nuclear Information System (INIS)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2002-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  7. Evaluation of the Location and Recency of Faulting Near Prospective Surface Facilities in Midway Valley, Nye County, Nevada

    Science.gov (United States)

    Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.

    2001-01-01

    Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements, which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern

  8. Improved algorithms for circuit fault diagnosis based on wavelet packet and neural network

    International Nuclear Information System (INIS)

    Zhang, W-Q; Xu, C

    2008-01-01

    In this paper, two improved BP neural network algorithms for fault diagnosis of analog circuits are presented, using an optimal wavelet packet transform (OWPT) or an incomplete wavelet packet transform (IWPT) as a preprocessor. The purpose of preprocessing is to reduce the number of nodes in the input and hidden layers of the BP neural network, so that the network gains faster training and convergence speed. First, we apply the OWPT or IWPT to the response signal of the circuit under test (CUT), and then calculate the normalized energy of each frequency band. The normalized energy is used to train the BP neural network to diagnose faulty components in the analog circuit. These two algorithms need a smaller network size while achieving faster learning and convergence speed. Finally, simulation results illustrate that both algorithms are effective for fault diagnosis.
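The preprocessing idea, computing the normalized energy of each wavelet-packet frequency band, can be sketched with a hand-rolled Haar wavelet packet transform. The Haar filters here are a stand-in for whatever wavelet the paper's OWPT/IWPT uses:

```python
import numpy as np

def haar_wp_energies(x, level=3):
    """Normalized energy of each frequency band of a Haar wavelet packet
    decomposition: the feature vector fed to the BP neural network."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(level):
        nxt = []
        for b in bands:
            approx = (b[0::2] + b[1::2]) / np.sqrt(2)   # low-pass branch
            detail = (b[0::2] - b[1::2]) / np.sqrt(2)   # high-pass branch
            nxt.extend([approx, detail])
        bands = nxt
    e = np.array([np.sum(b**2) for b in bands])
    return e / e.sum()

# A constant signal concentrates all energy in the all-low-pass band
feat = haar_wp_energies(np.ones(64), level=3)
```

With a level-3 decomposition a response signal of any length collapses to 8 band energies, which is how the input layer of the network shrinks.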

  9. Seismic Experiment at North Arizona To Locate Washington Fault - 3D Field Test

    KAUST Repository

    Hanafy, Sherif M

    2008-10-01

    No. of receivers in the inline direction: 80; number of lines: 6; receiver interval: 1 m near the fault and 2 m away from it (receivers 1 to 12 at 2 m intervals, receivers 12 to 51 at 1 m intervals, and receivers 51 to 80 at 2 m intervals); no. of shots in the inline direction: 40; shot interval: 2 and 4 m (every other receiver location). Data recording: the data were recorded using two Bison systems, each with 120 channels. We shot at all 240 shot locations and simultaneously recorded seismic traces at receivers 1 to 240 (using both Bisons); we then shot again at all 240 shot locations and recorded at receivers 241 to 480. The data are rearranged to match the receiver order shown in Figure 3, where receiver 1 is at the lower-left corner, receivers increase to 80 at the lower-right corner, receiver 81 returns to the left side at Y = 1.5 m, and so on.

  10. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Full Text Available Rotating machinery is the most common machinery in industry, and the root cause of its faults is often faulty rolling element bearings. This paper presents a technique using an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values), which are derived from the vibration signals of test data. The results show that the performance of the proposed optimized system is better than that of most previous studies, even though it uses only two features. The effectiveness of the method is illustrated using measured bearing vibration data.

  11. Location-Aware Mobile Learning of Spatial Algorithms

    Science.gov (United States)

    Karavirta, Ville

    2013-01-01

    Learning an algorithm--a systematic sequence of operations for solving a problem with given input--is often difficult for students due to the abstract nature of the algorithms and the data they process. To help students understand the behavior of algorithms, a subfield in computing education research has focused on algorithm…

  12. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems.

    Directory of Open Access Journals (Sweden)

    Hajara Idris

    Full Text Available The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in Grids is no longer an exception but a regularly occurring event, as resources are increasingly used by the scientific community to solve computationally intensive problems that typically run for days or even months. It is therefore essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, in order to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time.
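The checkpoint-based rollback idea can be sketched as a small simulation. The step count, failure probability, and checkpoint interval below are made-up illustrative values, not figures from the paper:

```python
import random

def run_with_checkpoints(total_steps, fail_prob, checkpoint_every, rng):
    """Simulate a long-running grid job: state is saved every
    `checkpoint_every` steps; on a resource failure, the job rolls back
    to the last checkpoint instead of restarting from scratch."""
    done, checkpoint, wasted = 0, 0, 0
    while done < total_steps:
        if rng.random() < fail_prob:          # resource failure
            wasted += done - checkpoint       # only work since the last save is lost
            done = checkpoint
        else:
            done += 1
            if done % checkpoint_every == 0:
                checkpoint = done             # persist state
    return wasted

rng = random.Random(42)
wasted = run_with_checkpoints(total_steps=1000, fail_prob=0.01,
                              checkpoint_every=10, rng=rng)
```

Without checkpointing, each failure would discard all completed work; with a checkpoint interval of 10, at most 9 steps are ever lost per failure, which is the trade-off the scheduler exploits.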

  13. Fault diagnosis in spur gears based on genetic algorithm and random forest

    Science.gov (United States)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build a robust system for multi-class fault diagnosis in spur gears by selecting the best set of condition parameters in the time, frequency and time-frequency domains, which are extracted from vibration signals. The diagnostic system is built using genetic algorithms and a classifier based on random forest, in a supervised environment. The original set of condition parameters is reduced by around 66% of its initial size by using genetic algorithms, while still reaching an acceptable classification precision of over 97%. The approach is tested on real vibration signals considering several fault classes, one of them an incipient fault, under different running conditions of load and velocity.
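The feature-selection loop can be sketched as a genetic algorithm over feature-subset bitmasks. In this illustrative version the fitness is a simple Fisher-style separability score with a parsimony penalty, standing in for the paper's random-forest classification accuracy; the toy data are made up:

```python
import numpy as np

def ga_select(X, y, pop=30, gens=40, seed=0):
    """GA over feature-subset bitmasks.  Fitness = sum of per-feature
    Fisher scores minus a parsimony penalty (a stand-in for wrapping a
    random-forest classifier as in the paper)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]

    def fitness(mask):
        k = mask.sum()
        if k == 0:
            return -np.inf
        Xs = X[:, mask]
        m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        s = Xs[y == 0].var(0) + Xs[y == 1].var(0) + 1e-9
        return np.sum((m0 - m1) ** 2 / s) - 0.5 * k

    P = rng.random((pop, d)) < 0.5                      # random initial masks
    for _ in range(gens):
        f = np.array([fitness(m) for m in P])
        parents = P[np.argsort(f)[-pop // 2:]]          # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.choice(len(parents), 2)]
            cut = rng.integers(1, d)                    # one-point crossover
            child = np.r_[a[:cut], b[cut:]]
            child ^= rng.random(d) < 1.0 / d            # bit-flip mutation
            children.append(child)
        P = np.vstack([parents, children])
    best = P[np.argmax([fitness(m) for m in P])]
    return np.flatnonzero(best)

# Toy data: only features 0 and 1 carry class information
rng = np.random.default_rng(3)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 10))
X[:, 0] += 3 * y
X[:, 1] -= 3 * y
selected = ga_select(X, y)
```

Swapping the fitness function for cross-validated random-forest accuracy recovers the wrapper scheme described in the abstract.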

  14. The application of soil-gas geochemistry to precisely locate La Victoria fault near Paracotos (Venezuela)

    International Nuclear Information System (INIS)

    LaBrecque, J.J.; Rosales, P.A.; Cordoves, P.R.

    1999-01-01

    Full text: Measurements of radon (total radon, Radon-222 and Radon-220) and other soil-gases (CO2, O2 and H2) were performed routinely during 1998 and 1999 across a narrow valley near Paracotos, Venezuela, in an attempt to precisely locate the La Victoria fault. The transect was about 300 meters long with eleven sampling points. The soil-gas probes were inserted to a depth of 45 cm in the beginning and later to a depth of 63 cm. The radon sampling and measurements were accomplished with a Pylon AB-5 radiation monitor and Lucas scintillation cells. The other soil-gases were determined directly with an Anagas CD95 monitor and an Infra-red Gas Analyzer (MKIIC), both coupled with a hydrogen pod. The radon values for more than twenty different sampling periods over the two-year period yielded anomalous values between 75 and 150 meters along the transect, with three consecutive anomalous values each time. Strangely, the radon anomalies took the form of a doublet at 116 and 141 meters rather than a single peak in the middle, and the gas flow was similar for the sampling points between 75 and 150 meters. The graph of the relative CO2 values was usually similar to the radon graphs, but in some cases the anomalous values appeared as a single peak corresponding to the 141 meter sampling point, while the anomalous hydrogen values usually formed a single peak at the 141 meter sampling point. Hydrogen values above 100 ppm detected at most of the sampling points occurred only a few times; usually only one or two points yielded small values near the 141 meter sampling point. Based on the radon values alone, we would have to conclude that the fault probably lies near or between the 116 and 141 meter sampling points, but with the additional CO2 and H2 soil-gas data one could say that the fault is probably near the 141 meter sampling point. Thus, we have

  15. EXPERIMENT BASED FAULT DIAGNOSIS ON BOTTLE FILLING PLANT WITH LVQ ARTIFICIAL NEURAL NETWORK ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mustafa DEMETGÜL

    2008-01-01

    Full Text Available In this study, an artificial neural network (ANN) is developed to rapidly detect faults in a pneumatic system and to protect the system against failure. Faults in the experimental bottle filling plant can be identified without any intervention using analog values taken from pressure sensors and linear potentiometers placed at different locations in the plant. The neural network diagnoses the following plant faults: no bottle present, cap-closing cylinder B not working, cap-closing cylinder C not working, insufficient air pressure, water not filling, and low air pressure. The faults are diagnosed by an artificial neural network with LVQ (learning vector quantization). While a failure could also be detected using conventional programming or a PLC, the reason for using an artificial neural network is that it indicates where the fault is, and the same approach can be applied to different systems. The aim is to find faults with the ANN in real time; the errors occurring in the pneumatic system are collected by a data acquisition card. It is observed that the algorithm is very capable for many industrial plants that have mechatronic systems.
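The LVQ rule, moving the winning prototype toward samples of its own class and away from samples of other classes, can be sketched as follows. This is generic LVQ1 on toy two-class data, not the plant's actual sensor features:

```python
import numpy as np

def train_lvq1(X, y, n_proto_per_class=1, lr=0.1, epochs=20, seed=0):
    """LVQ1: the nearest prototype moves toward a same-class sample and
    away from a different-class sample."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):                       # init prototypes from each class
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.extend(X[idx])
        labels.extend([c] * n_proto_per_class)
    W, L = np.array(protos, dtype=float), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(W - X[i], axis=1))   # winning prototype
            W[j] += lr * (X[i] - W[j]) * (1 if L[j] == y[i] else -1)
    return W, L

def predict_lvq(W, L, X):
    return L[np.argmin(np.linalg.norm(W[None] - X[:, None], axis=-1), axis=1)]

# Two fault classes as well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))]
y = np.repeat([0, 1], 40)
W, L = train_lvq1(X, y)
acc = (predict_lvq(W, L, X) == y).mean()
```

In the plant setting, each input vector would hold the pressure-sensor and potentiometer readings and each class would be one of the listed fault conditions.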

  16. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; the addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  17. Algorithms and programs for evaluating fault trees with multi-state components

    International Nuclear Information System (INIS)

    Wickenhaeuser, A.

    1989-07-01

    Parts 1 and 2 of the report contain a summary overview of methods and algorithms for the solution of fault tree analysis problems. The following points are treated in detail: treatment of fault tree components with more than two states; acceleration of the solution algorithms; decomposition and modularization of extensive systems; calculation of the structural function and the exact occurrence probability; treatment of statistical dependencies. A flexible tool to be employed in solving these problems is the method of forming Boolean variables with restrictions. In this way, components with more than two states can be treated, the possibilities of forming modules expanded, and statistical dependencies treated. Part 3 contains descriptions of the MUSTAFA, MUSTAMO, PASPI, and SIMUST computer programs based on these methods. (orig./HP) [de]
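Basic fault-tree evaluation with independent events, and a multi-state component handled by summing the probabilities of its mutually exclusive states, can be sketched as follows. This is a made-up toy example, not the MUSTAFA/MUSTAMO programs:

```python
# Minimal fault-tree gate evaluation assuming independent basic events.
def and_gate(*p):
    """All inputs must fail: product of failure probabilities."""
    out = 1.0
    for q in p:
        out *= q
    return out

def or_gate(*p):
    """At least one input fails: complement of all-survive."""
    out = 1.0
    for q in p:
        out *= (1.0 - q)
    return 1.0 - out

# A multi-state component: its states are mutually exclusive, so their
# probabilities add directly (no independence formula between states).
pump = {"working": 0.90, "degraded": 0.07, "failed": 0.03}
valve_fail = 0.02

p_pump_unavailable = pump["degraded"] + pump["failed"]
# Top event: pump unavailable AND valve failed
top = and_gate(p_pump_unavailable, valve_fail)
```

The distinction drawn in the comments is the crux of the multi-state extension: states of one component are exclusive alternatives, while different components are combined through the usual independent-event gate formulas.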

  18. The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    He Yan

    2016-01-01

    Full Text Available Particle swarm optimization (PSO) is a swarm-intelligence-based optimization algorithm. Parameter selection plays an important role in the performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed as the control parameters vary, including particle number, acceleration constants, inertia weight and maximum velocity limit. PSO with dynamic parameters is then applied to neural network training for gearbox fault diagnosis, and the results with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed to improve the performance of PSO.
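The control parameters the paper studies all appear directly in the basic PSO update, sketched below on a simple sphere function with illustrative parameter values:

```python
import numpy as np

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, vmax=1.0, seed=0):
    """Basic PSO.  The arguments mirror the studied control parameters:
    particle number n, inertia weight w, acceleration constants c1/c2,
    and maximum velocity limit vmax."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -vmax, vmax)               # enforce velocity limit
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, val = pso(lambda p: np.sum(p**2), dim=3)
```

Raising w favors exploration while large c1/c2 or a loose vmax can cause oscillation, which is why these four knobs dominate the convergence behavior analyzed in the paper.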

  19. An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis

    Directory of Open Access Journals (Sweden)

    Qihang Wang

    2016-07-01

    Full Text Available In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for the diagnosis of abrupt gas-path component faults in aircraft engines. To avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure; it adjusts the positions of the particles by simulating the attraction-repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter solves the particle degradation problem and ensures the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults because it takes the latest measurement information into account. The proposed method is compared with three different filter algorithms on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter completes the fault diagnosis within fewer sampling periods and reduces the root mean square error of parameter estimation.
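The degeneracy problem that the electromagnetism-like step addresses arises in the resampling stage of any particle filter. Standard systematic resampling, the generic building block the paper improves upon (not the paper's own method), can be sketched as:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: N offspring indices drawn with a single
    uniform offset.  High-weight particles are duplicated and low-weight
    particles dropped, which combats weight degeneracy but can cause the
    sample impoverishment the paper's EM-like step tries to avoid."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n   # evenly spaced probes
    return np.searchsorted(np.cumsum(w), positions)

rng = np.random.default_rng(0)
idx = systematic_resample([0.02, 0.9, 0.02, 0.02, 0.04], rng)
```

Because duplication collapses diversity, the paper instead repositions particles via simulated attraction-repulsion before they are propagated.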

  20. Precise Relative Location of San Andreas Fault Tremors Near Cholame, CA, Using Seismometer Clusters: Slip on the Deep Extension of the Fault?

    Science.gov (United States)

    Shelly, D. R.; Ellsworth, W. L.; Ryberg, T.; Haberland, C.; Fuis, G.; Murphy, J.; Nadeau, R.; Bürgmann, R.

    2008-12-01

    Non-volcanic tremor, similar in character to that generated at some subduction zones, was recently identified beneath the strike-slip San Andreas Fault (SAF) in central California (Nadeau and Dolenc, 2005). Using a matched filter method, we closely examine a 24-hour period of active SAF tremor and show that, like tremor in the Nankai Trough subduction zone, this tremor is composed of repeated similar events. We take advantage of this similarity to locate detected similar events relative to several chosen events. While low signal-to-noise makes location challenging, we compensate for this by estimating event-pair differential times at 'clusters' of nearby temporary and permanent stations rather than at single stations. We find that the relative locations consistently form a near-linear structure in map view, striking parallel to the surface trace of the SAF. Therefore, we suggest that at least a portion of the tremor occurs on the deep extension of the fault, similar to the situation for subduction zone tremor. Also notable is the small depth range (a few hundred meters or less) of many of the located tremors, a feature possibly analogous to earthquake streaks observed on the shallower portion of the fault. The close alignment of the tremor with the SAF slip orientation suggests a shear slip mechanism, as has been argued for subduction tremor. At times, we observe a clear migration of the tremor source along the fault, at rates of 15-40 km/hr.

  1. A Location-Aware Vertical Handoff Algorithm for Hybrid Networks

    KAUST Repository

    Mehbodniya, Abolfazl

    2010-07-01

    One of the main objectives of wireless networking is to provide mobile users with a robust connection to different networks so that they can move freely between heterogeneous networks while running their computing applications with no interruption. Horizontal handoff, or generally speaking handoff, is a process which maintains a mobile user's active connection as it moves within a wireless network, whereas vertical handoff (VHO) refers to handover between different types of networks or different network layers. Optimizing the VHO process is an important issue, required to reduce network signalling and mobile device power consumption as well as to improve network quality of service (QoS) and grade of service (GoS). In this paper, a VHO algorithm in multitier (overlay) networks is proposed. This algorithm uses pattern recognition to estimate the user's position, and decides on the handoff based on this information. For the pattern recognition algorithm structure, the probabilistic neural network (PNN), which has considerable simplicity and efficiency over existing pattern classifiers, is used. Further optimization is proposed to improve the performance of the PNN algorithm. Performance analysis and comparisons with the existing VHO algorithm are provided and demonstrate a significant improvement with the proposed algorithm. Furthermore, incorporating the proposed algorithm, a structure is proposed for VHO from the medium access control (MAC) layer point of view. © 2010 ACADEMY PUBLISHER.
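A PNN classifier is essentially a per-class Parzen density estimate: a test point is assigned to the class whose training samples produce the largest average Gaussian-kernel activation. A minimal sketch on toy two-region "position" data (the kernel width and the data are assumed, not from the paper):

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic neural network: assign each test point the class with
    the largest average Gaussian-kernel activation over its training set
    (a Parzen window density estimate per class)."""
    classes = np.unique(y_train)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    scores = np.stack([K[:, y_train == c].mean(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]

# Toy "user position" classes: two regions of a coverage map
rng = np.random.default_rng(0)
A = rng.normal([0, 0], 0.3, (30, 2))
B = rng.normal([2, 2], 0.3, (30, 2))
X = np.vstack([A, B])
y = np.repeat([0, 1], 30)
pred = pnn_predict(X, y, np.array([[0.1, -0.1], [1.9, 2.2]]))
```

The appeal for VHO is that a PNN has no iterative training phase, keeping the position-estimation stage simple on a mobile device.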

  2. ALGORITHMIZATION OF PROBLEMS FOR OPTIMAL LOCATION OF TRANSFORMERS IN SUBSTATIONS OF DISTRIBUTED NETWORKS

    Directory of Open Access Journals (Sweden)

    M. I. Fursanov

    2014-01-01

    Full Text Available This article reflects the algorithmization of search methods for the effective replacement of consumer transformers in distributed electrical networks. Like any electrical equipment in power systems, power transformers have a limited service life, determined by the natural degradation of materials and by unexpected wear under conditions of overload and overvoltage. According to the standards adopted in the Republic of Belarus, the rated service life of power transformers is 25 years, but situations can arise in which it is economically efficient to replace a transformer earlier. The possibility of such replacement is considered in order to increase the operating efficiency of an electrical network affected by physical wear and aging. The article discusses the shortcomings of earlier mathematical models of transformer replacement: in those models, replaced transformers were simply taken out of use, whereas in practice a transformer removed from one substation can be successfully used at another. This matters especially when financial resources are limited and replacement requires a more detailed technical and economic basis. The authors developed an efficient algorithm for determining the optimal location of transformers at substations of distributed electrical networks, based on searching for the best solution among all sets of displacements in an oriented graph. The suggested algorithm considerably reduces the design time for optimal transformer placement by using a set of simplifications. The result of the algorithm's work is a series of transformer displacements in the network, which yields a much greater economic effect than the replacement of a single transformer.

  3. A hybrid nested partitions algorithm for banking facility location problems

    KAUST Repository

    Xia, Li; Yin, Wenjun; Dong, Jin; Wu, Teresa; Xie, Ming; Zhao, Yanjia

    2010-01-01

    The facility location problem has been studied in many industries including banking network, chain stores, and wireless network. Maximal covering location problem (MCLP) is a general model for this type of problems. Motivated by a real-world banking

  4. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhanced system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure; in particular, it is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method is used to estimate the fundamental frequency and the fault-related frequency. An amplitude estimator of the fault characteristic frequencies is then proposed, and a fault indicator is derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, obtained from a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Experimental data are then used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective

    Directory of Open Access Journals (Sweden)

    Ľuboš Buzna

    2014-01-01

    Full Text Available We present a new approximation algorithm for the discrete facility location problem that provides solutions close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows finding an equitable location of facilities serving a large number of customers. The algorithm is independent of general-purpose solvers and instead uses algorithms originally designed to solve the p-median problem. Through numerical experiments, we demonstrate that our algorithm increases the size of solvable problems and provides high-quality solutions. The algorithm found an optimal solution for all tested instances where we could compare the results with an exact algorithm.

  6. An adaptive Phase-Locked Loop algorithm for faster fault ride through performance of interconnected renewable energy sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2013-01-01

    Interconnected renewable energy sources require fast and accurate fault ride through operation in order to support the power grid when faults occur. This paper proposes an adaptive Phase-Locked Loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response of the grid...... side converter control of a renewable energy source, especially under fault ride through operation. The adaptive dαβPLL is based on modifying the control parameters of the dαβPLL according to the type and voltage characteristic of the grid fault with the purpose of accelerating the performance...

  7. On a rational stopping rule for facilities location algorithms

    DEFF Research Database (Denmark)

    Juel, Henrik

    1984-01-01

    In the multifacility location problem, a number of new facilities are to be located so as to minimize a sum of weighted distances. Love and Yeong (1981) developed a lower bound on the optimal value for use in deciding when to stop an iterative solution procedure. The authors develop a stronger...

  8. Branch and peg algorithms for the simple plant location problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.; Sierksma, G.

    The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several

  9. Branch and peg algorithms for the simple plant location problem

    NARCIS (Netherlands)

    Goldengorin, Boris; Ghosh, Diptesh; Sierksma, Gerard

    2001-01-01

    The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several

  10. Application of the Goertzel’s algorithm in the airgap mixed eccentricity fault detection

    Directory of Open Access Journals (Sweden)

    Reljić Dejan

    2015-01-01

    Full Text Available In this paper, a suitable method for the on-line detection of the airgap mixed eccentricity fault in a three-phase cage induction motor has been proposed. The method is based on a Motor Current Signature Analysis (MCSA) approach, a technique that is often used for induction motor condition monitoring and fault diagnosis. It is based on the spectral analysis of the stator line current signal and the frequency identification of specific components, which are created as a result of motor faults. The most commonly used method for the current signal spectral analysis is based on the Fast Fourier transform (FFT). However, due to its complexity and memory demands, the FFT algorithm is not always suitable for real-time systems. Instead of analysing the whole spectrum, this paper suggests spectral analysis only at the expected airgap fault frequencies, employing Goertzel’s algorithm to predict the magnitude of these frequency components. The method is simple and can be implemented in real-time airgap mixed eccentricity monitoring systems without much computational effort. A low-cost data acquisition system, supported by the LabView software, has been used for the hardware and software implementation of the proposed method. The method has been validated by laboratory experiments on both a line-connected and an inverter-fed three-phase four-pole cage induction motor operated at the rated frequency and under constant load at a few different values. In addition, the results of the proposed method have been verified through analysis of the motor's vibration signal. [Projekat Ministarstva nauke Republike Srbije, br. III42004
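The core of the approach described above — evaluating only the expected fault frequencies instead of the whole spectrum — is the Goertzel recursion. A minimal sketch in its generic textbook form (not the authors' LabView implementation; the normalisation assumes the target frequency falls on an exact DFT bin):

```python
import math

def goertzel_magnitude(samples, sample_rate, target_freq):
    """Estimate the amplitude of a single frequency component with the
    Goertzel algorithm (cheaper than a full FFT when only a few
    fault-related bins are needed)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                          # second-order IIR recursion
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of DFT bin k, then scale to signal amplitude
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0)) * 2.0 / n
```

For a handful of fault frequencies this costs O(N) per frequency versus O(N log N) for a full FFT, which is the complexity and memory argument the abstract makes.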

  11. Integrated geophysical investigations in a fault zone located on southwestern part of İzmir city, Western Anatolia, Turkey

    Science.gov (United States)

    Drahor, Mahmut G.; Berge, Meriç A.

    2017-01-01

    Integrated geophysical investigations consisting of the joint application of various geophysical techniques have become a major tool of active tectonic investigations. The choice of techniques depends on the geological features, tectonic and fault characteristics of the study area, the required resolution and penetration depth, and the available financial support. Fault geometry and offsets, sediment thickness and properties, features of folded strata and the tectonic characteristics of near-surface sections of the subsurface can therefore be thoroughly determined using integrated geophysical approaches. Although Ground Penetrating Radar (GPR), Electrical Resistivity Tomography (ERT) and Seismic Refraction Tomography (SRT) are the methods most commonly used in active tectonic investigations, other geophysical techniques also contribute to recovering different properties in the complex geological environments of tectonically active sites. In this study, six different geophysical methods were used to define faulting locations and characteristics around the study area: GPR, ERT, SRT, Very Low Frequency electromagnetics (VLF), magnetics and self-potential (SP). Overall, the integrated geophysical approaches used in this study yielded important results on the near-surface geological properties and faulting characteristics of the investigated area. After integrated interpretation of the geophysical surveys, we determined an optimal trench location for paleoseismological studies, and the main geological properties associated with the faulting process were obtained from the subsequent trenching. In addition, the geophysical results pointed out some indications concerning the active faulting mechanism in the area. Consequently, the trenching studies indicate that the integrated approach of geophysical techniques applied to the fault problem yields very useful and interpretable results in describing the various properties of the faulting zone at the investigated site.

  12. Location capability of a sparse regional network (RSTN) using a multi-phase earthquake location algorithm (REGLOC)

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.

    1994-01-01

    The Regional Seismic Test Network (RSTN) was deployed by the US Department of Energy (DOE) to determine whether data recorded by a regional network could be used to detect and accurately locate seismic events that might be clandestine nuclear tests. The purpose of this paper is to evaluate the location capability of the RSTN. A major part of this project was the development of the location algorithm REGLOC and the application of Bayesian a priori statistics for determining the accuracy of the location estimates. REGLOC utilizes all identifiable phases, including backazimuth, in the location. Ninety-four events, distributed throughout the network area, detected by the RSTN and located by local networks, were used in the study. The location capability of the RSTN was evaluated by estimating the location accuracy, error ellipse accuracy, and the percentage of events that could be located, as a function of magnitude. The location accuracy was verified by comparing the RSTN results for the 94 events with published locations based on data from the local networks. The error ellipse accuracy was evaluated by determining whether the error ellipse includes the actual location. The percentage of events located was assessed by combining detection capability with location capability to determine the percentage of events that could be located within the study area. Events were located both with an average crustal model for the entire region and with regional velocity models along with station corrections obtained from master events. Most events with a magnitude <3.0 can only be located with arrivals from one station. Their average location errors are 453 and 414 km for the average- and regional-velocity model locations, respectively. Single-station locations are very unreliable because they depend on accurate backazimuth estimates, and backazimuth proved to be a very unreliable computation.

  13. Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations

    Science.gov (United States)

    Dokuz, A. S.; Celik, M.

    2017-11-01

    Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20 %.

  14. FAST SS-ILM: A COMPUTATIONALLY EFFICIENT ALGORITHM TO DISCOVER SOCIALLY IMPORTANT LOCATIONS

    Directory of Open Access Journals (Sweden)

    A. S. Dokuz

    2017-11-01

    Full Text Available Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20 %.

  15. Mobility-Assisted on-Demand Routing Algorithm for MANETs in the Presence of Location Errors

    Directory of Open Access Journals (Sweden)

    Trung Kien Vu

    2014-01-01

    Full Text Available We propose a mobility-assisted on-demand routing algorithm for mobile ad hoc networks in the presence of location errors. Location awareness enables mobile nodes to predict their mobility and enhances routing performance by estimating link duration and selecting reliable routes. However, measured locations intrinsically include errors in measurement. Such errors degrade mobility prediction and have been ignored in previous work. To mitigate the impact of location errors on routing, we propose an on-demand routing algorithm taking into account location errors. To that end, we adopt the Kalman filter to estimate accurate locations and consider route confidence in discovering routes. Via simulations, we compare our algorithm and previous algorithms in various environments. Our proposed mobility prediction is robust to the location errors.
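The Kalman filtering step mentioned above, used to estimate accurate locations from error-prone measurements, can be sketched in scalar form. This assumes a simple random-walk motion model with illustrative noise parameters `q` and `r`; the paper's actual state model is not reproduced here:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter: smooths noisy position readings under an
    assumed random-walk motion model. q is the process-noise variance,
    r the measurement-noise variance."""
    x, p = measurements[0], 1.0      # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: uncertainty grows each step
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the measurement innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With small `q` relative to `r`, the filter trusts its own prediction more than each noisy reading, so the estimated position converges toward the true one.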

  16. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
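The stepping and bracketing scheme the abstract describes reduces event location to root finding on a continuous event function. A minimal sketch (generic root bracketing with bisection refinement; the step size and iteration count are illustrative choices, not the paper's):

```python
def locate_events(f, t0, t1, step):
    """Step across [t0, t1] sampling the event function f; whenever the
    sign changes between samples, bisect the bracketing interval to
    refine the event time (root)."""
    roots = []
    t, ft = t0, f(t0)
    while t < t1:
        t_next = min(t + step, t1)
        ft_next = f(t_next)
        if ft == 0.0:
            roots.append(t)                 # grid point happens to be a root
        elif ft * ft_next < 0.0:            # sign change: a root is bracketed
            a, b, fa = t, t_next, ft
            for _ in range(60):             # bisection refinement
                m = 0.5 * (a + b)
                fm = f(m)
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            roots.append(0.5 * (a + b))
        t, ft = t_next, ft_next
    return roots
```

The step must be shorter than the minimum spacing between events, or a bracket can straddle two roots and miss both; that is the usual caveat for this class of detector.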

  17. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  18. A New Fault Location Approach for Acoustic Emission Techniques in Wind Turbines

    Directory of Open Access Journals (Sweden)

    Carlos Quiterio Gómez Muñoz

    2016-01-01

    Full Text Available The renewable energy industry is undergoing continuous improvement and development worldwide, wind energy being one of the most relevant renewable energies. This industry requires high levels of reliability, availability, maintainability and safety (RAMS) for wind turbines. The blades are critical components in wind turbines. The objective of this research work is focused on the fault detection and diagnosis (FDD) of wind turbine blades. The FDD approach is composed of a robust condition monitoring system (CMS) and a novel signal processing method. The CMS collects and analyses the data from different non-destructive tests based on acoustic emission. The acoustic emission signals are collected applying macro-fiber composite (MFC) sensors to detect and locate cracks on the surface of the blades. Three MFC sensors are set in a section of a wind turbine blade. The acoustic emission signals are generated by breaking a pencil lead on the blade surface. This method is used to simulate the acoustic emission due to a breakdown of the composite fibers. The breakdown generates a set of mechanical waves that are collected by the MFC sensors. A graphical method is employed to obtain a system of non-linear equations that is used for locating the emission source. This work demonstrates that a fiber breakage in the wind turbine blade can be detected and located by using only three low-cost sensors. It allows the detection of potential failures at an early stage, and it can also reduce corrective maintenance tasks and downtimes and increase the RAMS of the wind turbine.

  19. A novel fault location scheme for power distribution system based on injection method and transient line voltage

    Science.gov (United States)

    Huang, Yuehua; Li, Xiaomin; Cheng, Jiangzhou; Nie, Deyu; Wang, Zhuoyuan

    2018-02-01

    This paper presents a novel fault location method based on injecting a travelling wave current. The methodology relies on Time Difference of Arrival (TDOA) measurements between the injection point and the end node of the main radial; the TDOA corresponds to the lag of maximum correlation, at which the reflected wave crests of the injected and fault signals coincide. The distance to the fault is then calculated as the wave velocity multiplied by the TDOA. Furthermore, when transformers are connected to the end of the feeder, the method must be combined with a comparison of transient voltage amplitudes. Finally, to verify the effectiveness of the method, several simulations have been undertaken using the MATLAB/SIMULINK software packages. The proposed fault location method shortens the positioning time while ensuring accuracy; the reported errors are 5.1% and 13.7%.
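The distance computation described above — wave velocity multiplied by the TDOA at which the correlation of the injected and reflected signals peaks — can be sketched as follows. The brute-force cross-correlation and the signal shapes are illustrative, not the authors' processing chain:

```python
def estimate_tdoa(sig_a, sig_b, dt):
    """Return the TDOA of sig_b relative to sig_a as the lag (in seconds)
    that maximises their cross-correlation. Brute-force search over all
    integer sample lags; dt is the sampling interval."""
    n = len(sig_a)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-n + 1, n):
        corr = sum(sig_a[i] * sig_b[i + lag]
                   for i in range(n)
                   if 0 <= i + lag < n)
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag * dt

def fault_distance(tdoa, wave_velocity):
    """Distance = propagation velocity x measured time difference."""
    return wave_velocity * tdoa
```

For production use an FFT-based correlation would replace the O(N^2) search, but the peak-picking logic is the same.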

  20. Design and development of an automated D.C. ground fault detection and location system for Cirus

    International Nuclear Information System (INIS)

    Marik, S.K.; Ramesh, N.; Jain, J.K.; Srivastava, A.P.

    2002-01-01

    Full text: The original design of Cirus safety system provided for automatic detection of ground fault in class I D.C. power supply system and its annunciation followed by delayed reactor trip. Identification of a faulty section was required to be done manually by switching off various sections one at a time thus requiring a lot of shutdown time to identify the faulty section. Since class I power supply is provided for safety control system, quick detection and location of ground faults in this supply is necessary as these faults have potential to bypass safety interlocks and hence the need for a new system for automatic location of a faulty section. Since such systems are not readily available in the market, in-house efforts were made to design and develop a plant-specific system, which has been installed and commissioned

  1. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurements of the reaction flywheels are used for fault detection. If any reaction flywheel fault is detected, the faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite keeps working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation results demonstrates the performance superiority of the proposed robust fault-tolerant control algorithm.

  2. Gravity Field Interpretation for Major Fault Depth Detection in a Region Located SW- Qa’im / Iraq

    Directory of Open Access Journals (Sweden)

    Wadhah Mahmood Shakir Al-Khafaji

    2017-09-01

    Full Text Available This research deals with the qualitative and quantitative interpretation of Bouguer gravity anomaly data for a region located to the SW of Qa’im City within Anbar province, using 2D-mapping methods. The residual gravity field was obtained graphically by subtracting the regional gravity values from the values of the total Bouguer anomaly. The residual gravity field was then processed to reduce noise by applying the gradient operator and first directional derivative filtering, which helped to assign the locations of sudden variation in gravity values. Such variations may be produced by subsurface faults, fractures, cavities or the limits of lateral facies variation. A major fault was predicted to extend in the NE-SW direction; previous studies mention this subsurface fault within the sedimentary cover rocks but leave its depth undefined. The quantitative gravity interpretation carried out in this research finds that the depth to the center of this major fault plane is about 2.4 km.
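The gradient-operator filtering described above, which flags sudden lateral variations in gravity values, can be sketched as a central-difference gradient magnitude over a gridded residual field (an illustrative stand-in for the 2D-mapping software actually used):

```python
def gradient_magnitude(grid):
    """Central-difference gradient magnitude of a 2-D gravity grid
    (list of rows). Large output values flag sudden lateral variations
    such as those produced by subsurface faults; the one-cell border
    is left at zero."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (grid[i][j + 1] - grid[i][j - 1]) / 2.0
            gy = (grid[i + 1][j] - grid[i - 1][j]) / 2.0
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out
```

A linear fault signature shows up as a ridge of high gradient magnitude along the fault trace, which is what guides the depth estimation that follows.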

  3. AIC-based diffraction stacking for local earthquake locations at the Sumatran Fault (Indonesia)

    Science.gov (United States)

    Hendriyana, Andri; Bauer, Klaus; Muksin, Umar; Weber, Michael

    2018-05-01

    We present a new workflow for the localization of seismic events which is based on a diffraction stacking approach. In order to address the effects of complex source radiation patterns, we suggest computing the diffraction stack from a characteristic function (CF) instead of stacking the original waveform data. A new CF, referred to in the following as mAIC (modified Akaike Information Criterion), is proposed. We demonstrate that both P- and S-wave onsets can be detected accurately. To avoid cross-talk between P and S waves due to inaccurate velocity models, we separate the P and S waves from the mAIC function by making use of polarization attributes. The final image function is then represented by the largest eigenvalue resulting from the covariance analysis of the P- and S-image functions. Results from synthetic experiments show that the proposed diffraction stacking provides reliable results. The workflow was finally applied to local earthquake data from Sumatra, Indonesia. Recordings from a temporary network of 42 stations deployed for nine months around the Tarutung pull-apart basin were analysed. The seismic event locations resulting from the diffraction stacking method align along a segment of the Sumatran Fault. A more complex distribution of seismicity is imaged within and around the Tarutung basin. Two lineaments striking N-S were found in the centre of the Tarutung basin, which support independent results from structural geology.

  4. Fault location method for unexposed gas trunk line insulation at stray current constant effect area

    Science.gov (United States)

    Tsenev, A. N.; Nosov, V. V.; Akimova, E. V.

    2017-10-01

    For the safe operation of gas trunk lines, two types of anticorrosion protection of the pipe wall metal are generally used - passive protection (the insulation coating) and active (electrochemical) protection. Over the course of a pipeline's long-term operation, its insulation is subject to wear and damage. Electrochemical protection at a certain potential prevents dissolution of the metal in the soil; when insulation wear reaches the point where the protection potential is insufficient, the insulating coating needs repair, which is a labor-consuming procedure. To reduce the risk of such a situation, inspection rounds must be made to monitor the condition of the pipe insulation. A method for locating faults in the insulation coating of unexposed (buried) pipelines, based on the Pearson method, is considered. It uses the 100 Hz signal of a working cathodic protection station, which makes installation of a generator unnecessary, together with a dedicated 1 kHz generator signal, giving the instrument complex high noise immunity and sensitivity. The method enables the detection and sizing of unexposed pipeline defects within zones of permanent earth (stray) current action. The high noise immunity of the selective indicators allows operation in proximity to live 110 kV, 220 kV, and 500 kV power transmission lines.

  5. Using the time domain reflectometer to check for and locate a fault

    International Nuclear Information System (INIS)

    Ramphal, M.; Sadok, E.

    1995-01-01

    The Time Domain Reflectometer (TDR) is one of the most useful tools for finding cable faults (opens, shorts, bad cable splices). The TDR is connected to the end of the line and shows the distance to the fault. It uses a low-voltage signal that will not damage the line or interfere with nearby lines. The TDR sends a pulse of energy down the cable under test; when the pulse encounters the end of the cable or any cable fault, a portion of the pulse energy is reflected. The elapsed time of the reflected pulse is an indication of the distance to the fault, and the shape of the reflected pulse uniquely identifies the type of cable fault. (author)
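The distance estimate the abstract describes follows directly from the round-trip time of the reflected pulse. A one-function sketch; the velocity factor of 0.66 is an illustrative value typical of coaxial cable, not taken from the record:

```python
def tdr_fault_distance(elapsed_time_s, velocity_factor=0.66):
    """Distance to a cable fault from the round-trip time of a TDR pulse.
    The pulse travels to the fault and back, hence the division by 2;
    velocity_factor scales the speed of light to the cable's
    propagation speed."""
    c = 299_792_458.0                 # speed of light in vacuum, m/s
    v = velocity_factor * c           # propagation speed in the cable
    return v * elapsed_time_s / 2.0
```

For example, a reflection arriving 1 microsecond after the outgoing pulse on a cable with a 0.66 velocity factor places the fault roughly 99 m away.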

  6. Brake fault diagnosis using Clonal Selection Classification Algorithm (CSCA) – A statistical learning approach

    Directory of Open Access Journals (Sweden)

    R. Jegadeeshwaran

    2015-03-01

    Full Text Available In an automobile, the brake system is an essential part responsible for control of the vehicle. Any failure in the brake system affects the vehicle's motion and can have catastrophic effects on vehicle and passenger safety. Thus the brake system plays a vital role in an automobile, and condition monitoring of the brake system is essential. Vibration-based condition monitoring using machine learning techniques is gaining momentum. This study is one such attempt to perform condition monitoring of a hydraulic brake system through vibration analysis. In this research, the performance of a Clonal Selection Classification Algorithm (CSCA) for brake fault diagnosis has been reported. A hydraulic brake system test rig was fabricated. Under good and faulty conditions of the brake system, the vibration signals were acquired using a piezoelectric transducer. Statistical parameters were extracted from the vibration signal, and the best feature set was identified for classification using an attribute evaluator. The selected features were then classified using the CSCA. The classification accuracy of this artificial intelligence technique has been compared with other machine learning approaches and discussed. The Clonal Selection Classification Algorithm performs better and gives the maximum classification accuracy (96%) for the fault diagnosis of a hydraulic brake system.

  7. Protection algorithm for a wind turbine generator based on positive- and negative-sequence fault components

    DEFF Research Database (Denmark)

    Zheng, Tai-Ying; Cha, Seung-Tae; Crossley, Peter A.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on positive- and negative-sequence fault components is proposed in the paper. The relay uses the magnitude of the positive-sequence component in the fault current to detect a fault on a parallel WTG, connected to the same power collection...... feeder, or a fault on an adjacent feeder; but for these faults, the relay remains stable and inoperative. A fault on the power collection feeder or a fault on the collection bus, both of which require an instantaneous tripping response, are distinguished from an inter-tie fault or a grid fault, which...... in the fault current is used to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using EMTP-RV. The scenarios involve changes in the position and type of fault, and the faulted phases. Results confirm...

  8. Support vector machine based fault classification and location of a long transmission line

    Directory of Open Access Journals (Sweden)

    Papia Ray

    2016-09-01

    Full Text Available This paper investigates a support vector machine based fault type and distance estimation scheme for a long transmission line. The proposed technique uses the post-fault single-cycle current waveform, and pre-processing of the samples is done by wavelet packet transform. Energy and entropy are obtained from the decomposed coefficients and a feature matrix is prepared. The redundant features are then removed from the matrix by the forward feature selection method and normalized. Test and train data are developed by taking into consideration the variables of a simulation situation such as fault type, resistance path, inception angle, and distance. In this paper 10 different types of short circuit fault are analyzed. The test data are examined by a support vector machine whose parameters are optimized by the particle swarm optimization method. The proposed method is checked on a 400 kV, 300 km long transmission line with a voltage source at both ends. Two cases were examined: the first is a fault very near to either source end (front and rear), and the second is the support vector machine with and without optimized parameters. Simulation results indicate that the proposed method for fault classification gives high accuracy (99.21%) and the least fault distance estimation error (0.29%).

  9. A Diagnosis Method for Rotation Machinery Faults Based on Dimensionless Indexes Combined with K-Nearest Neighbor Algorithm

    Directory of Open Access Journals (Sweden)

    Jianbin Xiong

    2015-01-01

    Full Text Available It is difficult to distinguish the dimensionless indexes of normal petrochemical rotating machinery from those of equipment with complex faults, and when the conflict between pieces of evidence is too great, the diagnosis becomes uncertain. This paper presents a diagnosis method for rotating machinery faults based on dimensionless indexes combined with the K-nearest neighbor (KNN) algorithm. The method uses the KNN algorithm and an evidence fusion formula to process fuzzy data, incomplete data, and accurate data: the signals from the petrochemical rotating machinery sensors are converted into reliability measures using dimensionless indexes and the KNN algorithm, and the input information is further integrated by an evidence synthesis formula to obtain the final data, on which the decision about the fault type is based. The experimental results show that the proposed method can integrate data to provide a more reliable and reasonable result, thereby reducing the decision risk.
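The KNN classification step the record describes can be sketched as a plain majority vote over the nearest training samples. The two-dimensional "dimensionless index" feature vectors and labels below are illustrative; the paper's evidence-fusion formula is not reproduced:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Vanilla K-nearest-neighbour vote: 'train' is a list of
    (feature_vector, label) pairs; the query point takes the majority
    label among its k closest training points (Euclidean distance)."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

In the paper's setting the feature vectors would be dimensionless indexes computed from sensor signals, and the vote would feed the evidence synthesis stage rather than be the final answer.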

  10. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Full Text Available Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction; we have also extended the algorithm to be applicable under multiple-fault conditions. The algorithm uses a calibration model based on an AANN, which can reconstruct a faulty sensor from the non-faulty sensors thanks to the correlation between the process variables; the mean of the difference between reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with low computational time, which makes the algorithm an appropriate candidate for online applications.
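
    The isolation logic can be sketched with a linear reconstruction model standing in for the trained AANN (an assumption: the paper uses a neural calibration model, not PCA). A sensor is flagged when the mean absolute difference between its reconstructed and original readings exceeds a threshold; the data, loadings, and threshold below are synthetic.

```python
import numpy as np

def fit_linear_model(healthy, n_components):
    """Stand-in calibration model: a linear (PCA) reconstruction is used
    here in place of the trained auto-associative network."""
    mu = healthy.mean(axis=0)
    _, _, Vt = np.linalg.svd(healthy - mu, full_matrices=False)
    return mu, Vt[:n_components].T

def isolate_faulty_sensors(window, mu, P, threshold):
    """Reconstruct the window from the model and flag sensors whose mean
    |reconstructed - original| difference exceeds the threshold."""
    recon = (window - mu) @ P @ P.T + mu
    residual = np.abs(window - recon).mean(axis=0)
    return list(np.where(residual > threshold)[0])
```

    Averaging the residual over a window, as the abstract describes, suppresses measurement noise so that only the biased sensor crosses the threshold.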

  11. Accuracy of algorithms to predict accessory pathway location in children with Wolff-Parkinson-White syndrome.

    Science.gov (United States)

    Wren, Christopher; Vogel, Melanie; Lord, Stephen; Abrams, Dominic; Bourke, John; Rees, Philip; Rosenthal, Eric

    2012-02-01

    The aim of this study was to examine the accuracy in predicting pathway location in children with Wolff-Parkinson-White syndrome for each of seven published algorithms. ECGs from 100 consecutive children with Wolff-Parkinson-White syndrome undergoing electrophysiological study were analysed by six investigators using seven published algorithms, six of which had been developed in adult patients. Accuracy and concordance of predictions were adjusted for the number of pathway locations. Accessory pathways were left-sided in 49, septal in 20 and right-sided in 31 children. Overall accuracy of prediction was 30-49% for the exact location and 61-68% including adjacent locations. Concordance between investigators varied between 41% and 86%. No algorithm was better at predicting septal pathways (accuracy 5-35%, improving to 40-78% including adjacent locations), but one was significantly worse. Predictive accuracy was 24-53% for the exact location of right-sided pathways (50-71% including adjacent locations) and 32-55% for the exact location of left-sided pathways (58-73% including adjacent locations). All algorithms were less accurate in our hands than in other authors' own assessment. None performed well in identifying midseptal or right anteroseptal accessory pathway locations.

  12. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    Science.gov (United States)

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.

  13. An automatic procedure for high-resolution earthquake locations: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    Science.gov (United States)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide

    2014-05-01

    The characterization of the geometry, kinematics and rheology of fault zones from seismological data depends on our capability to accurately locate the largest possible number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquake catalogues directly from continuous waveform data. We use seismograms recorded at about 60 seismic stations located both at the surface and at depth. The network covers an area of about 80x60 km with a mean inter-station distance of 6 km. These stations are part of a Near Fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure, managed by the INGV, is devoted to studying the earthquake preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation processes active along the Alto Tiberina fault (ATF) located in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of an active low-angle normal fault. The location procedure includes, among other modules: ii) an accurate picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first-motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single events in a 3D geological model (see Latorre et al., same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure now runs off-line with a delay of one week to real time. We are now implementing this procedure to obtain high-resolution double-difference earthquake locations in real time (DDRT). We show locations of ~30k low-magnitude earthquakes recorded during the past 4 years (2010-2013) of network operation, reaching a completeness magnitude of

  14. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio-wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already-located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be memory- and CPU-intensive in such cases, as is the time required for obtaining the estimated location of a new fingerprint. In this paper, we study, propose, and validate a way to select the locations of the training fingerprints which reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with better results. We compare against other systems in the literature and draw conclusions about the improvements obtained in our proposal. Moreover, some techniques such as filtering nonstable access points for improving accuracy are introduced, studied, and validated.
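
    A minimal room-level fingerprinting classifier of the kind compared in such studies can be written as a plain KNN vote over RSSI vectors; the access-point signatures and noise levels below are invented for the sketch.

```python
import numpy as np

def locate_room(rssi, fingerprints, room_labels, k=3):
    """Room-level fingerprinting: majority vote among the k training
    fingerprints nearest (Euclidean, in dBm space) to the observed RSSI."""
    dist = np.linalg.norm(fingerprints - rssi, axis=1)
    votes = [room_labels[i] for i in np.argsort(dist)[:k]]
    return max(set(votes), key=votes.count)
```

    The training-point selection studied in the paper amounts to choosing which rows of `fingerprints` are worth collecting; fewer, well-placed rows keep this vote cheap without hurting room-level accuracy.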

  15. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of real-time locating systems and SLAM technology based on a ROS robot. It proposes an improved particle filter locating algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by about 5/6, which directly removes the corresponding matching step from the robot's algorithm.

  16. An improved cut-and-solve algorithm for the single-source capacitated facility location problem

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Klose, Andreas; Nielsen, Lars Relund

    2018-01-01

    In this paper, we present an improved cut-and-solve algorithm for the single-source capacitated facility location problem. The algorithm consists of three phases. The first phase strengthens the integer program by a cutting plane algorithm to obtain a tight lower bound. The second phase uses a two-level local branching heuristic to find an upper bound, and if optimality has not yet been established, the third phase uses the cut-and-solve framework to close the optimality gap. Extensive computational results are reported, showing that the proposed algorithm runs 10–80 times faster on average compared

  17. Redundant and fault-tolerant algorithms for real-time measurement and control systems for weapon equipment.

    Science.gov (United States)

    Li, Dan; Hu, Xiaoguang

    2017-03-01

    Because of the high availability requirements of weapon equipment, an in-depth study has been conducted on the real-time fault tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and the relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion-sum-differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault tolerance, and the newly developed method can effectively improve system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Approximation algorithms for facility location problems with a special class of subadditive cost functions

    NARCIS (Netherlands)

    Gabor, A.F.; Ommeren, van J.C.W.

    2006-01-01

    In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present three facility location problems with stochastic demand and exponential servers or inventory, respectively. We present a (1+ε,1)-reduction of the facility

  19. Approximation algorithms for facility location problems with a special class of subadditive cost functions

    NARCIS (Netherlands)

    Gabor, Adriana F.; van Ommeren, Jan C.W.

    2006-01-01

    In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present three facility location problems with stochastic demand and exponential servers or inventory, respectively. We present a $(1+\varepsilon, 1)$-reduction of

  20. Approximation algorithms for facility location problems with discrete subadditive cost functions

    NARCIS (Netherlands)

    Gabor, A.F.; van Ommeren, Jan C.W.

    2005-01-01

    In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present two facility location problems with stochastic demand and exponential servers or inventory, respectively. We present a $(1+\epsilon,1)$-reduction of the

  1. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  2. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid

    2017-05-12

    The estimation of the angular location and range of a target is a joint optimization problem. In this work, low-complexity sequential and joint estimation algorithms are proposed that estimate these parameters by meticulously evaluating the phase of the received samples. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. It is shown that, by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean squared error (MSE) for the estimation of the angular location and a much lower run time compared to the conventional MUltiple SIgnal Classification (MUSIC) algorithm.

  3. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid; Jardak, Seifallah; Alouini, Mohamed-Slim

    2017-01-01

    The estimation of the angular location and range of a target is a joint optimization problem. In this work, low-complexity sequential and joint estimation algorithms are proposed that estimate these parameters by meticulously evaluating the phase of the received samples. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. It is shown that, by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean squared error (MSE) for the estimation of the angular location and a much lower run time compared to the conventional MUltiple SIgnal Classification (MUSIC) algorithm.
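
    The joint-estimation idea can be illustrated on a synthetic single-target snapshot: once the small phase terms are dropped, fast time encodes the beat frequency (hence range) and the antenna index encodes a spatial frequency proportional to the sine of the angle, so a single 2-D FFT peak recovers both. The half-wavelength array spacing and all numbers below are assumptions for the sketch, not the papers' parameters.

```python
import numpy as np

# Synthetic SIMO snapshot: N fast-time samples x M receive antennas,
# one target at beat-frequency bin 40 with sin(angle) = 0.25.
N, M = 256, 16
k_beat, sin_theta = 40, 0.25
n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
# Fast-time phase advances with the beat frequency (range); for a
# half-wavelength array the per-antenna phase step is pi*sin(theta).
s = (np.exp(1j * 2 * np.pi * k_beat * n / N)
     * np.exp(1j * np.pi * sin_theta * m))

# Joint estimation: the peak of a 2-D FFT gives both parameters at once.
F = np.fft.fft2(s)
i, j = np.unravel_index(np.argmax(np.abs(F)), F.shape)
est_beat_bin = i                          # range follows from this bin
est_sin_theta = 2 * np.fft.fftfreq(M)[j]  # spatial freq u = sin(theta)/2
```

    The sequential variant in the abstract would instead run a 1-D FFT across antennas first (one snapshot suffices for the angle), then a 1-D FFT in fast time for range.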

  4. Analysis of lightning fault detection, location and protection on short and long transmission lines using Real Time Digital Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Andre Luiz Pereira de [Siemens Ltda., Sao Paulo, SP (Brazil)], E-mail: andreluiz.oliveira@siemens.com

    2007-07-01

    The purpose of this paper is to present an analysis of lightning fault detection, location and protection using numeric distance relays applied to high-voltage transmission lines, more specifically the 500 kV transmission lines of CEMIG (a Brazilian energy utility) between the Vespasiano 2 - Neves 1 (short line, 23.9 km) and Vespasiano 2 - Mesquita (long line, 148.6 km) substations. The analysis was based on simulation results of numeric distance protective relays on power transmission lines, carried out from September 2 to 6, 2002, at Siemens AG's facilities (Erlangen, Germany) using a Real Time Digital Simulator (RTDS™). Several lightning fault simulations were performed under various conditions of the electrical power system in which the protective relays would be installed. The results present not only the lightning fault clearing times, but also the full functionality of a protection system, including correct detection, location and the other advantages that these modern protective devices bring to the power system. (author)

  5. Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs

    Directory of Open Access Journals (Sweden)

    Enrico Creaco

    2018-04-01

    Full Text Available The paper presents a comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves: at the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm is a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside both SA and GA to search for the optimal valve settings at the various time slots of the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge: for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable for large WDNs, for which the application of GA would be overly burdensome.
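
    The SA scheme reduces to a greedy loop once a hydraulic evaluator is available. A minimal sketch follows, with a toy leakage function standing in for the iterated-LP valve-setting optimization the paper embeds; the candidate names and leakage numbers are invented.

```python
def sequential_addition(candidates, n_valves, leakage_after):
    """Greedy SA: at each step, keep the valves already chosen and add
    the single new location that minimizes the resulting leakage."""
    chosen = []
    for _ in range(n_valves):
        remaining = [c for c in candidates if c not in chosen]
        best = min(remaining, key=lambda c: leakage_after(chosen + [c]))
        chosen.append(best)
    return chosen
```

    The abstract's observation falls out of this structure: because earlier picks are never revisited, overlapping valves (here A and B) can lock the greedy search out of combinations a GA would still explore.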

  6. Methods for locating ground faults and insulation degradation condition in energy conversion systems

    Science.gov (United States)

    Agamy, Mohamed; Elasser, Ahmed; Galbraith, Anthony William; Harfman Todorovic, Maja

    2015-08-11

    Methods for determining a ground fault or insulation degradation condition within energy conversion systems are described. A method for determining a ground fault within an energy conversion system may include, in part, a comparison of baseline waveform of differential current to a waveform of differential current during operation for a plurality of DC current carrying conductors in an energy conversion system. A method for determining insulation degradation within an energy conversion system may include, in part, a comparison of baseline frequency spectra of differential current to a frequency spectra of differential current transient at start-up for a plurality of DC current carrying conductors in an energy conversion system. In one embodiment, the energy conversion system may be a photovoltaic system.

  7. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    Science.gov (United States)

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed that takes the expected cost and the deviation value over the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented, combining an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The design of the encoding and the steps of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective: compared with the expected-value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  8. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed that takes the expected cost and the deviation value over the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented, combining an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The design of the encoding and the steps of the algorithm are described. Results of the example demonstrate that the model and algorithm are effective: compared with the expected-value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.
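
    The robust criterion described in both records, expected cost plus a deviation term over scenarios, can be sketched as below. The choice of standard deviation as the deviation measure and the weight `lam` are assumptions; the paper does not spell out its exact deviation formula here.

```python
import numpy as np

def robust_objective(scenario_costs, probs, lam=1.0):
    """Robust criterion: expected cost plus a penalty on the spread of
    cost across scenarios (std-dev used here as the deviation measure)."""
    c = np.asarray(scenario_costs, dtype=float)
    p = np.asarray(probs, dtype=float)
    expected = p @ c
    deviation = np.sqrt(p @ (c - expected) ** 2)
    return expected + lam * deviation
```

    Under this criterion, two plans with the same expected cost are ranked by how badly their disadvantageous scenarios turn out, which is exactly the effect the abstract reports.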

  9. An Endosymbiotic Evolutionary Algorithm for the Hub Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Ji Ung Sun

    2015-01-01

    Full Text Available We consider a capacitated hub location-routing problem (HLRP), which combines the hub location problem with multi-hub vehicle routing decisions. The HLRP not only determines the locations of the capacitated p-hubs within a set of potential hubs but also deals with the routes of the vehicles that meet the demands of customers. This problem is formulated as a 0-1 mixed integer programming model with the objective of minimizing the total cost, including routing cost, fixed hub cost, and fixed vehicle cost. As the HLRP is impractically demanding for large-sized problems, we develop a solution method based on the endosymbiotic evolutionary algorithm (EEA), which solves the hub location and vehicle routing problems simultaneously. The performance of the proposed algorithm is examined through a comparative study. The experimental results show that the proposed EEA can be a viable solution method for supply chain network planning.

  10. Fault Diagnosis of Plunger Pump in Truck Crane Based on Relevance Vector Machine with Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wenliao Du

    2013-01-01

    Full Text Available Promptly and accurately dealing with equipment breakdowns is very important for enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with the particle swarm optimization (PSO) algorithm is proposed for the plunger pump in a truck crane. The particle swarm optimization algorithm is utilized to determine the kernel width parameter of the kernel function in the RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump in a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models such as the back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM). The experimental results show that the PSO-RVM is superior to the first three classical models and has performance comparable to the PSO-SVM, with diagnostic accuracies as high as 99.17% and 99.58%, respectively. However, the number of relevance vectors is far fewer than that of support vectors, the former being about 1/12–1/3 of the latter, which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.
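
    The PSO step, searching a single kernel-width parameter, can be sketched generically. Below, a minimal particle swarm minimizes a stand-in validation-error surrogate; the surrogate, the search interval, and the PSO constants are all assumptions, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=0):
    """Minimal 1-D particle swarm: each particle is pulled toward its
    personal best position and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest = x.copy()
    pbest_f = np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1 = rng.random(n_particles)
        r2 = rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(xi) for xi in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest
```

    In the paper's setting, `f` would be the RVM cross-validation error as a function of the kernel width rather than an analytic surrogate.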

  11. Locating non-volcanic tremor along the San Andreas Fault using a multiple array source imaging technique

    Science.gov (United States)

    Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.

    2010-01-01

    Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is low-frequency earthquakes that are part of the tremor; locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array consisting of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along SAF. © 2010 The Authors, Geophysical Journal International © 2010 RAS.

  12. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A

    2011-01-01

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.

  13. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, M; Gu, F; Ball, A, E-mail: M.Ahmed@hud.ac.uk [Diagnostic Engineering Research Group, University of Huddersfield, HD1 3DH (United Kingdom)

    2011-07-19

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.

  14. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route discovery flooding to a cylindrical request zone, which reduces the routing overhead and decreases broadcast storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem. This makes WSNHA-LBAR more adaptable to the changes of the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.

  15. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    Science.gov (United States)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of a CDC in a supply chain management (SCM) network has become a matter of high concern these days. Present methods for CDC location have been based mainly on manual spreadsheet calculations aimed at achieving the minimum logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of present methods, and examines the suitability of this algorithm through a case study. The algorithm suggested by this study is based on the principle of optimization over the directed graph of the SCM model, and it utilizes the traditionally introduced minimum spanning tree (MST) and shortest-path methods, among others. The results of this study help to assess the suitability of an existing SCM network and could serve as a criterion in the decision-making process of building an optimal SCM network for future demand.

  16. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
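Otsu's between-class variance, the objective the flower pollination algorithm maximizes, is compact to state. A minimal sketch on a toy histogram (illustrative only; a metaheuristic replaces the exhaustive loop when several thresholds make brute force too expensive):

```python
def otsu_variance(hist, thresholds):
    """Between-class variance of a grayscale histogram split at the
    given sorted thresholds (the objective maximized by Otsu's method)."""
    total = sum(hist)
    bounds = [0] + [t + 1 for t in sorted(thresholds)] + [len(hist)]
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi]) / total          # class probability
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
        var += w * (mu - mu_total) ** 2       # weighted class separation
    return var

# Exhaustive search over a single threshold on a toy bimodal histogram.
hist = [5, 9, 8, 1, 0, 1, 7, 10, 6, 2]
best_t = max(range(len(hist) - 1), key=lambda t: otsu_variance(hist, [t]))
```

For this histogram the maximizing threshold falls in the valley between the two modes.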

  17. Faults detection approach using PCA and SOM algorithm in PMSG-WT system

    Directory of Open Access Journals (Sweden)

    Mohamed Lamine FADDA

    2016-07-01

    Full Text Available In this paper, a new approach for fault detection in the observable data of a wind turbine - permanent magnet synchronous generator (WT-PMSG) system is studied. The objective is to illustrate how the combination of SOM and PCA can build multi-local PCA models for fault detection in the WT-PMSG system. The performance of the suggested method for fault detection in the system data shows good results in simulation experiments.

  18. A Cubature-Principle-Assisted IMM-Adaptive UKF Algorithm for Maneuvering Target Tracking Caused by Sensor Faults

    Directory of Open Access Journals (Sweden)

    Huan Zhou

    2017-09-01

    Full Text Available Aimed at solving the problem of decreased filtering precision during maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. By dividing the IMM-UKF into two links, the algorithm introduces the cubature principle to approximate the probability density of the random variable after the interaction, by considering the external link of IMM-UKF, which constitutes the cubature-principle-assisted IMM method (CPIMM) for solving the non-Gaussian problem, and leads to an adaptive matrix to balance the contribution of the state. The algorithm provides filtering solutions by considering the internal link of IMM-UKF, which is called a new adaptive UKF algorithm (NAUKF), to address sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experiment results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves a very good tracking performance, and will be effective and applicable in the field of maneuvering target tracking.

  19. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to a calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.

  20. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains the offline information acquisition phase and the online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove the data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data will be matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate will be employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
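The final matching step of such a fingerprinting scheme can be illustrated with a plain nearest-fingerprint search. This is a simplified stand-in for the record's KPCA/APC/ML pipeline, and the radio map below is hypothetical:

```python
import math

# Hypothetical offline radio map: surveyed position -> mean RSS (dBm)
# observed from three access points at that position.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -42, -75],
    (0.0, 5.0): [-75, -72, -45],
}

def locate(rss_sample, fingerprints):
    """Return the surveyed position whose stored fingerprint is closest
    to the online sample in Euclidean RSS distance (a simple stand-in
    for the maximum-likelihood positioning step)."""
    return min(fingerprints,
               key=lambda pos: math.dist(rss_sample, fingerprints[pos]))

estimate = locate([-68, -45, -74], radio_map)
```

The sample above is strongest at the second AP, so it matches the fingerprint surveyed at (5.0, 0.0).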

  1. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

    Full Text Available With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains the offline information acquisition phase and the online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove the data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data will be matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate will be employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.

  2. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  3. Actuator Location and Voltages Optimization for Shape Control of Smart Beams Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Georgios E. Stavroulakis

    2013-10-01

    Full Text Available This paper presents a numerical study on optimal voltages and optimal placement of piezoelectric actuators for shape control of beam structures. A finite element model, based on Timoshenko beam theory, is developed to characterize the behavior of the structure and the actuators. This model accounts for the electromechanical coupling in the entire beam structure, due to the fact that the piezoelectric layers are treated as constituent parts of the entire structural system. A hybrid scheme is presented based on the great deluge and genetic algorithms. The hybrid algorithm is implemented to calculate the optimal locations and optimal values of the voltages, applied to the piezoelectric actuators glued to the structure, which minimize the error between the achieved and the desired shape. Results from numerical simulations demonstrate the capabilities and efficiency of the developed optimization algorithm in both clamped-free and clamped-clamped beam problems.

  4. Optimal Location and Sizing of UPQC in Distribution Networks Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Seyed Abbas Taher

    2012-01-01

    Full Text Available The differential evolution (DE) algorithm is used to determine the optimal location of a unified power quality conditioner (UPQC), considering its size, in radial distribution systems. The problem is formulated to find the optimum location of the UPQC based on an objective function (OF) defined for improving voltage and current profiles, reducing power loss and minimizing the investment costs, considering the OF's weighting factors. Hence, a steady-state model of the UPQC is derived to set in a forward/backward sweep load flow. Studies are performed on the two IEEE 33-bus and 69-bus standard distribution networks. Accuracy was evaluated by reapplying the procedures using both genetic (GA) and immune algorithms (IA). Comparative results indicate that DE is capable of offering a solution nearer the global optimum in minimizing the OF and reaching all the desired conditions than GA and IA.
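The record does not detail the DE variant used; a generic DE/rand/1/bin sketch on a toy objective (a sphere function standing in for the paper's composite OF) looks like this:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=1):
    """Minimal DE/rand/1/bin sketch; f maps a parameter list to a cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate three distinct donors, then binomial crossover.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:          # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy stand-in objective; the paper's OF instead combines voltage/current
# profile terms, power loss and investment cost with weighting factors.
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        [(-5.0, 5.0)] * 2)
```

On this smooth 2-D objective the population collapses onto the minimum at the origin within the budgeted generations.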

  5. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    Science.gov (United States)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    The receiver autonomous integrity monitoring (RAIM) is one of the most important parts in an avionic navigation system. Two problems need to be addressed to improve this system, namely, the degeneracy phenomenon and lack of samples for the standard particle filter (PF). However, the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.
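The chaotic-variable generation mentioned above can be sketched with a logistic map whose iterates are mapped onto the optimization interval; the parameters below are illustrative, not taken from the paper:

```python
def logistic_chaos(n, x0=0.7, r=4.0, lo=-1.0, hi=1.0):
    """Generate n chaotic values with the logistic map x <- r*x*(1-x)
    and map them from (0, 1) onto [lo, hi], as commonly done to seed or
    perturb particles in chaos-assisted PSO variants."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(lo + (hi - lo) * x)
    return xs

seq = logistic_chaos(100)
```

With r = 4 the map is fully chaotic, so the sequence covers the interval irregularly while remaining exactly reproducible from the seed value.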

  6. A Survey of Wireless Fair Queuing Algorithms with Location-Dependent Channel Errors

    Directory of Open Access Journals (Sweden)

    Anca VARGATU

    2011-01-01

    Full Text Available The rapid development of wireless networks has brought more and more attention to topics related to the fair allocation of resources, the creation of suitable algorithms that take into account the special characteristics of the wireless environment, and the assurance of fair access to the transmission channel with bounded delay and guaranteed throughput. Fair allocation of resources in wireless networks poses significant challenges because of errors that occur only in these networks, such as location-dependent and bursty channel errors. In wireless networks it frequently happens, because of radio interference, that a user experiencing bad radio conditions during a period of time receives no resources in that period. This paper analyzes some resource allocation algorithms for wireless networks with location-dependent errors, specifying the basic idea of each algorithm and the way it works. The analyzed fair queuing algorithms differ in how they treat the following aspects: how to select the flows which should receive additional services, how to allocate these resources, what proportion is received by error-free flows, and how the flows affected by errors are compensated.

  7. Multi-Stage Feature Selection by Using Genetic Algorithms for Fault Diagnosis in Gearboxes Based on Vibration Signal

    Directory of Open Access Journals (Sweden)

    Mariela Cerrada

    2015-09-01

    Full Text Available There are growing demands for condition-based monitoring of gearboxes, and techniques to improve the reliability, effectiveness and accuracy of fault diagnosis are considered valuable contributions. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance in the diagnosis system. The main aim of this research is to propose a multi-stage feature selection mechanism for selecting the best set of condition parameters in the time, frequency and time-frequency domains, which are extracted from vibration signals for fault diagnosis purposes in gearboxes. The selection is based on genetic algorithms, proposing in each stage a new subset of the best features regarding the classifier performance in a supervised environment. The selected features are augmented at each stage and used as input for a neural network classifier in the next step, while a new subset of feature candidates is treated by the selection process. As a result, the inherent exploration and exploitation of the genetic algorithms for finding the best solutions of the selection problem are locally focused. The approach is tested on a dataset from a real test bed with several fault classes under different running conditions of load and velocity. The model performance for diagnosis is over 98%.
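A single selection stage can be sketched as a bitmask GA. The relevance-based fitness below is a toy stand-in for the paper's supervised classifier performance, and all numbers are illustrative:

```python
import random

rng = random.Random(42)

# Toy per-feature relevance scores; in the paper, fitness is instead the
# accuracy of a neural-network classifier on the selected subset.
relevance = [0.9, 0.05, 0.8, 0.02, 0.7, 0.01]
PENALTY = 0.1  # cost per selected feature, discouraging large subsets

def fitness(mask):
    return sum(r for r, m in zip(relevance, mask) if m) - PENALTY * sum(mask)

def evolve(pop_size=24, generations=60, p_mut=0.15):
    """Elitist GA over feature bitmasks: tournament selection, uniform
    crossover, per-bit mutation."""
    n = len(relevance)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                                   # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)     # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)             # keep best-ever
    return best

best_mask = evolve()
```

With this toy fitness the GA is drawn toward masks selecting the strongly relevant features while the penalty discards the weak ones.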

  8. Using genetic algorithms to optimise current and future health planning - the example of ambulance locations

    Directory of Open Access Journals (Sweden)

    Suzuki Hiroshi

    2010-01-01

    Full Text Available Abstract Background Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics. This is increasing ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Results Future EMS demands were predicted to increase by 2030 using the model (R2 = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising ambulance station locations reduced the average response time by 57 seconds. Current and predicted future EMS demand at modelled locations were calculated and compared. Conclusions The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated to population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
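The underlying location problem can be illustrated with a greedy p-median baseline: pick k station sites minimizing the summed distance from each EMS case to its nearest chosen site. The paper itself uses a modified grouping genetic algorithm, and the coordinates below are hypothetical:

```python
import math

def greedy_p_median(cases, candidates, k):
    """Greedily add k sites, each time choosing the candidate that most
    reduces total case-to-nearest-site distance (a simple baseline; a GA
    searches the same objective globally)."""
    chosen = []
    for _ in range(k):
        def total(extra):
            sites = chosen + [extra]
            return sum(min(math.dist(c, s) for s in sites) for c in cases)
        chosen.append(min((c for c in candidates if c not in chosen),
                          key=total))
    return chosen

# Hypothetical EMS cases in two geographic clusters, with three
# candidate station sites.
cases = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
candidates = [(0, 0), (1, 1), (10, 10)]
stations = greedy_p_median(cases, candidates, k=2)
```

The greedy pass places one station near each cluster of cases.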

  9. An improved fault detection classification and location scheme based on wavelet transform and artificial neural network for six phase transmission line using single end data only.

    Science.gov (United States)

    Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit

    2015-01-01

    Restrictions on right of way and increasing power demand have boosted the development of six phase transmission. It offers a viable alternative for transmitting more power without major modification of the existing structure of the three phase double circuit transmission system. In spite of these advantages, the low acceptance of the six phase system is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six phase lines using single end data only. The standard deviations of the approximate coefficients of the voltage and current signals obtained using the discrete wavelet transform are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance and fault inception angle. The variation in power system parameters, viz. short circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it ideal for real-time implementation.

  10. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults

    OpenAIRE

    Rui Sun; Qi Cheng; Guanyu Wang; Washington Yotto Ochieng

    2017-01-01

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs’ flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in ...

  11. Fault Diagnosis System of Induction Motors Based on Neural Network and Genetic Algorithm Using Stator Current Signals

    Directory of Open Access Journals (Sweden)

    Tian Han

    2006-01-01

    Full Text Available This paper proposes an online fault diagnosis system for induction motors through the combination of discrete wavelet transform (DWT, feature extraction, genetic algorithm (GA, and neural network (ANN techniques. The wavelet transform improves the signal-to-noise ratio during a preprocessing. Features are extracted from motor stator current, while reducing data transfers and making online application available. GA is used to select the most significant features from the whole feature database and optimize the ANN structure parameter. Optimized ANN is trained and tested by the selected features of the measurement data of stator current. The combination of advanced techniques reduces the learning time and increases the diagnosis accuracy. The efficiency of the proposed system is demonstrated through motor faults of electrical and mechanical origins on the induction motors. The results of the test indicate that the proposed system is promising for the real-time application.
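The DWT preprocessing step can be illustrated with a single-level Haar transform; this is a minimal stand-in, since the record does not specify which wavelet family the system uses:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns
    (approximation, detail) coefficient lists. The orthonormal 1/sqrt(2)
    scaling splits the signal energy exactly between the two bands."""
    s = math.sqrt(2.0)
    pairs = list(zip(signal[0::2], signal[1::2]))   # even length assumed
    approx = [(a + b) / s for a, b in pairs]        # low-pass band
    detail = [(a - b) / s for a, b in pairs]        # high-pass band
    return approx, detail

# Toy stand-in for a windowed stator-current sample.
x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
cA, cD = haar_dwt(x)
```

Statistics of `cA`/`cD` (energy, standard deviation, etc.) are the kind of compact features such a pipeline would feed to the GA/ANN stages.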

  12. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    Science.gov (United States)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots and other noise, and even the gray-level transition of the images is not obvious, so general iris location methods are unable to localize the iris. A method of iris location based on the Canny operator and the gradient Hough transform is proposed. Firstly, the images are pre-processed; then, by calculating the gradient information of the images, the inner and outer edges of the iris are coarsely positioned using the Canny operator; finally, the gradient Hough transform is applied to achieve precise localization of the inner and outer edges of the iris. The experimental results show that our algorithm can localize the inner and outer edges of the iris well; it has strong anti-interference ability, greatly reduces the location time, and has higher accuracy and stability.
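The voting idea behind a Hough transform for circles can be sketched on synthetic edge points; this toy omits the gradient-direction pruning that makes the paper's gradient Hough variant faster:

```python
import math
from collections import Counter

def hough_circles(edge_points, radii, angle_step=5):
    """Accumulate votes in quantized (cx, cy, r) space: each edge point
    votes for every center that would place it on a circle of radius r,
    and the most-voted cell is returned."""
    votes = Counter()
    for x, y in edge_points:
        for r in radii:
            for deg in range(0, 360, angle_step):
                t = math.radians(deg)
                cx = round(x - r * math.cos(t))
                cy = round(y - r * math.sin(t))
                votes[(cx, cy, r)] += 1
    return votes.most_common(1)[0][0]

# Synthetic "edge map": points on a circle centered at (20, 15), radius 6,
# standing in for a Canny edge image of an iris boundary.
edges = [(round(20 + 6 * math.cos(math.radians(a))),
          round(15 + 6 * math.sin(math.radians(a))))
         for a in range(0, 360, 15)]
cx, cy, r = hough_circles(edges, radii=[4, 5, 6, 7])
```

The vote peak recovers the circle parameters up to the quantization of the accumulator.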

  13. A hybrid algorithm for stochastic single-source capacitated facility location problem with service level requirements

    Directory of Open Access Journals (Sweden)

    Hosseinali Salemi

    2016-04-01

    Full Text Available Facility location models are observed in many diverse areas such as communication networks, transportation, and distribution systems planning. They play a significant role in supply chain and operations management and are one of the main well-known topics in the strategic agenda of contemporary manufacturing and service companies, with long-lasting effects. We define a new approach for solving the stochastic single-source capacitated facility location problem (SSSCFLP). Customers with stochastic demand are assigned to a set of capacitated facilities that are selected to serve them. It is demonstrated that the problem can be transformed into the deterministic Single Source Capacitated Facility Location Problem (SSCFLP) for a Poisson demand distribution. A hybrid algorithm which combines a Lagrangian heuristic with an adjusted mixture of Ant Colony and Genetic optimization is proposed to find lower and upper bounds for this problem. Computational results on various instances with distinct properties indicate that the proposed solving approach is efficient.

  14. Validation of algorithm used for location of electrodes in CT images

    International Nuclear Information System (INIS)

    Bustos, J; Graffigna, J P; Isoardi, R; Gómez, M E; Romo, R

    2013-01-01

    A noninvasive technique has been implemented to detect and delineate the focus of electric discharge in patients with mono-focal epilepsy. For the detection of these sources, an electroencephalogram (EEG) with a 128-electrode cap is used. With the EEG data and the electrode positions, it is possible to locate this focus on MR volumes. The technique locates the electrodes on CT volumes using image processing algorithms to obtain descriptors of the electrodes, such as the centroid, which determines their position in space. Finally these points are transformed into the coordinate space of MR through a registration for a better understanding by the physician. Due to the medical implications of this technique, it is of utmost importance to validate the results of the detection of electrode coordinates. To that end, this paper presents a comparison between the actual values measured physically (measures including electrode size and spatial location) and the values obtained in the processing of CT and MR images

  15. A multi-objective location routing problem using imperialist competitive algorithm

    Directory of Open Access Journals (Sweden)

    Amir Mohammad Golmohammadi

    2016-06-01

    Full Text Available Nowadays, most manufacturing units try to optimize the location of their warehouses and the routing of their depot vehicles in order to transport goods at optimum cost. Needless to say, the locations of the required warehouses influence the performance of vehicle routing. In this paper, a mathematical programming model to optimize storage location and vehicle routing is presented. The first objective function of the model minimizes the total cost associated with transportation and storage, and the second objective function minimizes the difference in the distances traveled by vehicles. The study uses the Imperialist Competitive Algorithm (ICA) to solve the resulting problems in different sizes. The preliminary results indicate that the proposed approach performs better than the NSGA-II and PAES methods in terms of the Quality and Spacing metrics.

  16. Seismic Experiment at North Arizona To Locate Washington Fault - 3D Data Interpolation

    KAUST Repository

    Hanafy, Sherif M.

    2008-10-01

    The recorded data are interpolated using the sinc technique to create the following two data sets. 1. Data Set #1: Here, we interpolated only in the receiver direction to regularize the receiver interval to 1 m; however, the source locations are the same as in the original data (2 and 4 m source intervals). The data now contain 6 lines; each line has 121 receivers, with a total of 240 shot gathers. 2. Data Set #2: Here, we used the result from the previous step and interpolated only in the shot direction to regularize the shot interval to 1 m. Now both shots and receivers have a 1 m interval. The data contain 6 lines; each line has 121 receivers, with a total of 726 shot gathers.
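The sinc (Whittaker-Shannon) interpolation used to regularize the spacing can be sketched for a single position along a trace; the signal below is synthetic and only stands in for the recorded data:

```python
import math

def sinc_interpolate(samples, dx, x):
    """Whittaker-Shannon interpolation of uniformly spaced samples
    (spacing dx, positions n*dx) evaluated at an arbitrary position x.
    Exact for band-limited signals with an infinite sample train;
    truncation to a finite window introduces a small error."""
    def sinc(u):
        return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return sum(s * sinc(x / dx - n) for n, s in enumerate(samples))

# A band-limited synthetic trace sampled every 2 m, interpolated onto a
# 1 m grid point between two samples (as done per receiver/shot line).
dx = 2.0
trace = [math.sin(2 * math.pi * 0.05 * n * dx) for n in range(64)]
value = sinc_interpolate(trace, dx, 63.0)  # between samples at 62 m and 64 m
```

Away from the window edges the interpolated value is very close to the true signal at the new grid position.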

  17. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    Science.gov (United States)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of the areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small quantity of large SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for a majority of events. For such hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of location of event and its origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated to time and space to determine hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to small aperture, a minimum aperture threshold is employed. The algorithm refines location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic

  18. The 2012 Emilia seismic sequence (Northern Italy): Imaging the thrust fault system by accurate aftershock location

    Science.gov (United States)

    Govoni, Aladino; Marchetti, Alessandro; De Gori, Pasquale; Di Bona, Massimo; Lucente, Francesco Pio; Improta, Luigi; Chiarabba, Claudio; Nardi, Anna; Margheriti, Lucia; Agostinetti, Nicola Piana; Di Giovambattista, Rita; Latorre, Diana; Anselmi, Mario; Ciaccio, Maria Grazia; Moretti, Milena; Castellano, Corrado; Piccinini, Davide

    2014-05-01

    Starting from late May 2012, the Emilia region (Northern Italy) was severely shaken by an intense seismic sequence, originated from a ML 5.9 earthquake on May 20th, at a hypocentral depth of 6.3 km, with thrust-type focal mechanism. In the following days, the seismic rate remained high, counting 50 ML ≥ 2.0 earthquakes a day, on average. Seismicity spreads along a 30 km east-west elongated area, in the Po river alluvial plain, in the nearby of the cities Ferrara and Modena. Nine days after the first shock, another destructive thrust-type earthquake (ML 5.8) hit the area to the west, causing further damage and fatalities. Aftershocks following this second destructive event extended along the same east-westerly trend for further 20 km to the west, thus illuminating an area of about 50 km in length, on the whole. After the first shock struck, on May 20th, a dense network of temporary seismic stations, in addition to the permanent ones, was deployed in the meizoseismal area, leading to a sensible improvement of the earthquake monitoring capability there. A combined dataset, including three-component seismic waveforms recorded by both permanent and temporary stations, has been analyzed in order to obtain an appropriate 1-D velocity model for earthquake location in the study area. Here we describe the main seismological characteristics of this seismic sequence and, relying on refined earthquakes location, we make inferences on the geometry of the thrust system responsible for the two strongest shocks.

  19. A Location-Aware Service Deployment Algorithm Based on K-Means for Cloudlets

    Directory of Open Access Journals (Sweden)

    Tyng-Yeu Liang

    2017-01-01

    Full Text Available The cloudlet was recently proposed to push data centers towards network edges to reduce the network latency of delivering cloud services to mobile devices. Because of user mobility, it is necessary to deploy and hand off services anytime, anywhere to achieve minimal network latency for users’ service requests. However, the cost of this solution is usually too high for service providers and is not effective for resource exploitation. To resolve this problem, we propose a location-aware service deployment algorithm based on K-means for cloudlets in this paper. Simply speaking, the proposed algorithm divides mobile devices into a number of device clusters according to the geographical location of the mobile devices and then deploys service instances onto the edge cloud servers nearest to the centers of the device clusters. Our performance evaluation has shown that the proposed algorithm can effectively reduce not only the network latency of edge cloud services but also the number of service instances used for satisfying the condition of tolerable network latency.
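The clustering step can be sketched with plain Lloyd's K-means; the device coordinates and initial centers below are hypothetical:

```python
import math

def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm: assign each point to its nearest center,
    then move every center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
                   else centers[i]          # keep an empty cluster's center
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical device coordinates forming two geographic groups; a
# service instance would be deployed on the edge server nearest each center.
devices = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
centers, clusters = kmeans(devices, centers=[(0, 0), (5, 5)])
```

With well-separated groups the centers converge to the two cluster means after the first iteration.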

  20. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  1. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin; Dutta, Rishabh; Jonsson, Sigurjon

    2015-01-01

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  2. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    International Nuclear Information System (INIS)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B

    2006-01-01

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guide Vehicle (AGV) and a single base-station, and establishes the ultrasonic spread-spectrum distance-measurement Time Delay Estimation (TDE) model. Based on this model, an envelope-correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence in the received signal is the same as that in the reference signal, a sharp correlation value appears in their envelope correlation function after processing by the above algorithm; otherwise, no prominent correlation value appears. Thus, the AGV can easily identify each base-station.

  3. Research on Single Base-Station Distance Estimation Algorithm in Quasi-GPS Ultrasonic Location System

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, X C; Su, S J; Wang, Y K; Du, J B [Instrument Department, College of Mechatronics Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)

    2006-10-15

    In order to identify each base-station in a quasi-GPS ultrasonic location system, a unique pseudo-random code is assigned to each base-station. This article primarily studies the distance estimation problem between an Autonomous Guide Vehicle (AGV) and a single base-station, and establishes the ultrasonic spread-spectrum distance-measurement Time Delay Estimation (TDE) model. Based on this model, an envelope-correlation fast TDE algorithm based on the FFT is presented and analyzed. Experiments show that when the m-sequence in the received signal is the same as that in the reference signal, a sharp correlation value appears in their envelope correlation function after processing by the above algorithm; otherwise, no prominent correlation value appears. Thus, the AGV can easily identify each base-station.
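
The FFT-based correlation step can be sketched as follows; this is a simplified illustration using a random ±1 code in place of a true m-sequence and plain (not envelope) circular correlation, with all signal parameters invented:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=127)    # stand-in for a base-station's m-sequence
true_delay = 37                             # propagation delay, in samples
received = np.zeros(512)
received[true_delay:true_delay + code.size] += code
received += 0.2 * rng.standard_normal(received.size)   # measurement noise

# Circular cross-correlation via FFT: corr[k] = sum_m received[m+k] * ref[m],
# so the peak lands at k = true_delay when the codes match.
ref = np.zeros(512)
ref[:code.size] = code
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(ref))).real
est_delay = int(np.argmax(np.abs(corr)))
```

With a mismatched code, the correlation stays near the noise floor, which is how the AGV distinguishes base-stations.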

  4. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    Science.gov (United States)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by the biomedical engineering used in early-stage breast cancer detection, we investigated the MUltiple SIgnal Classification (MUSIC) algorithm for locating small anomalies from S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large relative to the wavelength, and linked the structure of the left-singular vectors to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
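
A minimal numerical sketch of MUSIC location searching under the Born approximation is given below; the antenna layout, wavenumber, Green's function, and anomaly position are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Geometry: a few antennas on a line; one small anomaly at a grid node.
k = 2 * np.pi                                      # wavenumber (unit wavelength)
antennas = np.array([[x, 0.0] for x in np.linspace(-2, 2, 8)])
anomaly = np.array([0.5, 1.5])

def green(p, q):
    """2-D far-field-style Green's function (illustrative form only)."""
    r = np.linalg.norm(p - q)
    return np.exp(1j * k * r) / np.sqrt(r)

# Born-approximated multi-static response matrix built from the "S-parameters".
g = np.array([green(a, anomaly) for a in antennas])
msr = np.outer(g, g)

U, s, _ = np.linalg.svd(msr)
signal_dim = int(np.sum(s > s[0] * 1e-6))          # one point anomaly -> rank 1
Un = U[:, signal_dim:]                             # noise subspace

def image(z):
    """MUSIC imaging function: peaks where the steering vector is orthogonal
    to the noise subspace, i.e. at the anomaly location."""
    a = np.array([green(ant, z) for ant in antennas])
    a /= np.linalg.norm(a)
    return 1.0 / (np.linalg.norm(Un.conj().T @ a) + 1e-12)

grid = [np.array([x, y]) for x in np.linspace(-1, 1, 21)
                         for y in np.linspace(1, 2, 21)]
peak = max(grid, key=image)
```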

  5. A set of particle locating algorithms not requiring face belonging to cell connectivity data

    Science.gov (United States)

    Sani, M.; Saidi, M. S.

    2009-10-01

    Existing efficient directed particle locating (host determination) algorithms rely on the face-belonging-to-cell relationship (F2C) to find the next cell on the search path and the cell in which the target is located. Recently, finite volume methods have been devised that do not need F2C, so existing search algorithms are not directly applicable (unless F2C is included). F2C is a major memory burden in grid description; if the memory benefit of these finite volume methods is to be realized, new search algorithms must be devised. In this work, two new algorithms (line of sight and closest cell) are proposed that do not need F2C. They use the structure of the sparse coefficient matrix involved (stored, for example, in the compressed row storage, CRS, format) to determine the next cell. Since F2C is not available, testing a cell for the presence of the target is not possible, so the proposed methods may wrongly mark a nearby cell as the host in some rare cases. The importance of finding the correct host cell (rather than wrongly hitting its neighbor) is addressed. Quantitative measures are introduced to assess the efficiency of the methods, and a comparison is made for grid types typical of computational fluid dynamics. In this comparison, the closest cell method, with a lower computational cost than the line-of-sight family and the existing efficient maximum-dot-product methods, gives very good performance with tolerable and harmless wrong hits. If more accuracy is needed, the method of approximate line of sight then closest cell (LS-A-CC) is recommended.
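
The closest-cell idea can be sketched as a walk on the sparsity graph of the coefficient matrix (CRS `indptr`/`indices` arrays); the 1-D chain grid below is an invented toy case, and the wrong-hit issue discussed in the abstract is not handled:

```python
import math

def locate_closest_cell(target, centroids, indptr, indices, start=0):
    """Walk the matrix sparsity graph: repeatedly move to the neighbour whose
    centroid is closer to the target; stop when no neighbour improves."""
    cell = start
    while True:
        best, best_d = cell, math.dist(centroids[cell], target)
        for nb in indices[indptr[cell]:indptr[cell + 1]]:
            d = math.dist(centroids[nb], target)
            if d < best_d:
                best, best_d = nb, d
        if best == cell:
            return cell
        cell = best

# A 1-D chain of 5 cells with centroids at x = 0..4; the CRS pattern of a
# tridiagonal coefficient matrix encodes the neighbour (and self) couplings.
centroids = [(float(i), 0.0) for i in range(5)]
indptr = [0, 2, 5, 8, 11, 13]
indices = [0, 1,  0, 1, 2,  1, 2, 3,  2, 3, 4,  3, 4]
host = locate_closest_cell((3.2, 0.0), centroids, indptr, indices, start=0)
```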

  6. Geophysical and isotopic mapping of preexisting crustal structures that influenced the location and development of the San Jacinto fault zone, southern California

    Science.gov (United States)

    Langenheim, V.E.; Jachens, R.C.; Morton, D.M.; Kistler, R.W.; Matti, J.C.

    2004-01-01

    We examine the role of preexisting crustal structure within the Peninsular Ranges batholith on determining the location of the San Jacinto fault zone by analysis of geophysical anomalies and initial strontium ratio data. A 1000-km-long boundary within the Peninsular Ranges batholith, separating relatively mafic, dense, and magnetic rocks of the western Peninsular Ranges batholith from the more felsic, less dense, and weakly magnetic rocks of the eastern Peninsular Ranges batholith, strikes north-northwest toward the San Jacinto fault zone. Modeling of the gravity and magnetic field anomalies caused by this boundary indicates that it extends to depths of at least 20 km. The anomalies do not cross the San Jacinto fault zone, but instead trend northwesterly and coincide with the fault zone. A 75-km-long gradient in initial strontium ratios (Sri) in the eastern Peninsular Ranges batholith coincides with the San Jacinto fault zone. Here rocks east of the fault are characterized by Sri greater than 0.706, indicating a source of largely continental crust, sedimentary materials, or different lithosphere. We argue that the physical property contrast produced by the Peninsular Ranges batholith boundary provided a mechanically favorable path for the San Jacinto fault zone, bypassing the San Gorgonio structural knot as slip was transferred from the San Andreas fault 1.0-1.5 Ma. Two historical M6.7 earthquakes may have nucleated along the Peninsular Ranges batholith discontinuity in San Jacinto Valley, suggesting that Peninsular Ranges batholith crustal structure may continue to affect how strain is accommodated along the San Jacinto fault zone. © 2004 Geological Society of America.

  7. A rapid place name locating algorithm based on ontology qualitative retrieval, ranking and recommendation

    Science.gov (United States)

    Fan, Hong; Zhu, Anfeng; Zhang, Weixia

    2015-12-01

    In order to rapidly locate 12315 telephone complaints expressed in natural language, a semantic retrieval framework is proposed based on natural language parsing and place-name ontology reasoning. Within this framework, a search-result ranking and recommendation algorithm is proposed that considers both place-name conceptual similarity and spatial geometric-relation similarity. Experiments show that this method helps the operator quickly locate 12315 complaints, increasing customer satisfaction with the industry and commerce administration.
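
A toy version of such a ranking step might blend the two similarities with fixed weights; the weights, candidate names, and similarity scores below are invented for illustration:

```python
def rank_candidates(candidates, w_concept=0.6, w_spatial=0.4):
    """Score each candidate place by a weighted blend of conceptual similarity
    (from the ontology) and spatial-relation similarity, then sort descending."""
    scored = [(w_concept * c + w_spatial * s, name) for name, c, s in candidates]
    return [name for _, name in sorted(scored, reverse=True)]

# (name, conceptual similarity, spatial similarity) -- illustrative values only.
candidates = [("market on Main St", 0.9, 0.8),
              ("Main St pharmacy", 0.9, 0.3),
              ("east-side market", 0.4, 0.7)]
ranking = rank_candidates(candidates)
```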

  8. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    Science.gov (United States)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, resulting in great economic loss for the business. Current methods to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are used to analyze and classify thermal faults and then make efficient energy-saving thermal management decisions such as job migration. For larger feature spaces, principal component analysis is employed to reduce the feature dimensions while maintaining high processing speed without losing fault feature information. Finally, the feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management: it can improve air-conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.

  9. Genetic algorithm based on optimization of neural network structure for fault diagnosis of the clutch retainer mechanism of MF 285 tractor

    Directory of Open Access Journals (Sweden)

    S. F Mousavi

    2016-09-01

    Full Text Available Introduction The diagnosis of agricultural machinery faults must be performed in a timely manner so that agricultural operations can be completed on schedule; to optimize the accuracy and integrity of a system, proper monitoring and fault diagnosis of the rotating parts is required. With the development of fault diagnosis methods for rotating equipment, especially for bearing failure, the security, performance and availability of machines have been increasing. In general, fault detection follows a specific procedure that starts with data acquisition and continues with feature extraction, after which failure of the machine can be detected. Several practical methods have been introduced for fault detection in the rotating parts of machinery. The review of the literature shows that both Artificial Neural Networks (ANN) and Support Vector Machines (SVM) have been used for this purpose. However, the results show that SVM is more effective than Artificial Neural Networks in fault detection for such machinery. In some smart detection systems, incorporating an optimization method such as a Genetic Algorithm into the neural network model can improve the fault detection procedure; the fault detection performance of neural networks may thereby become comparable with that of the Support Vector Machine. In this study, the Genetic Algorithm (GA) was used to optimize the structure of Artificial Neural Networks (ANN) for fault detection of the clutch retainer mechanism of the Massey Ferguson 285 tractor. Materials and Methods The test rig consists of electromechanical parts including the clutch retainer mechanism of the Massey Ferguson 285 tractor, a supporting shaft, a single-phase electric motor, a loading mechanism to model the load of the tractor clutch and the corresponding power train gears.
The data acquisition section consists of a

  10. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
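
A double-ended scheme of this kind reduces, in the idealized case, to locating the fault from the arrival-time difference between the clock-synchronized units; the line length, surge speed, and timings below are invented, and the patent's noise monitoring and filtering are not modeled:

```python
def fault_distance(line_length_km, wave_speed_km_s, t_master_s, t_remote_s):
    """Double-ended principle: the fault surge reaches each unit after a travel
    time proportional to its distance, so the (clock-synchronized) arrival-time
    difference fixes the fault position, measured here from the master unit."""
    dt = t_master_s - t_remote_s
    return (line_length_km + wave_speed_km_s * dt) / 2.0

# Invented example: 100 km line, ~2.9e5 km/s surge speed, fault 30 km from master.
x = fault_distance(100.0, 2.9e5, 30.0 / 2.9e5, 70.0 / 2.9e5)
```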

  11. Algorithm for locating the extremum of a multi-dimensional constrained function and its application to the PPPL Hybrid Study

    International Nuclear Information System (INIS)

    Bathke, C.

    1978-03-01

    A description is presented of a general algorithm for locating the extremum of a multi-dimensional constrained function. The algorithm employs a series of techniques dominated by random shrinkage, steepest descent, and adaptive creeping. A discussion follows of the algorithm's application to a ''real world'' problem: the optimization of the price of electricity, P/sub eh/, from a hybrid fusion-fission reactor. On the basis of comparisons with other optimization schemes surveyed, the algorithm is concluded to yield a good approximation to the location of a function's optimum.
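
The random-shrinkage component might be sketched as below (the steepest-descent and adaptive-creeping stages are omitted, and constraints are reduced to simple bounds); all parameters and the test function are illustrative:

```python
import random

def shrinking_random_search(f, bounds, shrink=0.7, rounds=40, samples=60, seed=2):
    """Random-shrinkage sketch: sample the box, keep the best point seen,
    shrink the box around it, and repeat."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best = None
    for _ in range(rounds):
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            if best is None or f(x) < f(best):
                best = x
        # Shrink the search box around the incumbent, clamped to the bounds.
        half = [shrink * (h - l) / 2 for l, h in zip(lo, hi)]
        lo = [max(b[0], xb - hw) for b, xb, hw in zip(bounds, best, half)]
        hi = [min(b[1], xb + hw) for b, xb, hw in zip(bounds, best, half)]
    return best

# Minimize a smooth 2-D bowl whose minimum (1, -2) lies inside the box.
best = shrinking_random_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                               [(-5, 5), (-5, 5)])
```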

  12. An Immune Cooperative Particle Swarm Optimization Algorithm for Fault-Tolerant Routing Optimization in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yifan Hu

    2012-01-01

    Full Text Available The fault-tolerant routing problem is an important consideration in the design of heterogeneous wireless sensor network (H-WSN) applications and has recently been attracting growing research interest. In order to maintain k disjoint communication paths from source sensors to the macronodes, we present a hybrid routing scheme and model in which multiple paths are calculated and maintained in advance, and alternate paths are created once the previous routing is broken. Then, we propose an immune cooperative particle swarm optimization algorithm (ICPSOA) for this model to provide fast routing recovery and reconstruct the network topology upon path failure in H-WSNs. In the ICPSOA, the mutation direction of each particle is determined by a multi-swarm evolution equation, and particle diversity is improved by an immune mechanism, which enhances the global search capacity and improves the convergence rate of the algorithm. We then validate this theoretical model with simulation results. The results indicate that the ICPSOA-based fault-tolerant routing protocol outperforms several other protocols due to its fast routing recovery mechanism, reliable communications, and prolonging of the lifetime of WSNs.

  13. SmartFix: Indoor Locating Optimization Algorithm for Energy-Constrained Wearable Devices

    Directory of Open Access Journals (Sweden)

    Xiaoliang Wang

    2017-01-01

    Full Text Available Indoor localization technology based on Wi-Fi has long been a hot research topic in the past decade. Despite numerous solutions, new challenges have arisen along with the trend toward smart homes and wearable computing. For example, power efficiency needs to be significantly improved for resource-constrained wearable devices such as smart watches and wristbands. For a Wi-Fi-based locating system, most of the energy consumption can be attributed to the real-time radio scan; however, simply reducing radio data collection causes a serious loss of locating accuracy because of unstable Wi-Fi signals. In this paper, we present SmartFix, an optimization algorithm for indoor locating based on Wi-Fi RSS. SmartFix utilizes user motion features, extracts characteristic values from the history trajectory, and corrects the deviation caused by unstable Wi-Fi signals. We implemented a prototype of SmartFix both on a Moto 360 2nd-generation smartwatch and on an HTC One smartphone, and conducted experiments both in a large open area and in an office hall. Experimental results demonstrate that the average locating error is less than 2 meters in more than 80% of cases, and energy consumption is only 30% of that of the Wi-Fi fingerprinting method under the same experimental circumstances.
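
One way such a history-based correction can work is a complementary filter that blends a dead-reckoned prediction from motion sensors with each raw Wi-Fi fix; this is an invented sketch, not SmartFix's actual update rule, and all positions are illustrative:

```python
def fuse(track, wifi_fix, step_vector, alpha=0.7):
    """Blend the dead-reckoned prediction (last fix + step from motion sensors)
    with the raw Wi-Fi fix; alpha weights the smoother motion estimate."""
    px, py = track[-1]
    pred = (px + step_vector[0], py + step_vector[1])
    fused = (alpha * pred[0] + (1 - alpha) * wifi_fix[0],
             alpha * pred[1] + (1 - alpha) * wifi_fix[1])
    track.append(fused)
    return fused

track = [(0.0, 0.0)]
# Walking east 1 m per step while the Wi-Fi fixes jitter around the true path
# (true positions are (1,0), (2,0), (3,0)).
fixes = [(1.3, 0.4), (1.8, -0.5), (3.2, 0.3)]
for fix in fixes:
    fuse(track, fix, (1.0, 0.0))
```

The fused track stays much closer to the true straight-line path than the raw fixes do.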

  14. An algorithm for leak locating using coupled vibration of pipe-water

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2004-01-01

    Leak noise is a good source for identifying the exact location of a leak in underground water pipelines. A water leak generates broadband sound at the leak location, and its propagation along water pipelines is no longer non-dispersive because of the surrounding pipe and soil. Moreover, the need for long-range detection of the leak location makes it necessary to use low-frequency acoustic waves rather than high-frequency ones. Acoustic wave propagation coupled with the surrounding boundaries, including cast iron pipes, is theoretically analyzed, and the wave velocity was confirmed by experiment. The leak locations were identified both by the Acoustic Emission (AE) method and by the cross-correlation method. Over short ranges, both the AE method and the cross-correlation method are effective for detecting the leak position. However, detection over long ranges required accelerometers covering a lower frequency range only, because higher-frequency waves attenuate very quickly as the propagation path lengthens. Two algorithms for the cross-correlation function are suggested, and long-range detection has been achieved on real underground water pipelines longer than 300 m.
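
For the cross-correlation method, once the inter-sensor delay is estimated, the leak position follows from the travel-time difference; the sensor spacing, wave speed, and delay below are invented (the coupled pipe-water wave speed must be measured, as the abstract notes):

```python
def leak_position(sensor_spacing_m, wave_speed_m_s, delay_s):
    """delay_s = t1 - t2 from the cross-correlation peak; the leak lies where
    the travel-time difference to the two sensors matches that delay."""
    return (sensor_spacing_m + wave_speed_m_s * delay_s) / 2.0

# Invented numbers: sensors 300 m apart, ~1200 m/s coupled pipe-water wave
# speed, leak noise reaching sensor 1 0.1 s later than sensor 2.
d1 = leak_position(300.0, 1200.0, 0.1)   # distance from sensor 1
```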

  15. LEA: An Algorithm to Estimate the Level of Location Exposure in Infrastructure-Based Wireless Networks

    Directory of Open Access Journals (Sweden)

    Francisco Garcia

    2017-01-01

    Full Text Available Location privacy in wireless networks is nowadays a major concern, because merely transmitting may allow a network to pinpoint a mobile node. We consider that a first step toward protecting a mobile node in this situation is to provide it with the means to quantify how accurately a network establishes its position. To achieve this end, we introduce the location-exposure algorithm (LEA), which runs on the mobile terminal only and operates in two steps. In the first step, LEA discovers the positions of nearby network nodes and uses this information to emulate how they estimate the position of the mobile node. In the second step, it quantifies the level of exposure by computing the distance between the position estimated in the first step and its true position. We refer to these steps as the location-exposure problem. We tested our proposal with simulations and testbed experiments. These results show the ability of LEA to reproduce the location of the mobile node, as seen by the network, and to quantify the level of exposure. This knowledge can help the mobile user decide which actions to perform before transmitting.
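
LEA's two steps can be sketched as follows, assuming the network would locate the node by simple range-based trilateration (one possibility among several); the anchor positions, ranges, and true position are invented:

```python
import math

def emulate_network_fix(anchors, ranges):
    """Step 1: trilateration as the network might do it -- linearise the range
    equations against the first anchor and solve the 2x2 system (Cramer's rule)."""
    (x1, y1), r1 = anchors[0], ranges[0]
    a, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        a.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    y = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return (x, y)

true_pos = (3.0, 4.0)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist(a, true_pos) for a in anchors]   # what the network measures
fix = emulate_network_fix(anchors, ranges)
exposure = math.dist(fix, true_pos)   # step 2: distance = level of exposure
```

With noiseless ranges the emulated fix coincides with the true position, i.e. maximum exposure; range noise would increase the reported distance.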

  16. Using Metaheuristic Algorithms for Solving a Hub Location Problem: Application in Passive Optical Network Planning

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2017-02-01

    Full Text Available Nowadays, fiber optics, with greater bandwidth and higher efficiency than comparable technologies, is counted among the most important tools for data transfer. In this article, an integrated mathematical model for a three-level fiber-optic distribution network is presented that considers the backbone and local access networks simultaneously; the backbone network is a ring, and the access networks have a star-star topology. The aim of the model is to determine the locations of the central offices and splitters, how connections are made between central offices, and the allocation of each demand node to a splitter or central office so that the cost of fiber-optic wiring and concentrator installation is minimized. Moreover, each user’s desired bandwidth should be provided efficiently. The proposed model is validated with GAMS software on small-sized problems; the model is then solved by two metaheuristic methods, differential evolution (DE) and a genetic algorithm (GA), on large-scale problems, and the results of the two algorithms are compared with respect to computational time and objective function value. Finally, a sensitivity analysis is provided. Keywords: fiber-optic, telecommunication network, hub location, passive splitter, three-level network.

  17. Locating hazardous gas leaks in the atmosphere via modified genetic, MCMC and particle swarm optimization algorithms

    Science.gov (United States)

    Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming

    2017-05-01

    Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) that allows crossover with individuals eliminated from the population after the best candidate is selected. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. Finally, the leak source is accurately localized using a modified guaranteed-convergence particle swarm optimization algorithm that retains several poorly performing individuals and dynamically updates the most successful individual. The first two stages are based on data collected by stationary sensors, and the last stage is based on data from mobile robots with sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source to within 1.0 m of the source for different leak source locations, with a measurement error standard deviation smaller than 2.0.

  18. Methodology for locating faults in the Eastern distribution system PDVSA, Punta de Mata and Furrial Divisions; Metodologia para la localizacion de fallas en el sistema de distribucion de PDVSA Oriente, Divisiones Punta de Mata y Furrial

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, F [Universidad Nacional Experimental Politecnica, Antonio Jose de Sucre, Guayana, Bolivar (Venezuela)]. E-mail: fco_martinez@outlook.com; Vasquez, C [Petroleos de Venezuela S.A., Maturin, Monagas (Venezuela)]. E-mail: vasquezcp@pdvsa.com

    2013-03-15

    Fault location in distribution systems has received much attention in recent years as a means of increasing the availability of the electricity supply. Due to the characteristics of distribution networks, fault location is a complicated task, so methods have been developed based on the variation of the current and voltage values measured at the source substation under normal operating conditions and during short circuits. This article presents a MATLAB implementation of a fault location algorithm for distribution systems based on graphical analysis of the fault reactance, in which the minimum value of the reactance is determined using the series impedance matrix and the prefault and fault voltage and current measurements. The accuracy of the developed tool was verified by comparing its results with actual recorded event data (Multilin SR 760) and the distance to a known fault point. Additionally, the method was applied to an experimental case that was compared with a network fault simulation in the ETAP software. For both evaluated cases, the absolute error did not exceed 7%.

  19. A novel algorithm for discrimination between inrush current and internal faults in power transformer differential protection based on discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Eldin, A.A. Hossam; Refaey, M.A. [Electrical Engineering Department, Alexandria University, Alexandria (Egypt)

    2011-01-15

    This paper proposes a novel methodology for transformer differential protection based on wave-shape recognition of a discriminating criterion extracted from the instantaneous differential currents. The discrete wavelet transform is applied to the differential currents due to internal faults and inrush currents. The diagnosis criterion is based on the median absolute deviation (MAD) of the wavelet coefficients over a specified frequency band. The proposed algorithm is examined using various simulated inrush and internal fault current cases on a power transformer modeled in the electromagnetic transients program EMTDC. Results of the evaluation study show that the proposed wavelet-based differential protection scheme can discriminate internal faults from inrush currents. (author)
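
The MAD-of-wavelet-coefficients criterion can be sketched with a one-level Haar transform standing in for the paper's DWT and frequency band; the waveforms and the direction of the comparison are illustrative only, not the paper's tuned criterion:

```python
import math
import statistics

def haar_detail(signal):
    """One-level Haar DWT: detail coefficients are scaled pairwise differences."""
    return [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal) - 1, 2)]

def mad(coeffs):
    """Median absolute deviation: a robust measure of coefficient spread."""
    m = statistics.median(coeffs)
    return statistics.median([abs(c - m) for c in coeffs])

# A smooth fault-like current cycle versus a gapped, distorted inrush-like one
# gives clearly different detail-coefficient spreads over a cycle of 64 samples.
n = 64
fault_like = [math.sin(2 * math.pi * i / n) for i in range(n)]
inrush_like = [max(math.sin(2 * math.pi * i / n), 0.0) ** 3 for i in range(n)]
mad_fault = mad(haar_detail(fault_like))
mad_inrush = mad(haar_detail(inrush_like))
```

A protection relay would compare such a statistic against a threshold calibrated from simulated cases, as the paper does with EMTDC.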

  20. Fault Identification Algorithm Based on Zone-Division Wide Area Protection System

    OpenAIRE

    Xiaojun Liu; Youcheng Wang; Hub Hu

    2014-01-01

    As the power grid becomes larger and more complicated, wide-area protection systems in practical engineering applications are increasingly restricted by the communication level. Based on the concept of the limitedness of wide-area protection systems, the grid with its complex structure is divided into ordered zones in this paper, and fault identification and protection actions are executed in each zone to reduce the pressure on the communication system. In each protection zone, a new wide-area...

  1. A novel vibration-based fault diagnostic algorithm for gearboxes under speed fluctuations without rotational speed measurement

    Science.gov (United States)

    Hong, Liu; Qu, Yongzhi; Dhupia, Jaspreet Singh; Sheng, Shuangwen; Tan, Yuegang; Zhou, Zude

    2017-09-01

    The localized failures of gears introduce cyclic-transient impulses in the measured gearbox vibration signals. These impulses are usually identified from the sidebands around gear-mesh harmonics through the spectral analysis of cyclo-stationary signals. However, in practice, several high-powered applications of gearboxes like wind turbines are intrinsically characterized by nonstationary processes that blur the measured vibration spectra of a gearbox and deteriorate the efficacy of spectral diagnostic methods. Although order-tracking techniques have been proposed to improve the performance of spectral diagnosis for nonstationary signals measured in such applications, the required hardware for the measurement of rotational speed of these machines is often unavailable in industrial settings. Moreover, existing tacho-less order-tracking approaches are usually limited by the high time-frequency resolution requirement, which is a prerequisite for the precise estimation of the instantaneous frequency. To address such issues, a novel fault-signature enhancement algorithm is proposed that can alleviate the spectral smearing without the need of rotational speed measurement. This proposed tacho-less diagnostic technique resamples the measured acceleration signal of the gearbox based on the optimal warping path evaluated from the fast dynamic time-warping algorithm, which aligns a filtered shaft rotational harmonic signal with respect to a reference signal assuming a constant shaft rotational speed estimated from the approximation of operational speed. The effectiveness of this method is validated using both simulated signals from a fixed-axis gear pair under nonstationary conditions and experimental measurements from a 750-kW planetary wind turbine gearbox on a dynamometer test rig. The results demonstrate that the proposed algorithm can identify fault information from typical gearbox vibration measurements carried out in a resource-constrained industrial environment.

  2. An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-10-01

    Full Text Available The application of a stationary ultra-capacitor energy storage system (ESS) in urban rail transit allows for the recuperation of vehicle braking energy, increasing energy savings as well as improving the vehicle voltage profile. This paper aims to obtain the best energy savings and voltage profile by optimizing the location and size of the ultra-capacitors. The paper first formulates optimization objective functions from the perspectives of energy savings, regenerative braking cancellation and installation cost, respectively. Then, proper mathematical models of the DC (direct current) traction power supply system are established to simulate the electrical load flow of the traction supply network, and the optimization objectives are evaluated on the example of a Chinese metro line. Ultimately, a methodology for optimal ultra-capacitor energy storage system locating and sizing is put forward based on an improved genetic algorithm. The optimized result shows that preferable, compromise schemes for the ESSs’ location and size can be obtained, balancing better energy savings and voltage profile against lower installation cost.

  3. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D hydraulic fracturing scenarios. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
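
The PSO step of such a workflow can be sketched as follows. The station layout, unit velocity and least-squares travel-time misfit below are invented toy stand-ins for the objective minimized in event location, not the authors' formulation:

```python
import math
import random

def pso_minimize(f, bounds, n_particles=30, iters=150, seed=0):
    """Minimal global-best PSO with inertia weight and bound clamping."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pb = [x[:] for x in xs]                      # personal bests
    gb = min(pb, key=f)                          # global best
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[k][d] = (w * vs[k][d]
                            + c1 * r1 * (pb[k][d] - xs[k][d])
                            + c2 * r2 * (gb[d] - xs[k][d]))
                lo, hi = bounds[d]
                xs[k][d] = min(max(xs[k][d] + vs[k][d], lo), hi)
            if f(xs[k]) < f(pb[k]):
                pb[k] = xs[k][:]
        gb = min(pb, key=f)
    return gb

# Toy travel-time misfit (stations, unit velocity and the true source are
# invented): observed times equal distances at unit velocity.
STATIONS = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (4, 4, 0), (2, 0, 3)]
TRUE = (1.2, 2.5, 1.0)
def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
T_OBS = [dist(TRUE, s) for s in STATIONS]

def misfit(x):
    return sum((dist(x, s) - t) ** 2 for s, t in zip(STATIONS, T_OBS))
```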

  4. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    Science.gov (United States)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Protocol Interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  5. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2010-06-18

Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among the many available layout algorithms, the spring embedder algorithm is widely used, not only for pathway drawing but also for circuit placement and web visualization, because of the harmonized appearance of its results. For pathway drawing, location information is essential for comprehension. However, complex shapes need to be taken into account when torus-shaped location information, such as the nuclear inner membrane, the nuclear outer membrane, and the plasma membrane, is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. We propose a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with a harmonized appearance. In grid-layout algorithms, a mapping of nodes to grid points that minimizes a cost function is sought. By imposing positional constraints on grid points, location information including complex shapes can easily be considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic updates of the positions and sizes of compartments at each step. The new spring embedder-based grid-layout algorithm and a spring embedder algorithm were applied to three biological pathways: an endothelial cell model, a Fas-induced apoptosis model, and a C. elegans cell fate simulation model. Owing to the positional constraints, all the results of our algorithm satisfy the location information, and hence more comprehensible layouts are obtained than with the spring embedder algorithm. A comparison of the number of crossings shows that the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm.

  6. A heuristic algorithm for a multi-product four-layer capacitated location-routing problem

    Directory of Open Access Journals (Sweden)

    Mohsen Hamidi

    2014-01-01

Full Text Available The purpose of this study is to solve a complex multi-product four-layer capacitated location-routing problem (LRP) in which two specific constraints are taken into account: (1) plants have limited production capacity, and (2) central depots have limited capacity for storing and transshipping products. The LRP represents a multi-product four-layer distribution network that consists of plants, central depots, regional depots, and customers. A heuristic algorithm is developed to solve the four-layer LRP. The heuristic uses GRASP (Greedy Randomized Adaptive Search Procedure) and two probabilistic tabu search strategies, intensification and diversification, to tackle the problem. Results show that the heuristic solves the problem effectively.
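
The GRASP skeleton (greedy randomized construction from a restricted candidate list, followed by local search, repeated over multi-start iterations) can be sketched on a toy uncapacitated facility-location subproblem. The instance data and the one-flip local search are illustrative assumptions, not the paper's four-layer heuristic:

```python
import random

def grasp_ufl(open_cost, assign_cost, iters=20, alpha=0.3, seed=0):
    """GRASP sketch: greedy-randomised construction from a restricted
    candidate list (RCL), then a one-flip local search, multi-started."""
    rng = random.Random(seed)
    n = len(open_cost)

    def total(open_set):
        """Opening costs plus each customer's cheapest assignment."""
        if not open_set:
            return float("inf")
        return (sum(open_cost[i] for i in open_set)
                + sum(min(row[i] for i in open_set) for row in assign_cost))

    best, best_cost = None, float("inf")
    for _ in range(iters):
        sol = set()
        while True:  # construction: keep opening facilities drawn from the RCL
            gains = sorted((total(sol | {i}), i) for i in range(n) if i not in sol)
            improving = [g for g in gains if g[0] < total(sol)]
            if not improving:
                break
            cut = improving[0][0] + alpha * (improving[-1][0] - improving[0][0])
            rcl = [i for c, i in improving if c <= cut]
            sol.add(rng.choice(rcl))
        improved = True
        while improved:  # local search: flip any facility open/closed
            improved = False
            for i in range(n):
                cand = sol ^ {i}
                if cand and total(cand) < total(sol):
                    sol, improved = set(cand), True
        if total(sol) < best_cost:
            best, best_cost = set(sol), total(sol)
    return best, best_cost
```

On a 3-facility, 3-customer toy instance with opening costs [3, 3, 10] this finds the optimum {0, 1} at total cost 13.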

  7. A NEW HYBRID YIN-YANG-PAIR-PARTICLE SWARM OPTIMIZATION ALGORITHM FOR UNCAPACITATED WAREHOUSE LOCATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2017-09-01

Full Text Available Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015, that draws inspiration from the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MA, inspired by the social behavior of birds. Unlike PSO, YYPO is not a nature-inspired optimizer. It has low complexity, starts with only two initial positions, and can produce more points according to the dimension of the target problem. Owing to the unique advantages of these methodologies, and to mitigate the premature convergence and local optima (LO) stagnation problems of PSO, in this work a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed to attain suboptimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases that have been used in several works to evaluate the efficacy of different MA. The PSOYPO is then compared to the standard PSO, genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO shows better or competitive efficacy compared to the PSO and other MA.

  8. a New Hybrid Yin-Yang Swarm Optimization Algorithm for Uncapacitated Warehouse Location Problems

    Science.gov (United States)

    Heidari, A. A.; Kazemizade, O.; Hakimpour, F.

    2017-09-01

Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015, that draws inspiration from the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MA, inspired by the social behavior of birds. Unlike PSO, YYPO is not a nature-inspired optimizer. It has low complexity, starts with only two initial positions, and can produce more points according to the dimension of the target problem. Owing to the unique advantages of these methodologies, and to mitigate the premature convergence and local optima (LO) stagnation problems of PSO, in this work a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed to attain suboptimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases that have been used in several works to evaluate the efficacy of different MA. The PSOYPO is then compared to the standard PSO, genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO shows better or competitive efficacy compared to the PSO and other MA.

  9. A novel wavelet-based feature extraction from common mode currents for fault location in a residential DC microgrid

    DEFF Research Database (Denmark)

    Beheshtaein, Siavash; Yu, Junyang; Cuzner, Rob

    2017-01-01

    approaches have been developed that enable construction of scalable microgrids based on PV and battery storage. However, as these systems proliferate, it will be necessary to develop safe and reliable methods for fault protection. Ground faults are of specific concern because, in many cases, cables...... are buried underground. At the same time, microgrids include current monitoring and processing capability wherever an energy resource interfaces to the microgrid through a power electronic converter. This paper discusses methods for identifying ground fault behavior within standard DC microgrid structures...

  10. Efficient algorithms to assess component and gate importance in fault tree analysis

    International Nuclear Information System (INIS)

    Dutuit, Y.; Rauzy, A.

    2001-01-01

One of the principal activities of risk assessment is the ranking or categorization of structures, systems and components with respect to their risk-significance or safety-significance. Several measures of such significance, so-called importance factors, have been proposed for the case where the support model is a fault tree. In this article, we show how binary decision diagrams can be used to efficiently assess a number of classical importance factors. This work completes the preliminary results obtained recently by Andrews and Sinnamon, and by the authors. It also deals with the concept of joint reliability importance
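
One of the importance factors in question, the Birnbaum measure, can be computed exactly on a tiny fault tree by brute-force enumeration; this shows the quantity a BDD evaluates without exponential blow-up. The example tree and component probabilities below are invented for illustration:

```python
from itertools import product

def top_prob(structure, probs, fixed):
    """P(top event) with some component states pinned (1 = failed), by
    enumerating the free components -- what a BDD evaluates efficiently."""
    n = len(probs)
    free = [i for i in range(n) if i not in fixed]
    total = 0.0
    for states in product([0, 1], repeat=len(free)):
        s = dict(fixed)
        s.update(zip(free, states))
        if structure([s[i] for i in range(n)]):
            p = 1.0
            for i, st in zip(free, states):
                p *= probs[i] if st else 1.0 - probs[i]
            total += p
    return total

def birnbaum(structure, probs):
    """Birnbaum importance I_B(i) = Q(top | i failed) - Q(top | i working)."""
    return [top_prob(structure, probs, {i: 1}) - top_prob(structure, probs, {i: 0})
            for i in range(len(probs))]

# Example fault tree (invented): TOP = A OR (B AND C).
tree = lambda s: s[0] or (s[1] and s[2])
```

For failure probabilities q = (0.1, 0.2, 0.3) this gives I_B = (1 − qB·qC, (1 − qA)·qC, (1 − qA)·qB) = (0.94, 0.27, 0.18).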

  11. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level suitable for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  12. Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks

    National Research Council Canada - National Science Library

    Sorgaard, Duane

    2004-01-01

    To determine the location of a computer on the Internet without resorting to outside information or databases would greatly increase the security abilities of the US Air Force and the Department of Defense...

  13. An Efficient Genetic Algorithm to Solve the Intermodal Terminal Location problem

    Directory of Open Access Journals (Sweden)

    Mustapha Oudani

    2014-11-01

Full Text Available The exponential growth of the flow of goods and passengers, the fragility of certain products and the need to optimize transport costs compel carriers to make increasing use of multimodal transport. In addition, intermodal transport policy has been strongly driven by environmental concerns and by the benefits of combining different modes of transport to cope with increased economic competition. This research is mainly concerned with the Intermodal Terminal Location Problem, introduced recently in the scientific literature, which consists in determining a set of potential terminal sites to open and how to route requests from a set of customers through the network while minimizing the total cost of transportation. We begin by presenting a description of the problem. Then, we present a mathematical formulation of the problem and discuss the meaning of its constraints. The objective function to minimize is the sum of road costs and combined road-rail transportation costs. As the Intermodal Terminal Location Problem is NP-hard, we propose an efficient real-coded genetic algorithm for solving it. Our solutions are compared to CPLEX and also to the heuristics reported in the literature. Numerical results show that our approach outperforms the other approaches.

  14. WAVELET-BASED ALGORITHM FOR DETECTION OF BEARING FAULTS IN A GAS TURBINE ENGINE

    Directory of Open Access Journals (Sweden)

    Sergiy Enchev

    2014-07-01

Full Text Available Presented is a gas turbine engine bearing diagnostic system that integrates information from various advanced vibration analysis techniques to achieve robust bearing health state awareness. This paper presents a computational algorithm for identifying power frequency variations and integer harmonics by using a wavelet-based transform. The continuous wavelet transform with the complex Morlet wavelet is adopted to detect the harmonics present in a power signal. An algorithm based on the discrete stationary wavelet transform is adopted to denoise the wavelet ridges.
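
The continuous wavelet transform with a complex Morlet wavelet can be sketched by direct summation, one coefficient at a time (an O(n) per-coefficient illustration, not the paper's implementation; the sampling rate and the two test scales are assumptions):

```python
import cmath
import math

def morlet(t, w0=6.0):
    """Complex Morlet mother wavelet (Gaussian-windowed complex exponential)."""
    return cmath.exp(1j * w0 * t) * math.exp(-t * t / 2.0)

def cwt_mag(signal, dt, scale, b, w0=6.0):
    """|CWT coefficient| at translation index b and one scale, by direct
    summation; FFT-based products are normally used in practice."""
    acc = 0j
    for k, x in enumerate(signal):
        t = (k - b) * dt / scale
        acc += x * morlet(t, w0).conjugate() * dt
    return abs(acc) / math.sqrt(scale)
```

A harmonic at frequency f produces a large magnitude at the matched scale w0/(2·pi·f) and a small one elsewhere, which is the basis of harmonic detection in the ridge.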

  15. Active and passive faults detection by using the PageRank algorithm

    Science.gov (United States)

    Darooneh, Amir H.; Lotfi, Nastaran

    2014-08-01

Here we try to find seismically active and passive places in the geographical region of Iran. The approach of Abe and Suzuki is adopted to model the seismic history of Iran as a complex directed network. Using the PageRank algorithm, we assign an activity index to each place in the region and thereby determine the most active and passive places.
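
A power-iteration PageRank of the kind applied to such a directed event network can be sketched as follows (the tiny example graph in the test is invented):

```python
def pagerank(edges, n, d=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as (src, dst)
    pairs; dangling nodes redistribute their rank uniformly."""
    out = [[] for _ in range(n)]
    for s, t in edges:
        out[s].append(t)
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - d) / n] * n
        for s in range(n):
            if out[s]:
                share = d * r[s] / len(out[s])
                for t in out[s]:
                    nxt[t] += share
            else:  # dangling node: spread its rank over all nodes
                for t in range(n):
                    nxt[t] += d * r[s] / n
        r = nxt
    return r
```

In the seismic setting, nodes are grid cells and an edge points from one cell to the cell of the next event, so a high rank marks a cell that seismicity repeatedly flows into.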

  16. Design & Evaluation of a Protection Algorithm for a Wind Turbine Generator based on the fault-generated Symmetrical Components

    DEFF Research Database (Denmark)

    Zheng, T. Y.; Cha, Seung-Tae; Lee, B. E.

    2011-01-01

    A protection relay for a wind turbine generator (WTG) based on the fault-generated symmetrical components is proposed in the paper. At stage 1, the relay uses the magnitude of the positive-sequence component in the fault current to distinguish faults on a parallel WTG, connected to the same feeder......, or on an adjacent feeder from those on the connected feeder, on the collection bus, at an inter-tie or at a grid. For the former faults, the relay should remain stable and inoperative whilst the instantaneous or delayed tripping is required for the latter faults. At stage 2, the fault type is first evaluated using...... the relationships of the fault-generated symmetrical components. Then, the magnitude of the positive-sequence component in the fault current is used again to decide on either instantaneous or delayed operation. The operating performance of the relay is then verified using various fault scenarios modelled using...

  17. High-resolution fault image from accurate locations and focal mechanisms of the 2008 swarm earthquakes in West Bohemia, Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Vavryčuk, Václav; Bouchaala, Fateh; Fischer, Tomáš

    2013-01-01

    Roč. 590, April (2013), s. 189-195 ISSN 0040-1951 R&D Projects: GA ČR(CZ) GAP210/12/1491; GA MŠk LM2010008 EU Projects: European Commission(XE) 230669 - AIM Institutional support: RVO:67985530 Keywords : earth quake location * failure criterion * fault friction * focal mechanism * tectonic stress Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 2.866, year: 2013

  18. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    Science.gov (United States)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
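
The Blakely-Simpson style scan that this improvement extends can be sketched as follows. Note the sketch implements only the original one-direction maximum condition with parabolic peak interpolation, not the authors' additional condition, and the grid in the usage test is synthetic:

```python
def hgrad(g, dx=1.0):
    """Horizontal-gradient magnitude of a 2-D grid by central differences."""
    ny, nx = len(g), len(g[0])
    h = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            gx = (g[i][j + 1] - g[i][j - 1]) / (2.0 * dx)
            gy = (g[i + 1][j] - g[i - 1][j]) / (2.0 * dx)
            h[i][j] = (gx * gx + gy * gy) ** 0.5
    return h

def max_h_locations(h):
    """3x3 scan: keep a point whose value exceeds both neighbours along one
    of 4 directions; the sub-grid peak offset is the vertex of a
    second-order polynomial through the trio of values."""
    ny, nx = len(h), len(h[0])
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    peaks = []
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            for di, dj in dirs:
                a, b, c = h[i - di][j - dj], h[i][j], h[i + di][j + dj]
                if b > a and b > c:
                    denom = a - 2.0 * b + c
                    off = 0.5 * (a - c) / denom if denom else 0.0
                    peaks.append((i + off * di, j + off * dj))
                    break
    return peaks
```

On a gradient ridge that is parabolic across strike, the interpolated positions recover the ridge axis exactly, e.g. a ridge centred at column 3.4 of a synthetic grid.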

  19. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    Science.gov (United States)

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
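
The isolability criterion that the GA minimizes can be illustrated with binarized leak signatures; the exhaustive subset search below is the reference that a GA replaces on networks too large to enumerate (the signature matrix is invented):

```python
from itertools import combinations

def non_isolable(signatures, sensors):
    """Count leak pairs whose binarised residual signatures coincide on the
    chosen sensor subset -- the quantity a placement search minimises."""
    groups = {}
    for leak, sig in enumerate(signatures):
        key = tuple(sig[s] for s in sensors)
        groups.setdefault(key, []).append(leak)
    return sum(len(g) * (len(g) - 1) // 2 for g in groups.values())

def best_placement(signatures, k):
    """Exhaustive reference search over all k-sensor subsets; a GA replaces
    this loop when the network is large."""
    n_sensors = len(signatures[0])
    return min(combinations(range(n_sensors), k),
               key=lambda s: non_isolable(signatures, s))
```

With four leaks and three candidate sensors, placing sensors 0 and 1 can already make all leak signatures distinct, while other pairs leave one ambiguous pair.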

  20. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Luis E. Garza-Castañón

    2013-11-01

Full Text Available This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.

  1. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Science.gov (United States)

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  2. Detection of arcing ground fault location on a distribution network connected PV system; Hikarihatsuden renkei haidensen ni okeru koko chiryaku kukan no kenshutsuho

    Energy Technology Data Exchange (ETDEWEB)

    Sato, M; Iwaya, K; Morooka, Y [Hachinohe Institute of Technology, Aomori (Japan)

    1996-10-27

In the near future, a great number of small-scale distributed power sources, such as photovoltaic generation for individual houses, are expected to be interconnected with the ungrounded-neutral distribution system used in Japan. Once a ground fault at commercial frequency occurs, serious damage can be expected. This paper discusses the effect of a ground fault on the grounded-phase current using a 6.6 kV high-voltage model system, considering the non-linear self-inductance of the line and the non-linear frequency dependence of the arcing ground fault current. In the present method, the marked difference in series resonance frequency, determined by the inductance and the earth capacitance on the source side and the load side, is used to detect the location of a high-voltage arcing ground fault. In some cases, the non-linear effect obtained by measuring the inductance of the sound phase, including the secondary winding of the transformer, cannot be neglected. In particular, for an actual high-voltage system, it was shown that the frequency characteristics of the distribution transformer inductance should be derived theoretically in the frequency range between 2 kHz and 6 kHz. 2 refs., 5 figs., 1 tab.

  3. Location, Allocation and Routing of Temporary Health Centers in Rural Areas in Crisis, Solved by Improved Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Mahdi Alinaghian

    2017-01-01

Full Text Available In this paper, an integrated model under uncertainty is considered for simultaneously locating temporary health centers in the affected areas, allocating affected areas to these centers, and routing vehicles to transport the required goods. Health centers can be established either in one of the affected areas or at a location outside them; therefore, the proposed model offers the best relief operation policy whether the goods are supplied to the affected areas (the customers) directly or under coverage. Since the problem is NP-hard, a meta-heuristic algorithm based on harmony search is presented to solve large-scale instances, and its performance is compared with the basic harmony search algorithm and a neighborhood search algorithm on small- and large-scale test problems. The results show that the proposed harmony search algorithm is suitably efficient.
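
A minimal continuous harmony search, of the kind such an improved algorithm extends, can be sketched as follows (the parameter values and the toy objective are assumptions, not the paper's setup):

```python
import random

def harmony_search(f, bounds, hms=12, hmcr=0.9, par=0.35, bw=0.15,
                   iters=4000, seed=0):
    """Minimal harmony search: memory consideration (hmcr), pitch
    adjustment (par, bandwidth bw) and random improvisation, with each new
    harmony replacing the worst one in memory if it is better."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # pick from harmony memory
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                # random improvisation
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):
            hm[worst] = new
    return min(hm, key=f)
```

On a shifted quadratic test objective the memory converges to a tight cluster around the minimizer, which is the behavior the improved variant then accelerates.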

  4. From Pixels to Region: A Salient Region Detection Algorithm for Location-Quantification Image

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

Full Text Available Image saliency detection has become increasingly important with the development of intelligent identification and machine vision technology. The process is essential for many image processing tasks such as image retrieval, segmentation, recognition, and adaptive compression. We propose a salient region detection algorithm for full-resolution images that analyzes the randomness and correlation of image pixels and uses a pixel-to-region saliency computation mechanism. The algorithm first obtains the points most likely to be salient by using an improved smallest univalue segment assimilating nucleus operator. It then reconstructs the entire salient region by taking these points as references and combining them with the spatial color distribution of the image, as well as regional and global contrasts. Subjective and objective evaluations of image saliency detection show that the proposed algorithm performs well in terms of indices such as precision and recall.

  5. Optimal Placement and Sizing of Fault Current Limiters in Distributed Generation Systems Using a Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    N. Bayati

    2017-02-01

Full Text Available Distributed Generation (DG) connection in a power system tends to increase the short-circuit level in the entire system, which, in turn, could eliminate the protection coordination between the existing relays. Fault Current Limiters (FCLs) are often used to reduce the short-circuit level of the network to a desirable level, provided that they are duly placed and appropriately sized. In this paper, a method is proposed for the optimal placement of FCLs and the optimal determination of their impedance values, by which the relay operating time and the number and size of the FCLs are minimized while maintaining relay coordination before and after DG connection. The proposed method removes low-impact FCLs and uses a hybrid Genetic Algorithm (GA) optimization scheme to determine the optimal placement of FCLs and the values of their impedances. The suitability of the proposed method is demonstrated by examining the results of relay coordination in a typical DG network before and after DG connection.

  6. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    Science.gov (United States)

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
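
The marching step can be illustrated with a linear Kalman filter tracking a straight utility segment (lateral offset and slope) from one scan cross-section to the next; this is a simplified stand-in for the paper's EKF formulation, and the noise levels and measurement model are assumptions:

```python
def kf_march(detections, dx, r=0.05, q=1e-4):
    """Linear Kalman filter marching a straight track (offset y, slope m)
    across cross-sections spaced dx apart: predict with the transition
    F = [[1, dx], [0, 1]], update with detection z through H = [1, 0]."""
    y, m = detections[0], 0.0          # initialise on the first cross-section
    P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    track = [y]
    for z in detections[1:]:
        # Predict: carry the offset forward along the current slope.
        y = y + m * dx
        P = [[P[0][0] + dx * (P[0][1] + P[1][0]) + dx * dx * P[1][1] + q,
              P[0][1] + dx * P[1][1]],
             [P[1][0] + dx * P[1][1], P[1][1] + q]]
        # Update with the hypothesised detection on this cross-section.
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - y
        y += k0 * innov
        m += k1 * innov
        P = [[(1.0 - k0) * P[0][0], (1.0 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        track.append(y)
    return track, m
```

Fed detections lying on a straight line, the filtered track and slope settle onto that line within a few cross-sections, which is what lets the marching loop predict where to look on the next scan.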

  7. HTCRL: A Range-Free Location Algorithm Based on Homothetic Triangle Cyclic Refinement in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dan Zhang

    2017-03-01

Full Text Available Wireless sensor networks (WSNs) have become a significant technology in recent years and can be widely used in many applications. A WSN consists of a large number of sensor nodes, each of which is energy-constrained and has low power dissipation. Most sensor nodes are tiny devices with small memories that do not know their own locations, so determining the locations of the unknown sensor nodes is one of the key issues in WSNs. In this paper, an improved APIT algorithm, HTCRL (Homothetic Triangle Cyclic Refinement Location), is proposed, which is based on the principle of homothetic triangles. It adopts perpendicular-median-surface cutting to narrow down the target area in order to decrease the average localization error rate, and it reduces the probability of misjudgment by adding judgment conditions. It achieves relatively high accuracy compared with the typical APIT algorithm without any additional hardware equipment or increased communication overhead.
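
The point-in-triangle test at the heart of APIT-style algorithms, together with a crude centroid estimate that refinement steps such as HTCRL's cutting would then shrink, can be sketched as follows (illustrative only, not the paper's refinement scheme):

```python
def in_triangle(p, a, b, c):
    """Same-sign cross-product point-in-triangle test used by APIT checks."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def apit_estimate(tests):
    """Crude position estimate: centroid of the anchor triangles the node
    tested inside (given as (triangle, bool) pairs); cyclic refinement
    would shrink this intersection region further."""
    inside = [tri for tri, res in tests if res]
    if not inside:
        return None
    cxs = [sum(v[0] for v in tri) / 3.0 for tri in inside]
    cys = [sum(v[1] for v in tri) / 3.0 for tri in inside]
    return (sum(cxs) / len(cxs), sum(cys) / len(cys))
```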

  8. An improved contour symmetry axes extraction algorithm and its application in the location of picking points of apples

    Energy Technology Data Exchange (ETDEWEB)

    Wang, D.; Song, H.; Yu, X.; Zhang, W.; Qu, W.; Xu, Y.

    2015-07-01

    The key problem for picking robots is to locate the picking points of fruit. A method based on the moment of inertia and the symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving location accuracy, were carried out to remove noise and smooth the edges of apples. The moment of inertia method has the disadvantage of high computational complexity, so a convex hull was used to mitigate this problem. To verify the validity of the algorithm, a test was conducted using four types of apple images containing 107 apple targets: single unblocked apples, single blocked apples, adjacent apples, and apples in panoramas. The root mean square error values for these four types were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was efficient in terms of average runtime, at 3.7 ms and 9.2 ms for single unblocked and single blocked apple images, respectively; for the other two types, the runtime was determined by the number of apples and blocked apples in the image. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples. (Author)

  9. Methodology for the location diagnosis of electrical faults in electric power systems; Metodologia para el diagnostico de ubicacion de fallas en sistema electricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Rosas Molina, Ricardo

    2008-08-15

    The constant growth of electric power systems, driven by the increase in worldwide energy demand, has brought greater complexity to the operation and control of power networks. One of the tasks most affected by this situation is the operation of electrical systems in the presence of faults, where the first task of the network's operating personnel is the rapid location of the fault site within the system. This paper approaches the problem of fault location diagnosis in power systems from the point of view of the operators of an electric company's energy control centers. The objective of this thesis work is to describe a methodology of operational protection analysis as the basis for developing a fault location diagnosis system, one that can propose possible fault sites within the system together with a justification of the protection operations following a disturbance, as support for the operators of the energy control centers. The methodology is designed to use different types of information: discrete, continuous and control signals. Nevertheless, at the present stage the proposed methodology exclusively uses discrete information on breaker status and relay operation, together with the connectivity of the network elements. The analysis methodology consists in determining the network elements where the fault could have occurred, using the protection coverage areas associated with the tripped circuit breakers. These fault alternatives are then ranked in descending order of likelihood using classification indexes and fuzzy-logic-based analysis.

  10. Improved Tensor-Based Singular Spectrum Analysis Based on Single Channel Blind Source Separation Algorithm and Its Application to Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2017-04-01

    Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case where the observed signals are under-determined, a novel approach for single channel blind source separation (SCBSS) based on an improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract multi-fault features from a measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly the unsatisfactory convergence of TSSA in many cases and the difficulty of accurately estimating the number of source signals. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrix using a first-order optimization approach instead of the original least-squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition, singular value decomposition, Bayesian information criterion) method, instead of the SVD in the conventional TSSA, is introduced. To validate the proposed method, we applied it to the analysis of a numerical simulation signal and multi-fault rolling bearing signals.

  11. Iowa Bedrock Faults

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This fault coverage locates and identifies all currently known/interpreted fault zones in Iowa, that demonstrate offset of geologic units in exposure or subsurface...

  12. Location and distribution management of relief centers: a genetic algorithm approach

    NARCIS (Netherlands)

    Najafi, M.; Farahani, R.Z.; de Brito, M.; Dullaert, W.E.H.

    2015-01-01

    Humanitarian logistics is regarded as a key area for improved disaster management efficiency and effectiveness. In this study, a multi-objective integrated logistic model is proposed to locate disaster relief centers while taking into account network costs and responsiveness. Because this location

  13. Novel Navigation Algorithm for Wireless Sensor Networks without Information of Locations

    NARCIS (Netherlands)

    Guo, Peng; Jiang, Tao; Yi, Youwen; Zhang, Qian; Zhang, Kui

    2011-01-01

    In this paper, we propose a novel algorithm of distributed navigation for people to escape from critical event region in wireless sensor networks (WSNs). Unlike existing works, the scenario discussed in the paper has no goal or exit as guidance, leading to a big challenge for the navigation problem.

  14. Location Assisted Vertical Handover Algorithm for QoS Optimization in End-to-End Connections

    DEFF Research Database (Denmark)

    Dam, Martin S.; Christensen, Steffen R.; Mikkelsen, Lars M.

    2012-01-01

    implementation on Android based tablets. The simulations cover a wide range of scenarios for two mobile users in an urban area with ubiquitous cellular coverage, and shows our algorithm leads to increased throughput, with fewer handovers, when considering the end-to-end connection than to other handover schemes...

  15. A Novel Hierarchical Model to Locate Health Care Facilities with Fuzzy Demand Solved by Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Mehdi Alinaghian

    2014-08-01

    Full Text Available In the field of health care, losses resulting from failure to establish facilities in suitable locations and in the required number go beyond cost and service quality: they result in increased mortality and the spread of disease. Facility location models therefore have special importance in this area. In this paper, a successively inclusive hierarchical model for the location of health centers, in terms of the transfer of patients from a lower level to a higher level of health centers, has been developed. Since determining the exact future demand for health care is difficult, and in order to make the model close to real conditions of demand uncertainty, a fuzzy programming model based on credibility theory is considered. To evaluate the proposed model, several numerical examples are solved in small sizes. In order to solve large-scale problems, a meta-heuristic algorithm based on harmony search was developed in conjunction with the GAMS software, which indicates the performance of the proposed algorithm.
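
Harmony search mimics musical improvisation: each new candidate ("harmony") is assembled per dimension either from harmony memory (with an optional pitch adjustment) or at random, and replaces the worst memory member if it scores better. A minimal sketch on a toy continuous objective; all parameter names and values here are illustrative defaults, not taken from the paper:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=42):
    """Minimal harmony search sketch minimising f over a box domain.

    hms = harmony memory size, hmcr = memory considering rate,
    par = pitch adjusting rate, bw = pitch bandwidth.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # take from memory
                x = rng.choice(memory)[d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                    # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(memory, key=f)
        if f(new) < f(worst):                        # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=f)

best = harmony_search(lambda v: sum(x * x for x in v), dim=2, bounds=(-5, 5))
print(best)  # near [0, 0]
```

In the paper the objective would be the fuzzy location model's cost rather than this toy sphere function.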

  16. Faults Diagnostics of Railway Axle Bearings Based on IMF’s Confidence Index Algorithm for Ensemble EMD

    Science.gov (United States)

    Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming

    2015-01-01

    As train loads and travel speeds have increased over time, railway axle bearings have become critical elements which require more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel and adaptive procedure based on ensemble empirical mode decomposition (EEMD) and the Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the limitations of hypotheses about the data and the computational effort that restrict the application of other signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions (IMFs), which are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and marginal spectrum. However, not all the IMFs obtained by the decomposition should be included in the Hilbert marginal spectrum. The IMF confidence index algorithm proposed in this paper is fully autonomous, overcoming the major limitation of selection by an experienced user, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256
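
The paper's confidence index automates the choice of which IMFs enter the marginal spectrum. As a stand-in for that index (whose exact definition is in the paper), a common heuristic ranks each IMF by its normalised correlation with the raw signal; the synthetic signal and "IMFs" below are invented for illustration:

```python
import numpy as np

def imf_confidence(signal, imfs):
    """Toy confidence index: normalised absolute correlation between each
    IMF and the original signal. NOT the paper's exact index, only a
    common correlation-based stand-in for automatic IMF selection."""
    scores = np.array([abs(np.corrcoef(signal, imf)[0, 1]) for imf in imfs])
    return scores / scores.sum()

t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 5 * t)
# Pretend these came from EEMD: one fault-related IMF, one noise residue.
imfs = [np.sin(2 * np.pi * 50 * t),
        0.01 * np.random.default_rng(0).standard_normal(1000)]
conf = imf_confidence(sig, imfs)
print(conf.argmax())  # 0 -> the fault-related IMF dominates
```

IMFs scoring above a threshold would then be kept for the Hilbert marginal spectrum, with no manual intervention.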

  17. Re-evaluation Of The Shallow Seismicity On Mt Etna Applying Probabilistic Earthquake Location Algorithms.

    Science.gov (United States)

    Tuve, T.; Mostaccio, A.; Langer, H. K.; di Grazia, G.

    2005-12-01

    A recent research project carried out together with the Italian Civil Protection concerns the study of amplitude decay laws in various areas of the Italian territory, including Mt Etna. A particular feature of its seismic activity is the presence of moderate-magnitude earthquakes that frequently cause considerable damage in the epicentral areas. These earthquakes are supposed to occur at rather shallow depth, no more than 5 km. Given the geological context, however, such shallow earthquakes would originate in rather weak sedimentary material. In this study we check the reliability of standard earthquake location, in particular with respect to the calculated focal depth, using standard location methods as well as more advanced approaches such as the NONLINLOC software proposed by Lomax et al. (2000), used with its various options (i.e., Grid Search, Metropolis-Gibbs and Oct-Tree) and a 3D velocity model (Cocina et al., 2005). All three options of NONLINLOC gave comparable results with respect to hypocenter locations and quality. Compared to standard locations we note a significant improvement in location quality and, in particular, a considerable difference in focal depths (on the order of 1.5-2 km); however, we cannot find a clear bias towards greater or lower depth. Further analyses concern the assessment of the stability of the locations: for this purpose we carry out various Monte Carlo experiments, randomly perturbing the travel-time readings. Further investigations are devoted to possible biases which may arise from the use of an unsuitable velocity model.

  18. A branch-and-price algorithm for the capacitated facility location problem

    DEFF Research Database (Denmark)

    Klose, Andreas; Görtz, Simon

    2007-01-01

    In order to compute optimal solutions to large or difficult problem instances by means of a branch-and-bound procedure, information about such a primal fractional solution can be advantageous. In this paper, a (stabilized) column generation method is therefore employed in order to solve a corresponding master problem exactly. The column generation procedure is then employed within a branch-and-price algorithm for computing optimal solutions to the CFLP. Computational results are reported for a set of larger and difficult problem instances.

  19. Locating Critical Circular and Unconstrained Failure Surface in Slope Stability Analysis with Tailored Genetic Algorithm

    Science.gov (United States)

    Pasik, Tomasz; van der Meij, Raymond

    2017-12-01

    This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are as yet uncommon in engineering practice, and few publications regarding truly free slip planes exist. The proposed method is an effective procedure resulting from the right combination of initial population type, selection, crossover and mutation methods, and it needs little computational effort to find the optimal, unconstrained slip plane. The methodology described in this paper is implemented using Mathematica; the implementation, along with further explanations, is fully presented so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed interpretation of the results. Two cases are compared with analyses described in earlier publications; the remaining two are practical slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing the possibilities and limitations of using a genetic algorithm in the context of the slope stability problem.
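
A real-coded genetic algorithm of the kind described above evolves candidate slip-surface parameters toward a minimum factor of safety. The sketch below is illustrative only: the smooth `fos` stand-in over circle parameters (xc, yc, r) and all GA parameter values are assumptions, since a real factor-of-safety evaluation would come from a slice-equilibrium method such as Bishop's:

```python
import random

def genetic_search(fitness, bounds, pop_size=40, gens=120,
                   pc=0.8, pm=0.2, seed=1):
    """Bare-bones real-coded GA (tournament selection, blend crossover,
    gaussian mutation) minimising `fitness` over box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x, d: min(bounds[d][1], max(bounds[d][0], x))
    pop = [[rng.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=fitness)]                  # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = min(rng.sample(pop, 3), key=fitness)
            child = list(p1)
            if rng.random() < pc:                      # blend crossover
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            if rng.random() < pm:                      # gaussian mutation
                d = rng.randrange(dim)
                child[d] = clip(child[d] + rng.gauss(0, 1), d)
            nxt.append([clip(x, d) for d, x in enumerate(child)])
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical smooth stand-in for a factor-of-safety surface with its
# minimum at slip circle (10, 20, 15).
fos = lambda c: 1.2 + 0.01 * ((c[0] - 10) ** 2 + (c[1] - 20) ** 2
                              + (c[2] - 15) ** 2)
best = genetic_search(fos, bounds=[(0, 30), (0, 40), (5, 25)])
print([round(x, 1) for x in best])  # near [10, 20, 15]
```

The paper's tailored operators and unconstrained (non-circular) surface encoding are richer than this three-parameter circle, but the evolutionary loop is the same.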

  20. Solving a multi-objective location routing problem for infectious waste disposal using hybrid goal programming and hybrid genetic algorithm

    Directory of Open Access Journals (Sweden)

    Narong Wichapa

    2018-01-01

    Full Text Available Infectious waste disposal remains one of the most serious problems in the medical, social and environmental domains of almost every country. Selecting suitable new locations and finding the optimal set of transport routes for a fleet of vehicles transporting infectious waste material, the location routing problem for infectious waste disposal, is one of the major problems in hazardous waste management. Determining locations for infectious waste disposal is a difficult and complex process, because it requires combining both intangible and tangible factors, and it depends on several criteria and various regulations. This facility location problem is complicated and cannot be addressed using any stand-alone technique. Based on a case study of 107 hospitals and 6 candidate municipalities in upper-northeastern Thailand, we considered infrastructure, geological, and social & environmental criteria, evaluating global priority weights using the fuzzy analytic hierarchy process (Fuzzy AHP). After that, a new multi-objective facility location model which hybridizes fuzzy AHP and goal programming (GP), namely the HGP model, was tested. Finally, the vehicle routing problem (VRP) for the case study was formulated and solved using a hybrid genetic algorithm (HGA) which combines the push forward insertion heuristic (PFIH), a genetic algorithm (GA) and three local searches: 2-opt, insertion-move and interchange-move. The results show that the HGP and HGA together can select suitable new locations and find the optimal set of transport routes for vehicles delivering infectious waste material. The novelty of the proposed HGP methodology is the simultaneous combination of cost factors with relevant factors that are difficult to quantify in order to determine suitable new locations, while the HGA can be applied to determine transport routes that require a minimum number of vehicles.
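
Of the three local searches named above, 2-opt is the simplest: it removes a route crossing by reversing the segment between two cut points whenever that shortens the tour. A minimal sketch on a toy Euclidean instance (the coordinates are invented for illustration):

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over point indices."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt local search: repeatedly reverse a segment while that
    shortens the route; stops at a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]        # unit square
print(two_opt([0, 2, 1, 3], pts))             # crossing removed, length 4.0
```

In the HGA this move would run inside the GA loop on each vehicle's route, alongside insertion-move and interchange-move.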

  1. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. 
Risk reduction is addressed by working with other organizations such as S

  2. Neural network based automated algorithm to identify joint locations on hand/wrist radiographs for arthritis assessment

    International Nuclear Information System (INIS)

    Duryea, J.; Zaim, S.; Wolfe, F.

    2002-01-01

    Arthritis is a significant and costly healthcare problem that requires objective and quantifiable methods to evaluate its progression. Here we describe software that can automatically determine the locations of seven joints in the proximal hand and wrist that demonstrate arthritic changes: the five carpometacarpal joints (CMC1, CMC2, CMC3, CMC4, CMC5), the radiocarpal (RC) joint, and the scaphocapitate (SC) joint. The algorithm was based on an artificial neural network (ANN) that was trained using independent sets of digitized hand radiographs and manually identified joint locations, and it used landmarks determined automatically by software developed in our previous work as starting points. Other than requiring user input of the location of nonanatomical structures and the orientation of the hand on the film, the procedure was fully automated. The software was tested on two datasets: 50 digitized hand radiographs from patients participating in a large clinical study, and 60 from subjects participating in arthritis research studies who had mild to moderate rheumatoid arthritis (RA). It was evaluated by comparison to joint locations determined by a trained radiologist using manual tracing. The success rate for determining the CMC, RC, and SC joints was 87%-99% for normal hands and 81%-99% for RA hands. This is a first step in performing an automated computer-aided assessment of wrist joints for arthritis progression: the software provides landmarks that will be used by subsequent image processing routines to analyze each joint individually for structural changes such as erosions and joint space narrowing.

  3. Algorithmic strategies for adapting to environmental changes in 802.11 location fingerprinting

    DEFF Research Database (Denmark)

    Hansen, Rene; Wind, Rico; Jensen, Christian S.

    2010-01-01

    Ubiquitous and accurate indoor positioning represents a key capability of an infrastructure that enables indoor location-based services. At the same time, such positioning has yet to be achieved. Much research uses commercial, off-the-shelf 802.11 (Wi-Fi) hardware for indoor positioning. In particular, the dominant fingerprinting technique uses a database (called a radio map) of manually collected Wi-Fi signal strengths and is able to achieve positioning accuracies that enable a wide range of location-based services. However, a major weakness of fingerprinting occurs when changes occur...
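
The fingerprinting technique described here matches a live signal-strength observation against the radio map. A minimal weighted k-nearest-neighbour sketch; the radio map, access-point count and RSSI values below are invented for illustration:

```python
import math

# Hypothetical radio map: surveyed location -> RSSI fingerprint (dBm)
# from three access points.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -45, -75],
    (0.0, 5.0): [-75, -72, -42],
}

def locate(observed, radio_map, k=2):
    """Weighted k-nearest-neighbour fingerprinting: average the k closest
    radio-map locations, weighted by inverse signal-space distance."""
    ranked = sorted(radio_map.items(),
                    key=lambda kv: math.dist(kv[1], observed))[:k]
    weights = [1.0 / (math.dist(fp, observed) + 1e-9) for _, fp in ranked]
    total = sum(weights)
    x = sum(w * loc[0] for (loc, _), w in zip(ranked, weights)) / total
    y = sum(w * loc[1] for (loc, _), w in zip(ranked, weights)) / total
    return x, y

print(locate([-42, -68, -79], radio_map))  # close to (0, 0)
```

The weakness the abstract points to is that the surveyed fingerprints in `radio_map` go stale when the radio environment changes, which is what adaptive strategies must compensate for.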

  4. Locating cloud-to-ground lightning return strokes by a neural network algorithm

    International Nuclear Information System (INIS)

    2001-01-01

    A neuro-based approach is proposed for locating cloud-to-ground lightning strokes. Due to insufficient experimental data, we have used the results of an electromagnetic simulator for training the developed artificial neural network. The simulator utilizes the well-known transmission line model and is capable of predicting the electromagnetic field due to a return stroke channel for various parameters associated with the shape of the channel base current. The training process has been successfully carried out using the Levenberg-Marquardt technique. The simulation results demonstrate that the return stroke channel locations can be predicted with an absolute error no greater than 1 km for return stroke channels located within 80 km of a lightning detection station.

  5. Optimising Aesthetic Reconstruction of Scalp Soft Tissue by an Algorithm Based on Defect Size and Location.

    Science.gov (United States)

    Ooi, Adrian Sh; Kanapathy, Muholan; Ong, Yee Siang; Tan, Kok Chai; Tan, Bien Keem

    2015-11-01

    Scalp soft tissue defects are common and result from a variety of causes. Reconstructive methods should maximise cosmetic outcomes by maintaining hair-bearing tissue and aesthetic hairlines. This article outlines an algorithm based on a diverse clinical case series to optimise scalp soft tissue coverage. A retrospective analysis of scalp soft tissue reconstruction cases performed at the Singapore General Hospital between January 2004 and December 2013 was conducted. Forty-one patients were included in this study. The majority of defects aesthetic outcome while minimising complications and repeat procedures.

  6. Study on Practical Application of Turboprop Engine Condition Monitoring and Fault Diagnostic System Using Fuzzy-Neuro Algorithms

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong; Kim, Keunwoo

    2013-03-01

    Neural networks are widely used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and the long learning time needed to build the learning database. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude operation UAV, from measured performance data, and proposes a fault diagnostic system using the base performance model and artificial intelligence methods such as fuzzy logic and neural networks. Each real engine performance model, named the base performance model because it can simulate a new engine's performance, is made inversely from its performance test data; the condition monitoring of each engine can therefore be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained with a fault learning database obtained from the developed base performance model. In learning the measured performance data of the faulted components, feed-forward back-propagation (FFBP) is used. For user-friendliness, the proposed diagnostic program is implemented as a MATLAB GUI.

  7. A spectral clustering search algorithm for predicting shallow landslide size and location

    Science.gov (United States)

    Dino Bellugi; David G. Milledge; William E. Dietrich; Jim A. McKean; J. Taylor Perron; Erik B. Sudderth; Brian Kazian

    2015-01-01

    The potential hazard and geomorphic significance of shallow landslides depend on their location and size. Commonly applied one-dimensional stability models do not include lateral resistances and cannot predict landslide size. Multi-dimensional models must be applied to specific geometries, which are not known a priori, and testing all possible geometries is...

  8. MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks

    Science.gov (United States)

    Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna

    In recent years technological development has encouraged several applications based on distributed communications networks without any fixed infrastructure. We address the problem of providing a collaborative early warning system for multiple mobile nodes against a fast-moving object. The solution is provided subject to system-level constraints: motion of the nodes, antenna sensitivity, and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase detects the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second phase broadcasts the warning message using a low-directivity broad beam from a 2×2 antenna array, which in the third phase is detected by the receiving nodes using direction of arrival (DOA) estimation. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes, and the position of the fast-arriving object can be estimated using the MUSIC algorithm for warning-beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. Simulations show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial-grade detection and warning system.
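
MUSIC estimates DoA by projecting array steering vectors onto the noise subspace of the snapshot covariance matrix; directions where the projection nearly vanishes produce peaks in the pseudospectrum. A minimal uniform-linear-array sketch (array size, spacing, SNR and the 20° source angle are illustrative, not taken from the paper):

```python
import numpy as np

def music_doa(X, n_sources, n_grid=1801, d=0.5):
    """MUSIC pseudospectrum peaks for a uniform linear array.

    X: (n_antennas, n_snapshots) complex snapshots; d: element spacing
    in wavelengths. Returns the angles (deg) of the n_sources largest
    pseudospectrum values (crude peak pick, fine for separated sources).
    """
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    _, vecs = np.linalg.eigh(R)               # eigenvalues ascending
    En = vecs[:, :-n_sources]                 # noise subspace
    grid = np.linspace(-90, 90, n_grid)
    sensors = np.arange(X.shape[0])
    p = []
    for th in grid:
        a = np.exp(-2j * np.pi * d * sensors * np.sin(np.radians(th)))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    idx = np.argsort(p)[-n_sources:]
    return sorted(float(grid[i]) for i in idx)

rng = np.random.default_rng(0)
m, snaps, theta = 8, 200, 20.0                # one source at 20 degrees
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a, s) + 0.01 * (rng.standard_normal((m, snaps))
                             + 1j * rng.standard_normal((m, snaps)))
print(music_doa(X, n_sources=1))  # ~[20.0]
```

In the paper's setting the snapshots would come from the 2×2 receiving array rather than this simulated 8-element line.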

  9. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  10. Using a Combined Platform of Swarm Intelligence Algorithms and GIS to Provide Land Suitability Maps for Locating Cardiac Rehabilitation Defibrillators

    Science.gov (United States)

    KAFFASH-CHARANDABI, Neda; SADEGHI-NIARAKI, Abolghasem; PARK, Dong-Kyun

    2015-01-01

    Background: Cardiac arrest is a condition in which the heart has completely stopped and is not pumping any blood. Although most cardiac arrest cases are reported from homes or hospitals, about 20% occur in public areas. Therefore, these areas need to be investigated in terms of cardiac arrest incidence so that places of high incidence can be identified and cardiac rehabilitation defibrillators installed there. Methods: In order to investigate a study area in Petersburg, Pennsylvania State, and to determine appropriate places for installing defibrillators using 5-year period data, swarm intelligence algorithms were used. Moreover, the location of the defibrillators was determined based on the following five evaluation criteria: land use, altitude of the area, economic conditions, distance from hospitals, and approximate areas of reported cases of cardiac arrest in public places, all created in a geospatial information system (GIS). Results: The A-P HADEL algorithm results were about 27.36% more precise. The validation results indicated a wider coverage of real values, and the verification results confirmed the faster and more exact optimization of the cost function in the PSO method. Conclusion: The study findings emphasize the necessity of applying optimal optimization methods along with GIS and precise selection of criteria when selecting optimal locations for installing medical facilities, because the selected algorithm and criteria dramatically affect the final responses. Meanwhile, providing land suitability maps for installing facilities across hot and risky spots has the potential to save many lives. PMID:26587471

  11. Single terminal fault location by natural frequencies of travelling wave considering multiple harmonics

    Institute of Scientific and Technical Information of China (English)

    李金泽; 李宝才; 翟学明

    2016-01-01

    In single-terminal fault location for transmission lines based on travelling-wave natural frequencies, the accuracy of the extracted primary natural frequency is the key to pinpointing the fault. Currently the wavelet transform and the MUSIC method are mostly used to extract the primary natural frequency; wavelet analysis is strongly affected by the chosen wavelet basis, and the parameter selection in MUSIC greatly affects the spectral estimate, so neither solves this problem well. A new single-ended fault location method based on the natural frequencies of the faulted line is proposed. To extract the primary natural frequency, the travelling-wave signal is first decomposed by EEMD and orthogonalized with the ICA method so as to suppress the cross-term problem inherent in the WVD; the WVD of each component is then computed and superimposed to obtain an orthogonal natural frequency spectrum, and the global primary natural frequency is obtained by jointly considering the fundamental and multiple harmonics. Simulation experiments in EMTDC confirm the feasibility and accuracy of the proposed algorithm under different fault types, fault distances, transition resistances and noise conditions.
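
Once the primary natural frequency f1 is known, the fault distance follows from the travelling-wave round-trip time. A minimal sketch under the common assumption that both the bus and the fault reflect the wave, so that f1 = v / (2L); the synthetic record, sampling rate and wave speed are illustrative, and a plain FFT stands in for the EEMD/ICA/WVD extraction pipeline described above:

```python
import numpy as np

def fault_distance(signal, fs, v=2.9e8):
    """Estimate fault distance from the dominant natural frequency of a
    travelling-wave record.

    Assumes reflections at both the bus and the fault, so f1 = v / (2L)
    and hence L = v / (2 f1); v is the wave propagation speed (m/s).
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    f1 = freqs[spectrum[1:].argmax() + 1]     # dominant bin, skipping DC
    return v / (2.0 * f1)

# Synthetic record: fault 50 km away -> f1 = 2.9e8 / (2 * 50e3) = 2.9 kHz,
# modelled as a damped oscillation at that natural frequency.
fs, L_true = 1e6, 50e3
t = np.arange(0, 0.01, 1.0 / fs)
f1_true = 2.9e8 / (2 * L_true)
sig = np.exp(-300 * t) * np.sin(2 * np.pi * f1_true * t)
print(fault_distance(sig, fs) / 1e3)  # ~50 km
```

With an open-ended boundary the relation becomes L = v / (4 f1) instead, which is why identifying the reflection conditions matters before applying the formula.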

  12. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    Full Text Available This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking of automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability to determine the secondary arc extinction time during transient faults as well as to block unsuccessful automatic reclosing during permanent faults.

  13. A subgradient-based branch-and-bound algorithm for the capacitated facility location problem

    DEFF Research Database (Denmark)

    Görtz, Simon; Klose, Andreas

    This paper presents a simple branch-and-bound method based on Lagrangean relaxation and subgradient optimization for solving large instances of the capacitated facility location problem (CFLP) to optimality. In order to guess a primal solution to the Lagrangean dual, we average solutions to the Lagrangean subproblem. Branching decisions are then based on this estimated (fractional) primal solution. Extensive numerical results reveal that the method is much faster and more robust than other state-of-the-art methods for solving the CFLP exactly.

  14. MECH: Algorithms and Tools for Automated Assessment of Potential Attack Locations

    Science.gov (United States)

    2015-10-06

    ...conscious and subconscious processing of the geometric structure of the local terrain, sight lines to prominent or useful terrain features, proximity... This intuition or instinct is the outcome of an unconscious or subconscious integration of available facts and impressions. Thus, in the search... adjacency. Even so, we inevitably introduce a bias between events and non-event road locations when calculating the route visibility features.

  15. An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing

    OpenAIRE

    Bin Wang; Zhongping Yang; Fei Lin; Wei Zhao

    2014-01-01

    The application of a stationary ultra-capacitor energy storage system (ESS) in urban rail transit allows for the recuperation of vehicle braking energy for increasing energy savings as well as for a better vehicle voltage profile. This paper aims to obtain the best energy savings and voltage profile by optimizing the location and size of ultra-capacitors. This paper firstly raises the optimization objective functions from the perspectives of energy savings, regenerative braking cancellation a...

  16. A simple but usually fast branch-and-bound algorithm for the capacitated facility location problem

    DEFF Research Database (Denmark)

    Görtz, Simon; Klose, Andreas

    2012-01-01

    This paper presents a simple branch-and-bound method based on Lagrangean relaxation and subgradient optimization for solving large instances of the capacitated facility location problem (CFLP) to optimality. To guess a primal solution to the Lagrangean dual, we average solutions to the Lagrangean subproblem. Branching decisions are then based on this estimated (fractional) primal solution. Extensive numerical results reveal that the method is much faster and more robust than other state-of-the-art methods for solving the CFLP exactly.
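
The core loop of such a method, solving the Lagrangean subproblem, taking a subgradient step on the multipliers, and averaging the subproblem solutions to estimate a fractional primal point, can be sketched on a deliberately tiny toy problem (min c·x subject to sum(x) = 1, x binary). This toy is not the CFLP itself; it only illustrates the subgradient-and-averaging mechanism.

```python
def lagrangean_subgradient(c, iters=200):
    """Subgradient optimization of the Lagrangean dual of
    min c.x  s.t.  sum(x) = 1,  x binary  (a toy stand-in for the CFLP).

    Returns the final multiplier and the averaged subproblem solutions,
    which estimate a (fractional) primal point as described in the paper.
    """
    lam, avg = 0.0, [0.0] * len(c)
    for t in range(1, iters + 1):
        # Lagrangean subproblem min sum_i (c_i - lam) x_i + lam, solved by inspection
        x = [1 if ci - lam < 0 else 0 for ci in c]
        g = 1 - sum(x)                 # subgradient of the dual at lam
        lam += (1.0 / t) * g           # diminishing step size
        avg = [a + (xi - a) / t for a, xi in zip(avg, x)]  # running average of x
    return lam, avg
```

In the paper's method, the averaged (fractional) solution guides the branching decisions.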

  17. Location verification algorithm of wearable sensors for wireless body area networks.

    Science.gov (United States)

    Wang, Hua; Wen, Yingyou; Zhao, Dazhe

    2018-01-01

    Knowledge of the location of sensor devices is crucial for many medical applications of wireless body area networks, as wearable sensors are designed to monitor vital signs of a patient while the wearer still has the freedom of movement. However, clinicians or patients can misplace the wearable sensors, thereby causing a mismatch between their physical locations and their correct target positions. An error of more than a few centimeters raises the risk of mistreating patients. The present study aims to develop a scheme to calculate and detect the position of wearable sensors without beacon nodes. A new scheme was proposed to verify the location of wearable sensors mounted on the patient's body by inferring differences in atmospheric air pressure and received signal strength indication measurements from wearable sensors. Extensive two-sample t tests were performed to validate the proposed scheme. The proposed scheme could easily recognize a 30-cm horizontal body range and a 65-cm vertical body range to correctly perform sensor localization and limb identification. All experiments indicate that the scheme is suitable for identifying wearable sensor positions in an indoor environment.
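
The two-sample t tests used for validation can be illustrated with a small Welch t statistic. The air-pressure readings below are invented for the example; the paper's actual data and decision thresholds are not reproduced here.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    # variance() is the sample variance (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))
```

A sensor worn higher on the body sees slightly lower barometric pressure, so a strongly negative statistic separates a chest-mounted sensor from a wrist-mounted one.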

  18. A Genetic-Firefly Hybrid Algorithm to Find the Best Data Location in a Data Cube

    Directory of Open Access Journals (Sweden)

    M. Faridi Masouleh

    2016-10-01

    Full Text Available Decision-support programs involve large-scale, complex database queries, so keeping response times short makes query optimization critical. Users usually observe data as a multi-dimensional data cube. Each data cube cell displays data as an aggregation, and the number of cells depends on the number of other cells in the cube. At any given time, a powerful query optimization method can visualize part of the cells instead of calculating results from raw data. Business systems use different approaches to positioning data in the data cube. In the present study, the data is trained by a neural network and a genetic-firefly hybrid algorithm is proposed for finding the best position for the data in the cube.

  19. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined if the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit-intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  20. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone

  1. Determining on-fault earthquake magnitude distributions from integer programming

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10⁶ variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
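
On a toy scale, the binary assignment problem can be brute-forced: each synthetic event goes to exactly one fault, and the assignment minimizing the slip-budget misfit is kept. The moment values and targets below are made up, and the actual study uses a mixed-integer solver because its instance has more than 10⁶ variables.

```python
from itertools import product

def assign_events(moments, targets):
    """Brute-force the assignment of each event to one fault, minimizing
    the total absolute misfit against per-fault target slip budgets."""
    n_faults = len(targets)
    best, best_misfit = None, float("inf")
    # enumerate every binary placement (one fault choice per event)
    for choice in product(range(n_faults), repeat=len(moments)):
        totals = [0.0] * n_faults
        for ev, f in enumerate(choice):
            totals[f] += moments[ev]
        misfit = sum(abs(t - s) for t, s in zip(targets, totals))
        if misfit < best_misfit:
            best, best_misfit = choice, misfit
    return best, best_misfit
```

The objective mirrors the paper's: match each fault's accumulated slip to its target rate, subject to every event landing on some fault.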

  2. New var reconstruction algorithm exposes high var sequence diversity in a single geographic location in Mali.

    Science.gov (United States)

    Dara, Antoine; Drábek, Elliott F; Travassos, Mark A; Moser, Kara A; Delcher, Arthur L; Su, Qi; Hostelley, Timothy; Coulibaly, Drissa; Daou, Modibo; Dembele, Ahmadou; Diarra, Issa; Kone, Abdoulaye K; Kouriba, Bourema; Laurens, Matthew B; Niangaly, Amadou; Traore, Karim; Tolo, Youssouf; Fraser, Claire M; Thera, Mahamadou A; Djimde, Abdoulaye A; Doumbo, Ogobara K; Plowe, Christopher V; Silva, Joana C

    2017-03-28

    Encoded by the var gene family, highly variable Plasmodium falciparum erythrocyte membrane protein-1 (PfEMP1) proteins mediate tissue-specific cytoadherence of infected erythrocytes, resulting in immune evasion and severe malaria disease. Sequencing and assembling the 40-60 var gene complement for individual infections has been notoriously difficult, impeding molecular epidemiological studies and the assessment of particular var elements as subunit vaccine candidates. We developed and validated a novel algorithm, Exon-Targeted Hybrid Assembly (ETHA), to perform targeted assembly of var gene sequences, based on a combination of Pacific Biosciences and Illumina data. Using ETHA, we characterized the repertoire of var genes in 12 samples from uncomplicated malaria infections in children from a single Malian village and showed them to be as genetically diverse as vars from isolates from around the globe. The gene var2csa, a member of the var family associated with placental malaria pathogenesis, was present in each genome, as were vars previously associated with severe malaria. ETHA, a tool to discover novel var sequences from clinical samples, will aid the understanding of malaria pathogenesis and inform the design of malaria vaccines based on PfEMP1. ETHA is available at: https://sourceforge.net/projects/etha/ .

  3. Application of a Dynamic Fuzzy Search Algorithm to Determine Optimal Wind Plant Sizes and Locations in Iowa

    International Nuclear Information System (INIS)

    Milligan, M. R.; Factor, T.

    2001-01-01

    This paper illustrates a method for choosing the optimal mix of wind capacity at several geographically dispersed locations. The method is based on a dynamic fuzzy search algorithm that can be applied to different optimization targets. We illustrate the method using two objective functions for the optimization: maximum economic benefit and maximum reliability. We also illustrate the sensitivity of the fuzzy economic benefit solutions to small perturbations of the capacity selections at each wind site. We find that small changes in site capacity and/or location have small effects on the economic benefit provided by wind power plants. We use electric load and generator data from Iowa, along with high-quality wind-speed data collected by the Iowa Wind Energy Institute

  4. Application of a Dynamic Fuzzy Search Algorithm to Determine Optimal Wind Plant Sizes and Locations in Iowa

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M. R., National Renewable Energy Laboratory; Factor, T., Iowa Wind Energy Institute

    2001-09-21

    This paper illustrates a method for choosing the optimal mix of wind capacity at several geographically dispersed locations. The method is based on a dynamic fuzzy search algorithm that can be applied to different optimization targets. We illustrate the method using two objective functions for the optimization: maximum economic benefit and maximum reliability. We also illustrate the sensitivity of the fuzzy economic benefit solutions to small perturbations of the capacity selections at each wind site. We find that small changes in site capacity and/or location have small effects on the economic benefit provided by wind power plants. We use electric load and generator data from Iowa, along with high-quality wind-speed data collected by the Iowa Wind Energy Institute.

  5. Planetary Gearbox Fault Detection Using Vibration Separation Techniques

    Science.gov (United States)

    Lewicki, David G.; LaBerge, Kelsen E.; Ehinger, Ryan T.; Fetty, Jason

    2011-01-01

    Studies were performed to demonstrate the capability to detect planetary gear and bearing faults in helicopter main-rotor transmissions. The work supported the Operations Support and Sustainment (OSST) program with the U.S. Army Aviation Applied Technology Directorate (AATD) and Bell Helicopter Textron. Vibration data from the OH-58C planetary system were collected on a healthy transmission as well as with various seeded-fault components. Planetary fault detection algorithms were used with the collected data to evaluate fault detection effectiveness. Planet gear tooth cracks and spalls were detectable using the vibration separation techniques. Sun gear tooth cracks were not discernibly detectable from the vibration separation process. Sun gear tooth spall defects were detectable. Ring gear tooth cracks were only clearly detectable by accelerometers located near the crack location or directly across from the crack. Enveloping provided an effective method for planet bearing inner- and outer-race spalling fault detection.

  6. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis.

    Science.gov (United States)

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-21

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT trajectory. This point is called barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has a zero eigenvector which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate what is the optimal pulling direction and what is the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positively defined function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.
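
In the notation commonly used for Newton trajectories (an assumption here, since the abstract gives no formulas), an NT for a fixed unit pulling direction is the curve where the gradient of the potential stays parallel to that direction, and the BBP is the point on it where the Hessian becomes singular along the gradient:

```latex
% Newton trajectory for a fixed unit pulling direction r:
\left( \mathbf{I} - \mathbf{r}\,\mathbf{r}^{T} \right) \nabla V(\mathbf{x}) = \mathbf{0}
% Bond breaking point (BBP): the Hessian has a zero eigenvector
% that coincides with the gradient direction,
\mathbf{H}(\mathbf{x}_{\mathrm{BBP}})\,\mathbf{v} = \mathbf{0},
\qquad \mathbf{v} \parallel \nabla V(\mathbf{x}_{\mathrm{BBP}})
```

The proposed Gauss-Newton algorithm locates such points by minimizing the authors' σ-function, whose zeros are the BBPs.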

  7. Optimal Energy Management, Location and Size for Stationary Energy Storage System in a Metro Line Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Huan Xia

    2015-10-01

    Full Text Available The installation of stationary super-capacitor energy storage systems (ESSs) in metro systems can recycle vehicle braking energy and improve the pantograph voltage profile. This paper aims to optimize the energy management, location, and size of stationary super-capacitor ESSs simultaneously, so as to obtain the best economic efficiency and voltage profile of the metro system. First, a simulation platform of an urban rail power supply system, including trains and super-capacitor energy storage systems, is established. Then, two evaluation functions are put forward from the perspectives of economic efficiency and voltage-drop compensation. Finally, a novel optimization method combining genetic algorithms with the simulation platform is proposed, which can obtain the best energy management strategy, location, and size for the ESSs simultaneously. In a simulation comparison using actual parameters of a Chinese metro line, the optimal scheme of ESS energy management strategy, location, and size obtained by the proposed method achieves much better performance with respect to both evaluation functions. The simulation results show that as the weight coefficient increases, the optimal energy management strategy, locations, and sizes of the ESSs exhibit certain regularities, and the best compromise between economic efficiency and voltage-drop compensation can be obtained by the proposed method, which can provide a valuable reference to the subway company.

  8. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  9. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

    Full Text Available A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is developed based on it. The proposed method further decomposes both the KPCA principal space and residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find fault-relevant principal directions and principal components of the systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process. The simulation results show the effectiveness of the proposed method.

  10. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  11. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  12. Automated phase picker and source location algorithm for local distances using a single three component seismic station

    International Nuclear Information System (INIS)

    Saari, J.

    1989-12-01

    The paper describes procedures for automatic location of local events by using single-site, three-component (3c) seismogram records. Epicentral distance is determined from the time difference between P- and S-onsets. For onset time estimates a special phase-picker algorithm is introduced. Onset detection is accomplished by comparing a short-term average with a long-term average after multiplication of the north, east and vertical components of the recording. For epicentral distances up to 100 km, errors seldom exceed 5 km. The slowness vector, essentially the azimuth, is estimated independently by using the Christoffersson et al. (1988) 'polarization' technique, although a priori knowledge of the P-onset time gives the best results. Differences between 'true' and observed azimuths are generally less than 12 degrees. Practical examples are given, demonstrating the viability of the procedures for automated 3c seismogram analysis. The results obtained compare favourably with those achieved by a mini-array of three stations. (orig.)
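
The distance step of this procedure reduces to the classic S-minus-P travel-time rule under constant-velocity, straight-ray assumptions. The velocity values in the sketch are typical crustal figures, not values taken from the report.

```python
def epicentral_distance_km(t_s, t_p, v_p=6.0, v_s=3.5):
    """Estimate epicentral distance from the S-P arrival-time difference.

    Assumes straight rays and constant P and S velocities (km/s);
    the defaults are illustrative crustal values, not the report's.
    """
    dt = t_s - t_p                       # S-P time difference, seconds
    # d/v_s - d/v_p = dt  =>  d = dt * v_p * v_s / (v_p - v_s)
    return dt * (v_p * v_s) / (v_p - v_s)
```

A 10 s S-P delay with these velocities corresponds to roughly 84 km, consistent with the local-distance range the report targets.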

  13. Fault location in optical networks

    Science.gov (United States)

    Stevens, Rick C [Apple Valley, MN; Kryzak, Charles J [Mendota Heights, MN; Keeler, Gordon A [Albuquerque, NM; Serkland, Darwin K [Albuquerque, NM; Geib, Kent M [Tijeras, NM; Kornrumpf, William P [Schenectady, NY

    2008-07-01

    One apparatus embodiment includes an optical emitter and a photodetector. At least a portion of the optical emitter extends a radial distance from a center point. The photodetector is provided around at least a portion of the optical emitter and is positioned outside the radial distance of that portion of the optical emitter.

  14. A seismic fault recognition method based on ant colony optimization

    Science.gov (United States)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important step in seismic interpretation, and although many methods exist for this task, none recognizes faults accurately enough. To address this problem, we propose a new fault recognition method based on ant colony optimization, which can locate faults precisely and extract them from the seismic section. First, seismic horizons are extracted by a connected-component labeling algorithm; second, the fault location is decided according to the horizontal endpoints of each horizon; third, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each block are treated as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as a three-dimensional terrain by using the seismic amplitude as height. The optimal route from nest to food computed by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data, and the experimental results validate the availability and advancement of the proposed method.

  15. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  16. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km depth) has an important impact on subsurface fluid flow, and evaluating that impact requires a combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  17. Fault Diagnosis of Power System Based on Improved Genetic Optimized BP-NN

    Directory of Open Access Journals (Sweden)

    Yuan Pu

    2015-01-01

    Full Text Available BP neural network (Back-Propagation Neural Network, BP-NN) is one of the most widely used neural network models and is currently applied to fault diagnosis of power systems. BP neural networks have good self-learning, adaptive, and generalization abilities, but their training process easily falls into local minima. The genetic algorithm has global optimization features, and crossover is its most important operation. In this paper, we modify the crossover of the traditional genetic algorithm and use the improved genetic algorithm to optimize the initial weights and thresholds of the BP neural network, thereby avoiding the problem of the BP neural network falling into local minima. Analysis of an example shows that the method can efficiently diagnose the network fault location and improve fault tolerance and the grid fault diagnosis effect.

  18. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  19. Location, location, location

    NARCIS (Netherlands)

    Anderson, S.P.; Goeree, J.K.; Ramer, R.

    1997-01-01

    We analyze the canonical location-then-price duopoly game with general log-concave consumer densities. A unique pure-strategy equilibrium to the two-stage game exists if the density is not "too asymmetric" and not "too concave." These criteria are satisfied by many commonly used densities.

  20. Genetic Algorithm for Solving Location Problem in a Supply Chain Network with Inbound and Outbound Product Flows

    Directory of Open Access Journals (Sweden)

    Suprayogi Suprayogi

    2016-12-01

    Full Text Available This paper considers a location problem in a supply chain network. The problem addressed in this paper is motivated by an initiative to develop an efficient supply chain network for supporting agricultural activities. The supply chain network consists of regions, warehouses, distribution centers, plants, and markets. The products include a set of inbound products and a set of outbound products. In this paper, definitions of the inbound and outbound products are seen from the region’s point of view. The inbound product is the product demanded by regions and produced by plants, which flows through a sequence of the following entities: plants, distribution centers, warehouses, and regions. The outbound product is the product demanded by markets and produced by regions, and it flows through a sequence of the following entities: regions, warehouses, and markets. The problem deals with determining the locations of the warehouses and the distribution centers to be opened and the shipment quantities associated with all links on the network that minimize the total cost. The problem can be considered a strategic supply chain network problem. A solution approach based on a genetic algorithm (GA) is proposed. The proposed GA is examined using hypothetical instances, and its results are compared to the solution obtained by solving the mixed integer linear programming (MILP) model. The comparison shows that there is a small gap (0.23%, on average) between the proposed GA and the MILP model in terms of the total cost. The proposed GA consistently provides solutions with the least total cost. In terms of total cost, based on the experiment, it is demonstrated that the coefficients of variation are close to 0.

  1. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Viet Tra

    2017-12-01

    Full Text Available This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing’s speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds.
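    The spectral-energy-map idea can be illustrated in a few lines: bin spectral power into frequency bands and observe that doubling the "speed" of a tone shifts the energy into a neighbouring band rather than changing the map's overall shape. This is a stdlib-only sketch with a naive DFT and toy signals, not the authors' AE pipeline.

```python
import math

def band_energies(signal, n_bands=8):
    """One row of a spectral-energy-map-like feature: total spectral power
    in each of n_bands equal-width frequency bands (naive DFT is fine for a
    short illustration signal)."""
    n = len(signal)
    half = n // 2
    power = []
    for k in range(half):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    width = half // n_bands
    return [sum(power[b * width:(b + 1) * width]) for b in range(n_bands)]

n = 128
slow = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]   # tone at bin 8
fast = [math.sin(2 * math.pi * 16 * t / n) for t in range(n)]  # same tone, doubled "speed"

e_slow = band_energies(slow)   # energy peaks in band 1 (bins 8-15)
e_fast = band_energies(fast)   # energy peaks in band 2 (bins 16-23)
```

The peak simply translates from band 1 to band 2, which is the kind of invariance the CNN is expected to absorb.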

  2. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    Science.gov (United States)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

    Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements

  3. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  4. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  5. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent with the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which...

  6. Product quality management based on CNC machine fault prognostics and diagnosis

    Science.gov (United States)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which involves the combination of ideas of Neuro-fuzzy Networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first and second degree faults. First degree faults are instantaneous in nature, and second degree faults are evolutional and appear as a developing phenomenon which starts from the initial stage, goes through the development stage and finally ends at the mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor’s equation. For fault diagnosis, this framework consists of two phases: the first one focuses on fault prognosis, which is done online, and the second one is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on Condition-Based Maintenance (CBM) or fault diagnosis based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold limit, called the CBM limit, for a command to be issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.

  7. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  8. Optimum location of external markers using feature selection algorithms for real-time tumor tracking in external-beam radiotherapy: a virtual phantom study.

    Science.gov (United States)

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-08

    In external-beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two "Genetic" and "Ranker" searching procedures. The performance of these algorithms has been evaluated using four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F-test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature selection algorithm, in
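    A minimal sketch of the correlation-based selection idea in this record: rank candidate external-marker traces by the absolute Pearson correlation of their motion with the (simulated) tumor motion and keep the best. The marker names and motion data below are invented for illustration; the study's actual criteria and ANFIS predictor are not reproduced here.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation, a stand-in for the correlation-based
    feature selection criterion."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_marker(markers, tumor):
    """Pick the external-marker trace whose motion correlates best with the
    simulated tumor motion."""
    return max(markers, key=lambda name: abs(pearson(markers[name], tumor)))

tumor = [math.sin(2 * math.pi * i / 20) for i in range(20)]   # one breathing cycle
markers = {
    "chest_marker": [0.5 * v for v in tumor],                 # moves with the tumor
    "abdomen_marker": [(-1) ** i for i in range(20)],         # uncorrelated jitter
}
best_marker = select_marker(markers, tumor)
```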

  9. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    Science.gov (United States)

    Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external‐beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation‐based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two “Genetic” and “Ranker” searching procedures. The performance of these algorithms has been evaluated using four‐dimensional extended cardiac‐torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro‐fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F‐test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation‐based feature

  10. Quaternary Fault Lines

    Data.gov (United States)

    Department of Homeland Security — This data set contains locations and information on faults and associated folds in the United States that are believed to be sources of M>6 earthquakes during the...

  11. Using the hybrid fuzzy goal programming model and hybrid genetic algorithm to solve a multi-objective location routing problem for infectious waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Wichapa, Narong; Khokhajaikiat, Porntep

    2017-07-01

    Disposal of infectious waste remains one of the most serious problems in the social and environmental domains of almost every nation. Selection of new suitable locations and finding the optimal set of transport routes to transport infectious waste, namely the location routing problem for infectious waste disposal, is one of the major problems in hazardous waste management. Design/methodology/approach: Due to the complexity of this problem, the location routing problem for a case study, forty hospitals and three candidate municipalities in sub-Northeastern Thailand, was divided into two phases. The first phase is to choose suitable municipalities using a hybrid fuzzy goal programming model which hybridizes the fuzzy analytic hierarchy process and fuzzy goal programming. The second phase is to find the optimal routes for each selected municipality using a hybrid genetic algorithm which hybridizes the genetic algorithm and local searches including the 2-Opt-move, Insertion-move and λ-interchange-move. Findings: The results indicate that the hybrid fuzzy goal programming model can guide the selection of new suitable municipalities, and the hybrid genetic algorithm can provide the optimal routes for a fleet of vehicles effectively. Originality/value: The novelty of the proposed methodologies, the hybrid fuzzy goal programming model, is the simultaneous combination of both intangible and tangible factors in order to choose new suitable locations, and the hybrid genetic algorithm can be used to determine the optimal routes which provide a minimum number of vehicles and minimum transportation cost under the actual situation, efficiently.
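    The 2-Opt-move used as a local search inside the hybrid GA can be sketched on its own: repeatedly reverse a route segment whenever the reversal shortens the tour. The four "sites" below are hypothetical coordinates, not data from the study.

```python
import math

def route_length(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def two_opt(route, dist):
    """2-Opt local search: keep reversing route segments while any
    reversal strictly shortens the tour."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, dist) < route_length(best, dist) - 1e-12:
                    best, improved = cand, True
    return best

# four hypothetical disposal sites on a unit square
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour = two_opt([0, 2, 1, 3, 0], dist)   # start from a self-crossing tour
```

Starting from the crossing tour of length 2 + 2√2, one segment reversal recovers the optimal perimeter tour of length 4.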

  12. Using the hybrid fuzzy goal programming model and hybrid genetic algorithm to solve a multi-objective location routing problem for infectious waste disposal

    International Nuclear Information System (INIS)

    Wichapa, Narong; Khokhajaikiat, Porntep

    2017-01-01

    Disposal of infectious waste remains one of the most serious problems in the social and environmental domains of almost every nation. Selection of new suitable locations and finding the optimal set of transport routes to transport infectious waste, namely the location routing problem for infectious waste disposal, is one of the major problems in hazardous waste management. Design/methodology/approach: Due to the complexity of this problem, the location routing problem for a case study, forty hospitals and three candidate municipalities in sub-Northeastern Thailand, was divided into two phases. The first phase is to choose suitable municipalities using a hybrid fuzzy goal programming model which hybridizes the fuzzy analytic hierarchy process and fuzzy goal programming. The second phase is to find the optimal routes for each selected municipality using a hybrid genetic algorithm which hybridizes the genetic algorithm and local searches including the 2-Opt-move, Insertion-move and λ-interchange-move. Findings: The results indicate that the hybrid fuzzy goal programming model can guide the selection of new suitable municipalities, and the hybrid genetic algorithm can provide the optimal routes for a fleet of vehicles effectively. Originality/value: The novelty of the proposed methodologies, the hybrid fuzzy goal programming model, is the simultaneous combination of both intangible and tangible factors in order to choose new suitable locations, and the hybrid genetic algorithm can be used to determine the optimal routes which provide a minimum number of vehicles and minimum transportation cost under the actual situation, efficiently.

  13. Using the hybrid fuzzy goal programming model and hybrid genetic algorithm to solve a multi-objective location routing problem for infectious waste disposal

    Directory of Open Access Journals (Sweden)

    Narong Wichapa

    2017-11-01

    Originality/value: The novelty of the proposed methodologies, hybrid fuzzy goal programming model, is the simultaneous combination of both intangible and tangible factors in order to choose new suitable locations, and the hybrid genetic algorithm can be used to determine the optimal routes which provide a minimum number of vehicles and minimum transportation cost under the actual situation, efficiently.

  14. A Two-Stage Algorithm for the Closed-Loop Location-Inventory Problem Model Considering Returns in E-Commerce

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

    Full Text Available Facility location and inventory control are critical and highly related problems in the design of a logistics system for e-commerce. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Focusing on this problem in e-commerce logistics systems, we formulate a closed-loop location-inventory problem model considering returned merchandise to minimize the total cost produced in both the forward and reverse logistics networks. To solve this nonlinear mixed programming model, an effective two-stage heuristic algorithm named LRCAC is designed by combining Lagrangian relaxation with the ant colony algorithm (AC). Results of numerical examples show that LRCAC outperforms the ant colony algorithm (AC) in terms of optimal solution quality and computing stability. The proposed model is able to help managers make the right decisions in an e-commerce environment.

  15. Improvements in seismic event locations in a deep western U.S. coal mine using tomographic velocity models and an evolutionary search algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Adam Lurka; Peter Swanson [Central Mining Institute, Katowice (Poland)

    2009-09-15

    Methods of improving seismic event locations were investigated as part of a research study aimed at reducing ground control safety hazards. Seismic event waveforms collected with a 23-station three-dimensional sensor array during longwall coal mining provide the data set used in the analyses. A spatially variable seismic velocity model is constructed using seismic event sources in a passive tomographic method. The resulting three-dimensional velocity model is used to relocate seismic event positions. An evolutionary optimization algorithm is implemented and used in both the velocity model development and in seeking improved event location solutions. Results obtained using the different velocity models are compared. The combination of the tomographic velocity model development and evolutionary search algorithm provides improvement to the event locations. 13 refs., 5 figs., 4 tabs.
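    The evolutionary location step can be caricatured with a toy (mu+lambda)-style loop that perturbs candidate epicentres and keeps the best fits to synthetic arrival times. Unlike the study, it assumes a constant velocity and a known origin time; station geometry, velocity, and all numbers are invented.

```python
import math
import random

STATIONS = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 0)]
V = 3.0   # assumed constant velocity; origin time taken as known (t0 = 0)

def predicted(src):
    """Straight-ray travel times from a trial source to every station."""
    return [math.dist(src, s) / V for s in STATIONS]

def misfit(src, picks):
    return sum((p - t) ** 2 for p, t in zip(predicted(src), picks))

def evolve_location(picks, gens=150, pop=40, seed=2):
    """Tiny elitist evolutionary search over candidate epicentres."""
    random.seed(seed)
    P = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda s: misfit(s, picks))
        parents = P[:pop // 4]               # keep the best quarter (elitism)
        P = parents + [(x + random.gauss(0, 0.4), y + random.gauss(0, 0.4))
                       for x, y in parents for _ in range(3)]
    return min(P, key=lambda s: misfit(s, picks))

true_src = (4.0, 6.0)
picks = predicted(true_src)   # noise-free synthetic picks
est = evolve_location(picks)
```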

  16. The Fault Detection, Localization, and Tolerant Operation of Modular Multilevel Converters with an Insulated Gate Bipolar Transistor (IGBT) Open Circuit Fault

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-04-01

    Full Text Available Reliability is one of the critical issues for a modular multilevel converter (MMC), since it consists of a large number of series-connected power electronics submodules (SMs). In this paper, a complete control strategy including fault detection, localization, and tolerant operation is proposed for the MMC under an insulated gate bipolar transistor (IGBT) open circuit fault. According to the output characteristics of an SM with an IGBT open-circuit fault, a fault detection method based on circulating current and output current observation is used. In order to further precisely locate the position of the faulty SM, a fault localization method based on SM capacitor voltage observation is developed. After the faulty SM is isolated, continuous operation of the converter is ensured by adopting a fault-tolerant strategy based on the use of redundant modules. To verify the proposed fault detection, fault localization, and fault-tolerant operation strategies, a 900 kVA MMC system under the conditions of an IGBT open circuit is developed on the Matlab/Simulink platform. The capabilities of rapid detection, precise positioning, and fault-tolerant operation of the investigated detection and control algorithms are also demonstrated.
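    The capacitor-voltage-based localization step can be caricatured in a few lines: a submodule whose capacitor voltage drifts far from the arm average is flagged as faulty. This is a toy threshold test with invented numbers, not the observer design of the paper.

```python
def locate_faulty_sm(cap_voltages, rel_tol=0.10):
    """Flag submodules whose capacitor voltage deviates from the arm average
    by more than rel_tol -- a toy threshold version of localization by
    capacitor voltage observation."""
    avg = sum(cap_voltages) / len(cap_voltages)
    return [i for i, v in enumerate(cap_voltages) if abs(v - avg) / avg > rel_tol]

# illustrative arm: SM 2 overcharges after an IGBT fails open
volts = [1600.0, 1605.0, 1950.0, 1598.0, 1602.0]
```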

  17. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  18. Development of double-pair double difference location algorithm and its application to the regular earthquakes and non-volcanic tremors

    Science.gov (United States)

    Guo, H.; Zhang, H.

    2016-12-01

    Relocating earthquakes with high precision is a central task for monitoring earthquakes and studying the structure of the earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations, to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To utilize the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method can remove the event origin time and station correction terms from the inversion system and cancel out the effects of velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method on northern California regular earthquakes to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method can determine more accurate relative locations and the station-pair DD method can better improve absolute locations. Thus, we further proposed a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick the first arrivals and derive WCC event-pair differential times, thus the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low-precision relative locations. The ability that double-pair data
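    The cancellation property that motivates double-pair differential times is easy to verify numerically: writing an arrival as origin time + travel time + station term, the (event pair) x (station pair) difference removes both nuisance terms and leaves pure travel-time differences. The values below are invented for illustration.

```python
# synthetic travel times T[(event, station)] plus unknown nuisance terms
T = {('i', 'k'): 3.2, ('i', 'l'): 4.1, ('j', 'k'): 2.9, ('j', 'l'): 4.4}
origin = {'i': 10.0, 'j': 12.5}   # event origin times (unknown in practice)
delay = {'k': 0.7, 'l': -0.3}     # station correction terms (also unknown)

# observed arrival times contain both nuisance terms
t = {(e, s): origin[e] + T[(e, s)] + delay[s] for e in 'ij' for s in 'kl'}

# double-pair differential time: (event pair i,j) x (station pair k,l)
dp_obs = (t[('i', 'k')] - t[('j', 'k')]) - (t[('i', 'l')] - t[('j', 'l')])
dp_true = (T[('i', 'k')] - T[('j', 'k')]) - (T[('i', 'l')] - T[('j', 'l')])
# dp_obs equals dp_true: origin times and station terms cancel exactly
```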

  19. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    Directory of Open Access Journals (Sweden)

    Mehdi Shadaram

    2010-10-01

    Full Text Available In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The Fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely determined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.
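    The Bragg-shift arithmetic behind such a sensor is compact enough to show directly: the reflected wavelength is lambda_B = 2 * n_eff * Lambda, and an axial strain shifts it by roughly lambda_B * (1 - p_e) * strain. The numbers below (effective index, grating period, p_e ≈ 0.22 for silica fibre) are typical textbook values, not parameters from this study.

```python
def bragg_wavelength(n_eff, period_nm):
    """lambda_B = 2 * n_eff * Lambda: the basic FBG reflection condition."""
    return 2.0 * n_eff * period_nm

def strain_induced_shift(lambda_b_nm, strain, p_e=0.22):
    """Delta-lambda ~= lambda_B * (1 - p_e) * strain, with p_e the effective
    photo-elastic coefficient (~0.22 for silica)."""
    return lambda_b_nm * (1.0 - p_e) * strain

lam = bragg_wavelength(1.45, 535.0)        # grating reflecting near 1551.5 nm
shift = strain_induced_shift(lam, 100e-6)  # 100 microstrain from the magnetostrictive element
```

A 100-microstrain pull thus moves the reflection by roughly 0.12 nm, well within the resolution of a typical FBG interrogator.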

  20. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Science.gov (United States)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into 16x16 blocks and numbered. On the basis of modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used for calculating the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgment. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified using data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, confirming the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
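    The block-wise screening step described above can be sketched as follows: average the string currents of each block, then flag blocks whose mean falls well below the array-wide mean. The threshold logic is a simplified stand-in for the paper's fault weight factor, and the block data are invented.

```python
def faulty_blocks(block_currents, drop=0.15):
    """Flag blocks whose mean current falls more than `drop` (15% here) below
    the array-wide mean of block averages -- a simplified stand-in for the
    fault weight factor and threshold of the paper."""
    means = {b: sum(v) / len(v) for b, v in block_currents.items()}
    overall = sum(means.values()) / len(means)
    return sorted(b for b, m in means.items() if m < (1.0 - drop) * overall)

# four toy blocks of string currents (amps); block 2 is under-performing
blocks = {0: [5.0, 5.1], 1: [5.0, 4.9], 2: [3.0, 3.1], 3: [5.1, 5.0]}
```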

  1. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  2. Application of Hybrid HS and Tabu Search Algorithm for Optimal Location of FACTS Devices to Reduce Power Losses in Power Systems

    Directory of Open Access Journals (Sweden)

    Z. Masomi Zohrabad

    2016-12-01

    Full Text Available Power networks continue to grow following the annual growth of energy demand. As constructing new energy generation facilities bears a high cost, minimizing power grid losses becomes essential to permit low-cost energy transmission over larger distances and additional areas. This study models an optimization problem for an IEEE 30-bus power grid using a Tabu search algorithm based on an improved hybrid Harmony Search (HS) method to reduce overall grid losses. The proposed algorithm is applied to find the best location for the installation of a Unified Power Flow Controller (UPFC). The results obtained from installation of the UPFC in the grid are presented by displaying the outputs.
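    A minimal Tabu search over candidate UPFC buses might look like the sketch below, where the per-bus loss figures are a hypothetical precomputed table (a real study would run a power flow per candidate, and would hybridize with Harmony Search) and the neighbourhood is simply adjacent bus indices.

```python
from collections import deque

def tabu_search(losses, iters=20, tenure=3):
    """losses[b]: estimated total grid loss with the UPFC at candidate bus b.
    Each step moves to the best adjacent bus; a short tabu list blocks
    recently visited buses unless they beat the best loss so far (aspiration)."""
    cur = best = 0
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        neigh = [b for b in (cur - 1, cur + 1) if 0 <= b < len(losses)]
        cands = [b for b in neigh if b not in tabu or losses[b] < losses[best]]
        if not cands:
            break
        cur = min(cands, key=lambda b: losses[b])
        tabu.append(cur)
        if losses[cur] < losses[best]:
            best = cur
    return best
```

On the toy loss table [9, 7, 8, 6, 3, 5, 4], the tabu list lets the walk climb out of the local dip at bus 1 and settle on bus 4, the global minimum.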

  3. Solving a Closed-Loop Location-Inventory-Routing Problem with Mixed Quality Defects Returns in E-Commerce by Hybrid Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuai Deng

    2016-01-01

    Full Text Available This paper presents a closed-loop location-inventory-routing problem model considering both quality-defect returns and non-defect returns in an e-commerce supply chain system. The objective is to minimize the total cost produced in both the forward and reverse logistics networks. We propose a combined optimization algorithm named the hybrid ant colony optimization algorithm (HACO) to address this model, which is an NP-hard problem. Our experimental results show that the proposed HACO is considerably efficient and effective in solving this model.

  4. Crustal structure of the Ionian basin and eastern Sicily margin: results from a wide angle seismic survey and implication for the crustal nature and origin of the basin, and the recent tear fault location

    Science.gov (United States)

    Gutscher, M. A.; Dellong, D.; Klingelhoefer, F.; Kopp, H.; Graindorge, D.; Margheriti, L.; Moretti, M.

    2017-12-01

    In the Ionian Sea (Central Mediterranean), the slow convergence between Africa and Eurasia results in the formation of a narrow subduction zone. The nature of the crust and lithosphere of the subducting plate remains debated; they could represent the last remnants of the Neo-Tethys ocean. The rifting mechanisms that produced the Ionian basin are also still under discussion, with the Malta Escarpment representing a possible remnant of this opening. At present, this subduction is still retreating to the south-east (motion occurring for the last 35 Ma) but is confined to the narrow Ionian Basin. In order to accommodate slab roll-back, a major lateral slab tear fault is required. This fault is thought to propagate along the eastern Sicily margin, but its precise location remains controversial. This study focuses on the deep crustal structure of the eastern Sicily margin and the Malta Escarpment by presenting two wide-angle velocity profiles crossing these structures roughly orthogonally. The data used for the forward velocity modeling were acquired onboard the R/V Meteor during the DIONYSUS cruise in 2014. The results image an oceanic crust within the Ionian basin as well as the deep structure of the Malta Escarpment, which presents the characteristics of a transform margin. A deep and asymmetrical sedimentary basin is imaged south of the Messina strait and seems to have opened in between the Calabrian and Peloritan continental terranes. The interpretation of the velocity models suggests that the tear fault is located east of the Malta Escarpment, along the Alfeo fault system.

  5. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    Science.gov (United States)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and apply it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in the case of inconsistent discontinuities between these models, joint inversion retains the advantages of the individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust, probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low-velocity layers and a high-conductivity layer are observed in the middle-lower crust, probably indicating mid-crustal channel flow. Along the Longmenshan fault zone, a high-conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  6. Multiobjective optimization of strategies for operation and testing of low-demand safety instrumented systems using a genetic algorithm and fault trees

    International Nuclear Information System (INIS)

    Longhi, Antonio Eduardo Bier; Pessoa, Artur Alves; Garcia, Pauli Adriano de Almada

    2015-01-01

    Since low-demand safety instrumented systems (SISs) do not operate continuously, their failures are often only detected when the system is demanded or tested. The conduction of tests, besides adding costs, can raise risks of failure on demand during their execution and also increase the frequency of spurious activation. Additionally, it is often necessary to interrupt production to carry out tests. In light of this scenario, this paper presents a model to optimize strategies for operation and testing of these systems, applying modeling by fault trees associated with optimization by a genetic algorithm. Its main differences are: (i) ability to represent four modes of operation and test them for each SIS subsystem; (ii) ability to represent a SIS that executes more than one safety instrumented function; (iii) ability to keep track of the down-time generated in the production system; and (iv) alteration of a genetic selection mechanism that permits identification of more efficient solutions with smaller influence on the optimization parameters. These aspects are presented by applying this model in three case studies. The results obtained show the applicability of the proposed approach and its potential to help make more informed decisions. - Highlights: • Models the integrity and cost related to operation and testing of low-demand SISs. • Keeps track of the production down-time generated by SIS tests and repairs. • Allows multiobjective optimization to identify operation and testing strategies. • Enables integrated assessment of an SIS that executes more than one SIF. • Allows altering the selection mechanism to identify the most efficient strategies

  7. Multiple resolution chirp reflectometry for fault localization and diagnosis in a high voltage cable in automotive electronics

    Science.gov (United States)

    Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae

    2016-12-01

    A multiple-resolution chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of fault in a target cable. The multiple-resolution algorithm can localize faults regardless of fault location. The time delay information, which is derived from the normalized cross-correlation between the incident signal and bandpass-filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering the mixed signal of the incident signal and the reflected signal. Based on the in-phase and quadrature components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. Also, the measurement uncertainty for this experiment is analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conduct comparative experiments to detect and measure faults under different conditions. Considering the installation environment of the high-voltage cable used in an actual vehicle, the target cable length and fault position are designed accordingly. To simulate the degree of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) are used and estimated by the proposed method. The proposed method demonstrates advantages in that it has multiple resolution to overcome the blind-spot problem, and it can assess the state of the fault.
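The delay-to-distance step this abstract describes can be sketched as follows. The signal shapes, sample rate, and propagation velocity below are illustrative assumptions, not values from the paper; a real chirp system would also apply the bandpass filtering stage omitted here.

```python
# Sketch: locating a cable fault from a reflectometry trace by
# normalized cross-correlation between the incident signal and the
# reflected signal, then converting the round-trip delay to distance.

def normalized_cross_correlation(incident, reflected):
    """Return the lag (in samples) that maximizes the normalized
    cross-correlation between incident and reflected signals."""
    n = len(reflected) - len(incident) + 1
    best_lag, best_score = 0, float("-inf")
    energy_i = sum(x * x for x in incident) ** 0.5
    for lag in range(n):
        window = reflected[lag:lag + len(incident)]
        energy_w = sum(x * x for x in window) ** 0.5 or 1e-12
        score = sum(a * b for a, b in zip(incident, window)) / (energy_i * energy_w)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def fault_distance(lag_samples, sample_rate_hz, velocity_m_s):
    """Convert a round-trip delay to a one-way fault distance."""
    delay_s = lag_samples / sample_rate_hz
    return velocity_m_s * delay_s / 2.0

# Toy example: an incident pulse whose attenuated echo returns 200 samples later.
incident = [0.0, 1.0, 0.5, -0.2]
reflected = [0.0] * 200 + [0.0, 0.8, 0.4, -0.16] + [0.0] * 50
lag = normalized_cross_correlation(incident, reflected)
distance = fault_distance(lag, sample_rate_hz=1e9, velocity_m_s=2e8)  # ~0.66c cable
```

Because the correlation is normalized by window energy, the attenuated echo still scores near 1.0 at the correct lag, which is what makes the approach robust to the reflection-coefficient magnitude.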

  8. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  9. A Real-Time Location-Based Services System Using WiFi Fingerprinting Algorithm for Safety Risk Assessment of Workers in Tunnels

    Directory of Open Access Journals (Sweden)

    Peng Lin

    2014-01-01

    Full Text Available This paper investigates the feasibility of a real-time tunnel location-based services (LBS) system to provide workers' safety protection and various services at a concrete dam site. In this study, received signal strength (RSS)-based location using a fingerprinting algorithm and artificial neural network (ANN) risk assessment are employed for position analysis. This tunnel LBS system achieves online, real-time, intelligent tracking and identification, and the on-site running system has many functions such as worker emergency call, track history, and location query. Based on an ANN, with its strong nonlinear mapping and large-scale parallel processing capabilities, the proposed LBS system is effective in evaluating the risk management of worker safety. The field implementation shows that the proposed location algorithm is reliable and accurate (3 to 5 meters) enough for providing a real-time positioning service. The proposed LBS system is demonstrated and first applied to the second largest hydropower project in the world, to track workers on the tunnel site and assure their safety. The results show that the system is simple and easily deployed.
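The offline/online fingerprinting scheme this abstract relies on can be sketched as a nearest-neighbour search in signal space. The access-point names, RSS values, and survey positions below are invented for illustration, and the paper's ANN risk-assessment stage is omitted.

```python
# Sketch of RSS fingerprinting: an offline survey builds a database of
# RSS vectors at known positions; online, a live reading is matched to
# the closest stored fingerprint (Euclidean distance in dBm).

import math

# Offline phase: (x, y) survey points with RSS (dBm) from three access points.
fingerprints = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -80},
    (10.0, 0.0): {"ap1": -70, "ap2": -45, "ap3": -75},
    (5.0, 8.0): {"ap1": -65, "ap2": -60, "ap3": -50},
}

def locate(reading, db):
    """Online phase: return the surveyed position whose fingerprint
    is closest to the live reading in signal space."""
    def dist(fp):
        return math.sqrt(sum((reading[ap] - fp[ap]) ** 2 for ap in reading))
    return min(db, key=lambda pos: dist(db[pos]))

position = locate({"ap1": -42, "ap2": -68, "ap3": -79}, fingerprints)
```

In practice the database is much denser and a k-nearest-neighbour average smooths the estimate; this single-nearest version is the minimal form of the idea.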

  10. 40 CFR 258.13 - Fault areas.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in Holocene...

  11. An efficient diagnostic technique for distribution systems based on under fault voltages and currents

    Energy Technology Data Exchange (ETDEWEB)

    Campoccia, A.; Di Silvestre, M.L.; Incontrera, I.; Riva Sanseverino, E. [Dipartimento di Ingegneria Elettrica elettronica e delle Telecomunicazioni, Universita degli Studi di Palermo, viale delle Scienze, 90128 Palermo (Italy); Spoto, G. [Centro per la Ricerca Elettronica in Sicilia, Monreale, Via Regione Siciliana 49, 90046 Palermo (Italy)

    2010-10-15

    Service continuity is one of the major aspects in the definition of the quality of electrical energy; for this reason, research in the field of fault diagnostics for distribution systems is spreading ever more. Moreover, the increasing interest in modern distribution system automation for management purposes gives fault diagnostics more tools to detect outages precisely and in short times. In this paper, the applicability of an efficient fault location and characterization methodology within a centralized monitoring system is discussed. The methodology, appropriate for any kind of fault, is based on the analytical model of the network lines and uses the fundamental-component rms values taken from transient measurements of line currents and voltages at the MV/LV substations. The fault location and identification algorithm, proposed by the authors and suitably restated, has been implemented on a microprocessor-based device that can be installed at each MV/LV substation. The speed and precision of the algorithm have been tested against the errors deriving from the fundamental extraction within the prescribed fault clearing times and against the inherent precision of the electronic device used for computation. The tests have been carried out using Matlab Simulink to simulate the faulted system. (author)

  12. Diagnosis and Tolerant Strategy of an Open-Switch Fault for T-type Three-Level Inverter Systems

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo Beum; Blaabjerg, Frede

    2014-01-01

    This paper proposes a new diagnosis method for an open-switch fault and a fault-tolerant control strategy for T-type three-level inverter systems. The location of the faulty switch can be identified from the average of the normalized phase current and the change of the neutral-point voltage. The proposed fault-tolerant strategy is explained by dividing it into two cases: the faulty condition of the half-bridge switches and of the neutral-point switches. The performance of the T-type inverter system improves considerably with the proposed fault-tolerant algorithm when a switch fails. The proposed method does not require additional components or complex calculations. Simulation and experimental results verify the feasibility of the proposed fault diagnosis and fault-tolerant control strategy.

  13. [Consideration of algorithms to presume the lesion location by using X-ray images of the stomach--geometric analysis of four direction radiography for the U region].

    Science.gov (United States)

    Henmi, Shuichi

    2013-01-01

    The author considered algorithms to presume the lesion location from a series of X-ray images obtained by four-direction radiography without a blind area for the U region of the stomach. The objects of analysis were six cases in which protruding lesions were noticed in the U region. Firstly, from the length of the short axis and the lateral width of the U region projected on the film, we presumed the length of the longitudinal axis and the angle between the short axis and the film. Secondly, we calculated, for the right and left stomach walls in every image, the ratio of the distance to the wall to the lateral width at the height passing through the center of the lesion. Using the lesion locations calculated from these values, we took the values that almost agreed between two images to be the lesion location. As a result of the analysis, there were cases in which the lesion location could be presumed with certainty or only tentatively, and other cases in which it could not be presumed at all. Since the form of the U region can be distorted by a change of position, or the angle between the longitudinal axis and the sagittal plane can change, errors may have been introduced into the calculation, and so it was considered that in those cases the lesion location could not be presumed.

  14. YF22 Model With On-Board On-Line Learning Microprocessors-Based Neural Algorithms for Autopilot and Fault-Tolerant Flight Control Systems

    National Research Council Canada - National Science Library

    Napolitano, Marcello

    2002-01-01

    This project focused on investigating the potential of on-line learning 'hardware-based' neural approximators and controllers to provide fault tolerance capabilities following sensor and actuator failures...

  15. An inverse model for locating skin tumours in 3D using the genetic algorithm with the Dual Reciprocity Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Fabrício Ribeiro Bueno

    Full Text Available Here, the Dual Reciprocity Boundary Element Method (DRBEM) is used to solve the 3D Pennes Bioheat Equation, which together with a Genetic Algorithm produces an inverse model capable of obtaining the location and the size of a tumour, having as data input the temperature distribution measured on the skin surface. Given that the objective function, which is solved inversely, involves the DRBEM, the Genetic Algorithm in its usual form becomes slower, in such a way that it was necessary to develop functions based on the solution history in order that the process become quicker and more accurate. Results for 8 examples are presented, including cases with convection and radiation boundary conditions. Cases involving noise in the readings of the equipment are also considered. This technique is intended to assist health workers in the diagnosis of tumours.

  16. Solving the competitive facility location problem considering the reactions of competitor with a hybrid algorithm including Tabu Search and exact method

    Science.gov (United States)

    Bagherinejad, Jafar; Niknam, Azar

    2018-03-01

    In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model for locating new facilities and determining the quality levels of the leader firm's facilities is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor could react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to the customer and on its quality, and is calculated based on the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and on the budget for setting up new facilities. This problem is formulated as a bi-level mixed-integer non-linear model. The model is solved using a combination of Tabu Search with an exact method. The performance of the proposed algorithm is compared with an upper bound that is achieved by applying the Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in a reasonable time.
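The probabilistic Huff model mentioned in the abstract assigns each facility a capture probability proportional to its quality divided by a power of its distance to the customer. The facility qualities, distances, and distance-decay exponent below are illustrative assumptions, not the paper's data.

```python
# Sketch of the Huff model: a customer's probability of patronizing
# facility j is its attraction (quality / distance^decay) divided by
# the total attraction of all facilities.

def huff_probabilities(qualities, distances, decay=2.0):
    """Return each facility's capture probability for one customer."""
    attractions = [q / (d ** decay) for q, d in zip(qualities, distances)]
    total = sum(attractions)
    return [a / total for a in attractions]

# Two leader facilities and one competitor facility, as seen by one customer.
probs = huff_probabilities(qualities=[8.0, 5.0, 6.0], distances=[2.0, 1.0, 3.0])
# The probabilities sum to 1; the nearer facility captures the largest share
# even though its quality is lowest, because distance enters quadratically.
```

Summing these probabilities over all customers (weighted by buying power) gives each facility's expected market share, which is the quantity both firms optimize in the bi-level model.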

  17. A Method for Aileron Actuator Fault Diagnosis Based on PCA and PGC-SVM

    Directory of Open Access Journals (Sweden)

    Wei-Li Qin

    2016-01-01

    Full Text Available Aileron actuators are pivotal components of the aircraft flight control system. Thus, the fault diagnosis of aileron actuators is vital for enhancing reliability and fault-tolerant capability. This paper presents an aileron actuator fault diagnosis approach combining principal component analysis (PCA), grid search (GS), 10-fold cross validation (CV), and a one-versus-one support vector machine (SVM). This method is referred to as PGC-SVM and utilizes the direct drive valve input, force motor current, and displacement feedback signal to realize fault detection and location. First, several common faults of aileron actuators, which include force motor coil break, sensor coil break, cylinder leakage, and amplifier gain reduction, are extracted from the fault quadrantal diagram, and the corresponding fault mechanisms are analyzed. Second, data feature extraction is performed with dimension reduction using PCA. Finally, the GS and CV algorithms are employed to train a one-versus-one SVM for fault classification, thus obtaining the optimal model parameters and assuring the generalization of the trained SVM, respectively. To verify the effectiveness of the proposed approach, four types of faults are introduced into the simulation model established in AMESim and Simulink. The results demonstrate its desirable diagnostic performance, which outperforms that of the traditional SVM by comparison.

  18. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    Directory of Open Access Journals (Sweden)

    Naseem Cassim

    2017-02-01

    Full Text Available Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set-coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
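The set-coverage idea behind this allocation — choose sites so that every facility cluster is reachable within the travel-time limit T — can be illustrated with the standard greedy set-cover heuristic. This is not the authors' RACL algorithm, and the site names and coverage sets below are invented.

```python
# Greedy set cover: repeatedly open the candidate site that covers the
# most still-uncovered clusters, until every cluster is covered.

def greedy_cover(universe, coverage):
    """coverage maps candidate site -> set of clusters reachable within T."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some clusters cannot be covered by any site")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

clusters = {"c1", "c2", "c3", "c4", "c5"}
reachable = {
    "lab_A": {"c1", "c2", "c3"},
    "lab_B": {"c3", "c4"},
    "lab_C": {"c4", "c5"},
}
sites = greedy_cover(clusters, reachable)
# lab_A is opened first (covers three clusters), then lab_C covers the rest,
# so full coverage needs only two of the three candidate sites.
```

Adding per-site capacity limits, as the capacitated RACL formulation does, turns this into a harder optimization problem, but the coverage logic is the same.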

  19. Helicopter Based Magnetic Detection Of Wells At The Teapot Dome (Naval Petroleum Reserve No. 3 Oilfield: Rapid And Accurate Geophysical Algorithms For Locating Wells

    Science.gov (United States)

    Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.

    2011-12-01

    In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Figure 1) using the Midas II system over the 39 km2 NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real-time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km2 (NPR3) near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well-location method combined an input dataset (for example, the leveled total magnetic field reduced to the pole) with its first and second horizontal spatial derivatives; these were then analyzed using focal statistics and finally combined using a fuzzy combination operation. The analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. A parameter could be adjusted to determine sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.
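The horizontal-derivative step this abstract describes can be sketched on a toy grid: the total horizontal gradient of the magnetic field peaks around compact sources such as steel well casings. The grid values and threshold below are invented, and the focal-statistics and fuzzy-combination stages are omitted.

```python
# Sketch: central-difference total horizontal derivative of a gridded
# magnetic field, then threshold to flag candidate well locations.
# Note the gradient is zero at the symmetric anomaly's centre and
# peaks in a ring around it, which is characteristic of this attribute.

def horizontal_gradient(grid):
    """Total horizontal derivative sqrt(dx^2 + dy^2) of a 2-D grid
    (borders left at zero for simplicity)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dx = (grid[i][j + 1] - grid[i][j - 1]) / 2.0
            dy = (grid[i + 1][j] - grid[i - 1][j]) / 2.0
            out[i][j] = (dx * dx + dy * dy) ** 0.5
    return out

# A 5x5 field with one well-like anomaly at the centre.
field = [[0, 0, 0, 0, 0],
         [0, 0, 5, 0, 0],
         [0, 5, 50, 5, 0],
         [0, 0, 5, 0, 0],
         [0, 0, 0, 0, 0]]
grad = horizontal_gradient(field)
candidates = [(i, j) for i in range(5) for j in range(5) if grad[i][j] > 10]
# candidates form a ring of four cells around the anomaly at (2, 2)
```

The paper's method goes further, combining first and second derivatives with focal statistics and a fuzzy operator to collapse such rings into single well picks.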

  20. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2013-01-01

    Full Text Available Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
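The simulated-annealing half of hybrids like HGSAA can be sketched on a stripped-down facility-location objective: anneal over which candidate depots to open, scoring each configuration by opening cost plus customer assignment distance. The costs, coordinates, and cooling schedule below are invented, and the genetic crossover, routing, and inventory terms of the full model are omitted.

```python
# Minimal simulated annealing for facility location: each move opens or
# closes one depot; worse moves are accepted with probability
# exp(-delta / temperature), which decays as the temperature cools.

import math
import random

customers = [(0, 0), (8, 1), (1, 7), (9, 9)]
depots = [(0, 1), (9, 0), (1, 8), (5, 5)]
open_cost = 10.0

def total_cost(open_set):
    """Opening cost plus nearest-open-depot distance for every customer."""
    if not open_set:
        return float("inf")  # no open depot is infeasible
    assign = sum(min(math.dist(c, depots[d]) for d in open_set) for c in customers)
    return open_cost * len(open_set) + assign

def anneal(steps=2000, t0=5.0, cooling=0.995, seed=1):
    rng = random.Random(seed)
    state = {0}                          # start with only the first depot open
    best, best_cost = set(state), total_cost(state)
    t = t0
    for _ in range(steps):
        flip = rng.randrange(len(depots))
        neighbour = state ^ {flip}       # open or close one depot
        delta = total_cost(neighbour) - total_cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state = neighbour
            cost = total_cost(state)
            if cost < best_cost:
                best, best_cost = set(state), cost
        t *= cooling
    return best, best_cost

best, best_cost = anneal()
```

In the hybrid algorithm, a genetic population supplies diverse starting states and crossover recombines them, while annealing provides the local refinement shown here.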

  1. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    Science.gov (United States)

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.

  2. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Science.gov (United States)

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  3. A Pseudo-Parallel Genetic Algorithm Integrating Simulated Annealing for Stochastic Location-Inventory-Routing Problem with Consideration of Returns in E-Commerce

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2015-01-01

    Full Text Available Facility location, inventory control, and vehicle route scheduling are three key issues to be settled in the design of a logistics system for e-commerce. Due to the online shopping features of e-commerce, customer returns are becoming much more common than in traditional commerce. This paper studies a three-phase supply chain distribution system consisting of one supplier, a set of retailers, and a single type of product with a continuous-review (Q, r) inventory policy. We formulate a stochastic location-inventory-routing problem (LIRP) model with no-quality-defect returns. To solve this NP-hard problem, a pseudo-parallel genetic algorithm integrating simulated annealing (PPGASA) is proposed. The computational results show that PPGASA outperforms GA on optimal solution, computing time, and computing stability.
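The continuous-review (Q, r) policy assumed by the model is simple to state: whenever the inventory position falls to the reorder point r, order a fixed quantity Q. The demand stream and parameters below are toy values, and replenishment lead time is assumed zero for brevity.

```python
# Sketch of a continuous-review (Q, r) inventory policy: review the
# level after every demand; when it drops to or below r, receive Q.

def simulate_qr(demands, q, r, start_level):
    """Return the inventory level after each demand and the periods in
    which an order was placed (zero lead time assumed)."""
    level, orders, history = start_level, [], []
    for t, d in enumerate(demands):
        level -= d
        if level <= r:        # hit the reorder point: order Q units
            level += q
            orders.append(t)
        history.append(level)
    return history, orders

history, orders = simulate_qr(demands=[3, 4, 2, 6, 1], q=10, r=2, start_level=12)
# One order is triggered, at the period where demand drives the level below r.
```

In the stochastic LIRP model, Q and r become decision variables per retailer, coupled to the location and routing decisions through holding and ordering costs.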

  4. Fault-tolerant control for current sensors of doubly fed induction generators based on an improved fault detection method

    DEFF Research Database (Denmark)

    Li, Hui; Yang, Chao; Hu, Yaogang

    2014-01-01

    Fault-tolerant control of current sensors is studied in this paper to improve the reliability of a doubly fed induction generator (DFIG). A fault-tolerant control system for current sensors is presented for the DFIG, consisting of a new current observer and an improved current sensor fault detection algorithm; the observer, detection algorithm, and fault-tolerant control system are investigated by simulation. The results indicate that the outputs of the observer and the sensor are highly coherent. The fault detection algorithm can efficiently detect both soft and hard faults in current sensors, and the fault-tolerant control...

  5. Nonlinear Model-Based Fault Detection for a Hydraulic Actuator

    NARCIS (Netherlands)

    Van Eykeren, L.; Chu, Q.P.

    2011-01-01

    This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost

  6. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

    Full Text Available This paper deals with fault modeling for analog circuits. A two-dimensional (2D) fault model is first proposed based on a collaborative analysis of supply current and output voltage. This model is a family of circle loci on the complex plane, and it greatly simplifies the algorithms for test point selection and potential fault simulation, which are primary difficulties in the fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D) complex space is proposed, which achieves a far better fault detection ratio (FDR) against measurement error and parametric tolerance. To address the problem of fault masking in both the 2D and 3D fault models, this paper proposes an effective design-for-testability (DFT) method. By adding redundant bypassing components to the circuit under test (CUT), this method achieves an excellent fault isolation ratio (FIR) in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through the experimental results provided in this paper.

  7. S-velocity structure in Cimandiri fault zone derived from neighbourhood inversion of teleseismic receiver functions

    Science.gov (United States)

    Syuhada; Anggono, T.; Febriani, F.; Ramdhan, M.

    2018-03-01

    The availability of information about a realistic velocity model of the Earth in a fault zone is crucial for quantitative seismic hazard analysis, such as ground motion modelling and the determination of earthquake locations and focal mechanisms. In this report, we use teleseismic receiver functions to invert for the S-velocity model beneath a seismic station located in the Cimandiri fault zone using the neighbourhood algorithm inversion method. The result suggests the crustal thickness beneath the station is about 32-38 km. Furthermore, low-velocity layers with high Vp/Vs exist in the lower crust, which may indicate the presence of hot material ascending from the subducted slab.

  8. Fault Diagnosis and Fault Tolerant Control with Application on a Wind Turbine Low Speed Shaft Encoder

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Sardi, Hector Eloy Sanchez; Escobet, Teressa

    2015-01-01

    tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

  9. Optimal Installation Locations for Automated External Defibrillators in Taipei 7-Eleven Stores: Using GIS and a Genetic Algorithm with a New Stirring Operator

    Directory of Open Access Journals (Sweden)

    Chung-Yuan Huang

    2014-01-01

    Full Text Available Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.

  10. Direct Index Method of Beam Damage Location Detection Based on Difference Theory of Strain Modal Shapes and the Genetic Algorithms Application

    Directory of Open Access Journals (Sweden)

    Bao Zhenming

    2012-01-01

    Full Text Available Structural damage identification is used to determine the health status of a structure and analyze the test results. The three key problems to be solved are as follows: detecting the existence of damage in the structure, detecting the damage location, and confirming the damage degree or damage form. Damage generally changes the structure's physical properties (i.e., stiffness, mass, and damping) and, correspondingly, the modal characteristics of the structure (i.e., natural frequencies, mode shapes, and modal damping). Research results show that the strain mode can be more sensitive and effective for local damage. The direct index method of damage location detection is based on difference theory, without the modal parameters of the original structure. FEM numerical simulation of partial cracks of different degrees is performed. The criterion for damage location detection can be obtained from the strain mode difference curve through cubic spline interpolation. The genetic algorithm toolbox in Matlab is also used. It has been possible to identify the damage to a reasonable level of accuracy.
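The difference-based damage index this abstract describes can be sketched directly: subtract strain mode shapes measured in two states and take the node with the largest change as the suspected damage location. The mode-shape values below are synthetic, and the cubic-spline refinement and genetic algorithm stages are omitted.

```python
# Sketch: per-node absolute difference of strain mode shapes between an
# intact and a damaged state; the peak of the difference curve marks
# the suspected damage location.

def damage_index(mode_intact, mode_damaged):
    """Return the per-node absolute strain-mode difference and the
    index of the node where it peaks."""
    diff = [abs(a - b) for a, b in zip(mode_intact, mode_damaged)]
    return diff, diff.index(max(diff))

# Strain mode shape of a beam sampled at 7 nodes, before and after a
# local crack near node 4 perturbs the strain field there.
intact = [0.00, 0.31, 0.59, 0.81, 0.95, 1.00, 0.95]
damaged = [0.00, 0.31, 0.60, 0.83, 1.03, 1.01, 0.95]
diff, location = damage_index(intact, damaged)
```

Because only the difference between the two measured states is used, the method needs no modal parameters of the pristine design model, which is the point the abstract makes about difference theory.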

  11. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  12. Wilshire fault: Earthquakes in Hollywood?

    Science.gov (United States)

    Hummon, Cheryl; Schneider, Craig L.; Yeats, Robert S.; Dolan, James F.; Sieh, Kerry E.; Huftile, Gary J.

    1994-04-01

    The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

  13. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  14. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the earth's crust occur over a large range of scales, from microscale over mesoscopic to large basin-scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but rather combines with continuous near-field deformation in the wall rock, a phenomenon generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. On an outcrop-scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip-domain technique was employed. The results indicate that a listric shape can be excluded for this normal fault, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large scale normal fault

  15. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  16. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
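The entropy cue described in this record can be computed directly from an image's gray-level histogram. A minimal numpy-only sketch follows; the function name and the toy images are assumptions for illustration, not code from the paper.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram.
    Low entropy: likely one dominant object or background; high entropy:
    likely several different objects (the navigation cue described above)."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128, dtype=np.uint8)            # uniform scene
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # cluttered scene
print(image_entropy(flat), image_entropy(noisy))  # low vs. high entropy
```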

  17. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  18. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept, developed based on those requirements, takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  19. Fault-ignorant quantum search

    International Nuclear Information System (INIS)

    Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M

    2014-01-01

    We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)

  20. Fault Detection for Shipboard Monitoring and Decision Support Systems

    DEFF Research Database (Denmark)

    Lajic, Zoran; Nielsen, Ulrik Dam

    2009-01-01

    In this paper a basic idea of a fault-tolerant monitoring and decision support system will be explained. Fault detection is an important part of the fault-tolerant design for in-service monitoring and decision support systems for ships. In the paper, a virtual example of fault detection will be presented for a containership with a real decision support system onboard. All possible faults can be simulated and detected using residuals and the generalized likelihood ratio (GLR) algorithm.
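A generalized likelihood ratio test for a jump in the mean of Gaussian residuals can be sketched as follows. This is an illustrative textbook implementation, not the shipboard system's code; the threshold and the noise-free residual sequence are invented for the example.

```python
import numpy as np

def glr_mean_change(residuals, sigma, threshold):
    """GLR test for a jump in the mean of Gaussian residuals with known
    standard deviation `sigma`: maximise the log-likelihood ratio over
    the unknown change time j, and alarm when it exceeds `threshold`.
    Returns the first alarm index, or -1 if no fault is declared."""
    r = np.asarray(residuals, dtype=float)
    for k in range(len(r)):
        g = max(
            np.sum(r[j:k + 1]) ** 2 / (2.0 * sigma ** 2 * (k - j + 1))
            for j in range(k + 1)
        )
        if g > threshold:
            return k
    return -1

# Noise-free residuals for clarity: a sensor fault adds a 3-sigma bias
# from sample 120 onwards.
r = np.zeros(200)
r[120:] = 3.0
print(glr_mean_change(r, sigma=1.0, threshold=10.0))  # -> 122
```

With real noisy residuals the same statistic is used; only the threshold (set from an acceptable false-alarm rate) changes.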

  1. Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System

    Science.gov (United States)

    Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.

    2006-01-01

    The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares some common characteristics with a NASA testbed at Stennis Space Center was used to verify our proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI has been derived for the tank system. Third, a new and general FDI procedure has been designed to distinguish between process faults and sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.

  2. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi

    2017-12-14

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.

  3. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2017-01-01

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining both single diode model (SDM) flexibility and the cumulative sum (CUSUM) chart efficiency to detect incipient faults. In fact, unknown electrical parameters of SDM are firstly identified using an efficient heuristic algorithm, named Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated based on both real measured and simulated Pmpp values. Residuals are used as the input for the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
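The CUSUM step of this approach can be illustrated with a one-sided chart on the Pmpp residuals (measured minus simulated peak power). This is a generic textbook CUSUM; the slack k, decision interval h, and the toy residual sequence are assumptions, not values from the paper.

```python
def cusum_detect(residuals, mu0=0.0, k=0.5, h=5.0):
    """One-sided (lower) CUSUM on peak-power residuals, expressed in
    units of the residual standard deviation. k is the slack and h the
    decision interval; returns the indices where an alarm is raised."""
    s_lo = 0.0
    alarms = []
    for i, r in enumerate(residuals):
        s_lo = min(0.0, s_lo + (r - mu0) + k)
        if s_lo < -h:
            alarms.append(i)
            s_lo = 0.0           # restart the chart after an alarm
    return alarms

# Healthy operation, then a persistent power deficit (incipient DC fault).
res = [0.1, -0.2, 0.0, 0.1, -0.1] + [-1.5] * 10
print(cusum_detect(res))  # -> [10]
```

The chart accumulates the small, persistent deficit until the decision interval is crossed, which is what makes CUSUM well suited to the incipient faults targeted here.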

  4. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    Science.gov (United States)

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapy to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the most optimal target area for the placement of the DBS electrodes have become one of the intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level and validating it using the a-priori information about the location of the source, that is, the STN. Secondly, we aim at an investigation of whether EEG or MEG is best suited in mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurement were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) have been used to solve the forward problem. The position of the DBS electrodes was then estimated using the dipole (moving, rotating, and fixed MUSIC), and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed that of the MEG by locating the DBS-induced activity in the STN.

  5. Staged-Fault Testing of Distance Protection Relay Settings

    Science.gov (United States)

    Havelka, J.; Malarić, R.; Frlan, K.

    2012-01-01

    In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.

  6. Fault Diagnosis of Power Systems Using Intelligent Systems

    Science.gov (United States)

    Momoh, James A.; Oliver, Walter E. , Jr.

    1996-01-01

    The power system operator's need for a reliable power delivery system calls for a real-time or near-real-time AI-based fault diagnosis tool. Such a tool will allow NASA ground controllers to re-establish a normal or near-normal degraded operating state of the EPS (a DC power system) for Space Station Alpha by isolating the faulted branches and loads of the system, and after isolation, re-energizing those branches and loads that have been found not to have any faults in them. A proposed solution involves using the Fault Diagnosis Intelligent System (FDIS) to perform near-real-time fault diagnosis of Alpha's EPS by downloading power transient telemetry at fault-time from onboard data loggers. The FDIS uses an ANN clustering algorithm augmented with a wavelet transform feature extractor. This combination enables the system to perform pattern recognition of the power transient signatures to diagnose the fault type and its location down to the orbital replaceable unit. FDIS has been tested using a simulation of the LeRC Testbed Space Station Freedom configuration, including the topology from the DDCU's to the electrical loads attached to the TPDU's. FDIS will work in conjunction with the Power Management Load Scheduler to determine what the state of the system was at the time of the fault condition. This information is used to activate the appropriate diagnostic section, and to refine if necessary the solution obtained. In the latter case, if the FDIS reports back that it is equally likely that the faulty device is 'star tracker #1' or 'time generation unit,' then based on a priori knowledge of the system's state, the refined solution would be 'star tracker #1', located in cabinet ITAS2. It is concluded from the present studies that artificial intelligence diagnostic abilities are improved with the addition of the wavelet transform, and that when a system such as FDIS is coupled to the Power Management Load Scheduler, a faulty device can be located and isolated.

  7. Fault Detection and Isolation for Spacecraft

    DEFF Research Database (Denmark)

    Jensen, Hans-Christian Becker; Wisniewski, Rafal

    2002-01-01

    This article realizes nonlinear Fault Detection and Isolation for actuators, given that there is no measurement of the states in the actuators. The Fault Detection and Isolation of the actuators is instead based on angular velocity measurement of the spacecraft and knowledge about the dynamics of the satellite. The algorithms presented in this paper are based on a geometric approach to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study and the pros and cons of the algorithms are discussed.

  8. Recording real case data of earth faults in distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S. [VTT Energy, Espoo (Finland)

    1996-12-31

    The most common fault type in electrical distribution networks is the single phase to earth fault. According to earlier studies, for instance in the Nordic countries, about 80 % of all faults are of this type. To develop protection and fault location systems, it is important to obtain real case data of disturbances and faults which occur in the networks. For example, the earth fault initial transients can be used for earth fault location. The aim of this project was to collect and analyze real case data of earth fault disturbances in medium voltage distribution networks (20 kV). Therefore, data of fault occurrences were recorded at two substations, of which one has an unearthed and the other a compensated neutral, measured as follows: (a) the phase currents and neutral current for each line in the case of low fault resistance; (b) the phase voltages and neutral voltage from the voltage measuring bay in the case of low fault resistance; (c) the neutral voltage and the components of 50 Hz at the substation in the case of high fault resistance. In addition, the basic data of the fault occurrences were collected (data of the line, fault location, cause and so on). The data will be used in the development work of fault location and earth fault protection systems.

  9. Recording real case data of earth faults in distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S [VTT Energy, Espoo (Finland)

    1997-12-31

    The most common fault type in electrical distribution networks is the single phase to earth fault. According to earlier studies, for instance in the Nordic countries, about 80 % of all faults are of this type. To develop protection and fault location systems, it is important to obtain real case data of disturbances and faults which occur in the networks. For example, the earth fault initial transients can be used for earth fault location. The aim of this project was to collect and analyze real case data of earth fault disturbances in medium voltage distribution networks (20 kV). Therefore, data of fault occurrences were recorded at two substations, of which one has an unearthed and the other a compensated neutral, measured as follows: (a) the phase currents and neutral current for each line in the case of low fault resistance; (b) the phase voltages and neutral voltage from the voltage measuring bay in the case of low fault resistance; (c) the neutral voltage and the components of 50 Hz at the substation in the case of high fault resistance. In addition, the basic data of the fault occurrences were collected (data of the line, fault location, cause and so on). The data will be used in the development work of fault location and earth fault protection systems.

  10. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  11. Detecting Faults in Southern California using Computer-Vision Techniques and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Interferometry

    Science.gov (United States)

    Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.

    2013-12-01

    Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. 
Following an earthquake, the newly discovered faults can
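Edge detection of the kind used to flag fault-like discontinuities can be sketched with a Sobel filter over a small synthetic interferogram patch. This is a generic numpy-only edge detector for illustration, not the E-DECIDER code, and the phase-step test image is invented.

```python
import numpy as np

def sobel_edges(img, thresh):
    """Boolean mask of pixels whose Sobel gradient magnitude exceeds
    `thresh`. A linear discontinuity (such as a fault trace in an
    interferogram) shows up as a connected line of flagged pixels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    h, w = np.asarray(img).shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                        # correlate with the 3x3 kernels
        for j in range(3):
            sub = pad[i:i + h, j:j + w]
            gx += kx[i, j] * sub
            gy += ky[i, j] * sub
    return np.hypot(gx, gy) > thresh

# Synthetic patch: a step (e.g. a phase discontinuity) between columns 7 and 8.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
mask = sobel_edges(img, thresh=2.0)
print(np.flatnonzero(mask.any(axis=0)))  # -> [7 8]
```

In practice the thresholded mask would then be compared against known fault databases, as the record describes.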

  12. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.

  13. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    Science.gov (United States)

    Ding, Bo; Fang, Huajing

    2017-05-01

    This paper is concerned with fault prediction for nonlinear stochastic systems with incipient faults. Based on the particle filter and reasonable assumptions about the incipient faults, a modified fault estimation algorithm is proposed, and the system state is estimated simultaneously. Based on the modified fault estimation, an intuitive fault detection strategy is introduced. Once an incipient fault is detected, its parameters are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by simulations of the three-tank system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
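The parameter-identification and prediction steps can be illustrated for one commonly assumed incipient-fault profile, f(t) = a(1 - e^(-bt)). The profile, the grid-search fit, and all numbers below are assumptions made for this sketch; the paper's own fault model and regression method are not reproduced here.

```python
import numpy as np

def fit_incipient_fault(t, f_hat, b_grid):
    """Fit the assumed incipient-fault profile f(t) = a * (1 - exp(-b*t))
    to fault estimates f_hat: grid search over b, with a closed-form
    least-squares solution for a at each b. Returns (a, b)."""
    best = None
    for b in b_grid:
        basis = 1.0 - np.exp(-b * t)
        a = float(np.dot(basis, f_hat) / np.dot(basis, basis))
        sse = float(np.sum((f_hat - a * basis) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

# Synthetic fault estimates (true a = 2.0, b = 0.1) with estimation noise,
# standing in for the output of the filter-based fault estimator.
rng = np.random.default_rng(3)
t = np.arange(0.0, 30.0)
f_hat = 2.0 * (1.0 - np.exp(-0.1 * t)) + rng.normal(0.0, 0.02, t.size)
a, b = fit_incipient_fault(t, f_hat, b_grid=np.linspace(0.01, 0.5, 50))
predicted = a * (1.0 - np.exp(-b * 60.0))  # predicted fault magnitude at t = 60
print(round(a, 2), round(b, 3), round(predicted, 2))
```

Extrapolating the fitted profile beyond the observation window is what turns the identified parameters into a prediction of the future fault signal.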

  14. Locating structures and evolution pathways of reconstructed rutile TiO2(011) using genetic algorithm aided density functional theory calculations.

    Science.gov (United States)

    Ding, Pan; Gong, Xue-Qing

    2016-05-01

    Titanium dioxide (TiO2) is an important metal oxide that has been used in many different applications. TiO2 has also been widely employed as a model system to study basic processes and reactions in surface chemistry and heterogeneous catalysis. In this work, we investigated the (011) surface of rutile TiO2 by focusing on its reconstruction. Density functional theory calculations aided by a genetic algorithm based optimization scheme were performed to extensively sample the potential energy surfaces of reconstructed rutile TiO2 structures that obey (2 × 1) periodicity. Many stable surface configurations were located, including the previously proposed global-minimum configuration. The wide variety of surface structures determined through the calculations performed in this work provides insight into the relationship between the atomic configuration of a surface and its stability. More importantly, several analytical schemes were proposed and tested to gauge the differences and similarities among various surface structures, aiding the construction of the complete pathway for the reconstruction process.

  15. Locating and decoding barcodes in fuzzy images captured by smart phones

    Science.gov (United States)

    Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    With the development of barcodes for commercial use, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones always affects the decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length, together with a decoding algorithm with a high fault-tolerance rate. Unlike existing approaches, the locating algorithm is based on the edge segment lengths of EAN-13 barcodes, while our decoding algorithm allows fuzzy regions to appear in the barcode image. Experiments are performed on damaged, contaminated and scratched digital images, and provide quite promising results for EAN-13 barcode location and decoding.
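EAN-13 decoding ultimately validates the recovered digits against the symbology's public checksum, which a fault-tolerant decoder can use to reject misread candidates from fuzzy regions. A minimal check-digit routine:

```python
def ean13_check_digit(first12):
    """Check digit for an EAN-13 barcode: digits in odd positions (1st, 3rd, ...)
    weigh 1, digits in even positions weigh 3; the check digit completes the
    weighted sum to a multiple of 10."""
    digits = [int(c) for c in first12]
    s = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return (10 - s % 10) % 10

# 4006381333931 is a commonly cited valid EAN-13 number.
print(ean13_check_digit("400638133393"))   # prints 1
```

A decoder that produces several candidate digit strings for a blurred bar pattern can keep only those whose 13th digit satisfies this relation.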

  16. Constraining fault interpretation through tomographic velocity gradients: application to northern Cascadia

    Directory of Open Access Journals (Sweden)

    K. Ramachandran

    2012-02-01

    Full Text Available Spatial gradients of tomographic velocities are seldom used in the interpretation of subsurface fault structures. This study shows that spatial velocity gradients can be used effectively to identify subsurface discontinuities in the horizontal and vertical directions. Three-dimensional velocity models constructed through tomographic inversion of active source and/or earthquake traveltime data are generally built from an initial 1-D velocity model that varies only with depth. Regularized tomographic inversion algorithms impose constraints on the roughness of the model that help to stabilize the inversion process. Final velocity models obtained from regularized tomographic inversions have smooth three-dimensional structures that are required by the data. Final velocity models are usually analyzed and interpreted either as perturbation velocity models or as absolute velocity models. Compared to perturbation velocity models, absolute velocity models have the advantage of providing constraints on lithology. Both kinds of model, however, lack the ability to provide sharp constraints on subsurface faults. An interpretational approach utilizing spatial velocity gradients, applied to northern Cascadia, shows that subsurface faults that are not clearly interpretable from velocity model plots can be identified by sharp contrasts in velocity gradient plots. This interpretation resulted in inferring the locations of the Tacoma, Seattle, Southern Whidbey Island, and Darrington Devil's Mountain faults much more clearly. The Coast Range Boundary fault, previously hypothesized on the basis of sedimentological and tectonic observations, is inferred clearly from the gradient plots. Many of the fault locations imaged from gradient data correlate with earthquake hypocenters, indicating their seismogenic nature.
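The gradient idea is straightforward to reproduce: differentiate a velocity model numerically and look for gradient extrema. A toy 2-D sketch with a synthetic model (not the Cascadia data):

```python
import numpy as np

# Synthetic 2-D slice of a velocity model: smooth depth gradient plus a lateral
# velocity contrast, a stand-in for a fault juxtaposing two crustal blocks.
z = np.linspace(0, 10, 50)[:, None]              # depth, km
x = np.linspace(0, 40, 200)[None, :]             # distance, km
v = 4.0 + 0.2 * z + np.where(x > 25, 0.6, 0.0)   # km/s, contrast at x = 25 km

# np.gradient with coordinate arrays returns [dv/dz, dv/dx]
dvdz, dvdx = np.gradient(v, z.ravel(), x.ravel())
fault_col = np.abs(dvdx).mean(axis=0).argmax()
print(x.ravel()[fault_col])                       # near 25 km
```

In the smooth absolute-velocity image the contrast is easy to overlook; in the horizontal-gradient image it is the single dominant feature, which is the point the abstract makes.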

  17. Real-Time Fault Tolerant Networking Protocols

    National Research Council Canada - National Science Library

    Henzinger, Thomas A

    2004-01-01

    We made significant progress in the areas of video streaming, wireless protocols, mobile ad-hoc and sensor networks, peer-to-peer systems, fault tolerant algorithms, dependability and timing analysis...

  18. Evidence of a tectonic transient within the Idrija fault system in Western Slovenia

    Science.gov (United States)

    Vičič, Blaž; Costa, Giovanni; Aoudia, Abdelkrim

    2017-04-01

    Western Slovenia and north-eastern Italy are areas of medium-rate seismicity with rare historic earthquakes of higher magnitudes. From mainly reverse-component faulting in the north-western part of the region, where the 1976 Friuli earthquakes took place, the tectonic regime changes to mostly strike-slip faulting in the Dinaric region, continuing towards the southeast. The northern part of the Idrija fault system, which represents the broader Dinaric strike-slip system, produced two strong earthquakes in recent times: the Mw=5.6 1998 and Mw=5.2 2004 earthquakes. Further to the south along the system, the Idrija fault is the causative fault of the 1511 Mw=6.8 earthquake. The southeasternmost part of the Idrija fault system produced a Mw=5.2 earthquake in 1926 and a few historic Mw>4 earthquakes. Since the 2004 Mw=5.2 earthquake, no stronger earthquakes have been recorded in the region, which is covered by a dense seismic network. Seismicity is mostly concentrated in the Friuli region and the north-western part of the Idrija fault system, mostly on the Ravne fault, the causative fault of the 1998 and 2004 earthquakes. In the central part of the fault system no strong or moderate earthquakes were recorded, except for a magnitude 3.4 earthquake along the Idrija fault in 2014. Low-magnitude background seismicity is burst-like, with no apparent temporal or spatial distribution. Seismicity of the southern part of the Idrija fault system is again somewhat higher than in the central part, with earthquakes up to Mw=4.4 in 2014. In this study, a detailed analysis of the seismicity is performed, with manual relocation of events in the period between 2006 and 2016. Through manual inspection of the waveform data, slight temporal clustering of seismicity is observed. We use a template-matching algorithm to increase the detection rate. Templates of seismicity in the north-western and south-eastern parts of the Idrija fault system are created. 
The continuous waveform data

  19. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed-series-compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN. Such an approach has not been used in earlier fault analysis algorithms of the last few decades. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power system model, at different fault inception angles and over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
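The MOV energy feature itself is simply the accumulated v·i integral over the first post-fault cycle. A sketch with entirely hypothetical waveforms and ratings (the paper's 400 kV system parameters are not reproduced here):

```python
import numpy as np

f, fs = 50.0, 10_000.0                 # system frequency and sampling rate (assumed)
t = np.arange(0, 0.02, 1 / fs)         # one 50 Hz cycle from fault inception

# Hypothetical MOV conduction waveforms during a fault (illustrative only):
# a clamped voltage and a strongly nonlinear current through the varistor.
v = 150e3 * np.sin(2 * np.pi * f * t)            # MOV voltage [V]
i = 2e3 * np.sin(2 * np.pi * f * t) ** 3         # nonlinear MOV current [A]

# Accumulated MOV energy over the cycle: E(t) = integral of v*i dt
energy = np.cumsum(v * i) / fs                   # [J]
print(energy[-1] / 1e6)                           # total energy [MJ]
```

One such scalar (or short vector) per phase is what the abstract feeds the ANN as its distance-estimation feature.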

  20. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

    International Nuclear Information System (INIS)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C.

    2004-01-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis

  1. Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan.

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Bill Walter; Roberts, Barry L.; McKenna, Sean Andrew; Coburn, Timothy C. (Abilene Christian University, Abilene, TX)

    2004-09-01

    Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.
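The core spatial statistic in the two records above, earthquake frequency as a function of distance to the nearest fault trace, can be sketched with synthetic data (the NUMO hypocenter and fault-trace datasets are not reproduced here); distances are computed by point-to-segment projection:

```python
import numpy as np

rng = np.random.default_rng(1)

def dist_to_segment(p, a, b):
    """Distance from points p (N, 2) to segment a-b (planar coordinates, km)."""
    ab, ap = b - a, p - a
    t = np.clip((ap @ ab) / (ab @ ab), 0.0, 1.0)
    proj = a + t[:, None] * ab
    return np.linalg.norm(p - proj, axis=1)

# Hypothetical fault trace and synthetic hypocenters clustered near it,
# mixed with uniform background seismicity.
a, b = np.array([0.0, 0.0]), np.array([30.0, 10.0])
near = rng.normal(0, 1.5, (300, 2)) + rng.uniform(0, 1, (300, 1)) * (b - a) + a
background = rng.uniform([-10, -10], [40, 20], (300, 2))
quakes = np.vstack([near, background])

d = dist_to_segment(quakes, a, b)
counts, bin_edges = np.histogram(d, bins=[0, 2, 4, 8, 16])
rate = counts / np.diff(bin_edges)      # events per km of distance band
print(rate)                              # highest in the band nearest the fault
```

The fall-off distance at which `rate` merges into the background level is one simple way to quantify the process-zone width the study describes.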

  2. Active fault traces along Bhuj Fault and Katrol Hill Fault, and ...

    Indian Academy of Sciences (India)

    face, passing through the alluvial-colluvial fan at location 2. The gentle warping of the surface was completely modified because of severe cultivation practice. Therefore, it was difficult to confirm it in field. To the south ... scarp has been modified by present day farming. At location 5 near Wandhay village, an active fault trace ...

  3. Model based Fault Detection and Isolation for Driving Motors of a Ground Vehicle

    Directory of Open Access Journals (Sweden)

    Young-Joon Kim

    2016-04-01

    Full Text Available This paper proposes a model-based current sensor and position sensor fault detection and isolation algorithm for the driving motors of an in-wheel independent-drive electric vehicle. From a low-level perspective, fault diagnosis is conducted and analyzed to enhance robustness and stability. Composing the state equation of the interior permanent magnet synchronous motor (IPMSM), current sensor and position sensor faults are diagnosed with a parity equation. The validity and usefulness of the algorithm are confirmed with IPMSM fault-occurrence simulation data.
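A parity equation checks an analytic redundancy relation among sensor outputs and flags a fault when the residual leaves its noise band. A toy sketch using the balanced three-phase relation ia + ib + ic = 0 (illustrative only, not the paper's IPMSM parity equations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Balanced three-phase currents sampled at 5 kHz, plus sensor noise.
T = 200
theta = 2 * np.pi * 50 * np.arange(T) / 5000.0
i_abc = np.stack([np.sin(theta),
                  np.sin(theta - 2 * np.pi / 3),
                  np.sin(theta + 2 * np.pi / 3)])
y = i_abc + rng.normal(0, 0.01, i_abc.shape)
y[1, 120:] += 0.8                     # phase-b current sensor bias fault at k = 120

# Parity relation: for a balanced set the residual r = ya + yb + yc stays near 0.
r = y.sum(axis=0)
fault = np.abs(r) > 5 * 0.01 * np.sqrt(3)   # threshold from the sensor noise level
print(fault[:120].mean(), fault[120:].mean())  # ~0 before the fault, ~1 after
```

Isolation then follows from evaluating several such residuals, each sensitive to a different sensor subset.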

  4. What does fault tolerant Deep Learning need from MPI?

    Energy Technology Data Exchange (ETDEWEB)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.; Daily, Jeffrey A.

    2017-09-25

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithm for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: What is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); a need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration for several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe for using ULFM-based implementation. Our evaluation using the ImageNet dataset and AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI based ULFM.

  5. Nonlinear observer based fault detection and isolation for a momentum wheel

    DEFF Research Database (Denmark)

    Jensen, Hans-Christian Becker; Wisniewski, Rafal

    2001-01-01

    This article realizes nonlinear Fault Detection and Isolation for a momentum wheel. The Fault Detection and Isolation is based on a Failure Mode and Effect Analysis, which states which faults might occur and can be detected. The algorithms presented in this paper are based on a geometric approach to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study and the pros and cons of the algorithm are discussed.

  6. Rupture preparation process controlled by surface roughness on meter-scale laboratory fault

    Science.gov (United States)

    Yamashita, Futoshi; Fukuyama, Eiichi; Xu, Shiqing; Mizoguchi, Kazuo; Kawakata, Hironori; Takizawa, Shigeru

    2018-05-01

    We investigate the effect of fault surface roughness on rupture preparation characteristics using meter-scale metagabbro specimens. We repeatedly conducted the experiments with the same pair of rock specimens to make the fault surface rough. We obtained three experimental results under the same experimental conditions (6.7 MPa of normal stress and 0.01 mm/s of loading rate) but at different roughness conditions (smooth, moderately roughened, and heavily roughened). During each experiment, we observed many stick-slip events preceded by precursory slow slip. We investigated when and where slow slip initiated by using the strain gauge data processed by the Kalman filter algorithm. The observed rupture preparation processes on the smooth fault (i.e. the first experiment among the three) showed high repeatability of the spatiotemporal distributions of slow slip initiation. Local stress measurements revealed that slow slip initiated around the region where the ratio of shear to normal stress (τ/σ) was the highest as expected from finite element method (FEM) modeling. However, the exact location of slow slip initiation was where τ/σ became locally minimum, probably due to the frictional heterogeneity. In the experiment on the moderately roughened fault, some irregular events were observed, though the basic characteristics of other regular events were similar to those on the smooth fault. Local stress data revealed that the spatiotemporal characteristics of slow slip initiation and the resulting τ/σ drop for irregular events were different from those for regular ones even under similar stress conditions. On the heavily roughened fault, the location of slow slip initiation was not consistent with τ/σ anymore because of the highly heterogeneous static friction on the fault, which also decreased the repeatability of spatiotemporal distributions of slow slip initiation. These results suggest that fault surface roughness strongly controls the rupture preparation process
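A scalar Kalman filter of the kind used to process the strain-gauge records above can be sketched with a random-walk state model (synthetic data; the experiment's actual tuning is unknown):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic strain-gauge record: a slow precursory-slip ramp buried in noise,
# an illustrative stand-in for the laboratory data.
T = 400
truth = np.concatenate([np.zeros(200), 1e-6 * np.arange(200)])  # slow ramp
z = truth + rng.normal(0, 20e-6, T)

# Scalar random-walk Kalman filter: x[k+1] = x[k] + w,  z[k] = x[k] + v
q, r = (2e-6) ** 2, (20e-6) ** 2    # process / measurement noise variances
x, p, est = 0.0, 1e-6, []
for zk in z:
    p = p + q                        # predict
    k = p / (p + r)                  # Kalman gain
    x = x + k * (zk - x)             # update
    p = (1 - k) * p
    est.append(x)

est = np.array(est)
print(est[150], est[-1])   # filtered strain before vs. during the slow slip
```

Comparing the filtered traces across an array of gauges is what lets the onset location of slow slip be picked despite the measurement noise.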

  7. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    Science.gov (United States)

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are one of the most critical components of the satellite attitude control system; therefore, correct diagnosis of their faults is quintessential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this method fails to work correctly when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations; in this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be effectively used for fault diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centers from the data of known faults; in this way a discrete number of fault classes is determined at the start. Similarity parameters are derived and determined for each fault data point. Thereafter, depending on the similarity threshold, each data point is issued a class label. The high-similarity points fall into one of the 'known-fault' classes, while the low-similarity points are labeled as 'unknown faults'. Simulation results show that, compared to a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
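The conditioning idea, fixing class centers from labelled fault data and thresholding a kernel similarity so that low-similarity points become 'unknown faults', can be sketched as follows (2-D synthetic features; the paper's kernel width and threshold are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def kernel_sim(x, center, sigma=1.0):
    """Gaussian-kernel similarity of the kind used in kernel fuzzy clustering."""
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2 * sigma ** 2))

# Class centers fixed in advance from labelled data of two known faults.
centers = np.array([[0.0, 0.0], [4.0, 4.0]])
known = rng.normal(centers[rng.integers(0, 2, 100)], 0.3)
unknown = rng.normal([4.0, -4.0], 0.3, (20, 2))     # a fault type never seen
data = np.vstack([known, unknown])

sims = np.stack([kernel_sim(data, c) for c in centers])   # shape (2, N)
best = sims.max(axis=0)
labels = np.where(best > 0.1, sims.argmax(axis=0), -1)    # -1 = "unknown fault"
print((labels[:100] >= 0).mean(), (labels[100:] == -1).mean())
```

Points labelled `-1` would be handed to the unsupervised clustering stage to form a new fault class, which is the behaviour the abstract claims over a purely supervised classifier.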

  8. Subaru FATS (fault tracking system)

    Science.gov (United States)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  9. A Novel Data Hierarchical Fusion Method for Gas Turbine Engine Performance Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Feng Lu

    2016-10-01

    Full Text Available Gas path fault diagnosis involves the effective utilization of condition-based sensor signals along the engine gas path to accurately identify engine performance failures. The rapid development of information processing technology has led to the use of multiple-source information fusion for fault diagnostics. Numerous efforts have been made to develop data-based fusion methods, such as neural-network fusion, while little research has focused on fusion architecture or on fusing methods of different kinds. In this paper, a data hierarchical fusion using improved weighted Dempster–Shafer evidence theory (WDS) is proposed, and the integration of data-based and model-based methods is presented for engine gas-path fault diagnosis. To simplify the learning machine topology, a recursive reduced kernel based extreme learning machine (RR-KELM) is developed to produce the fault probability, which is considered as the data-based evidence. Meanwhile, the model-based evidence is achieved using a particle filter-fuzzy logic algorithm (PF-FL) by engine health estimation and component fault location at the feature level. The outputs of the two evidences are integrated using WDS evidence theory at the decision level to reach a final recognition of the gas-path fault pattern. The characteristics and advantages of the two evidences are analyzed and used as guidelines for the data hierarchical fusion framework. Our goal is for the proposed methodology to provide much better gas-path fault diagnosis performance than relying solely on either a data-based or a model-based method. The hierarchical fusion framework is evaluated in terms of fault diagnosis accuracy and robustness through a case study involving the fault mode dataset of a turbofan engine generated by a general gas turbine simulation. These applications confirm the effectiveness and usefulness of the proposed approach.
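Decision-level fusion here rests on Dempster's rule of combination. A compact implementation for two mass functions, with invented masses standing in for the RR-KELM (data-based) and PF-FL (model-based) evidences:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame by Dempster's rule.
    Focal elements are frozensets; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Three hypothetical gas-path fault patterns; the numbers are illustrative only.
F = [frozenset({f}) for f in ("fan", "compressor", "turbine")]
ALL = frozenset({"fan", "compressor", "turbine"})
m_data = {F[0]: 0.6, F[1]: 0.2, ALL: 0.2}    # data-based evidence
m_model = {F[0]: 0.5, F[2]: 0.2, ALL: 0.3}   # model-based evidence

m = dempster_combine(m_data, m_model)
best = max(m, key=m.get)
print(sorted(best), round(m[best], 3))
```

Both evidences individually leave sizeable mass on the full frame; after combination the shared "fan" hypothesis dominates, which is the effect the fusion framework exploits. (The paper's *weighted* variant additionally discounts each evidence before combining; the plain rule is shown here.)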

  10. Rectifier Fault Diagnosis and Fault Tolerance of a Doubly Fed Brushless Starter Generator

    Directory of Open Access Journals (Sweden)

    Liwei Shi

    2015-01-01

    Full Text Available This paper presents a rectifier fault diagnosis method based on wavelet packet analysis to improve the reliability of the fault-tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and the fault-tolerance principle of the highly reliable DFBLSG are given, and the common faults of the rectifier are analyzed. The wavelet packet transform fault detection/identification algorithm is introduced in detail. Fault-tolerance and output voltage experiments were done to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, an energy-eigenvalue tolerance was introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault-tolerant four-phase DFBLSG can detect the usual open-circuit faults and operate in fault-tolerant mode if there is a fault. The results indicate that the fault analysis techniques in this paper are accurate and effective.
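Wavelet packet band energies are easy to reproduce with the Haar wavelet, the simplest orthogonal case (the paper's wavelet choice and its 5-layer tree details may differ; the signals below are synthetic):

```python
import numpy as np

def haar_wp_energies(x, levels=5):
    """Energy of each band after a `levels`-deep Haar wavelet packet
    decomposition (full binary tree), giving 2**levels energy eigenvalues."""
    bands = [np.asarray(x, float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            lo = (b[0::2] + b[1::2]) / np.sqrt(2)   # approximation branch
            hi = (b[0::2] - b[1::2]) / np.sqrt(2)   # detail branch
            nxt += [lo, hi]
        bands = nxt
    return np.array([np.sum(b ** 2) for b in bands])

fs = 1024.0
t = np.arange(1024) / fs
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 300 * t)  # toy open-circuit ripple

e_h, e_f = haar_wp_energies(healthy), haar_wp_energies(faulty)
print(np.abs(e_f - e_h).argmax())   # band whose energy eigenvalue shifts most
```

Because the Haar packet transform is orthonormal, total energy is preserved, and the fault manifests as a redistribution of energy among bands; thresholding per-band shifts against an energy-eigenvalue tolerance is the diagnosis step the abstract describes.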

  11. Integrated seismic interpretation of the Carlsberg Fault zone, Copenhagen, Denmark

    DEFF Research Database (Denmark)

    Nielsen, Lars; Thybo, Hans; Jørgensen, Mette Iwanouw

    2005-01-01

    We locate the concealed Carlsberg Fault zone along a 12-km-long trace in the Copenhagen city centre by seismic refraction, reflection and fan profiling. The Carlsberg Fault is located in a NNW-SSE striking fault system in the border zone between the Danish Basin and the Baltic Shield. Recent ... earthquakes indicate that this area is tectonically active. A seismic refraction study across the Carlsberg Fault shows that the fault zone is a low-velocity zone and marks a change in seismic velocity structure. A normal incidence reflection seismic section shows a coincident flower-like structure. We have ... the fault zone. The fault zone is a shadow zone to shots detonated outside the fault zone. Finite-difference wavefield modelling supports the interpretations of the fan recordings. Our fan recording approach facilitates cost-efficient mapping of fault zones in densely urbanized areas where seismic normal ...

  12. Monitoring microearthquakes with the San Andreas fault observatory at depth

    Science.gov (United States)

    Oye, V.; Ellsworth, W.L.

    2007-01-01

    In 2005, the San Andreas Fault Observatory at Depth (SAFOD) was drilled through the San Andreas Fault zone at a depth of about 3.1 km. The borehole has subsequently been instrumented with high-frequency geophones in order to better constrain locations and source processes of nearby microearthquakes that will be targeted in the upcoming phase of SAFOD. The microseismic monitoring software MIMO, developed by NORSAR, has been installed at SAFOD to provide near-real time locations and magnitude estimates using the high sampling rate (4000 Hz) waveform data. To improve the detection and location accuracy, we incorporate data from the nearby, shallow borehole (???250 m) seismometers of the High Resolution Seismic Network (HRSN). The event association algorithm of the MIMO software incorporates HRSN detections provided by the USGS real time earthworm software. The concept of the new event association is based on the generalized beam forming, primarily used in array seismology. The method requires the pre-computation of theoretical travel times in a 3D grid of potential microearthquake locations to the seismometers of the current station network. By minimizing the differences between theoretical and observed detection times an event is associated and the location accuracy is significantly improved.

  13. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    Science.gov (United States)

    Malekpour, Mahyar, R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
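The underlying Oral Message algorithm OM(m) is classical and can be sketched directly; the lying strategy of the faulty node below (flipping bits depending on the recipient) is an arbitrary choice for the demonstration:

```python
from collections import Counter

def om(commander, lieutenants, value, m, traitors):
    """Oral Message algorithm OM(m) of Lamport, Shostak and Pease (sketch).
    A traitor garbles each value it sends, depending on the recipient."""
    def sent(src, dst, v):
        return (v + dst) % 2 if src in traitors else v

    if m == 0:
        return {lt: sent(commander, lt, value) for lt in lieutenants}
    decisions = {}
    for lt in lieutenants:
        direct = sent(commander, lt, value)
        # every other lieutenant relays, via OM(m-1), what it received
        relayed = [om(o, [x for x in lieutenants if x != o],
                      sent(commander, o, value), m - 1, traitors)[lt]
                   for o in lieutenants if o != lt]
        votes = Counter(relayed + [direct])
        decisions[lt] = votes.most_common(1)[0][0]
    return decisions

# 4 nodes, 1 traitor (node 3): with K = 4 >= 3*1 + 1, OM(1) suffices.
result = om(0, [1, 2, 3], value=1, m=1, traitors={3})
print(result[1], result[2])   # the loyal lieutenants agree on the commander's value
```

The cited paper's contribution is a three-round variant of this scheme under its bounded-fault assumptions; the sketch above is only the classical baseline it builds on.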

  14. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  15. Searching for Seismically Active Faults in the Gulf of Cadiz

    Science.gov (United States)

    Custodio, S.; Antunes, V.; Arroucau, P.

    2015-12-01

    The repeated occurrence of large magnitude earthquakes in southwest Iberia in historical and instrumental times suggests the presence of active fault segments in the region. However, due to an apparently diffuse seismicity pattern defining a broad region of distributed deformation west of Gibraltar Strait, the question of the location, dimension and geometry of such structures is still open to debate. We recently developed a new algorithm for earthquake location in 3D complex media with laterally varying interface depths, which allowed us to relocate 2363 events having occurred from 2007 to 2013, using P- and S-wave catalog arrival times obtained from the Portuguese Meteorological Institute (IPMA, Instituto Portugues do Mar e da Atmosfera), for a study area lying between 8.5˚W and 5˚W in longitude and 36˚ and 37.5˚ in latitude. The most remarkable change in the seismicity pattern after relocation is an apparent concentration of events, in the North of the Gulf of Cadiz, along a low angle northward-dipping plane rooted at the base of the crust, which could indicate the presence of a major fault. If confirmed, this would be the first structure clearly illuminated by seismicity in a region that has unleashed large magnitude earthquakes. Here, we present results from the joint analysis of focal mechanism solutions and waveform similarity between neighboring events from waveform cross-correlation in order to assess whether those earthquakes occur on the same fault plane.
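Earthquake location of this kind reduces to minimizing arrival-time residuals over a grid of trial epicentres. A uniform-velocity toy version (the cited algorithm handles 3-D media with laterally varying interface depths):

```python
import numpy as np

rng = np.random.default_rng(5)

vp = 6.0                                        # km/s, assumed uniform velocity
stations = np.array([[0, 0], [50, 5], [10, 60], [45, 55]], float)
src, t0 = np.array([30.0, 25.0]), 2.0           # true epicentre and origin time
t_obs = t0 + np.linalg.norm(stations - src, axis=1) / vp
t_obs += rng.normal(0, 0.02, len(stations))     # picking noise

gx, gy = np.meshgrid(np.arange(0, 60, 0.5), np.arange(0, 60, 0.5))
best, best_rms = None, np.inf
for x, y in zip(gx.ravel(), gy.ravel()):
    tt = np.linalg.norm(stations - [x, y], axis=1) / vp
    resid = t_obs - tt
    rms = np.std(resid)     # subtracting the mean absorbs the unknown origin time
    if rms < best_rms:
        best_rms, best = rms, (x, y)

print(best)                  # close to the true epicentre (30, 25)
```

In the real problem the straight-ray travel times `tt` are replaced by travel times traced through the 3-D velocity model, which is what sharpens the diffuse seismicity pattern after relocation.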

  16. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provide insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS Surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by interferometric satellite radar interferometry (InSAR) for the Kozani Grevena earthquake with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  17. Soil radon levels across the Amer fault

    International Nuclear Information System (INIS)

    Font, Ll.; Baixeras, C.; Moreno, V.; Bach, J.

    2008-01-01

    Soil radon levels have been measured across the Amer fault, which is located near the volcanic region of La Garrotxa, Spain. Both passive (LR-115, time-integrating) and active (Clipperton II, time-resolved) detectors have been used in a survey in which 27 measurement points were selected in five lines perpendicular to the Amer fault in the village area of Amer. The averaged results show an influence of the distance to the fault on the mean soil radon values. The dynamic results show a very clear seasonal effect on the soil radon levels. The results obtained support the hypothesis that the fault is still active

  18. Integrated fault tree development environment

    International Nuclear Information System (INIS)

    Dixon, B.W.

    1986-01-01

    Probabilistic Risk Assessment (PRA) techniques are utilized in the nuclear industry to perform safety analyses of complex defense-in-depth systems. A major effort in PRA development is fault tree construction. The Integrated Fault Tree Environment (IFTREE) is an interactive, graphics-based tool for fault tree design. IFTREE provides integrated building, editing, and analysis features on a personal workstation. The design philosophy of IFTREE is presented, and the interface is described. IFTREE utilizes a unique rule-based solution algorithm founded in artificial intelligence (AI) techniques. The impact of the AI approach on the program design is stressed. IFTREE has been developed to handle the design and maintenance of full-size living PRAs and is currently in use

  19. Advanced features of the fault tree solver FTREX

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jae Joo

    2005-01-01

    This paper presents advanced features of the fault tree solver FTREX (Fault Tree Reliability Evaluation eXpert). Fault tree analysis is one of the most commonly used methods for the safety analysis of industrial systems, especially for the probabilistic safety analysis (PSA) of nuclear power plants. Fault trees are solved by classical Boolean algebra, the conventional Binary Decision Diagram (BDD) algorithm, the coherent BDD algorithm, and Bayesian networks. FTREX can optionally solve fault trees by the conventional BDD algorithm or the coherent BDD algorithm and can convert the fault trees into the form of Bayesian networks. The algorithm based on classical Boolean algebra solves a fault tree and generates the minimal cut sets (MCSs). The conventional BDD algorithm generates a BDD structure of the top event and calculates the exact top event probability. The BDD structure is a factorized form of the prime implicants. The MCSs of the top event can be extracted by reducing the prime implicants in the BDD structure. The coherent BDD algorithm is developed to overcome the shortcomings of the conventional BDD algorithm, such as the huge memory requirements and long run times
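
    The BDD approach factorizes the top-event expression instead of enumerating cut sets. As a rough illustration (in Python, with an invented two-pump/valve tree rather than anything from FTREX), the same exact probability a BDD computes can be obtained by Shannon expansion over the basic events:

```python
# Fault tree as nested tuples: ("AND"|"OR", child, ...) or a basic-event name.
# The tree and probabilities are invented for this sketch.
TREE = ("OR",
        ("AND", "pump_a", "pump_b"),   # both redundant pumps fail
        "valve")                       # or the single valve fails
PROB = {"pump_a": 0.1, "pump_b": 0.1, "valve": 0.01}
EVENTS = sorted(PROB)

def evaluate(tree, assignment):
    """Boolean value of the tree under a full truth assignment."""
    if isinstance(tree, str):
        return assignment[tree]
    op, *children = tree
    vals = [evaluate(c, assignment) for c in children]
    return all(vals) if op == "AND" else any(vals)

def top_event_probability(tree, events, prob, assignment=None, i=0):
    """Exact top-event probability by Shannon expansion over basic events,
    the same conditioning a BDD performs symbolically (but without sharing)."""
    if assignment is None:
        assignment = {}
    if i == len(events):
        return 1.0 if evaluate(tree, assignment) else 0.0
    e = events[i]
    hi = top_event_probability(tree, events, prob, {**assignment, e: True}, i + 1)
    lo = top_event_probability(tree, events, prob, {**assignment, e: False}, i + 1)
    return prob[e] * hi + (1.0 - prob[e]) * lo

p = top_event_probability(TREE, EVENTS, PROB)   # exact, no rare-event approximation
```

    A real BDD shares identical sub-expressions, so the expansion costs time proportional to the BDD size rather than exponential in the number of events; the sketch above is exponential and only meant to show the factorization idea.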

  20. Research on vibration signal analysis and extraction method of gear local fault

    Science.gov (United States)

    Yang, X. F.; Wang, D.; Ma, J. F.; Shao, W.

    2018-02-01

    Gears are the main connecting and power transmission parts in mechanical equipment. If a fault occurs, it directly affects the running state of the whole machine and can even endanger personal safety, so the extraction of gear fault signals and gear fault diagnosis have important theoretical significance and practical value. In this paper, taking the local gear fault as the research object, we set up a vibration model of the gear fault, derive the vibration mechanism of the local gear fault, and analyze the similarities and differences between the vibration signals of healthy gears and gears with local faults. In the MATLAB environment, a wavelet transform algorithm is used to denoise the fault signal, and the Hilbert transform is used to demodulate the fault vibration signal. The results show that the method can denoise strongly noisy mechanical vibration signals and extract local fault feature information from the fault vibration signal.
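
    The demodulation step can be sketched in a few lines. The following Python fragment (the paper itself works in MATLAB; all frequencies and amplitudes are illustrative) recovers the modulating fault frequency from the envelope obtained via an FFT-based Hilbert transform:

```python
import numpy as np

# Synthetic gear signal: a carrier at the meshing frequency, amplitude-
# modulated once per revolution by a local tooth fault (values assumed).
fs = 5000.0                       # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f_mesh, f_fault = 400.0, 20.0     # gear-mesh and fault (shaft) frequencies
x = (1.0 + 0.5 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_mesh * t)

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0       # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0           # keep the Nyquist bin for even n
    return np.abs(np.fft.ifft(X * h))

# The spectrum of the envelope exposes the modulating fault frequency.
env = envelope(x)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
peak_hz = freqs[np.argmax(spec)]  # lands on f_fault for this synthetic signal
```

    For this clean synthetic signal the envelope is exactly the modulating term, so the envelope-spectrum peak sits at the fault frequency; on real data the wavelet denoising step described above would precede the demodulation.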

  1. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  2. Data-driven design of fault diagnosis and fault-tolerant control systems

    CERN Document Server

    Ding, Steven X

    2014-01-01

    Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

  3. Fault trees for decision making in systems analysis

    International Nuclear Information System (INIS)

    Lambert, H.E.

    1975-01-01

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure. (80 figures, 20 tables)
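
    The kind of sensitivity ranking produced by the IMPORTANCE code can be illustrated with the Birnbaum importance measure, one classical probabilistic importance measure. The cut sets and probabilities in this Python sketch are invented; the actual code supports several measures and relative failure data:

```python
# Birnbaum importance: how much the system unreliability changes when a
# basic event switches from surely-failed to surely-working.
CUT_SETS = [("pump_a", "pump_b"), ("valve",)]      # invented minimal cut sets
P = {"pump_a": 0.1, "pump_b": 0.1, "valve": 0.01}  # invented failure probabilities

def system_unreliability(p, cut_sets):
    """Probability that at least one minimal cut set fails entirely
    (exact here because the example cut sets share no events)."""
    q_ok = 1.0
    for cs in cut_sets:
        q_cs = 1.0
        for e in cs:
            q_cs *= p[e]
        q_ok *= 1.0 - q_cs
    return 1.0 - q_ok

def birnbaum(event):
    hi = system_unreliability({**P, event: 1.0}, CUT_SETS)
    lo = system_unreliability({**P, event: 0.0}, CUT_SETS)
    return hi - lo

# Rank basic events by importance, an IMPORTANCE-style sensitivity ranking.
ranking = sorted(P, key=birnbaum, reverse=True)
```

    Here the single-event cut set dominates: the valve ranks first even though its failure probability is the smallest, which is exactly the kind of design weakness such a ranking surfaces.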

  4. Fault Detection for Automotive Shock Absorber

    Science.gov (United States)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the nonlinear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.

  5. Solving fault diagnosis problems linear synthesis techniques

    CERN Document Server

    Varga, Andreas

    2017-01-01

    This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

  6. Fault Reconnaissance Agent for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Elhadi M. Shakshuki

    2010-01-01

    One of the key prerequisites for a scalable, effective and efficient sensor network is the utilization of low-cost, low-overhead and highly resilient fault-inference techniques. To this end, we propose an intelligent agent system with problem-solving capability to address the issue of fault inference in sensor network environments. The intelligent agent system is designed and implemented at the base-station side. The core of the agent system, the problem solver, implements a fault-detection inference engine which harnesses the Expectation Maximization (EM) algorithm to estimate the fault probabilities of sensor nodes. To validate the correctness and effectiveness of the intelligent agent system, a set of experiments was conducted in a wireless sensor testbed. The experimental results show that our intelligent agent system is able to precisely estimate the fault probability of sensor nodes.
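
    The EM-based estimation can be sketched as a two-component mixture: healthy sensors disagree with their neighbors rarely, faulty ones often, and EM recovers the mixing weight and both rates. This Python sketch uses synthetic disagreement counts and only illustrates the idea, not the agent system's actual engine:

```python
import numpy as np

# Synthetic data: each sensor reports n_obs binary disagreements with its
# neighbors; healthy sensors disagree rarely, faulty ones often (rates assumed).
rng = np.random.default_rng(0)
n_sensors, n_obs = 50, 200
true_fault = rng.random(n_sensors) < 0.2          # ground truth, ~20% faulty
k = rng.binomial(n_obs, np.where(true_fault, 0.6, 0.05))

# Two-component binomial mixture fitted by EM.
pi, th0, th1 = 0.5, 0.1, 0.5                      # initial guesses
for _ in range(100):
    # E-step: posterior fault probability of each sensor (log-space likelihoods).
    log_l1 = k * np.log(th1) + (n_obs - k) * np.log(1 - th1) + np.log(pi)
    log_l0 = k * np.log(th0) + (n_obs - k) * np.log(1 - th0) + np.log(1 - pi)
    r = 1.0 / (1.0 + np.exp(log_l0 - log_l1))
    # M-step: re-estimate mixing weight and both disagreement rates.
    pi = r.mean()
    th1 = (r * k).sum() / (r.sum() * n_obs)
    th0 = ((1 - r) * k).sum() / ((1 - r).sum() * n_obs)

predicted = r > 0.5                               # per-sensor fault verdict
```

    The posterior `r` plays the role of the estimated per-node fault probability; with well-separated rates the verdicts match the ground truth almost perfectly.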

  7. Supposed capable fault analysis as supporting data for Nuclear Power Plant in Bojonegara, Banten province

    International Nuclear Information System (INIS)

    Purnomo Raharjo; June Mellawati; Yarianto SBS

    2016-01-01

    Fault locations and regions within a 150 km radius of a fault line or fault zone are rejected areas for a nuclear power plant site. The objective of this study was to identify the existence of surface faults or supposed capable faults within 150 km of the site of interest. The methodology covers interpretation of fault structures, seismic reflection analysis on land and at sea, seismotectonic analysis, and determination of areas which are free from surface faults. The regional study area, with a radius of 150 km from the site of interest, includes the provinces of Banten, Jakarta, West Java, and the southern part of Sumatra (part of Lampung). The results of Landsat image interpretation showed a northeast-southwest fault structure pattern representing the Cimandiri fault, and northwest-southeast patterns representing the Citandui, Baribis, and Tangkuban Perahu faults. The northeast-southwest faults are estimated to be left-lateral faults, and the northwest-southeast trending faults are estimated to be right-lateral faults. Based on the seismic data on land, faults that rise through to the Cisubuh formation are classified as supposed capable faults. Sequence-stratigraphic analysis of the seismic data at sea, correlated with Pleistocene depositional units, distinguishes Qt (the Tertiary to Early Pleistocene boundary), Q1 (the Early to Middle Pleistocene boundary) and Q2 (the Middle to Late Pleistocene boundary); supposed capable faults pierce the Early to Late Pleistocene sequences. The results of the seismotectonic analysis showed that there are capable faults which are estimated to be supposed capable faults. (author)

  8. Integration of Fault Detection and Isolation with Control Using Neuro-fuzzy Scheme

    Directory of Open Access Journals (Sweden)

    A. Asokan

    2009-10-01

    In this paper an algorithm is developed for fault diagnosis and a fault-tolerant control strategy for nonlinear systems subjected to an unknown time-varying fault. At first, the design of the fault diagnosis scheme is performed using a model-based fault detection technique. A neuro-fuzzy chi-square scheme is applied for fault detection and isolation. The fault magnitude and the time of occurrence of the fault are obtained through the neuro-fuzzy chi-square scheme. The estimated fault magnitude is normalized and used by the feed-forward control algorithm to make appropriate changes in the manipulated variable to keep the controlled variable near its set value. The feed-forward controller acts along with the feed-back controller to control the multivariable system. The proposed scheme is applied to a three-tank process for various types of fault inputs to show the effectiveness of the approach.
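
    The chi-square part of the scheme is a standard residual test: under the no-fault hypothesis a zero-mean Gaussian residual vector r with covariance S yields a statistic r'S⁻¹r that follows a chi-square distribution, and a fault is declared when it exceeds a quantile threshold. A minimal Python sketch with assumed values, without the neuro-fuzzy layer:

```python
import numpy as np

# Residual covariance and threshold are assumed for this sketch.
S = np.diag([0.04, 0.09])     # residual covariance under no-fault hypothesis
THRESHOLD = 5.99              # chi-square 95% quantile for 2 degrees of freedom

def chi_square_alarm(residual):
    """Alarm when the Mahalanobis-type statistic exceeds the quantile."""
    stat = float(residual @ np.linalg.solve(S, residual))
    return stat > THRESHOLD

healthy_residual = np.array([0.1, -0.2])   # within the noise level -> no alarm
faulty_residual = np.array([1.1, -0.2])    # additive bias on channel 1 -> alarm
```

    In the paper's scheme the raw alarm is replaced by neuro-fuzzy reasoning over such residual statistics, which also yields the fault magnitude and time of occurrence.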

  9. A survey of fault diagnosis technology

    Science.gov (United States)

    Riedesel, Joel

    1989-01-01

    Existing techniques and methodologies for fault diagnosis are surveyed. The techniques run the gamut from theoretical artificial intelligence work to conventional software engineering applications. They are shown to define a spectrum of implementation alternatives where tradeoffs determine their position on the spectrum. Various tradeoffs include execution time limitations and memory requirements of the algorithms as well as their effectiveness in addressing the fault diagnosis problem.

  10. Arc fault detection system

    Science.gov (United States)

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between the current flowing from secondary windings located in a current transformer coupled to the power supply side of a switchboard, and the total current induced in secondary windings coupled to the load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and the power supply to remove the supply of power to the switchboard. 1 fig.

  11. Arc fault detection system

    Science.gov (United States)

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between the current flowing from secondary windings located in a current transformer coupled to the power supply side of a switchboard, and the total current induced in secondary windings coupled to the load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and the power supply to remove the supply of power to the switchboard.

  12. Navigation System Fault Diagnosis for Underwater Vehicle

    DEFF Research Database (Denmark)

    Falkenberg, Thomas; Gregersen, Rene Tavs; Blanke, Mogens

    2014-01-01

    This paper demonstrates fault diagnosis on unmanned underwater vehicles (UUVs) based on analysis of the structure of the nonlinear dynamics. Residuals are generated using different approaches in structural analysis, followed by statistical change detection. Hypothesis testing thresholds are made signal-based to cope with non-ideal properties seen in real data. Detection of both sensor and thruster failures is demonstrated. Isolation is performed using the residual signature of detected faults, and the change detection algorithm is used to assess the severity of faults by estimating their magnitude...
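
    The statistical change detection step can be illustrated with a one-sided CUSUM test, a common choice for detecting a persistent shift in a residual. The drift and threshold values in this Python sketch are arbitrary; the paper's actual thresholds are signal-based:

```python
import numpy as np

def cusum_alarm(residuals, drift, threshold):
    """One-sided CUSUM: return the index of the first alarm, or -1 if none."""
    g = 0.0
    for i, r in enumerate(residuals):
        g = max(0.0, g + r - drift)   # cumulative sum with drift leakage
        if g > threshold:
            return i
    return -1

rng = np.random.default_rng(2)
residual = rng.normal(0.0, 0.1, 300)
residual[150:] += 0.5                 # simulated thruster fault at sample 150

no_fault = cusum_alarm(rng.normal(0.0, 0.1, 300), drift=0.25, threshold=2.0)
alarm_at = cusum_alarm(residual, drift=0.25, threshold=2.0)
```

    The drift term suppresses accumulation of pure noise, so the fault-free run never alarms, while the mean shift after sample 150 triggers an alarm within a handful of samples; the final value of the cumulated sum is also a crude magnitude estimate.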

  13. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    Science.gov (United States)

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention in these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  14. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    Directory of Open Access Journals (Sweden)

    Yingyi Chen

    2017-01-01

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention in these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, the rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  15. Sequential fault diagnosis for mechatronics system using diagnostic hybrid bond graph and composite harmony search

    Directory of Open Access Journals (Sweden)

    Ming Yu

    2015-12-01

    This article proposes a sequential fault diagnosis method to handle asynchronous distinct faults using a diagnostic hybrid bond graph and composite harmony search. The faults under consideration include fault modes, abrupt faults, and intermittent faults. The faults can occur at different time instances, which adds to the difficulty of decision making for fault diagnosis, because an earlier occurred fault can exhibit a fault symptom which masks the symptom of a latter occurred fault. In order to solve this problem, a sequential identification algorithm is developed in which the identification task is reactivated based on two conditions. The first condition is that the latter occurred fault has at least one inconsistent coherence vector element which is consistent in the coherence vector of the earlier occurred fault; the second condition is that the existing fault coherence vector has the ability to hide other faults and the second-level residual exceeds the threshold. A new composite harmony search which is capable of handling continuous and binary variables simultaneously is proposed for the identification. Experiments on a mobile robot system are conducted to assess the proposed sequential fault diagnosis algorithm.

  16. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These ...

  17. Dynamics Modeling and Analysis of Local Fault of Rolling Element Bearing

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2015-01-01

    This paper presents a nonlinear vibration model of rolling element bearings with 5 degrees of freedom based on Hertz contact theory and relevant bearing knowledge of kinematics and dynamics. The slipping of the balls, the oil film stiffness, and the nonlinear time-varying stiffness of the bearing are taken into consideration in the model proposed here. A single-point local fault model of the rolling element bearing is introduced into the nonlinear model with 5 degrees of freedom, according to the loss of contact deformation of a ball when it rolls into and out of the local fault location. The functions of spall depth corresponding to defects of different shapes are discussed separately in this paper. Then an ODE solver in MATLAB is adopted to numerically solve the nonlinear vibration model and simulate the vibration response of rolling element bearings with a local fault. Analysis of the simulated signals shows behavior and patterns in both the time and frequency domains similar to those observed in processed experimental signals of rolling element bearings, which validates that the nonlinear vibration model proposed here can generate typical local fault signals of rolling element bearings for research on possible and effective fault diagnostic algorithms.
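
    A much-simplified version of such a simulation, reduced here to an impulse train exciting a decaying resonance rather than the full 5-degree-of-freedom model, still reproduces the key signature: the fault frequency reappears as the repetition rate of the impulses. All numeric values in this Python sketch are assumptions, not taken from the paper:

```python
import numpy as np

# Each ball pass over the spall excites a decaying resonance of the housing.
fs = 8000.0
t = np.arange(0, 0.5, 1.0 / fs)
f_fault = 87.0                        # ball-pass (fault) frequency, assumed
f_res, decay = 1500.0, 600.0          # housing resonance and damping, assumed

x = np.zeros_like(t)
for hit in np.arange(0.0, t[-1], 1.0 / f_fault):
    mask = t >= hit
    tau = t[mask] - hit
    x[mask] += np.exp(-decay * tau) * np.sin(2 * np.pi * f_res * tau)

# The impulse repetition rate shows up as the first strong peak of the
# autocorrelation of the rectified signal.
env = np.abs(x) - np.abs(x).mean()
ac = np.correlate(env, env, mode="full")[len(env) - 1:]
lag = np.argmax(ac[30:]) + 30         # skip the zero-lag neighborhood
estimated_f_fault = fs / lag          # close to f_fault
```

    Such synthetic signals are exactly what the paper's model is meant to supply to candidate diagnostic algorithms, but with physically grounded impulse shapes instead of the ad hoc exponential used here.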

  18. Multi-link faults localization and restoration based on fuzzy fault set for dynamic optical networks.

    Science.gov (United States)

    Zhao, Yongli; Li, Xin; Li, Huadong; Wang, Xinbo; Zhang, Jie; Huang, Shanguo

    2013-01-28

    Based on a distributed method of bit-error-rate (BER) monitoring, a novel multi-link fault restoration algorithm is proposed for dynamic optical networks. The concept of the fuzzy fault set (FFS) is first introduced for multi-link fault localization; it includes all possible optical equipment or fiber links with a membership describing the possibility of fault. Such a set is characterized by a membership function which assigns each object a grade of membership ranging from zero to one. An OSPF protocol extension is designed for flooding the BER information in the network. The BER information can be correlated to link faults through the FFS. Based on the BER information and the FFS, a multi-link fault localization mechanism and restoration algorithm are implemented and experimentally demonstrated on a GMPLS-enabled optical network testbed with 40 wavelengths in each fiber link. Experimental results show that the novel localization mechanism has better performance compared with the extended limited perimeter vector matching (LVM) protocol and that the restoration algorithm can improve the restoration success rate under multi-link fault scenarios.
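
    A fuzzy fault set can be sketched as a membership function over monitored BER values: links at the nominal BER get membership 0, clearly degraded links get 1, and intermediate values are interpolated. The thresholds, log-linear interpolation, and link names in this Python sketch are illustrative, not taken from the paper:

```python
import math

# Membership 0 at the nominal BER, 1 at a clearly faulty BER (values assumed).
NOMINAL_BER, FAULTY_BER = 1e-12, 1e-6

def fault_membership(ber):
    """Grade of membership in the fuzzy fault set, log-linear in BER."""
    if ber <= NOMINAL_BER:
        return 0.0
    if ber >= FAULTY_BER:
        return 1.0
    span = math.log10(FAULTY_BER) - math.log10(NOMINAL_BER)
    return (math.log10(ber) - math.log10(NOMINAL_BER)) / span

# Hypothetical flooded BER measurements for three links.
observed = {"link_a": 2e-7, "link_b": 1e-12, "link_c": 5e-9}
ffs = {link: fault_membership(b) for link, b in observed.items()}
suspects = sorted(ffs, key=ffs.get, reverse=True)   # most suspect first
```

    Localization then amounts to correlating such graded suspicions across the flooded measurements instead of making a hard faulty/healthy decision per link.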

  19. Tectonic tremor and LFEs on a reverse fault in Taiwan

    Science.gov (United States)

    Aguiar, Ana C.; Chao, Kevin; Beroza, Gregory C.

    2017-07-01

    We compare low-frequency earthquakes (LFEs) from triggered and ambient tremor under the southern Central Range, Taiwan. We apply the PageRank algorithm used by Aguiar and Beroza (2014) that exploits the repetitive nature of the LFEs to find repeating LFEs in both ambient and triggered tremor. We use these repeaters to create LFE templates and find that the templates created from both tremor types are very similar. To test their similarity, we use both interchangeably and find that most of both the ambient and triggered tremor match the LFE templates created from either data set, suggesting that LFEs for both events have a common origin. We locate the LFEs by using local earthquake P wave and S wave information and find that LFEs from triggered and ambient tremor locate to between 20 and 35 km on what we interpret as the deep extension of the Chaochou-Lishan Fault.

  20. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relation is made hierarchical by using interpretive structural modeling (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a correct basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
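
    The PageRank step can be sketched directly: treat the fault-propagation adjacency matrix as a graph and iterate the PageRank recurrence to obtain relative influence values. The four-component graph in this Python sketch is invented for illustration:

```python
import numpy as np

# A[i, j] = 1 means a fault in component i can propagate to component j
# (hypothetical components: spindle, feed, tool, workpiece quality).
A = np.array([[0, 1, 1, 0],     # spindle -> feed, tool
              [0, 0, 1, 0],     # feed    -> tool
              [0, 0, 0, 1],     # tool    -> workpiece quality
              [0, 0, 0, 0]], float)

def pagerank(adj, damping=0.85, iters=200):
    n = adj.shape[0]
    # Row-normalize; dangling rows (no outgoing edges) spread uniformly.
    out = adj.sum(axis=1, keepdims=True)
    M = np.where(out > 0, adj / np.where(out == 0, 1.0, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - damping) / n + damping * (M.T @ r)
    return r / r.sum()

rank = pagerank(A)   # component 3 accumulates the most fault influence here
```

    In the paper's method these PageRank values are only one factor; they are combined with the time-correlated fault rates and fault mode frequencies before criticality is assigned.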

  1. Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.

    Science.gov (United States)

    Wang, Tao; Xie, Wenfang; Zhang, Youmin

    2012-05-01

    In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known; different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first one, developed to accommodate uncertainties and faults without utilizing exact bounds information. The stability of the overall control systems is proved by using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
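
    The basic sliding mode mechanism, stripped of the adaptive and fault-accommodation layers, can be shown on a scalar uncertain system: a switching gain that dominates the disturbance bound drives the state onto the sliding surface and keeps it there. All values in this Python sketch are assumptions, not the paper's design:

```python
import numpy as np

# Scalar plant x' = a*x + b*u + d(t), |d| <= D, with a, b, D assumed known.
a, b, D = 1.0, 1.0, 0.5
k = D + 0.5                              # switching gain exceeds the bound D
dt, steps = 1e-3, 5000

x = 2.0
for i in range(steps):
    d = D * np.sin(0.01 * i)             # bounded matched disturbance
    s = x                                # sliding surface s = x
    u = -(a * x + k * np.sign(s)) / b    # equivalent control + switching term
    x += dt * (a * x + b * u + d)        # Euler step of the closed loop

# x ends inside a thin chattering band around s = 0 despite the disturbance
```

    On the surface the closed loop reduces to x' = -k sign(x) + d, so any k > D guarantees reaching and staying; the paper's adaptive variant estimates such gains online instead of assuming the bounds.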

  2. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones which were collected by long-term detailed geodetic surveys with application of field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. 5 to 7·10–5 per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account in defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. It is evidenced by instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones that deformations 'paradoxically' deviate from the inherited movements of the past geological periods. In terms of recent geodynamics, the 'paradoxes' of high and low strain velocities are related to a reliable empirical fact: the presence of extremely high local velocities of deformation in fault zones (about 10–5 per year and above) against the background of slow regional deformations whose velocities are lower by 2 to 3 orders of magnitude. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the earth tidal deformations per year. A 'fault

  3. Stafford fault system: 120 million year fault movement history of northern Virginia

    Science.gov (United States)

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  4. Qademah Fault 3D Survey

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    Objective: Collect 3D seismic data at the Qademah Fault location for 1. 3D traveltime tomography, 2. 3D surface wave migration, 3. 3D phase velocity, and 4. possible reflection processing. Acquisition Date: 26 – 28 September 2014. Acquisition Team: Sherif, Kai, Mrinal, Bowen, Ahmed. Acquisition Layout: We used 288 receivers arranged in 12 parallel lines; each line has 24 receivers. Inline offset is 5 m and crossline offset is 10 m. One shot was fired at each receiver location. We used a 40 kg weight drop as the seismic source, with 8 to 15 stacks at each shot location.
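The acquisition layout above fully determines the station grid, so it can be sketched in a few lines. This is a hedged illustration in local coordinates: the record gives no absolute site coordinates, so the grid starts at (0, 0).

```python
# Sketch of the acquisition layout described above: 12 parallel lines of
# 24 receivers, 5 m inline and 10 m crossline spacing, one shot per
# receiver station. Local coordinates only; absolute positions at the
# Qademah Fault site are not given in the record.

def survey_geometry(n_lines=12, n_receivers=24, dx=5.0, dy=10.0):
    """(x, y) positions of all receiver stations (shots are colocated)."""
    return [(i * dx, j * dy)
            for j in range(n_lines) for i in range(n_receivers)]

stations = survey_geometry()
assert len(stations) == 288     # 12 lines x 24 receivers
```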

  5. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
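The graph concepts the abstract names (path sets, reachability, cut sets) can be illustrated on a toy example. This is a minimal brute-force sketch on an invented failure-oriented directed graph, not the codes described in the report: source S feeds components A, B, C; T is the system output.

```python
from itertools import combinations

# Invented failure-oriented directed graph for illustration only.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["T"], "T": []}

def reachable(graph, removed, start="S", goal="T"):
    """Is any path set from start to goal intact after removing nodes?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node in removed or node in seen:
            continue
        if node == goal:
            return True
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

def minimal_cut_sets(graph):
    """Brute-force minimal sets of component failures that fail the system."""
    components = [n for n in graph if n not in ("S", "T")]
    cuts = []
    for size in range(1, len(components) + 1):
        for combo in combinations(components, size):
            cand = set(combo)
            if not reachable(graph, cand) and not any(c < cand for c in cuts):
                cuts.append(cand)
    return cuts

cut_sets = {frozenset(c) for c in minimal_cut_sets(graph)}
assert cut_sets == {frozenset({"C"}), frozenset({"A", "B"})}
```

Losing C alone, or A and B together, disconnects S from T; every other failure set either leaves a path intact or contains one of these minimal cuts.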

  6. The Cenozoic strike-slip faults and the regional crust stability of Beishan area

    International Nuclear Information System (INIS)

    Guo Zhaojie; Zhang Zhicheng; Zhang Chen; Liu Chang; Zhang Yu; Wang Ju; Chen Weiming

    2008-01-01

    The remote sensing images and geological features of Beishan area indicate that the Altyn Tagh fault, Sanweishan-Shuangta fault, Daquan fault and Hongliuhe fault are distributed in Beishan area from south to north. The faults are all left-lateral strike-slip faults trending NE40-50°, displaying a similar distribution pattern. Secondary branch faults are developed at the end of each main strike-slip fault, trending nearly east-west and forming dendritic oblique crossings at angles of 30-50°. Because of the left-lateral slip of the branch faults, the granites or the blocks exposed within the branch faults rotate clockwise, forming 'Domino' structures. So the structural style of Beishan area consists of the Altyn Tagh fault, Sanweishan-Shuangta fault, Daquan fault, Hongliuhe fault and their branch faults and rotational structures between different faults. Sedimentary analysis of the fault valleys in the study area and ESR chronological tests of fault clay show that the Sanweishan-Shuangta fault formed in the late Pliocene (N2), while the Daquan fault displays a formation age of 1.5-1.2 Ma, and the activity age of the relevant branch faults is Late Pleistocene (400 ka). The ages become younger from the Altyn Tagh fault to the Daquan fault, and the strike-slip faults display NW-trending extension, further revealing the lateral growth process of the strike-slip boundary at the northern margin during the Cenozoic uplift of the Tibetan Plateau. The displacement amounts on several secondary faults caused by the activities of the faults are slight due to the above-mentioned structural distribution characteristics of Beishan area, which means that this area is the most stable active area with few seismic activities. We propose the main granitic bodies in Beishan area could be favorable preselected locations for China's high level radioactive waste repository. (authors)

  7. New development in relay protection for smart grid : new principles of fault distinction

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B.; Hao, Z. [Xi' an Jiaotong Univ., Xian (China); Klimek, A. [Powertech Labs Inc., Surrey, BC (Canada); Bo, Z. [Alstom Grid Automation (United Kingdom)

    2010-07-01

    China is planning to integrate a smart grid into a proposed 750/1000 kV transmission network. However, the performance of the protection relay must be assured in order to have this kind of transmitting capacity. There are many protection strategies that address the many demands of a smart grid, including ultra-high-speed transient-based fault discrimination; new co-ordination principles of main and back-up protection to suit the diversification of the power network; optimal co-ordination between relay protection; and autoreclosure to enhance robustness of the power network. There are also new developments in protection early warning and tripping functions in the protection concepts based on wide area information. This paper presented the principles, algorithms and techniques of single-ended, transient-based and ultra-high-speed protection for extra-high voltage (EHV) transmission lines, buses, DC transmission lines and faulted line selection for non-solid earthed networks. Test results have verified that the proposed methods can determine fault characteristics with ultra-high-speed (5 ms), and that the new principles of fault discrimination can satisfy the demand of EHV systems within a smart grid. High speed Digital Signal Processor (DSP) embedded system techniques combined with optical sensors provide the ability to record and compute detailed fault transients. This technology makes it possible to implement protection principles based on transient information. Due to the inconsistent nature of the wave impedance for various power apparatuses and the reflection and refraction characteristics of their interconnection points, the fault transients contain abundant information about the fault location and type. It is possible to construct ultra high speed and more sensitive AC, DC and busbar main protection through the correct analysis of such information. 23 refs., 6 figs.
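Transient-based protection of the kind described above commonly exploits traveling-wave arrival times for fault location. The following is an illustrative double-ended traveling-wave calculation, not the authors' method; the line length, wave speed and fault position are invented.

```python
# Illustrative double-ended traveling-wave fault location: wavefront
# arrival times t1, t2 at the two line terminals place the fault at
# x = (L + v * (t1 - t2)) / 2 from terminal 1. All values are invented.

v = 2.9e8      # wave propagation speed on the line, m/s
L = 100e3      # line length, m

def locate_fault(t1, t2):
    return (L + v * (t1 - t2)) / 2

# Fault 30 km from terminal 1: arrivals at 30e3/v and 70e3/v seconds.
x = locate_fault(30e3 / v, 70e3 / v)
assert abs(x - 30e3) < 1e-3
```

The 5 ms figure quoted in the abstract is compatible with this scheme, since wavefronts traverse even long EHV lines in well under a millisecond.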

  8. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
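The quantitative evaluation the handbook covers can be sketched with a tiny gate evaluator. This is a minimal illustration assuming independent basic events; the tree and probabilities are invented, not taken from the handbook.

```python
# Tiny fault-tree evaluator: AND gates multiply basic-event probabilities,
# OR gates combine them as 1 - prod(1 - p), assuming independent events.
# The example tree and probabilities are invented.

def prob(node, basic):
    kind = node[0]
    if kind == "event":
        return basic[node[1]]
    child_probs = [prob(child, basic) for child in node[1]]
    acc = 1.0
    if kind == "and":
        for p in child_probs:
            acc *= p
        return acc
    for p in child_probs:          # "or" gate
        acc *= (1.0 - p)
    return 1.0 - acc

tree = ("or", [("event", "pump_fails"),
               ("and", [("event", "valve_fails"), ("event", "backup_fails")])])
basic = {"pump_fails": 0.01, "valve_fails": 0.1, "backup_fails": 0.05}
top = prob(tree, basic)            # 1 - (1 - 0.01) * (1 - 0.1 * 0.05)
assert abs(top - 0.01495) < 1e-9
```

The minimal cut sets of this toy tree are {pump_fails} and {valve_fails, backup_fails}, matching the qualitative evaluation the handbook pairs with the quantitative one.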

  9. Locating one pairwise interaction: Three recursive constructions

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2016-09-01

    Full Text Available In a complex component-based system, choices (levels) for components (factors) may interact to cause faults in the system behaviour. When faults may be caused by interactions among few factors at specific levels, covering arrays provide a combinatorial test suite for discovering the presence of faults. While well studied, covering arrays do not enable one to determine the specific levels of factors causing the faults; locating arrays ensure that the results from test suite execution suffice to determine the precise levels and factors causing faults, when the number of such causes is small. Constructions for locating arrays are at present limited to heuristic computational methods and quite specific direct constructions. In this paper three recursive constructions are developed for locating arrays to locate one pairwise interaction causing a fault.
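The covering-array property the abstract builds on is easy to check mechanically. Below is a small strength-2 (pairwise) coverage checker; the 3-factor binary example is invented, and this shows coverage only, not the stronger locating property.

```python
from itertools import combinations, product

# Checker for the covering-array property (strength t = 2): every pair of
# factors must exhibit every combination of levels in at least one test.

def covers_all_pairs(tests, levels):
    for f1, f2 in combinations(range(len(levels)), 2):
        needed = set(product(levels[f1], levels[f2]))
        seen = {(t[f1], t[f2]) for t in tests}
        if needed - seen:
            return False
    return True

levels = [(0, 1)] * 3
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert covers_all_pairs(tests, levels)          # a pairwise covering array
assert not covers_all_pairs(tests[:3], levels)  # dropping a row breaks it
```

A locating array must satisfy this coverage condition and additionally make the sets of failing tests distinct for different candidate causes, which is the property the paper's recursive constructions preserve.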

  10. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
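On a profile the gravity gradient tensor is symmetric 2x2, so its eigenvectors have a closed form. This is a hedged sketch of the eigenvector-dip idea with invented tensor values, not the paper's data or processing chain.

```python
import math

# Closed-form eigen-decomposition of a symmetric 2x2 gravity gradient
# tensor [[gxx, gxz], [gxz, gzz]] on a profile. Per the abstract, the dip
# of the maximum eigenvector tracks a normal-fault dip (the minimum
# eigenvector, a reverse-fault dip). Tensor values below are invented.

def eigen2x2_sym(gxx, gxz, gzz):
    mean = 0.5 * (gxx + gzz)
    r = math.hypot(0.5 * (gxx - gzz), gxz)
    lam_max, lam_min = mean + r, mean - r
    theta = 0.5 * math.atan2(2.0 * gxz, gxx - gzz)  # angle of max eigenvector
    return lam_max, lam_min, (math.cos(theta), math.sin(theta))

lam_max, lam_min, v = eigen2x2_sym(10.0, 30.0, -10.0)
dip_deg = math.degrees(math.atan2(v[1], v[0]))       # candidate fault dip
```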

  11. FAULT-TOLERANT DESIGN FOR ADVANCED DIVERSE PROTECTION SYSTEM

    Directory of Open Access Journals (Sweden)

    YANG GYUN OH

    2013-11-01

    Full Text Available For the improvement of APR1400 Diverse Protection System (DPS) design, the Advanced DPS (ADPS) has recently been developed to enhance the fault tolerance capability of the system. Major fault masking features of the ADPS compared with the APR1400 DPS are the changes to the channel configuration and reactor trip actuation equipment. To minimize the fault occurrences within the ADPS, and to mitigate the consequences of common-cause failures (CCF) within the safety I&C systems, several fault avoidance design features have been applied in the ADPS. The fault avoidance design features include the changes to the system software classification, communication methods, equipment platform, MMI equipment, etc. In addition, the fault detection, location, containment, and recovery processes have been incorporated in the ADPS design. Therefore, it is expected that the ADPS can provide an enhanced fault tolerance capability against the possible faults within the system and its input/output equipment, and the CCF of safety systems.

  12. Radon anomalies along faults in North of Jordan

    International Nuclear Information System (INIS)

    Al-Tamimi, M.H.; Abumurad, K.M.

    2001-01-01

    Radon emanation was sampled in five locations in a limestone quarry area using SSNTDs CR-39. Radon levels in the soil air at four different well-known traceable fault planes were measured along a traverse line perpendicular to each of these faults. Radon levels at the fault were higher by a factor of 3-10 than away from the faults. However, some sites have broader shoulders than the others. The method was applied along a fifth inferred fault zone. The results show anomalous radon level in the sampled station near the fault zone, which gave a radon value higher by three times than background. This study draws its importance from the fact that in Jordan many cities and villages have been established over an intensive faulted land. Also, our study has considerable implications for the future radon mapping. Moreover, radon gas is proved to be a good tool for fault zones detection

  13. Application of genetic neural network in steam generator fault diagnosing

    International Nuclear Information System (INIS)

    Lin Xiaogong; Jiang Xingwei; Liu Tao; Shi Xiaocheng

    2005-01-01

    In the paper, a new algorithm in which a neural network and a genetic algorithm are combined is adopted, aiming at the problems of slow convergence rate and easily falling into local minima in the training of traditional BP neural networks, and is used in the fault diagnosis of a steam generator. The result shows that this algorithm can solve the convergence problem in network training effectively. (author)
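The hybrid idea can be sketched with a miniature genetic algorithm that searches a weight vector globally, so that local gradient training does not stall in a poor local minimum. The fitness function below is a stand-in for network training error; all settings are invented, and this is not the authors' algorithm.

```python
import random

# Miniature GA: elitist selection, one-point crossover, Gaussian mutation.
# fitness() stands in for the BP training error of a network whose weights
# are the chromosome; the optimum (all weights = 0.5) is invented.

random.seed(0)

def fitness(weights):
    return sum((w - 0.5) ** 2 for w in weights)

def evolve(dim=4, pop_size=20, generations=60):
    pop = [[random.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(dim)] += random.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
assert fitness(best) < 0.2    # far below a typical random start (~2.3)
```

In the hybrid scheme, `best` would seed BP training rather than terminate the search.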

  14. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out...... that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design...... of the nominal feedback controller....

  15. Seismicity and Tectonics of the West Kaibab Fault Zone, AZ

    Science.gov (United States)

    Wilgus, J. T.; Brumbaugh, D. S.

    2014-12-01

    The West Kaibab Fault Zone (WKFZ) is the westernmost bounding structure of the Kaibab Plateau of northern Arizona. The WKFZ is a branching complex of high angle, normal faults downthrown to the west. There are three main faults within the WKFZ, the Big Springs fault with a maximum of 165 m offset, the Muav fault with 350 m of displacement, and the North Road fault having a maximum throw of approximately 90 m. Mapping of geologically recent surface deposits at or crossing the fault contacts indicates that the faults are likely Quaternary, with recent offsets. The area is one of the most seismically active in Arizona and lies within the Northern Arizona Seismic Belt (NASB), which stretches across northern Arizona trending NW-SE. The data set for this study includes 156 well documented events, the largest being a M5.75 in 1959, and includes a swarm of seven earthquakes in 2012. The seismic data set (1934-2014) reveals that seismic activity clusters in two regions within the study area: the Fredonia cluster, located in the NW corner of the study area, and the Kaibab cluster, located in the south central portion of the study area. The fault plane solutions to date indicate NE-SW to E-W extension is occurring in the study area. Source relationships between earthquakes and faults within the WKFZ have not previously been studied in detail. The goal of this study is to use the seismic data set, the available data on faults, and the regional physiography to search for source relationships for the seismicity. Analysis includes source parameters of the earthquake data (location, depth, and fault plane solutions), and comparison of this output to the known faults and areal physiographic framework to indicate any active faults of the WKFZ, or suggested active unmapped faults. This research contributes to a better understanding of the present nature of the WKFZ and the NASB as well.

  16. Determining on-fault magnitude distributions for a connected, multi-fault system

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2017-12-01

    A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip-rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0- 7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0- 6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults
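The paper's objective and constraints can be caricatured with a toy greedy placement: each synthetic earthquake goes to the feasible fault with the largest remaining slip deficit, and a fault is feasible only if it is long enough to contain the rupture. This is a hedged stand-in for the BIP formulation; the faults, targets, event slips and the length scaling are all invented.

```python
# Toy greedy analogue of the slip-rate-misfit objective with the
# rupture-length containment constraint. All numbers are invented.

faults = {               # name: (length_km, target_slip_m)
    "F1": (120.0, 10.0),
    "F2": (60.0, 50.0),
}

def rupture_length_km(mag):
    """Crude magnitude-length scaling, for illustration only."""
    return 10 ** (0.6 * mag - 2.9)

def assign(events, faults):
    accum = {name: 0.0 for name in faults}
    placement = {}
    for i, (mag, slip) in enumerate(events):
        # containment constraint: fault must be long enough for the rupture
        feasible = [name for name, (length, _) in faults.items()
                    if length >= rupture_length_km(mag)]
        # greedy objective: largest remaining slip deficit first
        choice = max(feasible, key=lambda n: faults[n][1] - accum[n])
        accum[choice] += slip
        placement[i] = choice
    return placement, accum

events = [(8.0, 5.0), (6.2, 1.0), (6.8, 2.0)]   # (magnitude, slip in m)
placement, accum = assign(events, faults)
assert placement[0] == "F1"    # only F1 is long enough for the M8 event
```

Unlike this greedy pass, the BIP solves all placements jointly with explicit minimum and maximum slip-rate bounds, which is what makes infeasibility (as for the unconnected-fault case) detectable.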

  17. Model Based Fault Detection in a Centrifugal Pump Application

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh

    2006-01-01

    A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...

  18. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    Full Text Available One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause the values of received signal strength (RSS) to be missed during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from the IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and can achieve the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm can achieve greater than 95% correct floor determination under the scenario in which 40% of RNs failed.
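The core of the idea admits a short sketch: for each floor, average the k strongest RSS readings heard from that floor's reference nodes and pick the floor with the largest mean. Variable names and the RSS values are mine, and real RMoS includes details beyond this.

```python
# Minimal floor-determination sketch: mean of the k strongest RSS values
# per floor. Readings from failed RNs are simply absent, which the mean
# of the strongest values tolerates.

def rmos_floor(rss_by_floor, k=3):
    best_floor, best_score = None, float("-inf")
    for floor, readings in rss_by_floor.items():
        strongest = sorted(readings, reverse=True)[:k]
        if not strongest:
            continue                      # every RN on this floor failed
        score = sum(strongest) / len(strongest)
        if score > best_score:
            best_floor, best_score = floor, score
    return best_floor

# Hypothetical RSS values in dBm; floor 2 has one failed (missing) RN:
rss = {1: [-70, -72, -80, -90], 2: [-55, -60, -85], 3: [-75, -78, -79]}
assert rmos_floor(rss) == 2
```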

  19. Fault Diagnosis for Actuators in a Class of Nonlinear Systems Based on an Adaptive Fault Detection Observer

    Directory of Open Access Journals (Sweden)

    Runxia Guo

    2016-01-01

    Full Text Available The problem of actuators’ fault diagnosis is pursued for a class of nonlinear control systems that are affected by bounded measurement noise and external disturbances. A novel fault diagnosis algorithm has been proposed by combining the idea of adaptive control theory and the approach of fault detection observer. The asymptotical stability of the fault detection observer is guaranteed by setting the adaptive adjusting law of the unknown fault vector. A theoretically rigorous proof of asymptotical stability has been given. Under the condition that random measurement noise generated by the sensors of control systems and external disturbances exist simultaneously, the designed fault diagnosis algorithm is able to successfully give specific estimated values of state variables and failures rather than just giving a simple fault warning. Moreover, the proposed algorithm is very simple and concise and is easy to be applied to practical engineering. Numerical experiments are carried out to evaluate the performance of the fault diagnosis algorithm. Experimental results show that the proposed diagnostic strategy has a satisfactory estimation effect.
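The mechanism of an adaptive fault-detection observer can be shown on the simplest possible case. The paper treats a class of nonlinear systems; this is a toy discrete-time scalar linear analogue with invented gains, where the adaptive adjusting law integrates the output residual so the observer reports a fault magnitude rather than a bare warning.

```python
# Toy adaptive observer for a scalar plant x+ = a*x + f with unknown
# constant actuator fault f. The estimate fhat is driven by the residual
# e = y - xhat (the "adaptive adjusting law"). All values are invented.

a, L, gamma = 0.9, 0.5, 0.1   # plant pole, observer gain, adaptation gain
f_true = 2.0                  # unknown constant actuator fault
x = xhat = fhat = 0.0
for _ in range(300):
    y = x                     # measured output
    e = y - xhat              # output residual
    xhat = a * xhat + fhat + L * e
    fhat = fhat + gamma * e   # adaptive adjusting law for the fault estimate
    x = a * x + f_true        # plant subject to the fault

# the fault estimate converges to the true fault magnitude
assert abs(fhat - f_true) < 1e-3
```

With these gains the joint error dynamics have spectral radius about 0.71, so estimation error decays geometrically; the paper's contribution is proving the analogous convergence for the nonlinear case with noise and disturbances.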

  20. Data Fault Detection in Medical Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2015-03-01

    Full Text Available Medical body sensors can be implanted or attached to the human body to monitor the physiological parameters of patients all the time. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians’ diagnosis, therefore detecting sensor data faults has been widely researched in recent years. Most of the typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients aren’t changing synchronously at the same time, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) the proposal of a novel judgment criterion of fault state according to the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M.
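The two DFD-M stages can be caricatured with far simpler stand-ins: plain z-scores in place of the dynamic-local outlier factor, and ordinary least squares in place of the trapezoidal-fuzzy regression. All readings below are invented.

```python
from statistics import mean, stdev

# Stage 1 stand-in: flag readings far from the mean of a data vector.
def zscore_outliers(values, thresh=2.0):
    m, s = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - m) > thresh * s]

# Stage 2 stand-in: least-squares y = b0 + b1*x, e.g. to predict one
# physiological channel from a correlated one and flag readings that sit
# far off the fitted line.
def fit_line(xs, ys):
    mx, my = mean(xs), mean(ys)
    b1 = (sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

heart_rate = [71, 72, 73, 72, 71, 140, 72, 73]   # one faulty spike (bpm)
assert zscore_outliers(heart_rate) == [5]
b0, b1 = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])
```

The paper's point is precisely that these naive stand-ins confuse illness-driven excursions with faults, which motivates its fuzzy-regression prediction step.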

  1. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, which is an essential part of safety design and of operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulation quantitative data into qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design as well as to support recovery and shutdown operation and disaster management.
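Qualitative fault propagation over a semantic network reduces, at its simplest, to reachability in a causal graph. The sketch below uses an invented toy topology; the actual simulator couples such a network with quantitative process models and trend analysis.

```python
from collections import deque

# Toy fault semantic network: nodes are fault states, an edge A -> B
# means "A can cause B". The topology is invented for illustration.
fsn = {
    "pump_cavitation": ["flow_drop"],
    "valve_stuck": ["flow_drop"],
    "flow_drop": ["reactor_overheat"],
    "reactor_overheat": ["shutdown"],
    "shutdown": [],
}

def propagate(fsn, root):
    """All fault states reachable from a root fault, in BFS order."""
    seen, queue, order = {root}, deque([root]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in fsn.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

assert propagate(fsn, "valve_stuck") == [
    "valve_stuck", "flow_drop", "reactor_overheat", "shutdown"]
```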

  2. Layered Fault Management Architecture

    National Research Council Canada - National Science Library

    Sztipanovits, Janos

    2004-01-01

    ... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

  3. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    Science.gov (United States)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging; only capable of resolving features on a decametre scale horizontally and metres scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow for smaller faults to be lost in a resolution gap which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay where PFS were first identified and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high resolution 1m bathymetric imaging in combination with high resolution seismic imaging. Through use of these datasets surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12m and the largest at 612m in length. The displacements over these faults established from both bathymetric and seismic imaging ranges from 30cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector where it exists approximately 1km below the seafloor. These have typical PFS attributes with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicates that the smaller faults exist within typical PFS structure but are

  4. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. A fault detection problem based on parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  5. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  6. Performance based fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2002-01-01

    Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

  7. Layered clustering multi-fault diagnosis for hydraulic piston pump

    Science.gov (United States)

    Du, Jun; Wang, Shaoping; Zhang, Haiyan

    2013-04-01

    Efficient diagnosis is very important for improving the reliability and performance of an aircraft hydraulic piston pump, and it is one of the key technologies in prognostic and health management systems. In practice, due to the harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long-time operation. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, a new method needs to be developed to realize effective diagnosis of simultaneous multiple faults on an aircraft hydraulic pump. In this paper, a new method based on a layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. Intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. The three-layered diagnosis reasoning engine is designed according to the faults' risk priority number and the characteristics of different fault feature extraction methods. The most serious failures are first distinguished with individual signal processing. For the subtler faults, i.e., swash plate eccentricity and incremental clearance increases between piston and slipper, a clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing. The ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose the multiple faults, which occur synchronously, with higher precision and reliability.
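A relative-power-difference feature of the kind the abstract describes can be sketched on synthetic signals. This is an ARPD-style illustration with invented sinusoids and an invented fault signature, not the paper's feature definition or data.

```python
import math

# Compare the power of a vibration signal against a healthy baseline;
# the relative difference serves as a fault feature for hypothesis tests.

def power(signal):
    return sum(s * s for s in signal) / len(signal)

def relative_power_difference(signal, baseline):
    pb = power(baseline)
    return (power(signal) - pb) / pb

n = 1000
healthy = [math.sin(2 * math.pi * 50 * t / n) for t in range(n)]
# invented fault signature: an added higher-frequency component
faulty = [s + 0.5 * math.sin(2 * math.pi * 120 * t / n)
          for t, s in zip(range(n), healthy)]
assert abs(relative_power_difference(faulty, healthy) - 0.25) < 1e-6
```

Here the added component contributes exactly a quarter of the baseline power, so the feature reads 0.25; in the paper, the distribution of such values across repeated windows is what feeds the statistical test.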

  8. Calculation of critical fault recovery time for nonlinear systems based on region of attraction analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Blanke, Mogens

    2014-01-01

    of a system. It must be guaranteed that the trajectory of a system subject to fault remains in the region of attraction (ROA) of the post-fault system during this time. This paper proposes a new algorithm to compute the critical fault recovery time for nonlinear systems with polynomial vector fields using sum

  9. Machine Learning of Fault Friction

    Science.gov (United States)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data, the AE, in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025
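    The feature-plus-trees pipeline described above can be illustrated with a toy gradient-boosted regressor (decision stumps) mapping statistical features of acoustic windows to shear stress. This is a minimal sketch under invented synthetic data, not the authors' implementation; all function names and settings are illustrative.

```python
import numpy as np

def window_features(signal, win):
    """Statistical features of non-overlapping windows of a continuous
    acoustic signal: variance, kurtosis, peak absolute amplitude."""
    n = len(signal) // win
    w = signal[:n * win].reshape(n, win)
    std = w.std(axis=1)
    kurt = ((w - w.mean(axis=1, keepdims=True)) ** 4).mean(axis=1) / (std ** 4 + 1e-12)
    return np.column_stack([w.var(axis=1), kurt, np.abs(w).max(axis=1)])

def fit_stump(X, r):
    """Best single-split regression stump fitted to the residuals r."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        xs, rs = X[order, j], r[order]
        for i in range(1, len(xs)):
            lv, rv = rs[:i].mean(), rs[i:].mean()
            sse = ((rs[:i] - lv) ** 2).sum() + ((rs[i:] - rv) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, (xs[i - 1] + xs[i]) / 2, lv, rv)
    return best[1:]

def boost(X, y, rounds=30, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    base, models = y.mean(), []
    pred = np.full(len(y), base)
    for _ in range(rounds):
        j, t, lv, rv = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        models.append((j, t, lv, rv))
    return base, models

def predict(base, models, X, lr=0.3):
    p = np.full(X.shape[0], base)
    for j, t, lv, rv in models:
        p += lr * np.where(X[:, j] <= t, lv, rv)
    return p
```

    In the laboratory setting the target would be the measured shear stress and the features would come from the continuous AE record; here a ramp "stress" that modulates the noise amplitude stands in for both.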

  10. Fault Isolation and quality assessment for shipboard monitoring

    DEFF Research Database (Denmark)

    Lajic, Zoran; Nielsen, Ulrik Dam; Blanke, Mogens

    2010-01-01

    system and to improve multi-sensor data fusion for the particular system. Fault isolation is an important part of the fault tolerant design for in-service monitoring and decision support systems for ships. In the paper, a virtual example of fault isolation will be presented. Several possible faults...... will be simulated and isolated using residuals and the generalized likelihood ratio (GLR) algorithm. It will be demonstrated that the approach can be used to increase accuracy of sea state estimations employing sensor fusion quality test....
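    The generalized likelihood ratio (GLR) test mentioned above, in its common textbook form for detecting a persistent jump in the mean of a Gaussian residual sequence, can be sketched as follows (a generic illustration, not the paper's exact implementation):

```python
import numpy as np

def glr_mean_jump(residuals, sigma):
    """GLR statistic for a persistent jump in the mean of i.i.d.
    N(0, sigma^2) residuals. Maximizes, over candidate onset k, the
    log-likelihood gain of a mean shift starting at k.
    Returns (max statistic, estimated onset index)."""
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    tail = np.cumsum(r[::-1])[::-1]          # tail[k] = sum(r[k:])
    m = n - np.arange(n)                     # number of samples after onset k
    g = tail ** 2 / (2.0 * sigma ** 2 * m)   # GLR value per candidate onset
    k = int(np.argmax(g))
    return float(g[k]), k
```

    A fault is declared when the statistic exceeds a threshold chosen for the desired false-alarm rate; the maximizing index also estimates the fault onset time.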

  11. Fault-tolerant clock synchronization in distributed systems

    Science.gov (United States)

    Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.

    1990-01-01

    Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
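    As an illustration of the convergence-averaging family, the classic fault-tolerant averaging step discards the f highest and f lowest remote clock readings before averaging. This is a generic sketch of that one step; the surveyed algorithms differ in how readings are gathered and corrections applied.

```python
def fault_tolerant_average(readings, f):
    """One resynchronization step of convergence-averaging: drop the f
    largest and f smallest clock readings, which may come from faulty
    (even Byzantine) clocks, and average the remainder."""
    if len(readings) <= 3 * f:
        raise ValueError("need n > 3f readings to tolerate f Byzantine faults")
    s = sorted(readings)
    core = s[f:len(s) - f] if f > 0 else s
    return sum(core) / len(core)
```

    Because the extreme values are discarded, a single arbitrarily wrong clock cannot pull the corrected value outside the range spanned by the non-faulty clocks.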

  12. Fault Detection/Isolation Verification,

    Science.gov (United States)

    1982-08-01

    (Report documentation page; OCR-garbled, only fragments legible.) To test the performance of the algorithm on these networks, several different fault scenarios were designed for each network.

  13. Active Fault Detection and Isolation for Hybrid Systems

    DEFF Research Database (Denmark)

    Gholami, Mehdi; Schiøler, Henrik; Bak, Thomas

    2009-01-01

    An algorithm for active fault detection and isolation is proposed. In order to observe failures hidden by the normal operation of the controllers or the systems, an optimization problem based on minimization of a test signal is used. The optimization-based method imposes the normal and faulty...... models predicted outputs such that their discrepancies are observable by a passive fault diagnosis technique. Isolation of different faults is done by implementing a bank of Extended Kalman Filters (EKF), where the convergence criterion for the EKF is confirmed by a Genetic Algorithm (GA). The method is applied...

  14. A fault diagnosis prototype for a bioreactor for bioinsecticide production

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1995-01-01

    The objective of this work is to develop an algorithm for fault diagnosis in a process of animal cell cultivation for bioinsecticide production. Generally, these are batch processes. Diagnosis for a batch process involves dividing the process evolution (time horizon) into partial processes, defined as pseudocontinuous blocks (PCBs). A PCB represents the evolution of the system in a time interval where its qualitative behavior is similar to a continuous one. Thus, each PCB into which the process is divided can be handled in a conventional way (like continuous processes). The process model for each PCB is a Signed Directed Graph (SDG). To achieve generality and to allow computational implementation, a modular approach was used in the synthesis of the bioreactor digraph. After that, the SDGs were used to carry out qualitative simulations of faults, the results of which are the fault patterns. A special fault symptom dictionary (SM) has been adopted as the database organization for fault pattern storage. An effective algorithm is presented for searching the fault patterns. The system studied, as a particular application, is a bioreactor for cell cultivation for bioinsecticide production. In this work, we concentrate on the SDG construction and on obtaining real fault patterns by the elimination of spurious patterns. The algorithm has proved effective, in both resolution and accuracy, in diagnosing different kinds of simulated faults

  15. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building-blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exists, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed

  16. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chen, Ting [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tan, Sirui [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lin, Youzuo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gao, Kai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-10

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of University of Nevada at Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  17. Identification and location of catenary insulator in complex background based on machine vision

    Science.gov (United States)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Precise insulator location is an important prerequisite for fault detection. Because current insulator-location algorithms for catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, since the insulator lies in a complex environment, SURF features are used to achieve coarse positioning of the target; then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine location. Finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.

  18. Qademah Fault Passive Data

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    OBJECTIVE: In this field trip we collect passive data to 1. Convert passive data to surface waves 2. Locate the Qademah fault using surface wave migration INTRODUCTION: In this field trip we collected passive data for several days. These data will be used to find the surface waves using interferometry and will then be compared to active-source seismic data collected at the same location. A total of 288 receivers are used in a 3D layout with 5 m inline and 10 m crossline intervals: 12 lines with 24 receivers each. You will need to download the file (rec_times.mat); it contains important information about 1. Field record no 2. Record day 3. Record month 4. Record hour 5. Record minute 6. Record second 7. Record length P.S. 1. All files are converted from the original format (SEG-2) to matlab format P.S. 2. Overlaps between records (10 to 1.5 sec.) are already removed from these files

  19. Partial discharge location technique for covered-conductor overhead distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Isa, M.

    2013-02-01

    In Finland, covered-conductor (CC) overhead lines are commonly used in medium voltage (MV) networks because the loads are widely distributed in the forested terrain. Such parts of the network are exposed to leaning trees which produce partial discharges (PDs) in CC lines. This thesis presents a technique to locate the PD source on CC overhead distribution line networks. The algorithm is developed and tested using a simulated study and experimental measurements. The Electromagnetic Transient Program-Alternative Transient Program (EMTP-ATP) is used to simulate and analyze a three-phase PD monitoring system, while MATLAB is used for post-processing of the high frequency signals which were measured. A Rogowski coil is used as the measuring sensor. A multi-end correlation-based technique for PD location is implemented using the theory of maximum correlation factor in order to find the time difference of arrival (TDOA) between signal arrivals at three synchronized measuring points. The three stages of signal analysis used are: (1) denoising by applying discrete wavelet transform (DWT); (2) extracting the PD features using the absolute or windowed standard deviation (STD) and; (3) locating the PD point. The advantage of this technique is the ability to locate the PD source without the need to know the first arrival time and the propagation velocity of the signals. In addition, the faulty section of the CC line between three measuring points can also be identified based on the degrees of correlation. An experimental analysis is performed to evaluate the PD measurement system performance for PD location on CC overhead lines. The measuring set-up is arranged in a high voltage (HV) laboratory. A multi-end measuring method is chosen as a technique to locate the PD source point on the line. A power transformer 110/20 kV was used to energize the AC voltage up to 11.5 kV/phase (20 kV system). The tests were designed to cover different conditions such as offline and online
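    The core of the multi-end correlation step, estimating the time difference of arrival (TDOA) between two synchronized sensor records from the cross-correlation peak, can be sketched as follows. This is a bare illustration; the thesis applies wavelet denoising and windowed-STD feature extraction before correlating, and combines three measuring points.

```python
import numpy as np

def tdoa_seconds(sig_a, sig_b, fs):
    """Time difference of arrival of sig_a relative to sig_b, taken from
    the lag that maximizes their cross-correlation. A positive result
    means the event reached sensor A later than sensor B."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs
```

    With TDOAs from pairs of synchronized sensors, the faulty section can be identified from which pair correlates best, without knowing the absolute first-arrival times.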

  20. Application of support vector machine based on pattern spectrum entropy in fault diagnostics of rolling element bearings

    International Nuclear Information System (INIS)

    Hao, Rujiang; Chu, Fulei; Peng, Zhike; Feng, Zhipeng

    2011-01-01

    This paper presents a novel pattern classification approach for the fault diagnostics of rolling element bearings, which combines morphological multi-scale analysis and 'one to others' support vector machine (SVM) classifiers. The morphological pattern spectrum describes the shape characteristics of the inspected signal based on the morphological opening operation with multi-scale structuring elements. The pattern spectrum entropy and the barycenter scale location of the spectrum curve are extracted as the feature vectors representing different faults of the bearing; they are more effective and representative than the kurtosis and the enveloping demodulation spectrum. The 'one to others' SVM algorithm is adopted to distinguish six kinds of fault signals measured in the experimental test rig under eight different working conditions. The recognition results of the SVM are ideal and more precise than those of the artificial neural network even though the training samples are few. The combination of the morphological pattern spectrum parameters and the 'one to others' multi-class SVM algorithm is suitable for the on-line automated fault diagnosis of rolling element bearings. This application is promising and well worth exploiting
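    The morphological pattern spectrum and its entropy can be sketched in 1-D with flat structuring elements of growing length. This is a simplified illustration of the general technique; the paper's exact multi-scale settings and the barycenter feature are omitted.

```python
import numpy as np

def _erode(x, k):
    p = np.pad(x, (k // 2, k - 1 - k // 2), mode="edge")
    return np.lib.stride_tricks.sliding_window_view(p, k).min(axis=1)

def _dilate(x, k):
    p = np.pad(x, (k // 2, k - 1 - k // 2), mode="edge")
    return np.lib.stride_tricks.sliding_window_view(p, k).max(axis=1)

def pattern_spectrum_entropy(x, max_scale):
    """Pattern spectrum: signal 'area' removed by morphological openings
    (erosion then dilation) with flat structuring elements of growing
    length; the entropy of the normalized spectrum summarizes the shape
    content of the signal across scales."""
    x = np.asarray(x, dtype=float)
    areas = [np.sum(_dilate(_erode(x, k), k)) for k in range(1, max_scale + 2)]
    spectrum = -np.diff(areas)          # area lost at each scale (>= 0)
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

    A signal dominated by one feature width has a spectrum concentrated at one scale (low entropy); impulsive fault signatures spread energy over several scales.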

  1. Actuator Fault Diagnosis in a Boeing 747 Model via Adaptive Modified Two-Stage Kalman Filter

    Directory of Open Access Journals (Sweden)

    Fikret Caliskan

    2014-01-01

    Full Text Available An adaptive modified two-stage linear Kalman filtering algorithm is utilized to identify the loss of control effectiveness and the magnitude of low-degree stuck faults in a closed-loop nonlinear B747 aircraft. Control effectiveness factors and stuck magnitudes are used to quantify faults entering control systems through actuators. Pseudorandom excitation inputs are used to help distinguish partial loss and stuck faults. The partial loss and stuck faults in the stabilizer are isolated and identified successfully.

  2. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), a simulation model for fault injection is developed in this work to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, and the fault distribution over locations is chosen randomly from a uniform probability distribution. Using this model, we predicted the reliability and the masking effect of an application software in a digital system, the Interposing Logic System (ILS) of a nuclear power plant, under four software operational profiles. The results show that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values according to the operational profile
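    The single bit-flip injection idea can be sketched at a purely software level. This toy stands in for the VHDL-level model; the example "program" and all names are illustrative, not from the study.

```python
import random

def flip_bit(word, bit):
    """Inject a single bit-flip fault into a stored word."""
    return word ^ (1 << bit)

def masking_rate(program, inputs, width=32, trials=2000, seed=0):
    """Fraction of uniformly injected single bit-flips whose effect is
    masked by the software, i.e. the program output is unchanged."""
    rng = random.Random(seed)
    masked = 0
    for _ in range(trials):
        x = rng.choice(inputs)
        bit = rng.randrange(width)
        masked += program(flip_bit(x, bit)) == program(x)
    return masked / trials
```

    For instance, a program that reads only the low bit of a 32-bit register masks flips in the other 31 bit positions, so its masking rate approaches 31/32; the rate clearly depends on which values the workload actually exercises, which is the operational-profile dependence the abstract reports.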

  3. Modularization of fault trees: a method to reduce the cost of analysis

    International Nuclear Information System (INIS)

    Chatterjee, P.

    1975-01-01

    The problem of analyzing large fault trees is considered. The concept of the finest modular representation of a fault tree is introduced and an algorithm is presented for finding this representation. The algorithm will also identify trees which cannot be modularized. Applications of such modularizations are discussed
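    A common formalization of this idea: a gate is a module when its basic events occur nowhere else in the tree, so its subtree can be solved once and replaced by a single pseudo-event. A minimal detector along those lines (an illustration, not the paper's algorithm) might look like:

```python
from collections import Counter

def find_modules(tree, root):
    """Return the gates of a fault tree whose basic events appear only
    inside their own subtree. `tree` maps each gate to its children;
    basic events have no entry in `tree`."""
    totals = Counter()

    def collect(node):
        for child in tree.get(node, []):
            if child in tree:
                collect(child)
            else:
                totals[child] += 1

    collect(root)
    modules = []

    def visit(node):
        seen = Counter()
        for child in tree.get(node, []):
            seen += visit(child) if child in tree else Counter([child])
        if all(totals[e] == seen[e] for e in seen):
            modules.append(node)   # all occurrences of its events are local
        return seen

    visit(root)
    return modules
```

    A tree in which every gate shares an event with some other gate has no module below the top gate, which corresponds to the paper's observation that some trees cannot be modularized.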

  4. Imaging Shear Strength Along Subduction Faults

    Science.gov (United States)

    Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.

    2017-11-01

    Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.

  6. Locative media

    CERN Document Server

    Wilken, Rowan

    2014-01-01

    Not only is locative media one of the fastest growing areas in digital technology, but questions of location and location-awareness are increasingly central to our contemporary engagements with online and mobile media, and indeed media and culture generally. This volume is a comprehensive account of the various location-based technologies, services, applications, and cultures, as media, with an aim to identify, inventory, explore, and critique their cultural, economic, political, social, and policy dimensions internationally. In particular, the collection is organized around the perception that the growth of locative media gives rise to a number of crucial questions concerning the areas of culture, economy, and policy.

  7. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jarb, Bongiorno and Kucera)parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults...... is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation....

  8. Methods for Fault Diagnosability Analysis of a Class of Affine Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Xiafu Peng

    2015-01-01

    Full Text Available The fault diagnosability analysis for a given model, performed before developing a diagnosis algorithm, can answer questions like "can the fault f_i be detected from the observed states?" and "can fault f_i be separated from fault f_j using the observed states?" If not, the sensor placement should be redesigned. This paper deals with the evaluation of detectability and separability for the diagnosability analysis of affine nonlinear systems. First, we use differential geometry theory to analyze the nonlinear system and propose new detectability and separability criteria. Second, the related matrix between the faults and outputs of the system and the fault separability matrix are designed for quantitative fault diagnosability and fault separability calculation, respectively. Finally, we illustrate how to analyze diagnosability on a concrete nonlinear system example, and the experimental results indicate the effectiveness of the fault evaluation methods.

  9. A new digital ground-fault protection system for generator-transformer unit

    Energy Technology Data Exchange (ETDEWEB)

    Zielichowski, Mieczyslaw; Szlezak, Tomasz [Institute of Electrical Power Engineering, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50370 Wroclaw (Poland)

    2007-08-15

    Ground faults are among the most frequent causes of damage to the stator windings of large generators. Under certain conditions, as a result of ground-fault protection system maloperation, ground faults develop into high-current faults, causing severe failures in the power system. Numerous publications in renowned journals attest to the importance of the ground-fault problem, and issues reported by plant operators confirm that some questions concerning the ground-fault protection of large generators have not yet been solved, or have been solved only insufficiently. In this paper a new concept of a digital ground-fault protection system for the stator winding of a large generator is proposed. The process of an intermittent-arc ground fault in the stator winding is briefly discussed and actual ground-fault voltage waveforms are presented. A new relaying algorithm based on third harmonic voltage measurement is also described, along with the methods of its implementation and testing. (author)
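    The measurement at the heart of such a relay, the third-harmonic component of a stator voltage, reduces to a single-bin DFT evaluated over a whole number of fundamental cycles. This is a generic sketch of that measurement, not the authors' relaying algorithm; sampling rates and amplitudes are illustrative.

```python
import numpy as np

def harmonic_magnitude(v, fs, f0, order=3):
    """Magnitude of the order-th harmonic of a waveform sampled at fs,
    using a single-bin DFT over an integer number of fundamental cycles
    (which keeps the fundamental exactly orthogonal to the bin)."""
    cycles = int(len(v) * f0 / fs)       # complete fundamental cycles
    n = int(round(cycles * fs / f0))     # samples spanning those cycles
    t = np.arange(n) / fs
    bin_ = np.exp(-2j * np.pi * order * f0 * t)
    return 2.0 * abs(np.dot(v[:n], bin_)) / n
```

    A relay would compare this third-harmonic magnitude (or its ratio between neutral and terminal ends) against a setting to detect ground faults near the star point, where fundamental-frequency schemes are blind.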

  10. Breaking the fault tree circular logic

    International Nuclear Information System (INIS)

    Lankin, M.

    2000-01-01

    The event tree - fault tree approach to modeling failures of nuclear plants, as well as of other complex facilities, is noticeably dominant now. This approach implies modeling an object in the form of a unidirectional logical graph - a tree, i.e. a graph without circular logic. However, real nuclear plants intrinsically contain quite a few logical loops (circular logic), especially where electrical systems are involved. This paper shows the incorrectness of the existing practice of breaking circular logic by eliminating some of the logical dependencies, and puts forward a formal algorithm which enables the analyst to correctly model, in the form of a fault tree, the failure of a complex object involving logical dependencies between systems and components. (author)

  11. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M; Haenninen, S [VTT Energy, Espoo (Finland); Seppaenen, M [North-Carelian Power Co (Finland); Antila, E; Markkila, E [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined, in order to find the actual fault point, with the data obtained from the fault indicators in the line branching points. As a third technique, in the absence of better fault location data, statistical information of line section fault frequencies can also be used. For combining the different fault location information, fuzzy logic is used. As a result, the probability weights for the fault being located in different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. Then the supply is restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load-carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter lists the practical experiences gained during the test use period. The benefits of this kind of automation are also assessed and future developments are outlined
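    The combination step, turning the three sources of evidence into probability weights per line section, can be sketched with a normalized product rule standing in for the fuzzy-logic combination actually used in the model. Section names and weight values are invented for illustration.

```python
def combine_evidence(*weight_sets):
    """Fuse per-section weights from independent fault-location methods
    (e.g. fault-current distance estimate, fault indicators, section
    fault statistics) by a product rule, then normalize the result
    into probability weights over the candidate line sections."""
    sections = weight_sets[0].keys()
    fused = {s: 1.0 for s in sections}
    for weights in weight_sets:
        for s in sections:
            fused[s] *= weights.get(s, 0.0)
    total = sum(fused.values())
    if total == 0.0:
        raise ValueError("evidence sources are contradictory")
    return {s: w / total for s, w in fused.items()}
```

    The section with the highest fused weight is then isolated by remote switch control before supply is restored to the healthy parts of the feeder.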

  12. Postglacial faulting and paleoseismicity in the Landsjaerv area, northern Sweden

    International Nuclear Information System (INIS)

    Lagerbaeck, R.

    1988-10-01

    Post-glacial fault scarps, up to about 20 m in height and forming a 50 km long fault set with a SSW-NNE orientation, occur in the Lansjaerv area in northern Sweden. By trenching across the fault scarps it has been possible to date fault movement relative to the Quaternary stratigraphy. It is concluded that the fault scarps generally developed as single-event movements shortly after the deglaciation about 9000 years ago. At one location there are indications that minor fault movements may have occurred earlier, during a previous glaciation, but this is uncertain. The fault scarps are, at least partially, developed in strongly fractured and chemically weathered zones of presumed pre-Quaternary age. Judging from the appearance of the bedrock fault scarps and the deformation of the Quaternary deposits, the faults are reverse and have dips between some 40-50° and the vertical. The faulting was co-seismic, and earthquakes in the order of M 6.5-7.0, or higher, are inferred from fault dimensions and the distribution of seismically triggered landslides in a wider region. Distortions in different types of sediment, interpreted as caused by the influence of seismic shock, occur frequently in the area. Examples of these are briefly described. (orig.)

  13. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    Science.gov (United States)

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip

  14. New active faults on Eurasian-Arabian collision zone: Tectonic activity of Özyurt and Gülsünler faults (Eastern Anatolian Plateau, Van-Turkey)

    Energy Technology Data Exchange (ETDEWEB)

    Dicle, S.; Üner, S.

    2017-11-01

    The Eastern Anatolian Plateau emerges from the continental collision between Arabian and Eurasian plates where intense seismicity related to the ongoing convergence characterizes the southern part of the plateau. Active deformation in this zone is shared by mainly thrust and strike-slip faults. The Özyurt thrust fault and the Gülsünler sinistral strike-slip fault are newly determined fault zones, located to the north of Van city centre. Different types of faults such as thrust, normal and strike-slip faults are observed on the quarry wall excavated in Quaternary lacustrine deposits at the intersection zone of these two faults. Kinematic analysis of fault-slip data has revealed coeval activities of transtensional and compressional structures for the Lake Van Basin. Seismological and geomorphological characteristics of these faults demonstrate the capability of devastating earthquakes for the area.

  16. Influence of mineralogy and microstructures on strain localization and fault zone architecture of the Alpine Fault, New Zealand

    Science.gov (United States)

    Ichiba, T.; Kaneki, S.; Hirono, T.; Oohashi, K.; Schuck, B.; Janssen, C.; Schleicher, A.; Toy, V.; Dresen, G.

    2017-12-01

    The Alpine Fault on New Zealand's South Island is an oblique, dextral strike-slip fault that accommodates the majority of the displacement between the Pacific and Australian Plates and presents the biggest seismic hazard in the region. Along its central segment, the hanging wall comprises greenschist- and amphibolite-facies Alpine Schists. Exhumation from 35 km depth along a SE-dipping detachment led to mylonitization, which was subsequently overprinted by brittle deformation and finally resulted in the fault's 1 km wide damage zone. The geomechanical behavior of a fault is affected by the internal structure of its fault zone; consequently, studying the processes controlling fault zone architecture allows assessing the seismic hazard of a fault. Here we present the results of a combined microstructural (SEM and TEM), mineralogical (XRD) and geochemical (XRF) investigation of outcrop samples originating from several locations along the Alpine Fault, the aim of which is to evaluate the influence of mineralogical composition, alteration and pre-existing fabric on strain localization and to identify the controls on fault zone architecture, particularly the locus of brittle deformation in P, T and t space. Field observations reveal that the fault's principal slip zone (PSZ) is either a thin (< 1 cm to < 7 cm) layered structure or a relatively thick (10s of cm) package lacking a detectable macroscopic fabric. Lithological and related rheological contrasts are widely assumed to govern strain localization. However, our preliminary results suggest that qualitative mineralogical composition has only a minor impact on fault zone architecture. Quantities of individual mineral phases differ markedly between fault damage zone and fault core at specific sites, but the quantitative composition of identical structural units, such as the fault core, is similar in all samples. This indicates that the degree of strain localization at the Alpine Fault might be controlled by small initial

  17. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
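
The polarization boost behind these algorithms can be illustrated numerically. As a hedged sketch (not taken from the paper), the ideal 3-spin compression step underlying PAC-style cooling boosts a target spin's polarization from eps to (3*eps - eps**3)/2; the loop below simply iterates that formula and ignores the reset-spin bookkeeping a real PAC implementation needs:

```python
def pac_step(eps):
    # One ideal 3-spin compression (majority-vote) step boosts the target
    # spin's polarization from eps to (3*eps - eps**3) / 2.
    return (3.0 * eps - eps**3) / 2.0

# starting from 1% polarization, iterate a few ideal compression steps
eps = 0.01
history = [eps]
for _ in range(6):
    eps = pac_step(eps)
    history.append(eps)
```

For small polarizations each ideal step multiplies the polarization by roughly 3/2, which is why the number of recursive levels, not the number of spins alone, governs the attainable cooling.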

  18. The relationship of near-surface active faulting to megathrust splay fault geometry in Prince William Sound, Alaska

    Science.gov (United States)

    Finn, S.; Liberty, L. M.; Haeussler, P. J.; Northrup, C.; Pratt, T. L.

    2010-12-01

    We interpret regionally extensive, active faults beneath Prince William Sound (PWS), Alaska, to be structurally linked to deeper megathrust splay faults, such as the one that ruptured in the 1964 M9.2 earthquake. Western PWS in particular is unique; the locations of active faulting offer insights into the transition from the previously subducted Yakutat slab to Pacific plate subduction at its southern terminus. Newly acquired high-resolution, marine seismic data show three seismic facies related to Holocene and older Quaternary to Tertiary strata. These sediments are cut by numerous high-angle normal faults in the hanging wall of the megathrust splay. Crustal-scale seismic reflection profiles show splay faults emerging from 20 km depth between the Yakutat block and North American crust and surfacing as the Hanning Bay and Patton Bay faults. A distinct boundary beneath Hinchinbrook Entrance coincides with a systematic fault-trend change from N30E in southwestern PWS to N70E in northeastern PWS. The fault-trend change underneath Hinchinbrook Entrance may occur gradually or abruptly, and there is evidence for similar deformation near the Montague Strait Entrance. Landward of surface expressions of the splay fault, we observe subsidence, faulting, and landslides that record deformation associated with the 1964 and older megathrust earthquakes. Surface exposures of Tertiary rocks throughout PWS, along with new apatite-helium dates, suggest long-term and regional uplift with localized, fault-controlled subsidence.

  19. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both the modeling and the fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.

  20. Location, location, location: Extracting location value from house prices

    OpenAIRE

    Kolbe, Jens; Schulz, Rainer; Wersing, Martin; Werwatz, Axel

    2012-01-01

    The price for a single-family house depends both on the characteristics of the building and on its location. We propose a novel semiparametric method to extract location values from house prices. After splitting house prices into building and land components, location values are estimated with adaptive weight smoothing. The adaptive estimator requires neither strong smoothness assumptions nor local symmetry. We apply the method to house transactions from Berlin, Germany. The estimated surface...

  1. Wireless system for location of permanent faults by short circuit current monitoring in electric power distribution network; Sistema wireless para localizacao de faltas permanentes atraves da monitoracao da corrente de curto-circuito em redes de distribuicao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Machado, A.G.; Correa, A.C.; Machado, R.N. das M.; Ferreira, A.M.D.; Pinto, J.A.C. [Instituto Federal de Educacao, Ciencia e Tecnologia do Para (IFPA), Belem, PA (Brazil)], E-mail: alcidesmachado000@yahoo.com.br; Barra Junior, W. [Universidade Federal do Para (UFPA), Belem, PA (Brazil). Inst. de Tecnologia. Faculdade de Engenharia Eletrica], E-mail: walbarra@ufpa.br

    2009-07-01

    This paper presents the development of an automatic system for locating permanent short-circuits in medium voltage (13.8 kV) electric power distribution feeders by indirect monitoring of the line current. When a permanent failure occurs, the developed system uses mobile telephony (GSM) text messages (SMS) to inform the power company operation center where the failure most likely took place. With this real-time information, the operation center can restore the network in a faster and more efficient way. (author)

  2. Communication-based fault handling scheme for ungrounded distribution systems

    International Nuclear Information System (INIS)

    Yang, X.; Lim, S.I.; Lee, S.J.; Choi, M.S.

    2006-01-01

    The requirement for high quality and highly reliable power supplies has been increasing as a result of increasing demand for power. At the time of a fault occurrence in a distribution system, some protection method would be dedicated to fault section isolation and service restoration. However, if there are many outage areas when the protection method is performed, it is an inconvenience to the customer. A conventional method to determine a fault section in ungrounded systems requires many successive outage invocations. This paper proposed an efficient fault section isolation method and service restoration method for single line-to-ground fault in an ungrounded distribution system that was faster than the conventional one using the information exchange between connected feeders. The proposed algorithm could be performed without any power supply interruption and could decrease the number of switching operations, so that customers would not experience outages very frequently. The method involved the use of an intelligent communication method and a sequential switching control scheme. The proposed algorithm was also applied in both a single-tie and multi-tie distribution system. This proposed algorithm has been verified through fault simulations in a simple model of ungrounded multi-tie distribution system. The method proposed in this paper was proven to offer more efficient fault identification and much less outage time than the conventional method. The proposed method could contribute to a system design since it is valid in multi-tie systems. 5 refs., 2 tabs., 8 figs

  3. Automatic Fault Characterization via Abnormality-Enhanced Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    2010-12-20

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
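
As a rough illustration of combining a classifier with abnormality information (a hypothetical sketch, not the paper's actual technique), the snippet below scores each monitored metric by its z-score relative to normal behavior, reports no fault when nothing is abnormal, and otherwise weights a nearest-centroid classifier by each metric's abnormality. The metric roles, centroids, and threshold are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)
# training data: three monitored metrics under normal behavior
normal = rng.normal(0.0, 1.0, size=(500, 3))
mu, sd = normal.mean(axis=0), normal.std(axis=0)
# invented fault-class centroids in metric space
centroids = {"mem_leak": np.array([0.0, 4.0, 0.0]),
             "cpu_hog": np.array([4.0, 0.0, 0.0])}

def classify(sample, threshold=3.0):
    z = np.abs((sample - mu) / sd)      # abnormality of each metric (z-score)
    if z.max() < threshold:             # nothing abnormal: report no fault
        return None
    w = z / z.sum()                     # weight metrics by their abnormality
    return min(centroids,
               key=lambda k: (w * (sample - centroids[k]) ** 2).sum())

label = classify(np.array([0.1, 4.2, -0.3]))
```

The abnormality gate is what keeps a naive classifier from labeling perfectly normal behavior with the nearest fault class, which mirrors the paper's motivation for abnormality-enhanced classification.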

  4. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding.

    Science.gov (United States)

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-07-06

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually distribute on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, which extends LLE by exploiting the fault class label information. The approach first extracts high-dimensional feature vectors from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then maps the complex mode space into a salient low-dimensional feature space by the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches.
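
For orientation, the sketch below implements plain (unsupervised) LLE in NumPy; the S-LLE variant described here additionally uses the fault class labels when choosing neighbors, which is not reproduced. The toy data and parameter choices are illustrative only:

```python
import numpy as np

def lle(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Plain locally linear embedding; S-LLE additionally uses class labels."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]    # k nearest neighbors
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # neighbors shifted to origin
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize (k > dim)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                   # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]                # skip the constant eigenvector

# toy data: a noisy 1-D curve embedded in 3-D
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 60)
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.standard_normal((60, 3))
Y = lle(X)
```

Each sample is reconstructed from its neighbors, and the low-dimensional coordinates are the bottom non-constant eigenvectors of (I-W)ᵀ(I-W), preserving those local reconstruction weights.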

  5. Fault tree graphics

    International Nuclear Information System (INIS)

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

  6. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    Science.gov (United States)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a non-linearized 3-DOF 1/4 vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches are an extended Kalman filter (EKF) with a concise algorithm, a strong tracking filter (STF) with robust tracking ability, and a cubature Kalman filter (CKF) with high numerical precision. We use the three filters (EKF, STF, and CKF) to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite environmental noise, although their FDI time delay and fault sensitivity differ; compared with EKF and STF, the CKF method performs best in FDI of sensor faults for the ECAS system.
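
The residual-based detection logic shared by observer methods of this kind can be shown on a much simpler toy problem. The sketch below uses a plain linear Kalman filter on a scalar constant-state model (not the paper's EKF/STF/CKF on the 3-DOF suspension model) and flags a sensor fault when the normalized innovation exceeds a threshold; the noise levels and injected bias are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, fault_at, bias = 120, 60, 2.0
z = 0.1 * rng.standard_normal(n)   # sensor measures a constant (zero) state
z[fault_at:] += bias               # additive sensor fault injected at k = 60

x, P = 0.0, 1.0                    # state estimate and its variance
Q, R = 1e-4, 0.1 ** 2              # process and measurement noise variances
alarms = []
for k in range(n):
    P += Q                         # predict (constant-state model)
    nu = z[k] - x                  # innovation (measurement residual)
    S = P + R                      # innovation variance
    if nu ** 2 / S > 25.0:         # ~5-sigma test on the normalized innovation
        alarms.append(k)
    K = P / S                      # Kalman gain
    x += K * nu
    P *= (1.0 - K)
```

Under fault-free operation the normalized innovation is approximately chi-square distributed, so the threshold directly trades false-alarm rate against detection delay, which is the sensitivity/delay trade-off compared across the three filters in the paper.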

  7. Incipient fault detection and identification in process systems using accelerating neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Muthusami, J.; Atiya, A.F.

    1994-01-01

    The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises because of one's desire to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive back propagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has significantly high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent. Thus, alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary

  8. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    Science.gov (United States)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  9. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  10. Hull properties in location problems

    DEFF Research Database (Denmark)

    Juel, Henrik; Love, Robert F.

    1983-01-01

    Some properties of the solution set for single and multifacility continuous location problems with lp distances are given. A set reduction algorithm is developed for problems in k-dimensional space having rectangular distances....
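
One well-known consequence of rectangular (l1) distances is that the unweighted single-facility problem separates by coordinate, so each coordinate of an optimal site is a median of the demand-point coordinates. The sketch below illustrates that property only; it is not the paper's set reduction algorithm:

```python
import numpy as np

def rectangular_cost(x, pts):
    # total l1 (rectangular) distance from candidate site x to all demand points
    return np.abs(pts - x).sum()

# demand points in the plane (k = 2)
pts = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [3.0, 2.0]])

# each coordinate of an optimal site is a median of that coordinate's values
site = np.median(pts, axis=0)
```

Because the l1 objective is a sum of independent one-dimensional terms, the solution set is the Cartesian product of the per-coordinate median intervals, which is the kind of hull-shaped solution set the paper studies.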

  11. Characterization of leaky faults

    International Nuclear Information System (INIS)

    Shan, Chao.

    1990-05-01

    Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs

  12. Solar system fault detection

    Science.gov (United States)

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
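
The idea of combining sensor outputs "in a pre-established manner" can be sketched as simple boolean predicates; the fault signatures and sensor names below are hypothetical, not taken from the patent:

```python
# Hypothetical fault signatures built from three sensor conditions.
def pump_fault(pump_on, flow_ok, delta_t_ok):
    # pump commanded on, yet no flow and no collector temperature rise
    return pump_on and not flow_ok and not delta_t_ok

def sensor_fault(flow_ok, delta_t_ok):
    # flow present but no temperature rise suggests a bad temperature sensor
    return flow_ok and not delta_t_ok

alarm = pump_fault(pump_on=True, flow_ok=False, delta_t_ok=False)
```

Each predetermined fault maps to one such combination, so adding a new detectable fault means adding a predicate rather than new hardware.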

  13. A Novel Fault Identification Using WAMS/PMU

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2012-05-01

    Full Text Available An important premise of novel adaptive backup protection based on wide-area information is identifying the fault in a real-time, on-line way. In this paper, principal component analysis is introduced into the field of fault detection to locate the fault precisely by means of the voltage and current phasor data from the PMUs. Extensive simulation experiments have shown that fault identification can be performed successfully by principal component analysis and calculation. Our research indicates that the variable with the biggest coefficient in the principal component usually corresponds to the fault. Under the influence of noise, the results remain accurate and reliable, so principal-component fault identification has strong anti-interference ability and high redundancy.
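
The observation that the variable with the biggest coefficient in the principal component usually corresponds to the fault can be reproduced on synthetic data. The sketch below (with an invented six-line measurement set, not the paper's PMU data) computes the leading principal component of simulated phasor-magnitude deviations and picks the variable with the largest loading:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_lines = 200, 6
# simulated deviations of phasor magnitudes on six monitored lines
X = 0.05 * rng.standard_normal((n_samples, n_lines))
X[:, 3] += rng.standard_normal(n_samples)   # line 3 carries the fault signature

Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (n_samples - 1)             # sample covariance
vals, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
pc1 = vecs[:, -1]                           # leading principal component
faulted = int(np.argmax(np.abs(pc1)))       # variable with the biggest coefficient
```

The faulted variable dominates the total variance, so the leading eigenvector aligns with it even when every channel carries measurement noise, which is the anti-interference property the abstract reports.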

  14. Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.

    Science.gov (United States)

    1995-06-01


  15. FAULT DETECTION AND LOCALIZATION IN MOTORCYCLES BASED ON THE CHAIN CODE OF PSEUDOSPECTRA AND ACOUSTIC SIGNALS

    Directory of Open Access Journals (Sweden)

    B. S. Anami

    2013-06-01

    Full Text Available Vehicles produce sound signals with varying temporal and spectral properties under different working conditions. These sounds are indicative of the condition of the engine. Fault diagnosis is a significantly difficult task in geographically remote places where expertise is scarce. Automated fault diagnosis can assist riders to assess the health condition of their vehicles. This paper presents a method for fault detection and location in motorcycles based on the chain code of the pseudospectra and Mel-frequency cepstral coefficient (MFCC features of acoustic signals. The work comprises two stages: fault detection and fault location. The fault detection stage uses the chain code of the pseudospectrum as a feature vector. If the motorcycle is identified as faulty, the MFCCs of the same sample are computed and used as features for fault location. Both stages employ dynamic time warping for the classification of faults. Five types of faults in motorcycles are considered in this work. Observed classification rates are over 90% for the fault detection stage and over 94% for the fault location stage. The work identifies other interesting applications in the development of acoustic fingerprints for fault diagnosis of machinery, tuning of musical instruments, medical diagnosis, etc.
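
The classification step common to both stages is dynamic time warping (DTW). The sketch below implements a basic DTW distance and nearest-template classification on toy 1-D sequences standing in for the chain-code or MFCC feature streams; the templates are invented for illustration:

```python
import numpy as np

def dtw(a, b):
    # classic O(len(a)*len(b)) dynamic time warping distance for 1-D sequences
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# toy per-class templates standing in for chain-code / MFCC feature sequences
templates = {"healthy": np.sin(np.linspace(0.0, 2.0 * np.pi, 40)),
             "faulty": np.sign(np.sin(np.linspace(0.0, 2.0 * np.pi, 40)))}
query = np.sin(np.linspace(0.0, 2.0 * np.pi, 55))  # time-stretched healthy signal
label = min(templates, key=lambda k: dtw(query, templates[k]))
```

DTW's elastic alignment is what makes it suitable here: engine sounds recorded at slightly different speeds still match their class template despite being stretched or compressed in time.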

  16. Development of a Fault Monitoring Technique for Wind Turbines Using a Hidden Markov Model.

    Science.gov (United States)

    Shin, Sung-Hwan; Kim, SangRyul; Seo, Yun-Ho

    2018-06-02

    Regular inspection for the maintenance of wind turbines is difficult because of their remote locations. For this reason, condition monitoring systems (CMSs) are typically installed to monitor their health condition. The purpose of this study is to propose a fault detection algorithm for the mechanical parts of a wind turbine. To this end, long-term vibration data were collected over two years by a CMS installed on a 3 MW wind turbine. The vibration distribution at a specific rotating speed of the main shaft is approximated by the Weibull distribution, and its cumulative distribution function is utilized for determining the threshold levels that indicate impending failure of mechanical parts. A hidden Markov model (HMM) is employed to build a statistical fault detection algorithm in the time domain, and a method for extracting the HMM input sequence from the threshold levels and the correlation between signals is also introduced. Finally, it was demonstrated that the proposed HMM algorithm achieved a greater than 95% detection success rate on the long-term signals.
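
The threshold construction can be sketched from the Weibull CDF alone: since F(x) = 1 - exp(-(x/scale)**shape), the level exceeded with probability 1 - p is x_p = scale * (-ln(1 - p))**(1/shape). The snippet below computes such an alarm level for hypothetical shape/scale parameters (not fitted to the paper's turbine data) and checks it against an empirical quantile:

```python
import numpy as np

def weibull_threshold(shape, scale, p=0.99):
    # inverse Weibull CDF: F(x) = 1 - exp(-(x/scale)**shape),
    # so the level exceeded with probability 1 - p is:
    return scale * (-np.log(1.0 - p)) ** (1.0 / shape)

# hypothetical vibration-amplitude model at one main-shaft speed bin
shape, scale = 2.0, 1.5
alarm_level = weibull_threshold(shape, scale, p=0.99)

# sanity check against the empirical 99% quantile of sampled data
rng = np.random.default_rng(3)
samples = scale * rng.weibull(shape, 100_000)
empirical = np.quantile(samples, 0.99)
```

A sequence of exceedances of such per-speed-bin thresholds is the kind of discrete observation sequence an HMM can then evaluate in the time domain.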

  17. Style of the surface deformation by the 1999 Chichi earthquake at the central segment of Chelungpu fault, Taiwan, with special reference to the presence of the main and subsidiary faults and their progressive deformation in the Tsauton area

    Science.gov (United States)

    Ota, Y.; Watanabe, M.; Suzuki, Y.; Yanagida, M.; Miyawaki, A.; Sawa, H.

    2007-11-01

    We describe the style of surface deformation in the 1999 Chichi earthquake along the central segment of the Chelungpu Fault. The study covers the area from Kung-fu village, north of the Han River, to the south of the Tsauton area. A characteristic style of the surface deformation is a scarp that is convex in profile and sinuous in plan view, due to the low-angle thrust fault. Two subparallel faults, the west-facing Tsauton West fault and the east-facing Tsauton East fault, limit the western and eastern margins of the Tsauton terraced area. The Tsauton West fault is the continuation of the main Chelungpu fault, and the Tsauton East fault is located about 2 km away. Both faults record larger amounts of vertical displacement on the older terraces. The 1999 surface rupture occurred exactly on the pre-existing fault scarps of the Tsauton West and East faults. Thus, repeated activity of these two faults during the Holocene, and possibly since the late Quaternary, is confirmed. The vertical offset of the Tsauton East fault is smaller, about 40-50% of that of the Tsauton West fault on the pre-existing scarp. This indicates that the Tsauton East fault is a subsidiary fault that moved together with the main fault but accommodated a smaller amount of displacement.

  18. Fault structure and kinematics of the Long Valley Caldera region, California, revealed by high-accuracy earthquake hypocenters and focal mechanism stress inversions

    Science.gov (United States)

    Prejean, Stephanie; Ellsworth, William L.; Zoback, Mark; Waldhauser, Felix

    2002-01-01

    We have determined high-resolution hypocenters for 45,000+ earthquakes that occurred between 1980 and 2000 in the Long Valley caldera area using a double-difference earthquake location algorithm and routinely determined arrival times. The locations reveal numerous discrete fault planes in the southern caldera and adjacent Sierra Nevada block (SNB). Intracaldera faults include a series of east/west-striking right-lateral strike-slip faults beneath the caldera's south moat and a series of more northerly striking strike-slip/normal faults beneath the caldera's resurgent dome. Seismicity in the SNB south of the caldera is confined to a crustal block bounded on the west by an east-dipping oblique normal fault and on the east by the Hilton Creek fault. Two NE-striking left-lateral strike-slip faults are responsible for most seismicity within this block. To understand better the stresses driving seismicity, we performed stress inversions using focal mechanisms with 50 or more first motions. This analysis reveals that the least principal stress direction systematically rotates across the studied region, from NE to SW in the caldera's south moat to WNW-ESE in Round Valley, 25 km to the SE. Because WNW-ESE extension is characteristic of the western boundary of the Basin and Range province, caldera area stresses appear to be locally perturbed. This stress perturbation does not seem to result from magma chamber inflation but may be related to the significant (~20 km) left step in the locus of extension along the Sierra Nevada/Basin and Range province boundary. This implies that regional-scale tectonic processes are driving seismic deformation in the Long Valley caldera.

  19. Library Locations

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Carnegie Library of Pittsburgh locations including address, coordinates, phone number, square footage, and standard operating hours.

  20. Fault detection using transmission tomography - Evaluation on the Experimental Platform of Tournemire

    International Nuclear Information System (INIS)

    Vi-Nhu-Ba, Elise

    2014-01-01

    Deep argillaceous formations have physical properties suited to radioactive waste disposal, but their permeability can be modified by the presence of fractured zones; detection of these faulted zones is thus of primary importance. Several experiments have been conducted by IRSN at the Experimental Platform of Tournemire, where faults with small vertical offsets in the deep argillaceous formation have been identified from the underground installations. Previous studies have shown the difficulty of detecting this fractured zone from surface acquisitions using reflection or refraction seismics, and also with electrical methods. We here propose a new seismic transmission acquisition geometry in which seismic sources are deployed at the surface and receivers are installed in the underground installations. To process these data, a new tomography algorithm has been developed in order to control the inversion parameters and to introduce a priori information. Several synthetic tests have been conducted to reliably analyze the results in terms of resolution and relevance of the final image. Applying the algorithm to the recently acquired data evidences, for the first time, a discontinuity of the seismic velocities in the limestones and argillites of the Tournemire Platform. This low-velocity anomaly is located just above the fracture zone visible from the underground installations, and its location is also consistent with observations from the surface. (author)
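
The kind of inversion described here (travel-time tomography with controllable parameters and a priori information) can be caricatured as damped least squares on a tiny straight-ray model; the geometry, slowness values, and damping weight below are invented for illustration and are not the thesis's algorithm:

```python
import numpy as np

# Straight-ray travel-time tomography sketch: t = G s, where s holds the
# slowness of each cell and G[i, j] is the length of ray i inside cell j.
# Damping toward an a priori slowness model s0 stabilizes the inversion.
n_cells = 4
s_true = np.array([0.5, 0.5, 0.8, 0.5])   # cell 2: high-slowness (low-velocity) anomaly
G = np.array([[1.0, 1.0, 0.0, 0.0],       # each row: path lengths of one ray
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.4, 0.0, 0.0, 1.4]])
t = G @ s_true                            # synthetic travel times (noise-free)

s0 = np.full(n_cells, 0.5)                # a priori slowness model
lam = 0.1                                 # damping weight toward the prior
A = np.vstack([G, lam * np.eye(n_cells)])
b = np.concatenate([t, lam * s0])
s_est, *_ = np.linalg.lstsq(A, b, rcond=None)

anomaly_cell = int(np.argmax(s_est - s0))
```

Stacking the damping rows under G minimizes ||G s - t||² + lam²||s - s0||², so the prior fills in whatever the ray geometry leaves poorly constrained, which is the role of a priori information in the full-scale algorithm.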