WorldWideScience

Sample records for automated fault extraction

  1. Automated fault extraction and classification using 3-D seismic data for the Ekofisk field development

    Energy Technology Data Exchange (ETDEWEB)

    Signer, C.; Nickel, M.; Randen, T.; Saeter, T.; Soenneland, H.H.

    1998-12-31

    Mapping of fractures is important for the prediction of fluid flow in many reservoir types. The fluid flow depends mainly on the efficiency of the reservoir seals. Improved spatial mapping of the open and closed fracture systems will allow a better prediction of the fluid flow pattern. The primary objective of this paper is to present fracture characterization at the reservoir scale combined with seismic facies mapping. The complexity of the giant Ekofisk field on the Norwegian continental shelf provides an ideal framework for testing the validity and the applicability of an automated seismic fault and fracture detection and mapping tool. The mapping of the faults can be based on seismic attribute grids, which means that attribute responses related to faults are extracted along key horizons interpreted in the reservoir interval. 3 refs., 3 figs.
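
    The following is a minimal sketch (not taken from the record above) of the general idea of sampling an attribute volume along an interpreted horizon; the array shapes, the random stand-in data and the 5 % percentile threshold are illustrative assumptions:

    ```python
    import numpy as np

    def sample_attribute_along_horizon(attribute_cube, horizon_idx):
        """Sample a 3-D seismic attribute cube along an interpreted horizon.

        attribute_cube : (n_inline, n_xline, n_samples) array, e.g. a coherence
                         or edge-detection attribute volume.
        horizon_idx    : (n_inline, n_xline) array of integer sample indices of
                         the interpreted horizon (vertical position per trace).

        Returns an (n_inline, n_xline) attribute grid; anomalies on this grid
        are candidate fault traces.
        """
        il, xl = np.indices(horizon_idx.shape)
        return attribute_cube[il, xl, horizon_idx]

    # Hypothetical usage with random stand-in data:
    cube = np.random.rand(100, 120, 500)                # attribute volume
    horizon = np.random.randint(200, 300, (100, 120))   # horizon picks (sample index)
    attr_grid = sample_attribute_along_horizon(cube, horizon)
    fault_candidates = attr_grid < np.percentile(attr_grid, 5)   # lowest 5 % of attribute values
    ```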

  2. Automated extraction of faults and porous reservoir bodies. Examples from the Vallhall Field

    Energy Technology Data Exchange (ETDEWEB)

    Barkved, Olav Inge; Whitman, Doug; Kunz, Tim

    1998-12-31

    The Norwegian Valhall field is located 250 km south-west of Stavanger. Production is primarily from the highly porous and fractured chalk of the Tor formation. Fractures evidently play a significant role in enhancing flow properties, as production rates are significantly higher than expected from matrix permeability alone. The fractures are primarily tectonically induced and related to faulting. Syn-depositional faulting is believed to be a controlling factor on the reservoir thickness variations observed across the field. Due to the low acoustic contrast and weak appearance of the highly porous chalk, direct evidence of faulting in well bore logs is limited. The seismic data quality in the most central area of the field is very poor due to Tertiary gas charging, but in the flank areas of the field the quality is excellent. 1 ref., 5 figs.

  3. Semi-automated fault system extraction and displacement analysis of an excavated oyster reef using high-resolution laser scanned data

    Science.gov (United States)

    Molnár, Gábor; Székely, Balázs; Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Exner, Ulrike; Pfeifer, Norbert

    2015-04-01

    In this contribution we present a semi-automated method for reconstructing the brittle deformation field of an excavated Miocene oyster reef in Stetten, Korneuburg Basin, Lower Austria. Oyster shells up to 80 cm in size were scattered in a shallow estuarine bay, forming a continuous and almost isochronous layer as a consequence of a catastrophic event in the Miocene. This shell bed was preserved by burial under several hundred meters of sandy to silty sediments. Later the layers were tilted westward and uplifted, and erosion almost exhumed them. An excavation revealed a 27 by 17 meter area of the oyster-covered layer. During the tectonic processes the sediment volume underwent brittle deformation. NW-SE-striking faults, mostly with normal components of a few centimeters, affected the oyster-covered volume, dissecting many shells as well as the surrounding matrix. Faults and their associated displacements can typically be traced along the site for several meters, and because fossil oysters are broken and their parts displaced by the faulting, along some faults these displacements can be followed in 3D. In order to quantify these varying displacements and to map the undulating fault traces, high-resolution scanning of the excavated and cleaned surface of the oyster bed was carried out using a terrestrial laser scanner. The resulting point clouds were co-georeferenced at mm accuracy and a 1 mm resolution 3D point cloud of the surface was created. As the faults are well represented in the point cloud, this enables us to measure the dislocations of the dissected shell parts along the fault lines. We used a semi-automatic method to quantify these dislocations. First we manually digitized the fault lines in 2D as an initial model. In the next step we estimated the vertical (i.e. perpendicular to the layer) component of the dislocation along these fault lines by comparing the elevations on the two sides of the faults with moving averaging windows. To estimate the strike

  4. Automated Water Extraction Index

    DEFF Research Database (Denmark)

    Feyisa, Gudina Legese; Meilby, Henrik; Fensholt, Rasmus;

    2014-01-01

    Classifying surface cover types and analyzing changes are among the most common applications of remote sensing. One of the most basic classification tasks is to distinguish water bodies from dry land surfaces. Landsat imagery is among the most widely used sources of data in remote sensing of water resources; and although several techniques of surface water extraction using Landsat data are described in the literature, their application is constrained by low accuracy in various situations. Besides, with the use of techniques such as single band thresholding and two-band indices, identifying an appropriate threshold yielding the highest possible accuracy is a challenging and time consuming task, as threshold values vary with location and time of image acquisition. The purpose of this study was therefore to devise an index that consistently improves water extraction accuracy in the presence...
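
    As an illustration of the band-math character of such an index, the sketch below uses the commonly cited no-shadow AWEI form; the coefficients and the zero threshold should be verified against Feyisa et al. (2014) before any real use:

    ```python
    def awei_nsh(green, nir, swir1, swir2):
        """Automated Water Extraction Index (no-shadow variant).

        Band-math form commonly cited for Feyisa et al. (2014); the exact
        coefficients should be checked against the paper before use.  Inputs
        are surface-reflectance bands as float arrays.
        """
        return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

    def water_mask(green, nir, swir1, swir2, threshold=0.0):
        # A fixed threshold near zero is the point of the index: it aims to make
        # a single, stable cutoff usable across scenes and acquisition dates.
        return awei_nsh(green, nir, swir1, swir2) > threshold
    ```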

  5. Software fault tree analysis of an automated control system device written in Ada

    OpenAIRE

    Winter, Mathias William.

    1995-01-01

    Software Fault Tree Analysis (SFTA) is a technique used to analyze software for faults that could lead to hazardous conditions in systems which contain software components. Previous thesis works have developed three Ada-based, semi-automated software analysis tools: the Automated Code Translation Tool (ACm), an Ada statement template generator; the Fault Tree Editor (Fm), a graphical fault tree editor; and the Fault Isolator (Fl), an automated software fault tree isolator. These previous works d...

  6. Automated Extraction of DNA from clothing

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hjort, Benjamin Benn; Nøhr Hansen, Thomas;

    2011-01-01

    Presence of PCR inhibitors in extracted DNA may interfere with the subsequent quantification and short tandem repeat (STR) reactions used in forensic genetic DNA typing. We have compared three automated DNA extraction methods based on magnetic beads with a manual method with the aim of reducing the amount of PCR inhibitors in the DNA extracts and increasing the proportion of reportable DNA profiles.

  7. Automated Extraction of DNA from clothing

    OpenAIRE

    Stangegaard, Michael; Hjort, Benjamin Benn; Nøhr Hansen, Thomas; Hansen, Anders Johannes; Morling, Niels

    2011-01-01

    Presence of PCR inhibitors in extracted DNA may interfere with the subsequent quantification and short tandem repeat (STR) reactions used in forensic genetic DNA typing. We have compared three automated DNA extraction methods based on magnetic beads with a manual method with the aim of reducing the amount of PCR inhibitors in the DNA extracts and increasing the proportion of reportable DNA profiles.

  8. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  9. Optimization-based Method for Automated Road Network Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, D

    2001-09-18

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  10. Fault tolerant strategies for automated operation of nuclear reactors

    International Nuclear Information System (INIS)

    This paper introduces an automatic control system incorporating a number of verification, validation, and command generation tasks within a fault-tolerant architecture. The integrated system utilizes recent methods of artificial intelligence such as neural networks and fuzzy logic control. Furthermore, advanced signal processing and nonlinear control methods are also included in the design. The primary goal is to create an on-line capability to validate signals, analyze plant performance, and verify the consistency of commands before control decisions are finalized. The application of this approach to the automated startup of the Experimental Breeder Reactor-II (EBR-II) is performed using a validated nonlinear model. The simulation results show that the advanced concepts have the potential to improve plant availability and safety.

  11. Acceleration of Automated HI Source Extraction

    Science.gov (United States)

    Badenhorst, S. J.; Blyth, S.; Kuttel, M. M.

    2013-10-01

    We aim to enable fast automated extraction of neutral hydrogen (HI) sources from large survey data sets. This requires both handling the large files (>5 TB) to be produced by next-generation interferometers and acceleration of the source extraction algorithm. We develop an efficient multithreaded implementation of the A'Trous wavelet reconstruction algorithm, which we evaluate against the serial implementation in the DUCHAMP package. We also evaluate three memory management libraries (Mmap, Boost and Stxxl) that enable processing of data files too large to fit into main memory, to establish which provides the best performance.

  12. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is usually identified by a minimum value. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective in automatic fault extraction and noise suppression. The application to field data further illustrates its validity and superiority. (paper)
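
    The record's modified ant-colony algorithm is not reproduced here; the sketch below only illustrates the underlying idea of tracking coherence minima under a direction-consistency constraint, with all parameters chosen arbitrarily:

    ```python
    import numpy as np

    def track_fault(coherence, seed, init_dir, max_steps=200, angle_tol_deg=30.0):
        """Greedy, direction-constrained tracking of a coherence minimum.

        A deliberately simplified stand-in for an ant-colony tracker: from a
        seed point it repeatedly steps to the lowest-coherence 8-neighbour whose
        step direction deviates from the current heading by less than
        angle_tol_deg, which enforces spatial continuity and rejects isolated noise.
        """
        path = [tuple(seed)]
        pos = np.array(seed, dtype=float)
        heading = np.array(init_dir, dtype=float)
        heading /= np.linalg.norm(heading)
        steps = [np.array(s) for s in
                 [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]]
        cos_tol = np.cos(np.radians(angle_tol_deg))
        for _ in range(max_steps):
            best, best_val = None, np.inf
            for s in steps:
                if np.dot(s / np.linalg.norm(s), heading) < cos_tol:
                    continue  # violates the direction-consistency constraint
                nxt = (pos + s).astype(int)
                if not (0 <= nxt[0] < coherence.shape[0] and 0 <= nxt[1] < coherence.shape[1]):
                    continue
                if coherence[nxt[0], nxt[1]] < best_val:
                    best, best_val = nxt, coherence[nxt[0], nxt[1]]
            if best is None:
                break
            heading = (best - pos) / np.linalg.norm(best - pos)
            pos = best.astype(float)
            path.append(tuple(best))
        return path
    ```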

  13. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    Science.gov (United States)

    Hanson, Matt

    1990-01-01

    The purpose of this project is to address the global fault detection, isolation and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  14. Automated valve condition classification of a reciprocating compressor with seeded faults: experimentation and validation of classification strategy

    Science.gov (United States)

    Lin, Yih-Hwang; Liu, Huai-Sheng; Wu, Chung-Yung

    2009-09-01

    This paper deals with automatic valve condition classification of a reciprocating compressor with seeded faults. The seeded faults are considered based on observation of valve faults in practice. They include the misplacement of valve and spring plates, incorrect tightness of the bolts for the valve cover or valve seat, softening of the spring plate, and cracked or broken spring plate or valve plate. The seeded faults represent various stages of machine health condition, and it is crucial to be able to correctly classify the conditions so that preventive maintenance can be performed before catastrophic breakdown of the compressor occurs. Considering the non-stationary characteristics of the system, time-frequency analysis techniques are applied to obtain the vibration spectrum as time develops. A data reduction algorithm is subsequently employed to extract the fault features from the formidable amount of time-frequency data, and finally a probabilistic neural network is utilized to automate the classification process without the intervention of human experts. This study shows that the use of modification indices, as opposed to the original indices, greatly reduces the classification error, from about 80% down to about 20% misclassification for the 15 fault cases. Correct condition classification can be further enhanced if the use of similar fault cases is avoided. It is shown that a 6.67% classification error is achievable when using the short-time Fourier transform and the mean variation method for the case of seven seeded faults with 10 training samples used. A stunning 100% correct classification can even be realized when the neural network is well trained with 30 training samples being used.

  15. Integrated Automation System for Rare Earth Countercurrent Extraction Process

    Institute of Scientific and Technical Information of China (English)

    柴天佑; 杨辉

    2004-01-01

    Lower automation levels in industrial rare-earth extraction processes result in high production cost, inconsistent product quality and great consumption of resources in China. An integrated automation system for the extraction process of rare earth is proposed to realize optimal product indices, such as product purity, recycle rate and output. The optimal control strategy for the output component, and the structure and function of the two-graded integrated automation system, composed of the process management grade and the process control grade, are discussed. This system has been successfully applied to a HAB yttrium extraction production process and was found to provide optimal control, optimal operation, optimal management and remarkable benefits.

  16. Fault feature extraction of rolling element bearings using sparse representation

    Science.gov (United States)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and periodic variation of the load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and the impulse-missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method. This leads to a high similarity between atoms and defect-induced impulses, and also a sharp reduction of the redundancy of the dictionary. To improve the matching accuracy and the calculation speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
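
    A simplified sketch of the dictionary-plus-matching-pursuit idea summarized above; the damped second-order atom is standard, but the segmentation strategy, the correlation-filtering identification of fn and zeta, and the stopping rule of the paper are not reproduced, and fn/zeta are simply taken as inputs here:

    ```python
    import numpy as np

    def impulse_atom(n, fs, fn, zeta):
        """Unit-norm impulse response of a damped second-order system."""
        t = np.arange(n) / fs
        wn = 2.0 * np.pi * fn
        wd = wn * np.sqrt(1.0 - zeta ** 2)
        atom = np.exp(-zeta * wn * t) * np.sin(wd * t)
        return atom / (np.linalg.norm(atom) + 1e-12)

    def matching_pursuit(segment, fs, fn, zeta, n_iter=20):
        """Greedy matching pursuit of one signal segment against time-shifted
        damped-impulse atoms; returns the sparse reconstruction."""
        n = len(segment)
        base = impulse_atom(n, fs, fn, zeta)
        residual = np.asarray(segment, dtype=float).copy()
        recon = np.zeros(n)
        for _ in range(n_iter):
            # inner product of the residual with the atom at every shift
            corr = np.correlate(residual, base, mode="full")[n - 1:]
            k = int(np.argmax(np.abs(corr)))
            atom_k = np.zeros(n)
            atom_k[k:] = base[: n - k]            # time-shifted, truncated atom
            coef = residual @ atom_k / (atom_k @ atom_k + 1e-12)
            residual -= coef * atom_k
            recon += coef * atom_k
        return recon
    ```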

  17. PCA Fault Feature Extraction in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2010-08-01

    Electric power systems are among the most complex artificial systems in the world. The complexity is determined by their characteristics of constitution, configuration, operation, organization, etc. Faults in electric power systems cannot be completely avoided. When an electric power system goes from the normal state to a failed or abnormal state, its electric quantities (current, voltage and angles, etc.) may change significantly. Our research indicates that the variable with the biggest coefficient in the principal component usually corresponds to the fault. Therefore, utilizing real-time measurements from phasor measurement units and based on principal component analysis, we have successfully extracted the distinct features of the fault component. Of course, because of the complexity of the different types of faults in electric power systems, there still exist many problems that need close and intensive study.

  18. Automated extraction of change information from multispectral satellite imagery

    International Nuclear Information System (INIS)

    Given the expected technical improvements in spatial and spectral resolution, satellite imagery could increasingly provide a basis for complex information systems for recognizing and monitoring even small-scale and short-term structural features of interest within nuclear facilities, for instance construction of buildings, plant expansion, changes of operational status, underground activities, etc. The analysis of large volumes of multisensor satellite data will then definitely require a high degree of automation in (pre-)processing, analysis and interpretation in order to extract the features of interest. Against this background, the present paper focuses on the automated extraction of change information from multispectral satellite imagery.

  19. Automated Generation of Fault Management Artifacts from a Simple System Model

    Science.gov (United States)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
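
    A toy sketch of the generation idea (querying a system representation and emitting FMEA rows); the dictionary-based model, component names and CSV columns are illustrative assumptions, not the SMAP SysML model or any JPL schema:

    ```python
    import csv

    # Components, their failure modes, and "affects" links form a tiny directed
    # graph; FMEA rows are produced by walking that graph.
    system = {
        "BatteryController": {
            "affects": ["PowerBus"],
            "failure_modes": ["undervoltage output", "stuck telemetry"],
        },
        "PowerBus": {
            "affects": ["StarTracker"],
            "failure_modes": ["bus short"],
        },
        "StarTracker": {"affects": [], "failure_modes": ["loss of attitude data"]},
    }

    def downstream_effects(component, model, seen=None):
        """Collect every component reachable through 'affects' links."""
        seen = set() if seen is None else seen
        for nxt in model[component]["affects"]:
            if nxt not in seen:
                seen.add(nxt)
                downstream_effects(nxt, model, seen)
        return sorted(seen)

    with open("fmea.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Component", "Failure mode", "Downstream effects"])
        for name, info in system.items():
            for mode in info["failure_modes"]:
                writer.writerow([name, mode, "; ".join(downstream_effects(name, system))])
    ```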

  20. Toward the automation of road networks extraction processes

    Science.gov (United States)

    Leymarie, Frederic; Boichis, Nicolas; Airault, Sylvain; Jamet, Olivier

    1996-12-01

    Syseca and IGN are working on various steps in the ongoing march from digital photogrammetry to the semi-automation and ultimately the full automation of data manipulation, i.e., capture and analysis. The immediate goals are to reduce production costs and data availability delays. Within this context, we have tackled the distinctive problem of 'automated road network extraction.' The methodology adopted is to first study semi-automatic solutions which probably increase the global efficiency of human operators in topographic data capture; in a second step, automatic solutions are designed based upon the experience gained. We report on different (semi-)automatic solutions for the road following algorithm. One key aspect of our method is to have the stages of 'detection' and 'geometric recovery' cooperate while remaining distinct. 'Detection' is based on a local (texture) analysis of the image, while 'geometric recovery' is concerned with the extraction of 'road objects' from both monocular and stereo information. 'Detection' is a low-level visual process, 'reasoning' directly at the level of image intensities, while the mid-level visual process, 'geometric recovery', uses contextual knowledge about roads, both generic, e.g. parallelism of borders, and specific, e.g. previously extracted road segments and disparities. We then pursue our 'march' by reporting on the steps we are exploring toward full automation. In particular, we have made attempts at tackling the automation of the initialization step, to start searching in a valid direction.

  1. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  2. Improving Access to Archival Collections with Automated Entity Extraction

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2015-07-01

    The complexity and diversity of archival resources make constructing rich metadata records time consuming and expensive, which in turn limits access to these valuable materials. However, significant automation of the metadata creation process would dramatically reduce the cost of providing access points, improve access to individual resources, and establish connections between resources that would otherwise remain unknown. Using a case study at Oregon Health & Science University as a lens to examine the conceptual and technical challenges associated with automated extraction of access points, we discuss using publicly accessible APIs to extract entities (i.e. people, places, concepts, etc.) from digital and digitized objects. We describe why Linked Open Data is not well suited for a use case such as ours. We conclude with recommendations about how this method can be used in archives as well as for other library applications.

  3. NEW METHOD FOR WEAK FAULT FEATURE EXTRACTION BASED ON SECOND GENERATION WAVELET TRANSFORM AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Duan Chendong; He Zhengjia; Jiang Hongkai

    2004-01-01

    A new time-domain analysis method that uses the second generation wavelet transform (SGWT) for weak fault feature extraction is proposed. To extract incipient fault features, a biorthogonal wavelet with impact characteristics is constructed using the SGWT. By processing the detail signal of the SGWT with a sliding window devised on the basis of the rotating operation cycle, and extracting the modulus maxima from each window, fault features in the time domain are highlighted. To further analyze the cause of the fault, a wavelet packet transform based on the SGWT is used to process the vibration data again. By calculating the energy of each frequency band, the energy distribution features of the signal are obtained. Then, taking account of the fault features and the energy distribution, the cause of the fault is determined. An early impact-rub fault caused by axis misalignment and rotor imbalance was successfully detected by using this method in an oil refinery.

  4. Feature evaluation and extraction based on neural network in analog circuit fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    Yuan Haiying; Chen Guangju; Xie Yongle

    2007-01-01

    Choosing the right characteristic parameters is the key to fault diagnosis in analog circuits. Feature evaluation and extraction methods based on neural networks are presented. Parameter evaluation of circuit features is realized from the training results of the neural network; its superior nonlinear mapping capability is well suited to extracting fault features, which are subsequently normalized and compressed. The complex classification problem of fault pattern recognition in analog circuits is effectively transferred into the feature processing stage by neural-network-based feature extraction, which improves the diagnosis efficiency. A fault diagnosis example validated this method.

  5. Critical Evaluation of Validation Rules Automated Extraction from Data

    Directory of Open Access Journals (Sweden)

    David Pejcoch

    2014-10-01

    The goal of this article is to critically evaluate the possibility of automatically extracting the kind of rules which could later be used within a Data Quality Management process for validation of records newly incoming to an Information System. For practical demonstration the 4FT-Miner procedure implemented in the LISpMiner System was chosen. A motivation for this task is the potential simplification of projects focused on Data Quality Management. Initially, this article critically evaluates the possibility of fully automated extraction with the aim of identifying strengths and weaknesses of this approach in comparison to its alternative, when at least some a priori knowledge is available. As a result of practical implementation, this article provides a design of the recommended process which could be used as a guideline for future projects. The question of how to store and maintain the extracted rules, and how to integrate them with existing tools supporting Data Quality Management, is also discussed.

  6. Arduino-based automation of a DNA extraction system.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies to detect infectious diseases with the molecular genetic method. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic bead, which is part of a portable molecular genetic test system. This DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing for the gene specimen, magnetic bead, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delay, and input-output manipulation. It drives the stepper motor with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard limit checking to prevent any over-travelling. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile. PMID:26409535
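
    A host-side sketch of the compile-and-stream idea described above, assuming a hypothetical G-code-like command set and a simple acknowledgement line per command; only the pyserial calls are standard, and nothing here reflects the authors' actual firmware protocol:

    ```python
    import time
    import serial  # pyserial

    # Hypothetical sequence: home, move a syringe actuator, dwell, return.
    SEQUENCE = [
        "G28",            # home all axes to establish the reference position
        "G1 X12.5 F300",  # move syringe actuator to mix reagent
        "G4 P5000",       # dwell 5 s for bead binding
        "G1 X0 F300",     # return actuator
    ]

    def run_sequence(port="/dev/ttyACM0", baud=115200):
        with serial.Serial(port, baud, timeout=2) as ser:
            time.sleep(2)                       # wait for the board to reset
            for cmd in SEQUENCE:
                ser.write((cmd + "\n").encode())
                reply = ser.readline().decode(errors="ignore").strip()
                print(cmd, "->", reply or "<no ack>")

    if __name__ == "__main__":
        run_sequence()
    ```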

  7. Arduino-based automation of a DNA extraction system.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies to detect infectious diseases with the molecular genetic method. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic bead, which is part of a portable molecular genetic test system. This DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing for the gene specimen, magnetic bead, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delay, and input-output manipulation. It drives the stepper motor with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard limit checking to prevent any over-travelling. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile.

  8. Automated feature extraction for 3-dimensional point clouds

    Science.gov (United States)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a LIDAR survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess the completeness and accuracy of the methodology.

  9. Evaluation of Four Automated Protocols for Extraction of DNA from FTA Cards

    OpenAIRE

    Stangegaard, Michael; Børsting, Claus; Ferrero-Miliani, Laura; Frank-Hansen, Rune; Poulsen, Lena; Hansen, Anders J.; Morling, Niels

    2013-01-01

    Extraction of DNA using magnetic bead-based techniques on automated DNA extraction instruments provides a fast, reliable, and reproducible method for DNA extraction from various matrices. Here, we have compared the yield and quality of DNA extracted from FTA cards using four automated extraction protocols on three different instruments. The extraction processes were repeated up to six times with the same pieces of FTA cards. The sample material on the FTA cards was either blood or buccal cell...

  10. Seismicity on Basement Faults Induced by Simultaneous Fluid Injection-Extraction

    Science.gov (United States)

    Chang, Kyung Won; Segall, Paul

    2016-08-01

    Large-scale carbon dioxide (CO2) injection into geological formations increases pore pressure, potentially inducing seismicity on critically stressed faults by reducing the effective normal stress. In addition, poroelastic expansion of the reservoir alters stresses, both within and around the formation, which may trigger earthquakes without direct pore-pressure diffusion. One possible solution to mitigate injection-induced earthquakes is to simultaneously extract pre-existing pore fluids from the target reservoir. To examine the feasibility of the injection-extraction strategy, we compute the spatiotemporal change in Coulomb stress on basement normal faults, including: (1) the change in poroelastic stresses, Δτ_s + fΔσ_n, where Δτ_s and Δσ_n are the changes in shear and normal stress, respectively, and (2) the change in pore pressure, fΔp. Using the seismicity rate model of Dieterich (J. Geophys. Res. Solid Earth 99(B2):2601-2618, 1994), we estimate the seismicity rate on basement fault zones. Fluid extraction reduces direct pore-pressure diffusion into conductive faults, generally reducing the risk of induced seismicity. Limited diffusion into or from sealing faults results in negligible pore-pressure changes within them. However, fluid extraction can cause enhanced seismicity rates on deep normal faults near the injector, as well as on shallow normal faults near the producer, through poroelastic stressing. Changes in seismicity rate driven by the poroelastic response to fluid injection-extraction depend on fault geometry, well operations, and the background stressing rate.
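
    A sketch of the two ingredients the record combines, under the usual sign conventions (pore-pressure increase destabilizing) and the Dieterich (1994) state-variable seismicity rate model; the parameter values, units and the simple Euler time stepping are illustrative assumptions:

    ```python
    import numpy as np

    def coulomb_stress_change(d_tau, d_sigma_n, d_p, friction=0.6):
        """Coulomb stress change d_tau + f*(d_sigma_n + d_p); positive values
        are destabilizing (tension and pore-pressure increase taken positive)."""
        return d_tau + friction * (d_sigma_n + d_p)

    def dieterich_rate(stressing_rate, background_rate, A_sigma, dt):
        """Relative seismicity rate from the Dieterich (1994) state-variable model.

        stressing_rate : per-step Coulomb stressing rate (background plus the
                         injection/extraction perturbation); the first entry is
                         taken as the background rate tau_dot_r.
        A_sigma        : constitutive parameter A times effective normal stress.
        """
        tau_dot_r = stressing_rate[0]
        gamma = 1.0 / tau_dot_r        # steady-state value of the state variable
        rate = np.empty(len(stressing_rate))
        for i, s_dot in enumerate(stressing_rate):
            gamma += (1.0 - gamma * s_dot) * dt / A_sigma   # d(gamma) = (dt - gamma dS)/(A sigma)
            rate[i] = background_rate / (gamma * tau_dot_r)
        return rate
    ```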

  11. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation of neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are not robust to image rotation, so the method was improved by adding HLAC features computed on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features using 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output clearly high values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960 in our study. The result can be used for the quantitative analysis of the blood vessels.

  12. The Rolling Bearing Fault Feature Extraction Based on the LMD and Envelope Demodulation

    Directory of Open Access Journals (Sweden)

    Jun Ma

    2015-01-01

    Since the working process of rolling bearings is a complex and nonstationary dynamic process, the common time and frequency characteristics of vibration signals are submerged in noise. Thus, extracting the fault feature from the vibration signal is the key to fault diagnosis. Therefore, a fault feature extraction method for rolling bearings based on local mean decomposition (LMD) and envelope demodulation is proposed. Firstly, the original vibration signal is decomposed by LMD to get a series of production functions (PFs). Then envelope demodulation analysis is applied to the PF components. Finally, a Fourier transform is performed on the demodulated signals and the failure condition is judged according to the dominant frequency of the spectrum. The results show that the proposed method can correctly extract the fault characteristics to diagnose faults.
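
    A minimal sketch of the envelope demodulation step applied to one PF component (the LMD decomposition itself is not shown); the sampling rate and the way the dominant frequency is read off are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def envelope_spectrum(pf, fs):
        """Envelope demodulation of one band-limited component (e.g. an LMD PF).

        Returns the one-sided envelope spectrum, whose dominant peak would be
        compared with the bearing fault characteristic frequencies.
        """
        envelope = np.abs(hilbert(pf))
        envelope -= envelope.mean()                        # drop the DC component
        spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
        return freqs, spectrum

    # freqs, spec = envelope_spectrum(pf_component, fs=12000)
    # fault_freq = freqs[np.argmax(spec[1:]) + 1]          # dominant envelope frequency
    ```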

  13. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Background: To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results: This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader – a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion: The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  14. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    Science.gov (United States)

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
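
    A sketch of a correlated kurtosis computation as it is commonly defined in the deconvolution literature; the exact normalization, and the way CK is embedded in this record's improved kurtogram, may differ:

    ```python
    import numpy as np

    def correlated_kurtosis(x, period, shifts=1):
        """Correlated kurtosis of signal x for a given fault period (in samples).

        Implements the commonly used definition
            CK_M(T) = sum_n ( prod_{m=0..M} x[n - m*T] )^2 / ( sum_n x[n]^2 )^(M+1),
        which rewards transients that repeat every T samples.
        """
        x = np.asarray(x, dtype=float)
        prod = x.copy()
        for m in range(1, shifts + 1):
            shifted = np.zeros_like(x)
            shifted[m * period:] = x[: len(x) - m * period]
            prod *= shifted
        return np.sum(prod ** 2) / (np.sum(x ** 2) ** (shifts + 1))

    # Typical use: among candidate frequency bands, keep the band whose filtered
    # signal maximizes CK at the expected fault period.
    ```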

  15. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    Directory of Open Access Journals (Sweden)

    Xianglong Chen

    2016-09-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.

  16. Faults

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  17. Automated Training Sample Extraction for Global Land Cover Mapping

    Directory of Open Access Journals (Sweden)

    Julien Radoux

    2014-05-01

    Land cover is one of the essential climate variables of the ESA Climate Change Initiative (CCI). In this context, the Land Cover CCI (LC CCI) project aims at building global land cover maps suitable for climate modeling based on Earth observation by satellite sensors. The challenge is to generate a set of successive maps that are both accurate and consistent over time. To do so, operational methods for the automated classification of optical images are investigated. The proposed approach consists of a locally trained classification using an automated selection of training samples from existing, but outdated, land cover information. Combinations of local extraction (based on spatial criteria) and self-cleaning of training samples (based on spectral criteria) are quantitatively assessed. Two large study areas, one in Eurasia and the other in South America, are considered. The proposed morphological cleaning of the training samples leads to higher accuracies than statistical outlier removal in the spectral domain. An optimal neighborhood has been identified for the local sample extraction. The results are coherent for the two test areas, showing an improvement of the overall accuracy compared with the original reference datasets and a significant reduction of macroscopic errors. More importantly, the proposed method partly controls the reliability of existing land cover maps as sources of training samples for supervised classification.
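
    An illustrative sketch of morphological cleaning of candidate training pixels for one class; the erosion radius and connectivity are arbitrary assumptions, and this is not the exact criterion used in the LC CCI processing chain:

    ```python
    from scipy import ndimage

    def clean_training_mask(class_mask, erosion_radius=2):
        """Morphological cleaning of candidate training pixels for one class.

        class_mask : boolean array marking pixels labelled with this class in an
                     outdated land cover map.  Eroding the mask keeps only pixels
                     well inside homogeneous patches, discarding boundary pixels
                     that are most likely to have changed or to be mislabelled.
        """
        structure = ndimage.generate_binary_structure(2, 1)   # 4-connectivity
        return ndimage.binary_erosion(class_mask, structure=structure,
                                      iterations=erosion_radius)
    ```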

  18. Fault Feature Extraction of Rolling Bearing Based on an Improved Cyclical Spectrum Density Method

    Institute of Scientific and Technical Information of China (English)

    LI Min; YANG Jianhong; WANG Xiaojing

    2015-01-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze the fault signals of rolling bearings. All modulation frequencies are demodulated in the cyclic frequency spectrum. Consequently, recognizing the bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of that cyclic frequency. When the kurtosis value is large, the modulation capability is strong. Thus, the kurtosis value is regarded as the weight coefficient to accumulate all cyclic frequencies to extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequencies in the fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments are performed on a simulation signal, the fault signal of a bearing outer race in a test bed, and a signal gathered from the bearing of a blast furnace belt cylinder. Experimental results show that CSDK is better than the resonance demodulation method and the CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.

  19. Fault feature extraction of rolling bearing based on an improved cyclical spectrum density method

    Science.gov (United States)

    Li, Min; Yang, Jianhong; Wang, Xiaojing

    2015-11-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze the fault signals of rolling bearing. All modulation frequencies are demodulated in the cyclic frequency spectrum. Consequently, recognizing bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of cyclic frequency. When the kurtosis value is large, the modulation capability is strong. Thus, the kurtosis value is regarded as the weight coefficient to accumulate all cyclic frequencies to extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequency in fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments are performed on the simulation signal, the fault signal of the bearing outer race in the test bed, and the signal gathered from the bearing of the blast furnace belt cylinder. Experimental results show that the CSDK is better than the resonance demodulation method and the CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.

  20. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    Science.gov (United States)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system, and to mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation, and recovery interactions with the automation software is presented and discussed as a forward work item.

  1. Evaluation of Four Automated Protocols for Extraction of DNA from FTA Cards

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Børsting, Claus; Ferrero-Miliani, Laura;

    2013-01-01

    Extraction of DNA using magnetic bead-based techniques on automated DNA extraction instruments provides a fast, reliable, and reproducible method for DNA extraction from various matrices. Here, we have compared the yield and quality of DNA extracted from FTA cards using four automated extraction protocols on three different instruments. The extraction processes were repeated up to six times with the same pieces of FTA cards. The sample material on the FTA cards was either blood or buccal cells. With the QIAamp DNA Investigator and QIAsymphony DNA Investigator kits, it was possible to extract DNA from the FTA cards in all six rounds of extractions in sufficient amount and quality to obtain complete short tandem repeat (STR) profiles on a QIAcube and a QIAsymphony SP. With the PrepFiler Express kit, almost all the extractable DNA was extracted in the first two rounds of extractions. Furthermore...

  2. Application of Waveform Factors in Extracting Fault Trend of Rotary Machines

    Institute of Scientific and Technical Information of China (English)

    YE Yu-gang; ZUO Yun-bo; HUANG Xiao-bin

    2009-01-01

    Vibration intensity and non-dimensional amplitude parameters are often used to extract the fault trend of rotary machines. However, they are parameters related to energy, and they cannot describe the fault trend under varying load and operating conditions or when the change of the vibration signal is too slight. For this reason, three non-dimensional parameters are presented, namely the waveform repeatability factor, the waveform jumping factor and the waveform similarity factor, jointly called waveform factors, which are based on statistical analysis of the waveform and are sensitive to changes of the signal waveform. When they are used to extract the fault trend of rotary machines as an instrumentation technique, they can reflect the fault trend better than vibration intensity, peak amplitude and the margin index.

  3. A Fault Feature Extraction Method for Rolling Bearing Based on Pulse Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Jinbao Yao

    2016-01-01

    The shock pulse method is a widely used technique for condition monitoring of rolling bearings. However, it may cause erroneous diagnosis in the presence of strong background noise or other shock sources. To overcome this shortcoming, a pulse adaptive time-frequency transform method is proposed to extract the fault features of a damaged rolling bearing. The method arranges the rolling bearing shock pulses extracted by the shock pulse method in time order and takes the reciprocal of the time interval between the pulse at any given moment and each other pulse as the instantaneous frequency components at that moment. It then visually displays the changing rule of each instantaneous frequency after plane transformation of the instantaneous frequency components, realizes the time-frequency transform of the shock pulse sequence through time-frequency-domain amplitude relevancy processing, and highlights the fault feature frequencies by effective instantaneous frequency extraction, so as to extract the fault features of the damaged rolling bearing. The results of simulation and application show that the proposed method can suppress noise well, highlight the fault feature frequencies, and avoid erroneous diagnosis, so it is an effective fault feature extraction method for rolling bearings with high time-frequency resolution.

  4. The Fault Feature Extraction of Rolling Bearing Based on EMD and Difference Spectrum of Singular Value

    Directory of Open Access Journals (Sweden)

    Te Han

    2016-01-01

    Nowadays, the fault diagnosis of rolling bearings in aeroengines is based on the vibration signal measured on the casing, instead of on the bearing block. However, the vibration signal of the bearing is often covered by a series of complex components caused by other structures (rotor, gears). Therefore, when a bearing fails, it is still not certain that the fault feature can be extracted from the vibration signal on the casing. In order to solve this problem, a novel fault feature extraction method for rolling bearings based on empirical mode decomposition (EMD) and the difference spectrum of singular values is proposed in this paper. Firstly, the vibration signal is decomposed by EMD. Next, the difference spectrum of singular values method is applied. The study finds that each peak on the difference spectrum corresponds to a component in the original signal. According to the peaks on the difference spectrum, the component signal of the bearing fault can be reconstructed. To validate the proposed method, bearing fault data collected on the casing are analyzed. The results indicate that the proposed rolling bearing diagnosis method can accurately extract the fault feature that is submerged in other component signals and noise.

  5. PCA Fault Feature Extraction in Complex Electric Power Systems

    OpenAIRE

    ZHANG, J.; Z. Wang; Zhang, Y.; J. MA

    2010-01-01

    Electric power system is one of the most complex artificial systems in the world. The complexity is determined by its characteristics about constitution, configuration, operation, organization, etc. The fault in electric power system cannot be completely avoided. When electric power system operates from normal state to failure or abnormal, its electric quantities (current, voltage and angles, etc.) may change significantly. Our researches indicate that the variable with the biggest coeffic...

  6. Envelope extraction based dimension reduction for independent component analysis in fault diagnosis of rolling element bearing

    Science.gov (United States)

    Guo, Yu; Na, Jing; Li, Bin; Fung, Rong-Fong

    2014-06-01

    A robust feature extraction scheme for rolling element bearing (REB) fault diagnosis is proposed by combining envelope extraction and independent component analysis (ICA). In the present approach, envelope extraction is utilized not only to obtain the impulsive components corresponding to the faults of the REB, but also to reduce the dimension of the vibration sources included in the sensor-picked signals. Consequently, the difficulty of applying the ICA algorithm when the number of sensors is limited and the number of sources is unknown can be successfully eliminated. Then, the ICA algorithm is employed to separate the envelopes according to the independence of the vibration sources. Finally, the vibration features related to the REB faults can be separated from disturbances and clearly exposed by the envelope spectrum. Simulations and experimental tests are conducted to validate the proposed method.
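
    A compact sketch of the envelope-then-ICA pipeline outlined above, using the Hilbert envelope and scikit-learn's FastICA; the number of sources and the channel layout are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.decomposition import FastICA

    def separate_envelope_sources(observations, n_sources=2):
        """Envelope extraction followed by ICA.

        observations : (n_sensors, n_samples) array of sensor-picked vibration
                       signals.  Taking the Hilbert envelope first reduces each
                       channel to its impulsive (fault-related) content before
                       the sources are separated by statistical independence.
        """
        envelopes = np.abs(hilbert(observations, axis=1))
        envelopes -= envelopes.mean(axis=1, keepdims=True)
        ica = FastICA(n_components=n_sources, random_state=0)
        sources = ica.fit_transform(envelopes.T).T      # (n_sources, n_samples)
        return sources
    ```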

  7. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    Science.gov (United States)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of the faulty components. The method uses the sample points at the moments when the abnormal sound appears as the starting position of each segment. In this study, the abnormal sound belonged to the shock-type fault category; thus, a starting-position search method based on gradient variance was adopted. A similarity coefficient between two equally sized signals is presented. By comparing against this similarity coefficient, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.

  8. An approach for automated fault diagnosis based on a fuzzy decision tree and boundary analysis of a reconstructed phase space.

    Science.gov (United States)

    Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan

    2014-03-01

    Although the reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new feature extraction method based on boundary analysis in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of only one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. PMID:24296116

  9. A Feature Extraction Method for Fault Classification of Rolling Bearing based on PCA

    Science.gov (United States)

    Wang, Fengtao; Sun, Jian; Yan, Dawen; Zhang, Shenghua; Cui, Liming; Xu, Yong

    2015-07-01

    This paper discusses fault feature selection using principal component analysis (PCA) for bearing fault classification. Multiple features selected from the time-frequency domain parameters of vibration signals are analyzed. First, time-domain statistical features such as root mean square and kurtosis are calculated; meanwhile, frequency-domain statistical features are extracted from the spectrum obtained by the Fourier and Hilbert transforms. Then PCA is used to reduce the dimension of the feature vectors drawn from the raw vibration signals, which improves the real-time performance and accuracy of the fault diagnosis. Finally, a fuzzy C-means (FCM) model is established to implement the diagnosis of rolling bearing faults. Practical rolling bearing experiment data are used to verify the effectiveness of the proposed method.
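
    A compact sketch of the feature pipeline on pre-segmented records, assuming NumPy/SciPy statistics and scikit-learn's PCA; the feature list is illustrative and the final fuzzy C-means step (which would need an extra package such as scikit-fuzzy) is omitted:

    import numpy as np
    from scipy.stats import kurtosis
    from scipy.signal import hilbert
    from sklearn.decomposition import PCA

    def feature_vector(x, fs):
        """Time- and frequency-domain statistics for one vibration record."""
        rms = np.sqrt(np.mean(x ** 2))
        crest = np.max(np.abs(x)) / rms
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        centroid = np.sum(freqs * spec) / np.sum(spec)          # spectral centroid
        env_spec = np.abs(np.fft.rfft(np.abs(hilbert(x))))      # Hilbert envelope spectrum
        env_peak = freqs[np.argmax(env_spec[1:]) + 1]           # dominant envelope frequency
        return [rms, kurtosis(x), crest, centroid, env_peak]

    fs = 12000
    records = [np.random.randn(4096) for _ in range(20)]        # stand-ins for measured segments
    X = np.array([feature_vector(r, fs) for r in records])

    X_reduced = PCA(n_components=2).fit_transform(X)            # reduced features for the classifier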

  10. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities to analyze and identify signals. Therefore, it would be significant to build an auditory model based on the mechanism of the human auditory system, which may improve mechanical signal analysis and enrich the methods of mechanical fault feature extraction. However, existing methods are all based on explicit mathematical or physical reasoning and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced for the first time. This auditory model transforms a time-domain signal into an auditory spectrum via band-pass filtering, nonlinear compression, and lateral inhibition, simulating the principle of the human auditory system. The EA model is developed with a Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner-hair-cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults, including misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.

  11. Automated Fault Diagnostics, Prognostics, and Recovery in Spacecraft Power Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault detection and isolation (FDI) in spacecraft's electrical power system (EPS) has always received special attention. However, the power systems health...

  12. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used for dimensionality reduction of the high-dimensional feature vector including the extracted texture features, since a high-dimensional feature vector can degrade classification performance, and this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
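
    A simplified sketch of the classification chain on synthetic data; the dominant neighborhood structure (DNS) map is not reproduced here (raw image pixels feed PCA instead), so only the signal-to-image conversion, dimensionality reduction, and one-against-all RBF SVM stages are illustrated:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    def signal_to_image(x, size=64):
        """Normalize a 1-D vibration signal to [0, 255] and reshape it into a gray image."""
        x = x[: size * size]
        img = (x - x.min()) / (np.ptp(x) + 1e-12) * 255.0
        return img.reshape(size, size)

    rng = np.random.default_rng(0)
    X, y = [], []
    for label in range(3):                          # three synthetic "fault" classes
        for _ in range(30):
            sig = np.sin(2 * np.pi * (50 + 40 * label) * np.linspace(0, 1, 4096))
            sig += 0.3 * rng.standard_normal(4096)
            X.append(signal_to_image(sig).ravel())
            y.append(label)

    X, y = np.array(X), np.array(y)
    X = PCA(n_components=20).fit_transform(X)       # reduce the high-dimensional feature vector
    clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
    print("training accuracy:", clf.score(X, y))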

  13. Automated extraction of DNA and PCR setup using a Tecan Freedom EVO® liquid handler

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Frøslev, Tobias G.; Frank-Hansen, Rune;

    2009-01-01

    We have implemented and validated automated methods for DNA extraction and PCR setup developed for a Tecan Freedom EVO® liquid handler mounted with a Te-MagS™ magnetic separation device. The DNA was extracted using the Qiagen MagAttract® DNA Mini M48 kit. The DNA was amplified using Amp...

  14. Manifold Learning with Self-Organizing Mapping for Feature Extraction of Nonlinear Faults in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Lin Liang

    2015-01-01

    Full Text Available A new method for automatically extracting low-dimensional features with a self-organizing mapping manifold is proposed for the detection of nonlinear faults in rotating machinery (such as rubbing and pedestal looseness). Under the phase space reconstructed from a single vibration signal, self-organizing mapping (SOM) with an expectation-maximization iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment algorithm is adopted to compress the high-dimensional phase space into a low-dimensional feature space. The proposed method takes advantage of manifold learning for low-dimensional feature extraction and of the adaptive neighborhood construction of SOM, and can extract intrinsic fault features of interest in a two-dimensional projection space. To evaluate the performance of the proposed method, the Lorenz system was simulated and data from rotating machinery with nonlinear faults were obtained for test purposes. Compared with holospectrum approaches, the results reveal that the proposed method is superior in identifying faults and effective for rotating machinery condition monitoring.

  15. Feature extraction of induction motor stator fault based on particle swarm optimization and wavelet packet

    Institute of Scientific and Technical Information of China (English)

    WANG Pan-pan; SHI Li-ping; HU Yong-jun; MIAO Chang-xin

    2012-01-01

    To effectively extract the interturn short circuit fault features of an induction motor from the stator current signal, a novel feature extraction method based on the bare-bones particle swarm optimization (BBPSO) algorithm and the wavelet packet was proposed. First, according to the maximum inner product between the current signal and the cosine basis functions, the method precisely estimates the waveform parameters of the fundamental component using the powerful global search capability of BBPSO, so that the fundamental component can be eliminated without affecting the other harmonic components. Then, the harmonic components of the residual current signal are decomposed into a series of frequency bands by the wavelet packet to extract the interturn short circuit fault features of the induction motor. Finally, the results of simulations and laboratory tests demonstrated the effectiveness of the proposed method.
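
    A brief sketch of the wavelet-packet stage, assuming the fundamental component has already been removed (the BBPSO estimation of the fundamental is not reproduced); the wavelet name and decomposition depth are illustrative:

    import numpy as np
    import pywt

    fs = 10000
    t = np.arange(0, 1, 1 / fs)
    # stand-in for the residual stator current after removing the fundamental component
    residual = 0.1 * np.sin(2 * np.pi * 150 * t) + 0.02 * np.random.randn(t.size)

    # decompose the residual into frequency bands with a 3-level wavelet packet
    wp = pywt.WaveletPacket(residual, wavelet="db4", mode="symmetric", maxlevel=3)
    nodes = wp.get_level(3, order="freq")            # 8 bands ordered by frequency

    # relative band energies serve as interturn short circuit fault features
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    features = energies / energies.sum()
    print(np.round(features, 3))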

  16. RFI detection by automated feature extraction and statistical analysis

    OpenAIRE

    Winkel, Benjamin; Kerp, Juergen; Stanko, Stephan

    2006-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA-technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorit...

  17. Automation System in Rare Earths Countercurrent Extraction Processes

    Institute of Scientific and Technical Information of China (English)

    贾江涛; 严纯华; 廖春生; 吴声; 王明文; 李标国

    2001-01-01

    Based on countercurrent extraction theory for the optimized design and simulation of rare earth separation processes, the selection of detection points (stages) and on-line analysis of elements, the simulation of the open-loop response and its response speed, and the diagnosis and regulative prescription for running solvent extraction cascades were studied.

  18. Automated extraction of DNA from biological stains on fabric from crime cases. A comparison of a manual and three automated methods

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hjort, Benjamin B; Hansen, Thomas N;

    2013-01-01

    that may be co-extracted with the DNA. Using 120 forensic trace evidence samples consisting of various types of fabric, we compared three automated DNA extraction methods based on magnetic beads (PrepFiler Express Forensic DNA Extraction Kit on an AutoMate Express, QIAsymphony DNA Investigator kit either......The presence of PCR inhibitors in extracted DNA may interfere with the subsequent quantification and short tandem repeat (STR) reactions used in forensic genetic DNA typing. DNA extraction from fabric for forensic genetic purposes may be challenging due to the occasional presence of PCR inhibitors...

  19. Weak fault feature extraction of rolling bearing based on cyclic Wiener filter and envelope spectrum

    Science.gov (United States)

    Ming, Yang; Chen, Jin; Dong, Guangming

    2011-07-01

    In vibration analysis, weak fault feature extraction under strong background noise is of great importance. A method based on the cyclic Wiener filter and envelope spectrum analysis is proposed. The cyclic Wiener filter exploits the spectral coherence theory induced by second-order cyclostationary signals. The original signal is duplicated and shifted in the frequency domain by amounts corresponding to the cyclic frequencies. The noise component is optimally filtered by a filter bank. The filtered signal is then analyzed by computing its envelope spectrum, in which the characteristic frequencies are quite clear. The most impulsive part is then effectively extracted for further fault diagnosis. The effectiveness of the method is demonstrated on both a simulated signal and actual data from a rolling bearing accelerated life test.
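
    The envelope-spectrum step can be sketched as follows; the cyclic Wiener filter bank itself is not reproduced (a plain band-pass filter stands in for it), and all frequencies are illustrative assumptions:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 20000
    t = np.arange(0, 1, 1 / fs)
    impacts = (np.sin(2 * np.pi * 107 * t) > 0.995).astype(float)    # ~107 Hz fault repetition
    ringing = np.exp(-1000 * t[:100])
    x = np.sin(2 * np.pi * 3500 * t) * np.convolve(impacts, ringing)[:t.size]
    x += 0.2 * np.random.randn(t.size)                               # strong background noise

    # stand-in for the cyclic Wiener filter: band-pass around the excited resonance
    b, a = butter(4, [3000, 4000], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)

    # envelope spectrum: the characteristic fault frequency appears as a clear line
    env = np.abs(hilbert(xf))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    print("dominant envelope frequency [Hz]:", freqs[np.argmax(spec[1:]) + 1])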

  20. Automated rapid finite fault inversion for megathrust earthquakes: Application to the Maule (2010), Iquique (2014) and Illapel (2015) great earthquakes

    Science.gov (United States)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2016-04-01

    Rapid estimation of the spatial and temporal rupture characteristics of large megathrust earthquakes by finite fault inversion is important for disaster mitigation. For example, estimates of the spatio-temporal evolution of rupture can be used to evaluate population exposure to tsunami waves and ground shaking soon after the event by providing more accurate predictions than are possible with point source approximations. In addition, rapid inversion results can reveal seismic source complexity to guide additional, more detailed subsequent studies. This work develops a method to rapidly estimate the slip distribution of megathrust events while reducing subjective parameter choices by automation. The method is simple yet robust, and we show that it provides excellent preliminary rupture models as soon as 30 minutes after the event for three great earthquakes in the South American subduction zone. This timing may change slightly for other regions depending on seismic station coverage, but the method can be applied to any subduction region. The inversion is based on W-phase data, since these are rapidly and widely available and of low amplitude, which avoids clipping at close stations for large events. In addition, prior knowledge of the slab geometry (e.g. SLAB 1.0) is applied, and rapid W-phase point source information (time delay and centroid location) is used to constrain the fault geometry and extent. Since the linearization by the multiple time window (MTW) parametrization requires regularization, objective smoothing is achieved by the discrepancy principle in two fully automated steps. First, the residuals are estimated assuming unknown noise levels, and second, a subsequent solution is sought which fits the data to the noise level. The MTW scheme is applied with positivity constraints and a solution is obtained by an efficient non-negative least squares solver. Systematic application of the algorithm to the Maule (2010), Iquique (2014) and Illapel (2015) events illustrates that rapid finite fault inversion with
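
    As a toy illustration of the final step only (not the authors' W-phase implementation), a multiple-time-window style linear system with positivity constraints can be solved with a non-negative least-squares routine; the Green's function matrix is random and the smoothing weight is an assumed value rather than one chosen by the discrepancy principle:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_data, n_param = 120, 40                  # waveform samples vs. subfault/time-window slip unknowns
    G = rng.standard_normal((n_data, n_param)) # stand-in for W-phase Green's functions
    true_slip = np.zeros(n_param)
    true_slip[10:15] = [1, 2, 3, 2, 1]
    d = G @ true_slip + 0.05 * rng.standard_normal(n_data)

    # Tikhonov-style roughness penalty appended to the system; positivity enforced by NNLS
    lam = 0.5                                  # would be set by the discrepancy principle in practice
    L = np.eye(n_param) - np.eye(n_param, k=1) # first-difference smoothing operator
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(n_param)])

    slip, residual_norm = nnls(A, b)
    print("recovered non-zero slip windows:", np.nonzero(slip > 0.1)[0])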

  1. Acoustic diagnosis of mechanical fault feature based on reference signal frequency domain semi-blind extraction

    OpenAIRE

    Zeguang YI; Pan, Nan; Liu, Feng

    2015-01-01

    Aiming at the fault diagnosis problems caused by complex machinery parts, serious background noise, and the limited applicability of traditional blind signal processing algorithms to mechanical acoustic signal processing, an acoustic fault diagnosis method based on reference-signal frequency-domain semi-blind extraction is proposed. Key technologies are introduced: based on the frequency-domain blind deconvolution algorithm, the artificial fish swarm algorithm, which is good for global optimization, is ...

  2. Extracting invariable fault features of rotating machines with multi-ICA networks

    Institute of Scientific and Technical Information of China (English)

    焦卫东; 杨世锡; 吴昭同

    2003-01-01

    This paper proposes novel multi-layer neural networks based on Independent Component Analysis for feature extraction of fault modes. By the use of ICA, invariable features embedded in multi-channel vibration measurements under different operating conditions (rotating speed and/or load) can be captured together. Thus, stable MLP classifiers insensitive to the variation of operating conditions are constructed. The successful results achieved by selected experiments indicate the great potential of ICA in health condition monitoring of rotating machines.

  3. Manifold Learning with Self-Organizing Mapping for Feature Extraction of Nonlinear Faults in Rotating Machinery

    OpenAIRE

    Lin Liang; Fei Liu; Maolin Li; Guanghua Xu

    2015-01-01

    A new method for extracting the low-dimensional feature automatically with self-organization mapping manifold is proposed for the detection of rotating mechanical nonlinear faults (such as rubbing, pedestal looseness). Under the phase space reconstructed by single vibration signal, the self-organization mapping (SOM) with expectation maximization iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment ...

  4. Fully Automated Electro Membrane Extraction Autosampler for LC-MS Systems Allowing Soft Extractions for High-Throughput Applications.

    Science.gov (United States)

    Fuchs, David; Pedersen-Bjergaard, Stig; Jensen, Henrik; Rand, Kasper D; Honoré Hansen, Steen; Petersen, Nickolaj Jacob

    2016-07-01

    The current work describes the implementation of electro membrane extraction (EME) into an autosampler for high-throughput analysis of samples by EME-LC-MS. The extraction probe was built into a luer lock adapter connected to an HTC PAL autosampler syringe. As the autosampler drew sample solution, analytes were extracted into the lumen of the extraction probe and transferred to a LC-MS system for further analysis. Various parameters affecting extraction efficacy were investigated, including syringe fill strokes, syringe pull-up volume, pull-up delay and volume in the sample vial. The system was optimized for soft extraction of analytes and high sample throughput. Further, it was demonstrated that by flushing the EME syringe with acidic wash buffer and reversing the applied electric potential, carry-over between samples can be reduced to below 1%. The performance of the system was characterized in terms of repeatability (RSD), and owing to the high extraction speed of EME, a complete analytical workflow of purification, separation, and analysis of a sample could be achieved within only 5.5 min. With the developed system, large sequences of samples could be analyzed in a completely automated manner. This high degree of automation makes the developed EME autosampler a powerful tool for a wide range of applications where high-throughput extractions are required before sample analysis. PMID:27237618

  5. Automated microfluidic DNA/RNA extraction with both disposable and reusable components

    International Nuclear Information System (INIS)

    An automated microfluidic nucleic acid extraction system was fabricated with a multilayer polydimethylsiloxane (PDMS) structure that consists of sample wells, microvalves, a micropump and a disposable microfluidic silica cartridge. Both the microvalve and micropump structures were fabricated in a single layer and are operated pneumatically using a 100 µm PDMS membrane. To fabricate the disposable microfluidic silica cartridge, two-cavity structures were made in a PDMS replica to fit the stacked silica membranes. A handheld controller for the microvalves and pumps was developed to enable system automation. With purified ribonucleic acid (RNA), whole blood and E. coli samples, the automated microfluidic nucleic acid extraction system was validated with a guanidine-based solid phase extraction procedure. An extraction efficiency of ∼90% for deoxyribonucleic acid (DNA) and ∼54% for RNA was obtained in 12 min from whole blood and E. coli samples, respectively. In addition, the same quantity and quality of extracted DNA was confirmed by polymerase chain reaction (PCR) amplification. The PCR also presented the appropriate amplification and melting profiles. Automated, programmable fluid control and physical separation of the reusable components from the disposable components significantly decrease the assay time and manufacturing cost and increase the flexibility and compatibility of the system with downstream components

  6. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  7. Natural Environment Modeling and Fault-Diagnosis for Automated Agricultural Vehicle

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2008-01-01

    This paper presents results for an automatic navigation system for agricultural vehicles. The system uses stereo-vision, inertial sensors and GPS. Special emphasis has been placed on modeling the natural environment in conjunction with a fault-tolerant navigation system. The results are exemplified...

  8. Fault feature extraction and enhancement of rolling element bearing in varying speed condition

    Science.gov (United States)

    Ming, A. B.; Zhang, W.; Qin, Z. Y.; Chu, F. L.

    2016-08-01

    In engineering applications, the variability of the load usually varies the shaft speed, which further degrades the efficacy of diagnostic methods based on the hypothesis of constant-speed analysis. Therefore, the investigation of diagnostic methods suitable for the varying speed condition is significant for bearing fault diagnosis. In this context, a novel fault feature extraction and enhancement procedure was proposed in this paper by combining iterative envelope analysis with a low-pass filtering operation. At first, based on the analytical model of the collected vibration signal, the envelope signal was theoretically calculated and the iterative envelope analysis was improved for the varying speed condition. Then, a feature enhancement procedure was performed by applying a low-pass filter to the temporal envelope obtained by the iterative envelope analysis. Finally, the temporal envelope signal was transformed to the angular domain by computed order tracking and the fault feature was extracted from the squared envelope spectrum. Simulations and experiments were used to validate the efficacy of the theoretical analysis and the proposed procedure. It is shown that the computed order tracking method is recommended to be applied to the envelope of the signal in order to avoid energy spreading and amplitude distortion. Compared with the feature enhancement method based on the fast kurtogram and the corresponding optimal band-pass filtering, the proposed method can efficiently extract the fault signature under the varying speed condition with less amplitude attenuation. Furthermore, since it does not involve center frequency estimation, the proposed method is more concise for engineering applications.
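
    A condensed sketch of the enhancement chain under a linear speed ramp, using SciPy only; the iterative envelope analysis is approximated by a plain Hilbert envelope, and the fault order, filter settings and ramp are illustrative assumptions:

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 20000
    t = np.arange(0, 2, 1 / fs)
    f_shaft = 10 + 5 * t                            # shaft speed ramps from 10 Hz to 20 Hz
    phase = 2 * np.pi * np.cumsum(f_shaft) / fs     # shaft angle [rad]
    # resonance carrier amplitude-modulated at the bearing fault order (5.4 x shaft speed)
    x = np.sin(2 * np.pi * 3000 * t) * (1 + 0.8 * np.cos(5.4 * phase))
    x += 0.2 * np.random.randn(t.size)

    env = np.abs(hilbert(x))                        # stand-in for the iterative envelope analysis
    b, a = butter(4, 200, btype="low", fs=fs)
    env_lp = filtfilt(b, a, env)                    # low-pass feature enhancement

    # computed order tracking: resample the enhanced envelope at uniform shaft-angle increments
    rev = phase / (2 * np.pi)                       # shaft revolutions
    uniform_rev = np.linspace(rev[0], rev[-1], rev.size)
    env_angular = np.interp(uniform_rev, rev, env_lp)

    order_spec = np.abs(np.fft.rfft(env_angular - env_angular.mean()))
    orders = np.fft.rfftfreq(env_angular.size, d=uniform_rev[1] - uniform_rev[0])
    print("dominant order:", round(float(orders[np.argmax(order_spec[1:]) + 1]), 2))  # close to 5.4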

  9. Towards Automated Education Demand-Offer Information Monitoring: the Information Extraction

    OpenAIRE

    Rudzājs, P

    2012-01-01

    A dynamically changing work environment in the knowledge economy causes changes in the knowledge requirements for labor. Therefore it becomes more and more important to be constantly aware of what education is currently demanded and what education is currently offered. An IT solution is vital to process various information sources, extract education information, and provide analysis mechanisms in an automated manner. The education information extraction is detailed in this paper in the context of Edu...

  10. Feature Extraction Using Discrete Wavelet Transform for Gear Fault Diagnosis of Wind Turbine Gearbox

    Directory of Open Access Journals (Sweden)

    Rusmir Bajric

    2016-01-01

    Full Text Available Vibration diagnosis is one of the most common techniques for condition evaluation of wind turbines equipped with a gearbox. On the other hand, the gearbox is one of the key components of the wind turbine drivetrain. Due to the stochastic operation of wind turbines, the gearbox shaft rotating speed changes by a large percentage, which limits the application of traditional vibration signal processing techniques, such as the fast Fourier transform. This paper investigates a new approach for wind turbine high-speed-shaft gear fault diagnosis using the discrete wavelet transform and time synchronous averaging (TSA). First, the vibration signals are decomposed into a series of subband signals using the multiresolution analysis property of the discrete wavelet transform. Then, 22 condition indicators are extracted from the TSA signal, residual signal, and difference signal. Through the case study analysis, the new approach reveals the most relevant condition indicators based on vibrations that can be used for high-speed-shaft gear spalling fault diagnosis, together with their ability to track fault degradation. It is also shown that the proposed approach enhances the gearbox fault diagnosis ability in wind turbines. The approach presented in this paper was programmed in the Matlab environment using data acquired on a 2 MW wind turbine.
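
    A small sketch of the decomposition-plus-indicator idea using PyWavelets; the TSA step and the full set of 22 condition indicators are not reproduced, only a few common statistics computed on each subband:

    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    fs = 25000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 700 * t) + 0.1 * np.random.randn(t.size)   # stand-in for a TSA gear signal

    # multiresolution decomposition into subband signals
    coeffs = pywt.wavedec(x, wavelet="db8", level=5)

    def indicators(c):
        rms = np.sqrt(np.mean(c ** 2))
        return {"rms": rms, "kurtosis": kurtosis(c), "crest": np.max(np.abs(c)) / rms}

    for name, c in zip(["a5", "d5", "d4", "d3", "d2", "d1"], coeffs):
        print(name, {k: round(float(v), 3) for k, v in indicators(c).items()})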

  11. Fault Tolerant Modular Linear Motor for Safe-Critical Automated Industrial Applications

    Directory of Open Access Journals (Sweden)

    Loránd SZABÓ

    2009-05-01

    Full Text Available In various safety-critical industrial, medical and defence applications the translational movements are performed by linear motors. In such applications both the motor and its power converter should be fault tolerant. To fulfil this requirement, redesigned motor structures with novel phase connections must be used. In the paper a modular double salient permanent magnet linear motor is studied. Its phases are split into independent channels. The study of the fault tolerant capability of the linear motor was performed via cosimulation, using the Flux-to-Simulink technology. The conclusions of the paper could help users to select the optimal linear motor topology for their particular application, as a function of the required mean traction force and its acceptable ripple.

  12. Time-Frequency Fault Feature Extraction for Rolling Bearing Based on the Tensor Manifold Method

    Directory of Open Access Journals (Sweden)

    Fengtao Wang

    2014-01-01

    Full Text Available Rolling-bearing faults can be effectively reflected using time-frequency characteristics. However, there are inevitable interference and redundancy components in the conventional time-frequency characteristics. Therefore, it is critical to extract the sensitive parameters that reflect the rolling-bearing state from the time-frequency characteristics to accurately classify rolling-bearing faults. Thus, a new tensor manifold method is proposed. First, we apply the Hilbert-Huang transform (HHT to rolling-bearing vibration signals to obtain the HHT time-frequency spectrum, which can be transformed into the HHT time-frequency energy histogram. Then, the tensor manifold time-frequency energy histogram is extracted from the traditional HHT time-frequency spectrum using the tensor manifold method. Five time-frequency characteristic parameters are defined to quantitatively depict the failure characteristics. Finally, the tensor manifold time-frequency characteristic parameters and probabilistic neural network (PNN are combined to effectively classify the rolling-bearing failure samples. Engineering data are used to validate the proposed method. Compared with traditional HHT time-frequency characteristic parameters, the information redundancy of the time-frequency characteristics is greatly reduced using the tensor manifold time-frequency characteristic parameters and different rolling-bearing fault states are more effectively distinguished when combined with the PNN.

  13. Impulse feature extraction method for machinery fault detection using fusion sparse coding and online dictionary learning

    Directory of Open Access Journals (Sweden)

    Deng Sen

    2015-04-01

    Full Text Available Impulse components in vibration signals are important fault features of complex machines. The sparse coding (SC) algorithm has been introduced as an impulse feature extraction method, but it cannot guarantee satisfactory performance in processing vibration signals with heavy background noise. In this paper, a method based on fusion sparse coding (FSC) and online dictionary learning is proposed to extract impulses efficiently. Firstly, a fusion scheme of different sparse coding algorithms is presented to ensure higher reconstruction accuracy. Then, an improved online dictionary learning method using the FSC scheme is established to obtain a redundant dictionary that can capture the specific features of the training samples and reconstruct a sparse approximation of the vibration signals. Simulation shows that this method performs well in solving sparse coefficients and training a redundant dictionary compared with other methods. Lastly, the proposed method is applied to processing aircraft engine rotor vibration signals. Compared with other feature extraction approaches, our method can extract impulse features accurately and efficiently from heavily noisy vibration signals, which provides significant support for machinery fault detection and diagnosis.
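
    A toy sketch of the dictionary-learning idea with scikit-learn's online (mini-batch) learner; the fusion scheme combining several sparse coders is not reproduced, and patch size, dictionary size and sparsity level are assumed values:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    fs, n = 10000, 10000
    t = np.arange(n) / fs
    x = 0.1 * rng.standard_normal(n)
    x[::500] += 1.0                                      # periodic impulses in heavy noise
    ringing = np.exp(-600 * t[:100]) * np.sin(2 * np.pi * 2500 * t[:100])
    x = np.convolve(x, ringing)[:n]

    # training samples: overlapping signal patches
    w = 128
    patches = np.array([x[i:i + w] for i in range(0, n - w, 16)])

    dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, batch_size=64,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=3, random_state=0)
    codes = dico.fit(patches).transform(patches)         # sparse codes of the patches
    recon = codes @ dico.components_                     # sparse approximation: impulses kept, noise suppressed
    print("mean active atoms per patch:", float((codes != 0).sum(axis=1).mean()))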

  14. Feature Extraction Method of Rolling Bearing Fault Signal Based on EEMD and Cloud Model Characteristic Entropy

    Directory of Open Access Journals (Sweden)

    Long Han

    2015-09-01

    Full Text Available The randomness and fuzziness that exist in rolling bearings when faults occur result in uncertainty in the acquired signals and reduce the accuracy of signal feature extraction. To solve this problem, this study proposes a new method in which cloud model characteristic entropy (CMCE) is set as the signal characteristic eigenvalue. This approach overcomes the parameter-selection complexity of traditional entropy measures when solving uncertainty problems. First, the acoustic emission signals collected from experiments under normal and damaged rolling bearing states are decomposed via ensemble empirical mode decomposition. The mutual information method is then used to select the sensitive intrinsic mode functions that can reflect the signal characteristics, in order to reconstruct the signal and eliminate noise interference. Subsequently, CMCE is set as the eigenvalue of the reconstructed signal. Finally, through comparison experiments between sample entropy, root mean square and CMCE, the results show that CMCE can better represent the characteristic information of the fault signal.

  15. Automated extraction improves multiplex molecular detection of infection in septic patients.

    Directory of Open Access Journals (Sweden)

    Benito J Regueiro

    Full Text Available Sepsis is one of the leading causes of morbidity and mortality in hospitalized patients worldwide. Molecular technologies for rapid detection of microorganisms in patients with sepsis have only recently become available. The LightCycler SeptiFast test MGRADE (Roche Diagnostics GmbH) is a multiplex PCR analysis able to detect DNA of the 25 most frequent pathogens in bloodstream infections. The time and labor saved while avoiding excessive laboratory manipulation is the rationale for selecting the automated MagNA Pure Compact Nucleic Acid Isolation Kit I (Roche Applied Science, GmbH) as an alternative to conventional SeptiFast extraction. For the purposes of this study, we evaluated extraction in order to demonstrate the feasibility of automation. Finally, a prospective observational study was done using 106 clinical samples obtained from 76 patients in our ICU. Both extraction methods were used in parallel to test the samples. When molecular detection test results using both manual and automated extraction were compared with the data from blood cultures obtained at the same time, the results showed that SeptiFast with the alternative MagNA Pure Compact extraction not only shortens the complete workflow to 3.57 hrs, but also increases the sensitivity of the molecular assay for detecting infection as defined by positive blood culture confirmation.

  16. Automated DNA extraction of single dog hairs without roots for mitochondrial DNA analysis.

    Science.gov (United States)

    Bekaert, Bram; Larmuseau, Maarten H D; Vanhove, Maarten P M; Opdekamp, Anouschka; Decorte, Ronny

    2012-03-01

    Dogs are intensely integrated in human social life and their shed hairs can play a major role in forensic investigations. The overall aim of this study was to validate a semi-automated extraction method for mitochondrial DNA analysis of telogenic dog hairs. Extracted DNA was amplified with a 95% success rate from 43 samples using two new experimental designs in which the mitochondrial control region was amplified as a single large (± 1260 bp) amplicon or as two individual amplicons (HV1 and HV2; ± 650 and 350 bp) with tailed-primers. The results prove that the extraction of dog hair mitochondrial DNA can easily be automated to provide sufficient DNA yield for the amplification of a forensically useful long mitochondrial DNA fragment or alternatively two short fragments with minimal loss of sequence in case of degraded samples.

  17. Automated extraction of lexical meanings from Polish corpora: potentialities and limitations

    Directory of Open Access Journals (Sweden)

    Maciej Piasecki

    2015-11-01

    Full Text Available Large corpora are often consulted by linguists as a knowledge source with respect to lexicon, morphology or syntax. However, there are also several methods for the automated extraction of semantic properties of language units from corpora. In this paper we focus on the emerging potentialities of these methods, as well as on their identified limitations. Evidence that can be collected from corpora is confronted with the existing models of formalised description of lexical meanings. Two basic paradigms of lexical semantics extraction are briefly described. Their properties are analysed on the basis of several experiments performed on Polish corpora. Several potential applications of the methods, including a system supporting the expansion of a Polish wordnet, are discussed. Finally, perspectives for potential further development are discussed.

  18. An Automated Video Object Extraction System Based on Spatiotemporal Independent Component Analysis and Multiscale Segmentation

    Directory of Open Access Journals (Sweden)

    Zhang Xiao-Ping

    2006-01-01

    Full Text Available Video content analysis is essential for efficient and intelligent utilizations of vast multimedia databases over the Internet. In video sequences, object-based extraction techniques are important for content-based video processing in many applications. In this paper, a novel technique is developed to extract objects from video sequences based on spatiotemporal independent component analysis (stICA and multiscale analysis. The stICA is used to extract the preliminary source images containing moving objects in video sequences. The source image data obtained after stICA analysis are further processed using wavelet-based multiscale image segmentation and region detection techniques to improve the accuracy of the extracted object. An automated video object extraction system is developed based on these new techniques. Preliminary results demonstrate great potential for the new stICA and multiscale-segmentation-based object extraction system in content-based video processing applications.

  19. The Hybrid KICA-GDA-LSSVM Method Research on Rolling Bearing Fault Feature Extraction and Classification

    Directory of Open Access Journals (Sweden)

    Jiyong Li

    2015-01-01

    Full Text Available Rolling element bearings are widely used in high-speed rotating machinery; thus a proper monitoring and fault diagnosis procedure to avoid major machine failures is necessary. Since feature extraction and classification based on vibration signals are important in condition monitoring, and superfluous features may degrade the classification performance, independent features need to be extracted; therefore, an LSSVM (least squares support vector machine) based on hybrid KICA-GDA (kernel independent component analysis-generalized discriminant analysis) is presented in this study. A new method named sensitive subband feature set design (SSFD) based on the wavelet packet is also presented; using the proposed variance differential spectrum method, the sensitive subbands are selected. Firstly, independent features are obtained by KICA and the feature redundancy is reduced. Secondly, the feature dimension is reduced by GDA. Finally, the projected feature is classified by the LSSVM. The whole paper aims to classify the feature vectors extracted from the time series and the magnitude of spectral analysis, and to discriminate the state of the rolling element bearings by virtue of a multiclass LSSVM. Experimental results from two different fault-seeded bearing tests show the good performance of the proposed method.

  20. Automated Smiley Face Extraction Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Md Alamgir Hossain

    2012-07-01

    Full Text Available Facial expression analysis has attracted tremendous attention in the area of computer vision because it plays a prime role in the domain of human-machine communication. Smiling facial expressions are generated by the contraction of facial muscles, which results in temporarily deformed facial features such as eye lids, eye brows, nose, lips, and skin texture. These are evaluated by three characteristics: the portions of the face that take part in the facial action, the intensity of the facial action, and the dynamics of the facial action. In this paper we propose a real-time, accurate, and robust smile detection method based on a genetic algorithm. We generated a leaf-matrix to extract the target expression. Finally, we have compared our methodology with the smile shutter function of a Canon camera. We have achieved better performance than Sony on slight smiles.

  1. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    Science.gov (United States)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors. It is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform, so that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from the digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with the indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without it being artificially specified. The proposed method is applied to two engineering cases, the signals of which were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings than the Fourier transform, the direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.

  2. Feature Extraction Using Discrete Wavelet Transform for Gear Fault Diagnosis of Wind Turbine Gearbox

    DEFF Research Database (Denmark)

    Bajric, Rusmir; Zuber, Ninoslav; Skrimpas, Georgios Alexandros;

    2016-01-01

    with high percentage, which limits the application of traditional vibration signal processing techniques, such as fast Fourier transform. This paper investigates a new approach for wind turbine high speed shaft gear fault diagnosis using discrete wavelet transform and time synchronous averaging. First......, the vibration signals are decomposed into a series of subband signals with the use of a multiresolution analytical property of the discrete wavelet transform. Then, 22 condition indicators are extracted from the TSA signal, residual signal, and difference signal. Through the case study analysis, a new approach...

  3. An automated approach for extracting Barrier Island morphology from digital elevation models

    Science.gov (United States)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting dune toe, dune crest, and dune heel based upon relative relief is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach out-performed contemporary approaches and represents a fast objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel are spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
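
    The relative relief (RR) computation at the heart of the approach can be sketched with moving-window filters on a synthetic cross-shore profile; the window sizes and the profile itself are assumptions:

    import numpy as np
    from scipy.ndimage import maximum_filter1d, minimum_filter1d

    # synthetic cross-shore elevation profile (beach, dune, back-barrier) at 1 m spacing
    x = np.arange(0, 300.0)
    dem = 1.5 * np.exp(-((x - 120) ** 2) / (2 * 20 ** 2)) + 0.005 * x + 0.05 * np.random.randn(x.size)

    def relative_relief(z, window):
        """(z - local min) / (local max - local min) within a moving window."""
        zmax = maximum_filter1d(z, size=window)
        zmin = minimum_filter1d(z, size=window)
        return (z - zmin) / np.maximum(zmax - zmin, 1e-9)

    # average RR across multiple computational scales, as in the multi-scale approach
    scales = [11, 21, 41]
    rr = np.mean([relative_relief(dem, s) for s in scales], axis=0)

    crest = int(np.argmax(dem))                 # crest taken here as the elevation maximum
    print("crest at x =", x[crest], "m; mean RR at crest =", round(float(rr[crest]), 2))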

  4. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    Science.gov (United States)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  5. Automated renal histopathology: digital extraction and quantification of renal pathology

    Science.gov (United States)

    Sarder, Pinaki; Ginley, Brandon; Tomaszewski, John E.

    2016-03-01

    The branch of pathology concerned with excess blood serum proteins being excreted in the urine pays particular attention to the glomerulus, a small intertwined bunch of capillaries located at the beginning of the nephron. Normal glomeruli allow a moderate amount of blood proteins to be filtered; proteinuric glomeruli allow a large amount of blood proteins to be filtered. Diagnosis of proteinuric diseases requires time-intensive manual examination of the structural compartments of the glomerulus from renal biopsies. Pathological examination includes the cellularity of individual compartments, Bowman's and luminal space segmentation, cellular morphology, glomerular volume, capillary morphology, and more. Long examination times may increase diagnosis time and/or reduce the precision of the diagnostic process. Automatic quantification holds strong potential to reduce renal diagnostic time. We have developed a computational pipeline capable of automatically segmenting relevant features from renal biopsies. Our method first segments glomerular compartments from renal biopsies by isolating regions with high nuclear density. Gabor texture segmentation is used to accurately define glomerular boundaries. Bowman's and luminal spaces are segmented using morphological operators. Nuclei structures are segmented using color deconvolution, morphological processing, and bottleneck detection. The average computation time of feature extraction for a typical biopsy, comprising ~12 glomeruli, is ~69 s using an Intel(R) Core(TM) i7-4790 CPU, and is ~65X faster than manual processing. Using images from rat renal tissue samples, automatic glomerular structural feature estimation was reproducibly demonstrated for 15 biopsy images, which contained 148 individual glomeruli images. The proposed method holds immense potential to enhance the information available while making clinical diagnoses.

  6. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    Science.gov (United States)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion were reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike the conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational codes implementation.

  7. On- and off-fault coseismic surface deformation associated with the September 2013 M7.7 Balochistan, Pakistan earthquake measured from mapping and automated pixel correlation

    Science.gov (United States)

    Gold, R. D.; Reitman, N. G.; Briggs, R. W.; Barnhart, W. D.; Hayes, G. P.

    2014-12-01

    The 24 September 2013 Mw7.7 Balochistan, Pakistan earthquake ruptured a ~200 km-long stretch of the Hoshab fault in southern Pakistan. We remotely measured the coseismic surface deformation field using high-resolution (0.5 m) pre- and post-event satellite imagery. We measured ~300 near-field (0-10 m from fault) laterally offset piercing points (streams, terrace risers, roads, etc.) and find peak left-lateral offsets of ~12-15 m. We characterized the far-field (0-10 km from fault) displacement field using manual (~250 measurements) and automated image cross-correlation methods (e.g., pixel tracking) and find peak displacement values of ~16 m, which commonly exceed the on-fault displacement magnitudes. Our preliminary observations suggest the following: (1) coseismic surface displacement typically increases with distance away from the surface trace of the fault (e.g., highest displacement values in the far field), (2) for certain locations along the fault rupture, as little as 50% of the coseismic displacement field occurred in the near-field; and (3) the magnitudes of individual displacements are inversely correlated to the width of the surface rupture zone (e.g., largest displacements where the fault zone is narrowest). This analysis highlights the importance of identifying field study sites spanning fault sections with narrow deformation zones in order to capture the entire deformation field. For regions of distributed deformation, these results would predict that geologic slip rate studies underestimate a fault's complete slip rate.

  8. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    Science.gov (United States)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
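
    A skeletal version of the scale-selection and soft-thresholding steps using PyWavelets' Morlet continuous wavelet transform; the Shannon-entropy optimization of the wavelet parameter is not reproduced, and the universal threshold rule used below is an assumption:

    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    fs = 20000
    t = np.arange(0, 1, 1 / fs)
    x = 0.3 * np.random.randn(t.size)
    x[::500] += 2.0                                       # weak periodic transients in strong noise

    scales = np.arange(2, 64)
    coef, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)

    # the kurtosis index picks several characteristic (most impulsive) scales, not just one
    k = np.array([kurtosis(row) for row in coef])
    characteristic = np.argsort(k)[-5:]

    def soft(c, thr):
        """Soft-thresholding of wavelet coefficients."""
        return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

    transient = np.zeros_like(x)
    for i in characteristic:
        thr = np.median(np.abs(coef[i])) / 0.6745 * np.sqrt(2 * np.log(x.size))
        transient += soft(coef[i], thr)          # denoised coefficients summed over the selected scales
    # 'transient' approximates the reconstructed periodic impulsive feature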

  9. An integrated approach for automating validation of extracted ion chromatographic peaks

    OpenAIRE

    Nelson, William D.; Viele, Kert; Lynn, Bert C.

    2008-01-01

    Summary: Accurate determination of extracted ion chromatographic peak areas in isotope-labeled quantitative proteomics is difficult to automate. Manual validation of identified peaks is typically required. We have integrated a peak confidence scoring algorithm into existing tools which are compatible with analysis pipelines based on the standards from the Institute for Systems Biology. This algorithm automatically excludes incorrectly identified peaks, improving the accuracy of the final prot...

  10. Feature extraction of rolling bearing’s early weak fault based on EEMD and tunable Q-factor wavelet transform

    Science.gov (United States)

    Wang, Hongchao; Chen, Jin; Dong, Guangming

    2014-10-01

    When an early weak fault emerges in a rolling bearing, the fault feature is too weak to extract using traditional fault diagnosis methods such as the fast Fourier transform (FFT) and envelope demodulation. The tunable Q-factor wavelet transform (TQWT) is an improvement of the traditional single Q-factor wavelet transform, and it is well suited to separating the low Q-factor transient impact component from the high Q-factor sustained oscillation components when a fault emerges in a rolling bearing. However, it is hard to extract the rolling bearing's early weak fault feature perfectly using the TQWT directly. Ensemble empirical mode decomposition (EEMD) is an improvement of empirical mode decomposition (EMD) which not only retains the self-adaptability of EMD but also overcomes its mode mixing problem. The original signal of the rolling bearing's early weak fault is decomposed by EEMD and several intrinsic mode functions (IMFs) are obtained. Then the IMF with the biggest kurtosis index value is selected and subsequently processed by the TQWT. At last, the envelope demodulation method is applied to the low Q-factor transient impact component and a satisfactory extraction result is obtained.
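
    A minimal sketch of the EEMD stage and the kurtosis-based IMF selection; PyEMD and its parameters are assumed dependencies, and since TQWT has no standard SciPy implementation, a direct envelope demodulation of the selected IMF stands in for the low Q-factor branch:

    import numpy as np
    from PyEMD import EEMD
    from scipy.signal import hilbert
    from scipy.stats import kurtosis

    fs = 12000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 35 * t) + 0.3 * np.random.randn(t.size)
    x[::150] += 0.8                                       # weak early-fault impacts (~80 Hz repetition)

    eemd = EEMD(trials=50, noise_width=0.2)
    imfs = eemd.eemd(x)

    # select the IMF with the largest kurtosis index value
    best = max(imfs, key=lambda imf: kurtosis(imf))

    # envelope demodulation of the selected IMF (stand-in for the TQWT transient branch)
    env = np.abs(hilbert(best))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    print("estimated fault characteristic frequency [Hz]:", freqs[np.argmax(spec[1:]) + 1])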

  11. Automated road network extraction from high spatial resolution multi-spectral imagery

    Science.gov (United States)

    Zhang, Qiaoping

    For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a
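
    Two of the stages (pixel clustering with k-means and line detection with a Radon transform) can be sketched with scikit-learn and scikit-image; the fuzzy road-surface membership functions, the angular texture signature, and the iterative localized Radon scheme are not reproduced:

    import numpy as np
    from sklearn.cluster import KMeans
    from skimage.transform import radon

    rng = np.random.default_rng(0)
    h, w, bands = 64, 64, 4
    img = rng.normal(0.2, 0.05, (h, w, bands))                  # synthetic multi-spectral tile
    img[:, 30:34, :] = rng.normal(0.7, 0.05, (h, 4, bands))     # a bright, road-like vertical strip

    # step 1: k-means segmentation in spectral space
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(img.reshape(-1, bands))
    label_img = labels.reshape(h, w)
    road_mask = (label_img == label_img[32, 32]).astype(float)  # pick the cluster of the strip

    # step 2: Radon transform of the road component image to find the dominant line direction
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(road_mask, theta=theta, circle=False)
    peak_angle = theta[np.unravel_index(np.argmax(sino), sino.shape)[1]]
    print("dominant centerline direction [deg]:", peak_angle)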

  12. Analysis of Automated Modern Web Crawling and Testing Tools and Their Possible Employment for Information Extraction

    Directory of Open Access Journals (Sweden)

    Tomas Grigalis

    2012-04-01

    The World Wide Web has become an enormously large repository of data. Extracting, integrating and reusing this kind of data has a wide range of applications, including meta-searching, comparison shopping, business intelligence tools and security analysis of information in websites. However, reaching information in modern Web 2.0 pages, where the HTML tree is often dynamically modified by JavaScript code, new data are added by asynchronous requests to the web server, and elements are positioned with the help of cascading style sheets, is a difficult task. The article reviews automated web testing tools for information extraction tasks. (Article in Lithuanian)

  13. Bearing Fault Diagnosis Based on Multiscale Permutation Entropy and Support Vector Machine

    OpenAIRE

    Jian-Jiun Ding; Chun-Chieh Wang; Chiu-Wen Wu; Po-Hung Wu; Shuen-De Wu

    2012-01-01

    Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification for the extracted features. In this paper, multiscale permutation entropy (MPE) was introduced for feature extraction from faulty bearing vibration signals. After extracting feature vectors by MPE, the support vector machine (SVM) was applied to automate the fault diagnosis procedure. Simulation results demonstr...
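
    A minimal sketch of multiscale permutation entropy features feeding an SVM, written from the general definitions of MPE rather than from this paper; the toy signals, embedding order, scales and SVM settings are arbitrary placeholders.

      import math
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def permutation_entropy(x, order=3, delay=1):
          # normalized permutation entropy of a 1-D signal
          n = len(x) - (order - 1) * delay
          patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
          _, counts = np.unique(patterns, axis=0, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log(p)) / np.log(math.factorial(order))

      def mpe(x, max_scale=5, order=3):
          # multiscale PE: coarse-grain by non-overlapping averaging, then PE per scale
          feats = []
          for s in range(1, max_scale + 1):
              m = len(x) // s
              coarse = x[:m * s].reshape(m, s).mean(axis=1)
              feats.append(permutation_entropy(coarse, order=order))
          return feats

      # toy vibration data: "healthy" = white noise, "faulty" = noise plus periodic impulses
      rng = np.random.default_rng(0)
      X, y = [], []
      for _ in range(40):
          healthy = rng.standard_normal(2048)
          faulty = rng.standard_normal(2048)
          faulty[::128] += 5.0
          X.append(mpe(healthy)); y.append(0)
          X.append(mpe(faulty));  y.append(1)

      scores = cross_val_score(SVC(kernel="rbf", C=1.0), np.array(X), np.array(y), cv=5)
      print("cross-validated accuracy: %.2f" % scores.mean())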

  14. Adaptive Redundant Lifting Wavelet Transform Based on Fitting for Fault Feature Extraction of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2012-03-01

    A least-squares method based on data fitting is proposed to construct a new lifting wavelet; together with the nonlinear idea and a redundant algorithm, the adaptive redundant lifting transform based on fitting is first stated in this paper. By varying the combination of basis function, sample number and basis-function dimension, a total of nine wavelets with different characteristics are constructed, which are respectively adopted to perform redundant lifting wavelet transforms on the low-frequency approximation signals at each layer. The normalized l^p norms of the new node signals obtained through decomposition are then calculated to adaptively determine the optimal wavelet for the decomposed approximation signal. Next, the original signal is subjected to subsection power spectrum analysis to choose the node signal for single-branch reconstruction and demodulation. Experimental signals and engineering signals are used to verify the above method, and the results show that bearing faults can be diagnosed more effectively by the method presented here than by either spectrum analysis or demodulation analysis. Meanwhile, compared with the symmetrical wavelets constructed with the Lagrange interpolation algorithm, the asymmetrical wavelets constructed by data fitting are more suitable for feature extraction from roller bearing fault signals.

  15. An advanced distributed automated extraction of drainage network model on high-resolution DEM

    Directory of Open Access Journals (Sweden)

    Y. Mao

    2014-07-01

    An advanced distributed automated extraction of drainage network model (Adam) was proposed in the study. The Adam model has two features: (1) searching upward from the basin outlet instead of sink filling, and (2) dividing sub-basins on a low-resolution DEM and then extracting the drainage network on sub-basins of the high-resolution DEM. The case study used elevation data of the Shuttle Radar Topography Mission (SRTM) at 3 arc-second resolution in the Zhujiang River basin, China. The results show the Adam model can dramatically reduce the computation time. The extracted drainage network was continuous and more accurate than HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales).
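
    The "search upward from the outlet" idea can be illustrated on a toy NumPy DEM with a simple D8 steepest-descent flow direction, as in the didactic sketch below; it does not reproduce the Adam model's two-level sub-basin scheme or its distributed implementation.

      import numpy as np
      from collections import deque

      # 8 neighbour offsets and their distances
      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
      DIST = [np.hypot(dr, dc) for dr, dc in OFFSETS]

      def d8_downstream(dem):
          # for every cell, the (row, col) of its steepest-descent neighbour (or itself)
          rows, cols = dem.shape
          target = np.empty((rows, cols), dtype=object)
          for r in range(rows):
              for c in range(cols):
                  best, best_slope = (r, c), 0.0
                  for (dr, dc), d in zip(OFFSETS, DIST):
                      rr, cc = r + dr, c + dc
                      if 0 <= rr < rows and 0 <= cc < cols:
                          slope = (dem[r, c] - dem[rr, cc]) / d
                          if slope > best_slope:
                              best, best_slope = (rr, cc), slope
                  target[r, c] = best
          return target

      def upstream_cells(dem, outlet):
          # collect all cells draining to the outlet by searching upward (no sink filling)
          target = d8_downstream(dem)
          rows, cols = dem.shape
          uphill = {}                          # reverse adjacency: cells flowing into a cell
          for r in range(rows):
              for c in range(cols):
                  if target[r, c] != (r, c):
                      uphill.setdefault(target[r, c], []).append((r, c))
          basin, queue = {outlet}, deque([outlet])
          while queue:
              cell = queue.popleft()
              for up in uphill.get(cell, []):
                  if up not in basin:
                      basin.add(up)
                      queue.append(up)
          return basin

      dem = np.array([[5, 5, 5, 5],
                      [5, 3, 3, 5],
                      [5, 3, 2, 5],
                      [5, 5, 1, 5]], dtype=float)
      print(len(upstream_cells(dem, outlet=(3, 2))), "cells drain to the outlet")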

  16. A new method of distribution automation system transmitting fault information

    Institute of Scientific and Technical Information of China (English)

    张益嘉

    2015-01-01

    The distribution automation system of the Nanning Power Supply Bureau has been formally put into operation, and the system is of landmark significance for 10 kV line fault location and isolation. However, the time lag in information exchange between the distribution automation system and repair personnel seriously affects repair efficiency and the effective use of the automation system. The fault-information forwarding method studied in this paper uses the forwarding interface of the distribution automation system to send fault information quickly and directly to the repair personnel's mobile device (mobile phone). It is important for shortening the customers' average interruption time and improving the power supply reliability rate.

  17. Extraction, identification, and functional characterization of a bioactive substance from automated compound-handling plastic tips.

    Science.gov (United States)

    Watson, John; Greenough, Emily B; Leet, John E; Ford, Michael J; Drexler, Dieter M; Belcastro, James V; Herbst, John J; Chatterjee, Moneesh; Banks, Martyn

    2009-06-01

    Disposable plastic labware is ubiquitous in contemporary pharmaceutical research laboratories. Plastic labware is routinely used for chemical compound storage and during automated liquid-handling processes that support assay development, high-throughput screening, structure-activity determinations, and liability profiling. However, there is little information available in the literature on the contaminants released from plastic labware upon DMSO exposure and their resultant effects on specific biological assays. The authors report here the extraction, by simple DMSO washing, of a biologically active substance from one particular size of disposable plastic tips used in automated compound handling. The active contaminant was identified as erucamide ((Z)-docos-13-enamide), a long-chain mono-unsaturated fatty acid amide commonly used in plastics manufacturing, by gas chromatography/mass spectroscopy analysis of the DMSO-extracted material. Tip extracts prepared in DMSO, as well as a commercially obtained sample of erucamide, were active in a functional bioassay of a known G-protein-coupled fatty acid receptor. A sample of a different disposable tip product from the same vendor did not release detectable erucamide following solvent extraction, and DMSO extracts prepared from this product were inactive in the receptor functional assay. These results demonstrate that solvent-extractable contaminants from some plastic labware used in the contemporary pharmaceutical research and development (R&D) environment can be introduced into physical and biological assays during routine compound management liquid-handling processes. These contaminants may further possess biological activity and are therefore a potential source of assay-specific confounding artifacts.

  18. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation however is not easy: The automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO(®)150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch(®)Forensic DNA Purification Kit (Invitrogen), the PrepFiler™Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination.

  19. Dynamic electromembrane extraction: Automated movement of donor and acceptor phases to improve extraction efficiency.

    Science.gov (United States)

    Asl, Yousef Abdossalami; Yamini, Yadollah; Seidi, Shahram; Amanzadeh, Hatam

    2015-11-01

    In the present research, dynamic electromembrane extraction (DEME) was introduced for the first time for the extraction and determination of ionizable species from different biological matrices. The setup proposed for DEME provides an efficient, stable, and reproducible method to increase extraction efficiency. This setup consists of a piece of hollow fiber mounted inside a glass flow cell by means of two plastic connector tubes. In this dynamic system, an organic solvent is impregnated into the pores of the hollow fiber as a supported liquid membrane (SLM); an aqueous acceptor solution is repeatedly pumped into the lumen of the hollow fiber by a syringe pump, whereas a peristaltic pump is used to move the sample solution around the mounted hollow fiber in the flow cell. Two platinum electrodes connected to a power supply are used during extractions; they are located in the lumen of the hollow fiber and in the glass flow cell, respectively. The method was applied for the extraction of amitriptyline (AMI) and nortriptyline (NOR) as model analytes from biological fluids. Effective parameters of the DEME of the model analytes were investigated and optimized. Under optimized conditions, the calibration curves were linear in the range of 2.0-100μgL(-1) with coefficients of determination (r(2)) greater than 0.9902 for both analytes. The relative standard deviations (RSD %) were less than 8.4% based on four replicate measurements. LODs of less than 1.0μgL(-1) were obtained for both AMI and NOR. Preconcentration factors higher than 83-fold were obtained for the extraction of AMI and NOR in various biological samples. PMID:26455283

  20. Automating the Extraction of Model-Based Software Product Lines from Model Variants

    OpenAIRE

    Martinez, Jabier; Ziadi, Tewfik; Klein, Jacques; Le Traon, Yves

    2015-01-01

    We address the problem of automating 1) the analysis of existing similar model variants and 2) migrating them into a software product line. Our approach, named MoVa2PL, considers the identification of variability and commonality in model variants, as well as the extraction of a CVL-compliant Model-based Software Product Line (MSPL) from the features identified on these variants. MoVa2PL builds on a generic representation of models making it suitable to any MOF-based ...

  1. Field-scale validation of an automated soil nitrate extraction and measurement system

    OpenAIRE

    Sibley, K.J.; Astatkie, T.; Brewster, G.; Struik, P.C.; Adsett, J.F.; Pruski, K.

    2009-01-01

    One of the many needs still to be met by precision agriculture technologies is an economic, automated, on-the-go mapping system that can be used to obtain intensive and accurate ‘real-time’ data on the levels of nitrate nitrogen (NO3–N) in the soil. A soil nitrate mapping system (SNMS) has been developed to provide a way to collect such data. This study was done to provide extensive field-scale validation testing of the system’s nitrate extraction and measurement su...

  2. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    Energy Technology Data Exchange (ETDEWEB)

    JUDI, DAVID [Los Alamos National Laboratory; KALYANAPU, ALFRED [Los Alamos National Laboratory; MCPHERSON, TIMOTHY [Los Alamos National Laboratory; BERSCHEID, ALAN [Los Alamos National Laboratory

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with the rapid cross-section generation the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
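
    The core geometric step (station points with elevations along cross-sections perpendicular to a stream centerline) can be sketched with NumPy and SciPy as follows; the synthetic DEM, the centerline and the spacing parameters are invented, and the overlap-correction algorithm (COCoA) is not shown.

      import numpy as np
      from scipy.ndimage import map_coordinates

      # synthetic DEM (values = elevation) and a straight centerline in pixel coordinates
      dem = np.fromfunction(lambda r, c: 100 + 0.0005 * (r - 100) ** 2 + 0.02 * c, (200, 200))
      centerline = np.column_stack([np.linspace(20, 180, 50),      # rows
                                    np.linspace(30, 170, 50)])     # cols

      def cross_sections(dem, centerline, half_width=15.0, n_stations=11):
          # returns (n_points, n_stations) elevations along perpendiculars to the centerline
          tang = np.gradient(centerline, axis=0)                    # tangents along the line
          tang /= np.linalg.norm(tang, axis=1, keepdims=True)
          normals = np.column_stack([-tang[:, 1], tang[:, 0]])      # unit perpendiculars
          offsets = np.linspace(-half_width, half_width, n_stations)
          sections = []
          for p, n in zip(centerline, normals):
              pts = p[None, :] + offsets[:, None] * n[None, :]      # station coordinates
              z = map_coordinates(dem, [pts[:, 0], pts[:, 1]], order=1, mode="nearest")
              sections.append(z)
          return np.array(sections)

      xs = cross_sections(dem, centerline)
      print("cross-section array:", xs.shape)        # e.g. (50, 11)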

  3. The BUME method: a novel automated chloroform-free 96-well total lipid extraction method for blood plasma[S

    OpenAIRE

    Löfgren, Lars; Ståhlman, Marcus; Forsberg, Gun-Britt; Saarinen, Sinikka; Nilsson, Ralf; Göran I Hansson

    2012-01-01

    Lipid extraction from biological samples is a critical and often tedious preanalytical step in lipid research. Primarily on the basis of automation criteria, we have developed the BUME method, a novel chloroform-free total lipid extraction method for blood plasma compatible with standard 96-well robots. In only 60 min, 96 samples can be automatically extracted with lipid profiles of commonly analyzed lipid classes almost identically and with absolute recoveries similar or better to what is ob...

  4. Evaluation of Automated and Manual Commercial DNA Extraction Methods for Recovery of Brucella DNA from Suspensions and Spiked Swabs

    OpenAIRE

    Dauphin, Leslie A.; Hutchins, Rebecca J.; Bost, Liberty A.; Bowen, Michael D.

    2009-01-01

    This study evaluated automated and manual commercial DNA extraction methods for their ability to recover DNA from Brucella species in phosphate-buffered saline (PBS) suspension and from spiked swab specimens. Six extraction methods, representing several of the methodologies which are commercially available for DNA extraction, as well as representing various throughput capacities, were evaluated: the MagNA Pure Compact and the MagNA Pure LC instruments, the IT 1-2-3 DNA sample purification kit...

  5. An energy minimization approach to automated extraction of regular building footprints from airborne LiDAR data

    OpenAIRE

    He, Y; Zhang, C; Fraser, C. S.

    2014-01-01

    This paper presents an automated approach to the extraction of building footprints from airborne LiDAR data based on energy minimization. Automated 3D building reconstruction in complex urban scenes has been a long-standing challenge in photogrammetry and computer vision. Building footprints constitute a fundamental component of a 3D building model and they are useful for a variety of applications. Airborne LiDAR provides large-scale elevation representation of urban scene and as suc...

  6. High dimension feature extraction based visualized SOM fault diagnosis method and its application in p-xylene oxidation process☆

    Institute of Scientific and Technical Information of China (English)

    Ying Tian; Wenli Du; Feng Qian

    2015-01-01

    Purified terephthalic acid (PTA) is an important chemical raw material. P-xylene (PX) is transformed to terephthalic acid (TA) through an oxidation process and TA is refined to produce PTA. The PX oxidation reaction is a complex process involving a three-phase reaction of gas, liquid and solid. To monitor the process and improve the product quality, as well as to visualize the fault type clearly, a fault diagnosis method based on the self-organizing map (SOM) and a high-dimensional feature extraction method, local tangent space alignment (LTSA), is proposed. In this method, LTSA reduces the dimension while keeping the topology information, and SOM distinguishes the various states on the output map. Monitoring results for the PX oxidation reaction process indicate that LTSA–SOM can well detect and visualize the fault type.
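
    A minimal sketch of the LTSA-plus-SOM idea, using scikit-learn's LocallyLinearEmbedding with method='ltsa' and the third-party MiniSom package on toy two-state data; the process variables, map size and all parameters are placeholders rather than the paper's configuration.

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding
      from minisom import MiniSom          # third-party package (assumption)

      rng = np.random.default_rng(1)
      # toy "process data": two operating states in a 10-dimensional variable space
      normal = rng.normal(0.0, 1.0, size=(200, 10))
      faulty = rng.normal(3.0, 1.0, size=(200, 10))
      X = np.vstack([normal, faulty])

      # 1) nonlinear dimension reduction with local tangent space alignment (LTSA)
      ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
      X2 = ltsa.fit_transform(X)

      # 2) self-organizing map on the reduced features; winning nodes visualize the states
      som = MiniSom(8, 8, 2, sigma=1.0, learning_rate=0.5, random_seed=0)
      som.train_random(X2, 1000)
      winners_normal = {som.winner(v) for v in X2[:200]}
      winners_faulty = {som.winner(v) for v in X2[200:]}
      print("map nodes shared by both states:", len(winners_normal & winners_faulty))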

  7. Automated extraction of DNA and PCR setup using a Tecan Freedom EVO® liquid handler

    DEFF Research Database (Denmark)

    Frøslev, Tobias Guldberg; Hansen, Anders Johannes; Stangegaard, Michael;

    We have implemented and validated automated protocols for DNA extraction and PCR setup using a Tecan Freedom EVO® liquid handler mounted with the TeMagS magnetic separation device. The methods were validated for accredited, forensic genetic work according to ISO 17025 using the Qiagen MagAttract® DNA Mini M48 kit from fresh, whole blood and blood from deceased. The methods were simplified by returning the DNA extracts to the original tubes reducing the risk of misplacing samples. The original tubes that had contained the samples were washed with 700 µl Milli-Q water prior to the return of the DNA extracts. The PCR setup protocols were designed for 96 well microtiter plates. The methods were validated for the kits: AmpFlSTR® Identifiler® and Y-filer® (Applied Biosystems), GenePrint® FFFL and PowerPlex® Y (Promega). Within 3.5 hours, 96 samples were extracted and PCR master mix was added...

  8. Automated Extraction of the Archaeological Tops of Qanat Shafts from VHR Imagery in Google Earth

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2014-12-01

    Qanats in northern Xinjiang of China provide valuable information for agriculturists and anthropologists who seek fundamental understanding of the distribution of qanat water supply systems with regard to water resource utilization, the development of oasis agriculture, and eventually climate change. Only the tops of qanat shafts (TQSs), indicating the course of the qanats, can be observed from space, and their circular archaeological traces can also be seen in very high resolution imagery in Google Earth. The small size of the TQSs, vast search regions, and degraded features make manually extracting them from remote sensing images difficult and costly. This paper proposes an automated TQS extraction method that adopts mathematical morphological processing methods before an edge detecting module is used in the circular Hough transform approach. The accuracy assessment criteria for the proposed method include: (i) extraction percentage (E = 95.9%), branch factor (B = 0) and quality percentage (Q = 95.9%) in Site 1; and (ii) extraction percentage (E = 83.4%), branch factor (B = 0.058) and quality percentage (Q = 79.5%) in Site 2. Compared with the standard circular Hough transform, the quality percentages (Q) of our proposed method were improved to 95.9% and 79.5% from 86.3% and 65.8% in test sites 1 and 2, respectively. The results demonstrate that wide-area discovery and mapping can be performed much more effectively based on our proposed method.
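
    The morphological-enhancement plus circular-Hough workflow can be illustrated in general terms with scikit-image, as below; the synthetic image, the white top-hat pre-processing and the radius range are assumptions, not the authors' implementation.

      import numpy as np
      from skimage.draw import disk
      from skimage.morphology import white_tophat, disk as disk_footprint
      from skimage.feature import canny
      from skimage.transform import hough_circle, hough_circle_peaks

      # synthetic grey image with three bright circular "shaft tops"
      img = np.full((200, 200), 0.2)
      for center in [(50, 60), (120, 140), (160, 40)]:
          rr, cc = disk(center, 8, shape=img.shape)
          img[rr, cc] = 0.9
      img += 0.05 * np.random.rand(*img.shape)

      # 1) morphological enhancement of small bright structures
      enhanced = white_tophat(img, disk_footprint(12))

      # 2) edge detection followed by the circular Hough transform
      edges = canny(enhanced, sigma=1.5)
      radii = np.arange(5, 12)
      hspaces = hough_circle(edges, radii)
      accums, cx, cy, found_r = hough_circle_peaks(hspaces, radii, total_num_peaks=3)
      for x, y, r in zip(cx, cy, found_r):
          print("circle at (row=%d, col=%d), radius=%d" % (y, x, r))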

  9. Automated Extraction Of Associations Between Methylated Genes and Diseases From Biomedical Literature

    KAUST Repository

    Bin Res, Arwa A.

    2012-12-01

    Associations between methylated genes and diseases have been investigated in several studies, and it is critical to have such information available for better understanding of diseases and clinical decisions. However, such information is scattered in a large number of electronic publications and it is difficult to manually search for it. Therefore, the goal of the project is to develop a machine learning model that can efficiently extract such information. Twelve machine learning algorithms were applied and compared in application to this problem based on three approaches that involve: document-term frequency matrices, position weight matrices, and a hybrid approach that uses the combination of the previous two. The best results were obtained by the hybrid approach with a random forest model that, in a 10-fold cross-validation, achieved F-score and accuracy of nearly 85% and 84%, respectively. On a completely separate testing set, F-score and accuracy of 89% and 88%, respectively, were obtained. Based on this model, we developed a tool that automates extraction of associations between methylated genes and diseases from electronic text. Our study contributed an efficient method for extracting specific types of associations from free text and the methodology developed here can be extended to other similar association extraction problems.
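
    A toy sketch of the document-term-matrix branch of such a pipeline (TF-IDF features plus a random forest), built with scikit-learn on invented sentences; the labelled corpus, the position-weight-matrix features and the hybrid combination used in the study are not reproduced.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      # made-up sentences: 1 = states a methylation-disease association, 0 = does not
      texts = [
          "Hypermethylation of GENE1 promoter was associated with colorectal cancer",
          "GENE2 methylation correlates with poor prognosis in breast cancer",
          "Aberrant methylation of GENE3 was detected in gastric tumours",
          "GENE4 expression was measured in healthy volunteers",
          "The enzyme encoded by GENE5 participates in glycolysis",
          "GENE6 was sequenced to confirm the cloning construct",
      ] * 5
      labels = [1, 1, 1, 0, 0, 0] * 5

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            RandomForestClassifier(n_estimators=200, random_state=0))
      scores = cross_val_score(model, texts, labels, cv=5)
      print("cross-validated accuracy: %.2f" % scores.mean())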

  10. Automating the Extraction of Metadata from Archaeological Data Using iRods Rules

    Directory of Open Access Journals (Sweden)

    David Walling

    2011-10-01

    The Texas Advanced Computing Center and the Institute for Classical Archaeology at the University of Texas at Austin developed a method that uses iRods rules and a Jython script to automate the extraction of metadata from digital archaeological data. The first step was to create a record-keeping system to classify the data. The record-keeping system employs file and directory hierarchy naming conventions designed specifically to maintain the relationship between the data objects and map the archaeological documentation process. The metadata implicit in the record-keeping system is automatically extracted upon ingest, combined with additional sources of metadata, and stored alongside the data in the iRods preservation environment. This method enables a more organized workflow for the researchers, helps them archive their data close to the moment of data creation, and avoids error prone manual metadata input. We describe the types of metadata extracted and provide technical details of the extraction process and storage of the data and metadata.
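
    A simplified, hypothetical illustration of the underlying idea, deriving metadata from a file and directory naming convention; the convention and field names below are invented for the example and are not the project's actual iRods rules or Jython script.

      import re
      from pathlib import PurePosixPath

      # hypothetical convention:  <site>/<year>/<trench>/<object-type>_<id>.<ext>
      PATTERN = re.compile(r"(?P<otype>[a-z]+)_(?P<oid>\d+)\.(?P<ext>\w+)$")

      def extract_metadata(path_str):
          # derive descriptive metadata from a conventionally named data-object path
          path = PurePosixPath(path_str)
          site, year, trench = path.parts[-4:-1]
          m = PATTERN.match(path.name)
          if m is None:
              raise ValueError("file name does not follow the naming convention: " + path.name)
          return {
              "site": site,
              "year": int(year),
              "trench": trench,
              "object_type": m.group("otype"),
              "object_id": m.group("oid"),
              "format": m.group("ext"),
          }

      print(extract_metadata("site_a/2008/trench17/photo_0042.tif"))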

  11. Development and validation of an automated unit for the extraction of radiocaesium from seawater.

    Science.gov (United States)

    Bokor, Ilonka; Sdraulig, Sandra; Jenkinson, Peter; Madamperuma, Janaka; Martin, Paul

    2016-01-01

    An automated unit was developed for the in-situ extraction of radiocaesium ((137)Cs and (134)Cs) from large volumes of seawater to achieve very low detection limits. The unit was designed for monitoring of Australian ocean and coastal waters, including at ports visited by nuclear-powered warships. The unit is housed within a robust case, and is easily transported and operated. It contains four filter cartridges connected in series. The first two cartridges are used to remove any suspended material that may be present in the seawater, while the last two cartridges are coated with potassium copper hexacyanoferrate for caesium extraction. Once the extraction is completed the coated cartridges are ashed. The ash is transferred to a small petri dish for counting of (137)Cs and (134)Cs by high resolution gamma spectrometry for a minimum of 24 h. The extraction method was validated for the following criteria: selectivity, trueness, precision, linearity, limit of detection and traceability. The validation showed the unit to be fit for purpose with the method capable of achieving low detection limits required for environmental samples. The results for the environmental measurements in Australian seawater correlate well with those reported in the Worldwide Marine Radioactivity Study (WOMARS). The cost of preparation and running the system is low and waste generation is minimal. PMID:26330020

  12. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-11-01

    Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectrums, following which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis method is constructed, termed by its elements HHT-WMSC-SVM (support vector machine). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500–800 and an m range of 50–300 for the REB defect dataset with Gauss white noise added at Signal Noise Ratio (SNR) = 5. Experimental results indicate that the proposed WMSC method yields a high REBs fault

  13. Automated multisyringe stir bar sorptive extraction using robust montmorillonite/epoxy-coated stir bars.

    Science.gov (United States)

    Ghani, Milad; Saraji, Mohammad; Maya, Fernando; Cerdà, Víctor

    2016-05-01

    Herein we present a simple, rapid and low cost strategy for the preparation of robust stir bar coatings based on the combination of montmorillonite with epoxy resin. The composite stir bar was implemented in a novel automated multisyringe stir bar sorptive extraction system (MS-SBSE), and applied to the extraction of four chlorophenols (4-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol and pentachlorophenol) as model compounds, followed by high performance liquid chromatography-diode array detection. The different experimental parameters of the MS-SBSE, such as sample volume, selection of the desorption solvent, desorption volume, desorption time, sample solution pH, salt effect and extraction time were studied. Under the optimum conditions, the detection limits were between 0.02 and 0.34μgL(-1). Relative standard deviations (RSD) of the method for the analytes at 10μgL(-1) concentration level ranged from 3.5% to 4.1% (as intra-day RSD) and from 3.9% to 4.3% (as inter-day RSD at 50μgL(-1) concentration level). Batch-to-batch reproducibility for three different stir bars was 4.6-5.1%. The enrichment factors were between 30 and 49. In order to investigate the capability of the developed technique for real sample analysis, well water, wastewater and leachates from a solid waste treatment plant were satisfactorily analyzed. PMID:27062720

  15. Automated extraction and classification of time-frequency contours in humpback vocalizations.

    Science.gov (United States)

    Ou, Hui; Au, Whitlow W L; Zurk, Lisa M; Lammers, Marc O

    2013-01-01

    A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed low probability of false alarm (0%-4%) under noisy environments with small boat vessels and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Channel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types. PMID:23297903

  16. Feature extraction and recognition for rolling element bearing fault utilizing short-time Fourier transform and non-negative matrix factorization

    Science.gov (United States)

    Gao, Huizhong; Liang, Lin; Chen, Xiaoguang; Xu, Guanghua

    2015-01-01

    Due to the non-stationary characteristics of vibration signals acquired from a rolling element bearing fault, time-frequency analysis is often applied to describe the local information of these unstable signals. However, it is difficult to classify the high-dimensional feature matrix directly because its dimensions are too large for many classifiers. This paper combines the concepts of time-frequency distribution (TFD) with non-negative matrix factorization (NMF), and proposes a novel TFD matrix factorization method to enhance the representation and identification of bearing faults. In this method, the TFD of a vibration signal is first computed with the short-time Fourier transform (STFT) to describe the localized faults. Then, a supervised NMF mapping is adopted to extract the fault features from the TFD. Meanwhile, the fault samples can be clustered and recognized automatically by using the clustering property of NMF. The proposed method takes advantage of NMF in parts-based representation and adaptive clustering, and the localized fault features of interest can be extracted as well. To evaluate the performance of the proposed method, experiments on 9 kinds of bearing fault are performed on a test bench. The proposed method can effectively identify the fault severity and different fault types. Moreover, in comparison with the artificial neural network (ANN), NMF yields 99.3% mean accuracy, which is much superior to ANN. This research presents a simple and practical resolution of the fault diagnosis problem of rolling element bearings in a high-dimensional feature space.
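
    A compact sketch of the STFT-magnitude-plus-NMF idea using SciPy and scikit-learn on a synthetic impulsive signal; it shows the factorization only, not the supervised NMF mapping or the clustering-based recognition described in the paper, and all parameters are placeholders.

      import numpy as np
      from scipy.signal import stft
      from sklearn.decomposition import NMF

      fs = 8192
      t = np.arange(0, 2.0, 1.0 / fs)
      rng = np.random.default_rng(0)
      # synthetic vibration: a 1 kHz resonance excited by periodic impacts, plus noise
      signal = 0.3 * rng.standard_normal(t.size)
      signal += np.sin(2 * np.pi * 1000 * t) * (np.mod(t, 0.05) < 0.002)

      # 1) time-frequency distribution: STFT magnitude (non-negative by construction)
      f, tt, Z = stft(signal, fs=fs, nperseg=256, noverlap=128)
      V = np.abs(Z)                      # shape: (n_freqs, n_frames)

      # 2) non-negative matrix factorization of the TFD
      nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
      W = nmf.fit_transform(V)           # spectral basis vectors (n_freqs x k)
      H = nmf.components_                # time activations        (k x n_frames)

      # the basis vector with most energy near 1 kHz is the fault-related component
      band = (f > 800) & (f < 1200)
      fault_comp = int(np.argmax(W[band].sum(axis=0)))
      print("fault-related NMF component:", fault_comp)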

  17. Feature Extraction and Recognition for Rolling Element Bearing Fault Utilizing Short-Time Fourier Transform and Non-negative Matrix Factorization

    Institute of Scientific and Technical Information of China (English)

    GAO Huizhong; LIANG Lin; CHEN Xiaoguang; XU Guanghua

    2015-01-01

    Due to the non-stationary characteristics of vibration signals acquired from a rolling element bearing fault, time-frequency analysis is often applied to describe the local information of these unstable signals. However, it is difficult to classify the high-dimensional feature matrix directly because its dimensions are too large for many classifiers. This paper combines the concepts of time-frequency distribution (TFD) with non-negative matrix factorization (NMF), and proposes a novel TFD matrix factorization method to enhance the representation and identification of bearing faults. In this method, the TFD of a vibration signal is first computed with the short-time Fourier transform (STFT) to describe the localized faults. Then, a supervised NMF mapping is adopted to extract the fault features from the TFD. Meanwhile, the fault samples can be clustered and recognized automatically by using the clustering property of NMF. The proposed method takes advantage of NMF in parts-based representation and adaptive clustering, and the localized fault features of interest can be extracted as well. To evaluate the performance of the proposed method, experiments on 9 kinds of bearing fault are performed on a test bench. The proposed method can effectively identify the fault severity and different fault types. Moreover, in comparison with the artificial neural network (ANN), NMF yields 99.3% mean accuracy, which is much superior to ANN. This research presents a simple and practical resolution of the fault diagnosis problem of rolling element bearings in a high-dimensional feature space.

  18. Automated DEM extraction in digital aerial photogrammetry: precisions and validation for mass movement monitoring

    Directory of Open Access Journals (Sweden)

    A. Pesci

    2005-06-01

    Automated procedures for photogrammetric image processing and Digital Elevation Model (DEM) extraction yield high-precision terrain models in a short time, reducing manual editing; their accuracy is strictly related to image quality and terrain features. After an analysis of the performance of the Digital Photogrammetric Workstation (DPW 770 Helava), the paper compares DEMs derived from different surveys and registered in the same reference system. In the case of stable areas, the distribution of height residuals and their mean and standard deviation values indicate that the theoretical accuracy is achievable automatically when the terrain is characterized by regular morphology. Steep slopes, corrugated surfaces, vegetation and shadows can degrade the results even if manual editing procedures are applied. The comparison of multi-temporal DEMs over unstable areas allows the monitoring of surface deformation and morphological changes.
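
    The comparison step (height residuals between co-registered multi-temporal DEMs over a stable area) can be illustrated with a few lines of NumPy on synthetic grids; grid values, noise levels and the change threshold are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      dem_t1 = 500 + rng.normal(0.0, 0.05, size=(300, 300))        # epoch 1, metres
      dem_t2 = dem_t1 + rng.normal(0.0, 0.05, size=(300, 300))      # epoch 2, co-registered
      dem_t2[100:150, 100:150] -= 0.8                                # simulated mass movement

      residuals = dem_t2 - dem_t1
      stable = np.ones_like(residuals, dtype=bool)
      stable[100:150, 100:150] = False                               # mask of known stable terrain

      print("stable area: mean = %.3f m, std = %.3f m" %
            (residuals[stable].mean(), residuals[stable].std()))
      moving = np.abs(residuals) > 3 * residuals[stable].std()       # flag significant change
      print("cells flagged as deformation: %d" % int(moving.sum()))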

  19. Sequential Chromospheric Brightening: An Automated Approach to Extracting Physics from Ephemeral Brightening

    CERN Document Server

    Kirk, Michael S; Jackiewicz, Jason; McAteer, R T James; McNamara, Bernie J

    2012-01-01

    We make a comparison between small scale chromospheric brightenings and energy release processes through examining the temporal evolution of sequential chromospheric brightenings (SCBs), derive propagation velocities, and propose a connection of the small-scale features to solar flares. Our automated routine detects and distinguishes three separate types of brightening regularly observed in the chromosphere: plage, flare ribbon, and point brightenings. By studying their distinct dynamics, we separate out the flare-associated bright points commonly known as SCBs and identify a propagating Moreton wave. Superimposing our detections on complementary off-band images, we extract a Doppler velocity measurement beneath the point brightening locations. Using these dynamic measurements, we put forward a connection between point brightenings, the erupting flare, and overarching magnetic loops. A destabilization of the pre-flare loop topology by the erupting flare directly leads to the SCBs observed.

  20. Automated data extraction from in situ protein stable isotope probing studies

    Energy Technology Data Exchange (ETDEWEB)

    Slysz, Gordon W.; Steinke, Laurey A.; Ward, David M.; Klatt, Christian G.; Clauss, Therese RW; Purvine, Samuel O.; Payne, Samuel H.; Anderson, Gordon A.; Smith, Richard D.; Lipton, Mary S.

    2014-01-27

    Protein stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism, a key application will be in situ studies of microbial communities under conditions that result in small degrees of partial labeling. One hurdle restricting large scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large scale extraction and visualization of data from short term (3 h) protein-SIP experiments performed in situ on Yellowstone phototrophic bacterial mats. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification.

  1. Streamlining DNA barcoding protocols: automated DNA extraction and a new cox1 primer in arachnid systematics.

    Directory of Open Access Journals (Sweden)

    Nina Vidergar

    BACKGROUND: DNA barcoding is a popular tool in taxonomic and phylogenetic studies, but for most animal lineages protocols for obtaining the barcoding sequences--mitochondrial cytochrome c oxidase subunit I (cox1, also known as CO1)--are not standardized. Our aim was to explore an optimal strategy for arachnids, focusing on the species-richest lineage, spiders, by (1) improving an automated DNA extraction protocol, (2) testing the performance of commonly used primer combinations, and (3) developing a new cox1 primer suitable for more efficient alignment and phylogenetic analyses. METHODOLOGY: We used exemplars of 15 species from all major spider clades, processed a range of spider tissues of varying size and quality, optimized genomic DNA extraction using the MagMAX Express magnetic particle processor (an automated high-throughput DNA extraction system), and tested cox1 amplification protocols emphasizing the standard barcoding region using ten routinely employed primer pairs. RESULTS: The best results were obtained with the commonly used Folmer primers (LCO1490/HCO2198) that capture the standard barcode region, and with the C1-J-2183/C1-N-2776 primer pair that amplifies its extension. However, C1-J-2183 is designed too close to HCO2198 for well-interpreted, continuous sequence data, and in practice the resulting sequences from the two primer pairs rarely overlap. We therefore designed a new forward primer C1-J-2123, 60 base pairs upstream of the C1-J-2183 binding site. The success rate of this new primer (93%) matched that of C1-J-2183. CONCLUSIONS: The use of C1-J-2123 allows full, indel-free overlap of sequences obtained with the standard Folmer primers and with the C1-J-2123 primer pair. Our preliminary tests suggest that in addition to spiders, C1-J-2123 will also perform well in other arachnids and several other invertebrates. We provide optimal PCR protocols for these primer sets, and recommend using them for systematic efforts beyond DNA barcoding.

  2. Development of an automated sequential injection on-line solvent extraction-back extraction procedure as demonstrated for the determination of cadmium with detection by electrothermal atomic absorption spectrometry

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2002-01-01

    An automated sequential injection (SI) on-line solvent extraction-back extraction separation/preconcentration procedure is described. Demonstrated for the assay of cadmium by electrothermal atomic absorption spectrometry (ETAAS), the analyte is initially complexed with ammonium...

  3. A Wavelet Based Multiscale Weighted Permutation Entropy Method for Sensor Fault Feature Extraction and Identification

    OpenAIRE

    Qiaoning Yang; Jianlin Wang

    2016-01-01

    Sensor is the core module in signal perception and measurement applications. Due to the harsh external environment, aging, and so forth, sensor easily causes failure and unreliability. In this paper, three kinds of common faults of single sensor, bias, drift, and stuck-at, are investigated. And a fault diagnosis method based on wavelet permutation entropy is proposed. It takes advantage of the multiresolution ability of wavelet and the internal structure complexity measure of permutation entr...

  4. Automated Extraction and Mapping for Desert Wadis from Landsat Imagery in Arid West Asia

    Directory of Open Access Journals (Sweden)

    Yongxue Liu

    2016-03-01

    Wadis, ephemeral dry rivers in arid desert regions that contain water in the rainy season, are often manifested as braided linear channels and are of vital importance for local hydrological environments and regional hydrological management. Conventional methods for effectively delineating wadis from heterogeneous backgrounds are limited for the following reasons: (1) the occurrence of numerous morphological irregularities which disqualify methods based on physical shape; (2) inconspicuous spectral contrast with backgrounds, resulting in frequent false alarms; and (3) the extreme complexity of wadi systems, with numerous tiny tributaries characterized by spectral anisotropy, resulting in a conflict between global and local accuracy. To overcome these difficulties, an automated method for extracting wadis (AMEW) from Landsat-8 Operational Land Imagery (OLI) was developed in order to take advantage of the complementarity between Water Indices (WIs), which is a technique of mathematically combining different bands to enhance water bodies and suppress backgrounds, and image processing technologies in the morphological field involving multi-scale Gaussian matched filtering and local adaptive threshold segmentation. Evaluation of the AMEW was carried out in representative areas deliberately selected from Jordan, SW Arabian Peninsula in order to ensure a rigorous assessment. Experimental results indicate that the AMEW achieved considerably higher accuracy than other effective extraction methods in terms of visual inspection and statistical comparison, with an overall accuracy of up to 95.05% for the entire area. In addition, the AMEW (based on the New Water Index (NWI)) achieved higher accuracy than other methods (the maximum likelihood classifier and the support vector machine classifier) used for bulk wadi extraction.
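
    A rough sketch of the index-plus-filtering idea on synthetic bands using scikit-image; a generic normalized-difference index, Gaussian smoothing and a local adaptive threshold stand in for the New Water Index, the multi-scale Gaussian matched filter and the exact AMEW segmentation, none of which are reproduced here.

      import numpy as np
      from skimage.filters import gaussian, threshold_local

      rng = np.random.default_rng(0)
      rows, cols = 256, 256
      green = rng.normal(0.30, 0.03, size=(rows, cols))
      nir = rng.normal(0.40, 0.03, size=(rows, cols))
      # synthetic narrow wadi channel: wetter pixels (higher green, lower NIR) along a curve
      c = np.arange(cols)
      channel_rows = (100 + 30 * np.sin(c / 40.0)).astype(int)
      green[channel_rows, c] += 0.15
      nir[channel_rows, c] -= 0.15

      # 1) normalized-difference water-like index (placeholder for the NWI)
      index = (green - nir) / (green + nir + 1e-9)

      # 2) smoothing at a channel-like scale (placeholder for Gaussian matched filtering)
      smoothed = gaussian(index, sigma=1.5)

      # 3) local adaptive threshold segmentation
      local_thr = threshold_local(smoothed, block_size=51, offset=-0.02)
      wadi_mask = smoothed > local_thr
      print("pixels classified as wadi:", int(wadi_mask.sum()))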

  5. Californian demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data

    Science.gov (United States)

    Yan, L.; Roy, D. P.

    2013-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and for economic planning. To date, field objects have not been extracted from satellite data over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. We present a fully automated computational methodology to extract agricultural fields from 30m Web Enabled Landsat data (WELD) time series and results for approximately 250,000 square kilometers (eleven 150 x 150 km WELD tiles) encompassing all the major agricultural areas of California. The extracted fields, including rectangular, circular, and irregularly shaped fields, are evaluated by comparison with manually interpreted Landsat field objects. Validation results are presented in terms of standard confusion matrix accuracy measures and also the degree of field object over-segmentation, under-segmentation, fragmentation and shape distortion. The apparent success of the presented field extraction methodology is due to several factors. First, the use of multi-temporal Landsat data, as opposed to single Landsat acquisitions, that enables crop rotations and inter-annual variability in the state of the vegetation to be accommodated for and provides more opportunities for cloud-free, non-missing and atmospherically uncontaminated surface observations. Second, the adoption of an object based approach, namely the variational region-based geometric active contour method that enables robust segmentation with only a small number of parameters and that requires no training data collection. Third, the use of a watershed algorithm to decompose connected segments belonging to multiple fields into coherent isolated field segments and a geometry based algorithm to detect and associate parts of

  6. Semi-automated procedures for shoreline extraction using single RADARSAT-1 SAR image

    Science.gov (United States)

    Al Fugura, A.'kif; Billa, Lawal; Pradhan, Biswajeet

    2011-12-01

    Coastline identification is important for surveying and mapping reasons. The coastline serves as the basic point of reference and is used on nautical charts for navigation purposes. Its delineation has become crucial and more important in the wake of the many recent earthquakes and tsunamis resulting in complete change and redrawing of some shorelines. In a tropical country like Malaysia, the presence of cloud cover hinders the application of optical remote sensing data. In this study a semi-automated technique and procedures are presented for shoreline delineation from a RADARSAT-1 image. A RADARSAT-1 satellite scene was processed using an enhanced filtering technique to identify and extract the shoreline coast of Kuala Terengganu, Malaysia. RADARSAT imagery has many advantages over optical data because of its ability to penetrate cloud cover and its night sensing capabilities. At first, speckles were removed from the image by using a Lee sigma filter, which reduced random noise, enhanced the image and discriminated the boundary between land and water. The results showed an accurate and improved extraction and delineation of the entire coastline of Kuala Terengganu. The study demonstrated the reliability of the image averaging filter in reducing random noise over the sea surface, especially near the shoreline. It enhanced land-water boundary differentiation, enabling better delineation of the shoreline. Overall, the developed techniques showed the potential of radar imagery for accurate shoreline mapping and will be useful for monitoring shoreline changes during high and low tides as well as shoreline erosion in a tropical country like Malaysia.
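
    A simplified analogue of a speckle-filter / threshold / contour-tracing chain on a synthetic amplitude image, using SciPy and scikit-image; a median filter stands in for the Lee sigma filter (which is not provided by these libraries) and Otsu thresholding replaces the paper's enhanced filtering, so this is not the published procedure.

      import numpy as np
      from scipy.ndimage import median_filter
      from skimage.filters import threshold_otsu
      from skimage.measure import find_contours

      rng = np.random.default_rng(0)
      rows, cols = 256, 256
      img = np.empty((rows, cols))
      # synthetic SAR-like scene: dark, speckled sea (left) and brighter land (right)
      coast = 120 + (20 * np.sin(np.arange(rows) / 30.0)).astype(int)
      for r in range(rows):
          img[r, :coast[r]] = rng.gamma(shape=1.0, scale=0.2, size=coast[r])         # sea
          img[r, coast[r]:] = rng.gamma(shape=4.0, scale=0.3, size=cols - coast[r])  # land

      # 1) speckle suppression (median filter as a stand-in for the Lee sigma filter)
      filtered = median_filter(img, size=5)

      # 2) land/water segmentation by global thresholding
      binary = filtered > threshold_otsu(filtered)

      # 3) the land-water boundary as the longest contour of the binary mask
      contours = find_contours(binary.astype(float), level=0.5)
      shoreline = max(contours, key=len)
      print("shoreline traced with %d vertices" % len(shoreline))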

  7. Automated centreline extraction of neuronal dendrite from optical microscopy image stacks

    Science.gov (United States)

    Xiao, Liang; Zhang, Fanbiao

    2010-11-01

    In this work we present a novel vision-based pipeline for automated skeleton detection and centreline extraction of neuronal dendrites from optical microscopy image stacks. The proposed pipeline is an integrated solution that merges image stack pre-processing, seed point detection, a ridge traversal procedure, minimum spanning tree optimization and tree trimming into a unified framework to deal with this challenging problem. In image stack preprocessing, we first apply a curvelet transform based shrinkage and cycle spinning technique to remove the noise. This is followed by an adaptive threshold method to compute the neuronal object segmentation, and a 3D distance transformation is performed to get the distance map. According to the eigenvalues and eigenvectors of the Hessian matrix, the skeleton seed points are detected. Starting from the seed points, the initial centrelines are obtained using the ridge traversal procedure. After that, we use a minimum spanning tree to organize the geometrical structure of the skeleton points, and then we use graph trimming post-processing to compute the final centreline. Experimental results on different datasets demonstrate that our approach has high reliability, good robustness and requires less user interaction.

  8. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    Science.gov (United States)

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
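
    The Laplacian of Gaussian step mentioned above is available directly in SciPy; the snippet below is a generic illustration of applying it to a low-SNR synthetic image and omits the graphical-model stages, so it is not the NIA code.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      rng = np.random.default_rng(0)
      img = rng.normal(0.0, 0.3, size=(128, 128))        # noisy background (low SNR)
      img[64, 20:110] += 1.0                              # faint neurite-like line

      # Laplacian of Gaussian: blob/ridge-sensitive response; sigma sets the feature scale
      log_response = -gaussian_laplace(img, sigma=2.0)    # negate so bright structures are positive
      candidates = log_response > log_response.mean() + 3 * log_response.std()
      print("candidate neurite pixels:", int(candidates.sum()))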

  9. Modeling interseismic deformation field of North Tehran Fault extracted from precise leveling observation

    Science.gov (United States)

    Amighpey, Masoome; Voosoghi, Behzad; Arabi, Siyavash

    2016-06-01

    The North Tehran Fault (NTF) stands out as a major active thrust fault running for approximately 110 km north of Tehran, the capital province of Iran. It has been the source of several major historical earthquakes, including those in 958, 1665, and 1830. In this paper, interseismic strain accumulation on the NTF was investigated using precise leveling measurements obtained over the time frame 1997-2005. The relationship between the surface deformation field and interseismic deformation models was evaluated using simulated annealing optimization in a Bayesian framework. The results show that the NTF follows an elastic dislocation model, creeping at a rate of 2.5 ± 0.06 mm/year in the eastern part and 6.2 ± 0.04 mm/year in the western part. Moreover, the locking depth of the fault was evaluated to be ± 1.1 km in the eastern part and 1.3 ± 0.2 km in the western part.
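
    For context, interseismic surface velocity across a buried creeping fault is commonly modelled with the classic elastic screw-dislocation (arctangent) profile v(x) = (s/pi)*arctan(x/D), where s is the slip rate and D the locking depth. The sketch below fits that generic profile to synthetic leveling-style velocities with SciPy; it is a simplified stand-in, not the Bayesian simulated-annealing inversion used in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def interseismic_profile(x_km, slip_rate, locking_depth_km):
          # surface velocity (mm/yr) vs. distance from the fault (km), screw-dislocation model
          return (slip_rate / np.pi) * np.arctan(x_km / locking_depth_km)

      # synthetic "observed" velocities: 6 mm/yr slip rate, 1.5 km locking depth, plus noise
      x = np.linspace(-30.0, 30.0, 40)                    # benchmark distances from the fault
      rng = np.random.default_rng(0)
      v_obs = interseismic_profile(x, 6.0, 1.5) + rng.normal(0.0, 0.3, x.size)

      popt, pcov = curve_fit(interseismic_profile, x, v_obs, p0=[5.0, 5.0])
      perr = np.sqrt(np.diag(pcov))
      print("slip rate = %.2f +/- %.2f mm/yr, locking depth = %.2f +/- %.2f km"
            % (popt[0], perr[0], popt[1], perr[1]))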

  10. Feature Extraction Method for High Impedance Ground Fault Localization in Radial Power Distribution Networks

    DEFF Research Database (Denmark)

    Jensen, Kåre Jean; Munk, Steen M.; Sørensen, John Aasted

    1998-01-01

    A new approach to the localization of high impedance ground faults in compensated radial power distribution networks is presented. The total size of such networks is often very large and a major part of the monitoring of these is carried out manually. The increasing complexity of industrial...

  11. Rapid and Semi-Automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks

    Science.gov (United States)

    Holcomb, Paul S.; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes. PMID:27259933

  12. An Automated Graphical User Interface based System for the Extraction of Retinal Blood Vessels using Kirsch’s Template

    OpenAIRE

    Joshita Majumdar; Souvik Tewary; Shreyosi Chakraborty; Debasish Kundu; Sudipta Ghosh; Sauvik Das Gupta

    2015-01-01

    The assessment of blood vessel networks plays an important role in a variety of medical disorders. The diagnosis of Diabetic Retinopathy (DR) and its repercussions, including microaneurysms, haemorrhages, hard exudates and cotton wool spots, is one such field. This study aims to develop an automated system for the extraction of blood vessels from retinal images by employing Kirsch’s Templates in a MATLAB-based Graphical User Interface (GUI). Here, an RGB or grey image of the retina (Fundus Phot...
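
    A small sketch of vessel-edge enhancement with the eight Kirsch compass templates, using NumPy and SciPy on a synthetic green-channel image; the GUI, the preprocessing and the thresholds of the described MATLAB system are not reproduced.

      import numpy as np
      from scipy.ndimage import convolve

      def kirsch_kernels():
          # generate the 8 Kirsch compass kernels by rotating the outer ring of the base mask
          ring_positions = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
          base_ring = [5, 5, 5, -3, -3, -3, -3, -3]
          kernels = []
          for shift in range(8):
              k = np.zeros((3, 3))
              for (r, c), v in zip(ring_positions, np.roll(base_ring, shift)):
                  k[r, c] = v
              kernels.append(k)
          return kernels

      rng = np.random.default_rng(0)
      green = rng.normal(0.5, 0.05, size=(128, 128))      # green channel of a fundus-like image
      green[:, 60:63] -= 0.2                               # dark vessel-like stripe

      # maximum response over the 8 compass directions highlights vessel edges
      responses = np.stack([convolve(green, k) for k in kirsch_kernels()])
      edge_strength = responses.max(axis=0)
      vessel_edges = edge_strength > np.percentile(edge_strength, 98)
      print("edge pixels detected:", int(vessel_edges.sum()))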

  13. Extraction of Citrus Hystrix D.C. (Kaffir Lime) Essential Oil Using Automated Steam Distillation Process: Analysis of Volatile Compounds

    International Nuclear Information System (INIS)

    An automated steam distillation was successfully used to extract volatiles from Citrus hystrix D.C. (Kaffir lime) peels. Automated steam distillation integrated with robust temperature control can commercially produce large amounts of essential oil with an efficient heating system. The objective of this study is to quantify the oil production rate using automated steam distillation and to analyze the composition of volatiles in Kaffir lime peel oil under controlled and uncontrolled temperature conditions. In the experiments, oil extraction from Kaffir lime peels took less than 3 hours, and the oil yield was 13.4 % higher under controlled temperature than under uncontrolled temperature. The major compounds identified in Kaffir lime peel oil were sabinene, β-pinene, limonene, α-pinene, camphene, myrcene, terpinen-4-ol, α-terpineol, linalool, terpinolene and citronellal, which are considered to have good organoleptic quality. Under uncontrolled temperature, in contrast, oil analysis revealed that some important volatile compounds, such as terpinolene, linalool and terpinen-4-ol, were absent due to thermal degradation caused by fast heating of the extracted material. (author)

  14. ESR studies on quartz extracted from shallow fault gouges related to the ms 8.0 Wenchuan earthquake - China - implications for ESR signal resetting in quaternary faults

    OpenAIRE

    Liu, Chun-Ru; Yin, Gong-Ming; Zhou, Yong-Sheng; Gao, Lu; Han, Fei; Li, Jian-ping

    2014-01-01

    ESR dating of the most recent fault activity through quartz signal measurement is based on the assumption that the ESR signal experienced zero resetting during the faulting event. However, several laboratory experiments implied that only partial zeroing of quartz ESR signals was possible. In order to verify whether the signal resetting could be complete under natural conditions, we analyzed quartz recovered from fault gouges after the 2008 Ms 8.0 Wenchuan earthquake. The quartz E’ and Al cent...

  15. Comparative evaluation of commercially available manual and automated nucleic acid extraction methods for rotavirus RNA detection in stools.

    Science.gov (United States)

    Esona, Mathew D; McDonald, Sharla; Kamili, Shifaq; Kerin, Tara; Gautam, Rashi; Bowen, Michael D

    2013-12-01

    Rotaviruses are a major cause of viral gastroenteritis in children. For accurate and sensitive detection of rotavirus RNA from stool samples by reverse transcription-polymerase chain reaction (RT-PCR), the extraction process must be robust. However, some extraction methods may not remove the strong RT-PCR inhibitors known to be present in stool samples. The objective of this study was to evaluate and compare the performance of six extraction methods used commonly for extraction of rotavirus RNA from stool, which have never been formally evaluated: the MagNA Pure Compact, KingFisher Flex and NucliSENS easyMAG instruments, the NucliSENS miniMAG semi-automated system, and two manual purification kits, the QIAamp Viral RNA kit and a modified RNaid kit. Using each method, total nucleic acid or RNA was extracted from eight rotavirus-positive stool samples with enzyme immunoassay optical density (EIA OD) values ranging from 0.176 to 3.098. Extracts prepared using the MagNA Pure Compact instrument yielded the most consistent results by qRT-PCR and conventional RT-PCR. When extracts prepared from a dilution series were extracted by the 6 methods and tested, rotavirus RNA was detected in all samples by qRT-PCR but by conventional RT-PCR testing, only the MagNA Pure Compact and KingFisher Flex extracts were positive in all cases. RT-PCR inhibitors were detected in extracts produced with the QIAamp Viral RNA Mini kit. The findings of this study should prove useful for selection of extraction methods to be incorporated into future rotavirus detection and genotyping protocols. PMID:24036075

  16. Fully Automated Electro Membrane Extraction Autosampler for LC-MS Systems Allowing Soft Extractions for High-Throughput Applications

    DEFF Research Database (Denmark)

    Fuchs, David; Pedersen-Bjergaard, Stig; Jensen, Henrik;

    2016-01-01

    , analytes were extracted into the lumen of the extraction probe and transferred to a LC-MS system for further analysis. Various parameters affecting extraction efficacy were investigated including syringe fill strokes, syringe pull up volume, pull up delay and volume in the sample vial. The system...

  17. Mixed-mode isolation of triazine metabolites from soil and aquifer sediments using automated solid-phase extraction

    Science.gov (United States)

    Mills, M.S.; Thurman, E.M.

    1992-01-01

    Reversed-phase isolation and ion-exchange purification were combined in the automated solid-phase extraction of two polar s-triazine metabolites, 2-amino-4-chloro-6-(isopropylamino)-s-triazine (deethylatrazine) and 2-amino-4-chloro-6-(ethylamino)-s-triazine (deisopropylatrazine), from clay-loam and silt-loam soils and sandy aquifer sediments. First, methanol/water (4/1, v/v) soil extracts were transferred to an automated workstation following evaporation of the methanol phase for the rapid reversed-phase isolation of the metabolites on an octadecyl resin (C18). The retention of the triazine metabolites on C18 decreased substantially when trace methanol concentrations (1%) remained. Furthermore, the retention on C18 increased with decreasing aqueous solubility and increasing alkyl-chain length of the metabolites and parent herbicides, indicating a reversed-phase interaction. The analytes were eluted with ethyl acetate, which left much of the soil organic-matter impurities on the resin. Second, the small-volume organic eluate was purified on an anion-exchange resin (0.5 mL/min) to extract the remaining soil pigments that could foul the ion source of the GC/MS system. Recoveries of the analytes were 75%, using deuterated atrazine as a surrogate, and were comparable to recoveries by Soxhlet extraction. The detection limit was 0.1 μg/kg with a coefficient of variation of 15%. The ease and efficiency of this automated method make it a viable, practical technique for studying triazine metabolites in the environment.

  18. The ValleyMorph Tool: An automated extraction tool for transverse topographic symmetry (T-) factor and valley width to valley height (Vf-) ratio

    Science.gov (United States)

    Daxberger, Heidi; Dalumpines, Ron; Scott, Darren M.; Riller, Ulrich

    2014-09-01

    In tectonically active regions on Earth, shallow-crustal deformation associated with seismic hazards may pose a threat to human life and property. The study of landform development, such as analysis of the valley width to valley height ratio (Vf-ratio) and the Transverse Topographic Symmetry Factor (T-factor), delineating drainage basin symmetry, can be used as a relative measure of tectonic activity along fault-bound mountain fronts. The fast evolution of digital elevation models (DEM) provides an ideal base for remotely sensed tectonomorphic studies of large areas using Geographical Information Systems (GIS). However, manual extraction of the above-mentioned morphologic parameters may be tedious and very time-consuming. Moreover, basic GIS software suites do not provide the necessary built-in functions. Therefore, we present a newly developed, Python-based, ESRI ArcGIS-compatible tool and stand-alone script, the ValleyMorph Tool. This tool facilitates automated extraction of the Vf-ratio and the T-factor data for large regions. Using a digital elevation raster and watershed polygon files as input, the tool provides output in the form of several ArcGIS data tables and shapefiles, ideal for further data manipulation and computation. This implementation enables easy adoption within the ArcGIS user community and conversion of the code to earlier ArcGIS versions. The ValleyMorph Tool is easy to use due to a simple graphical user interface. The tool is tested for the southern Central Andes using a total of 3366 watersheds.
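
    The Vf-ratio itself is a simple expression; below is a minimal Python sketch assuming the standard definition of the valley floor width to valley height ratio. This is not the ValleyMorph Tool's own code, and the variable names are illustrative.

```python
# Minimal sketch of the valley width to valley height (Vf) ratio, assuming the
# standard definition Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc)).
# Variable names are illustrative, not taken from the ValleyMorph Tool.

def vf_ratio(valley_floor_width, left_divide_elev, right_divide_elev, valley_floor_elev):
    """Valley floor width to valley height ratio.

    valley_floor_width : width of the valley floor (m)
    left_divide_elev   : elevation of the left valley divide (m)
    right_divide_elev  : elevation of the right valley divide (m)
    valley_floor_elev  : elevation of the valley floor (m)
    """
    relief = (left_divide_elev - valley_floor_elev) + (right_divide_elev - valley_floor_elev)
    return 2.0 * valley_floor_width / relief

# Example: a 400 m wide valley floor at 1200 m, flanked by divides at 1800 m and 1750 m
print(vf_ratio(400.0, 1800.0, 1750.0, 1200.0))  # ~0.70, a relatively V-shaped valley
```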

  19. Time-frequency manifold for nonlinear feature extraction in machinery fault diagnosis

    Science.gov (United States)

    He, Qingbo

    2013-02-01

    Time-frequency feature is beneficial to representation of non-stationary signals for effective machinery fault diagnosis. The time-frequency distribution (TFD) is a major tool to reveal the synthetic time-frequency pattern. However, the TFD will also face noise corruption and dimensionality reduction issues in engineering applications. This paper proposes a novel nonlinear time-frequency feature based on a time-frequency manifold (TFM) technique. The new TFM feature is generated by mainly addressing manifold learning on the TFDs in a reconstructed phase space. It combines the non-stationary information and the nonlinear information of analyzed signals, and hence exhibits valuable properties. Specifically, the new feature is a quantitative low-dimensional representation, and reveals the intrinsic time-frequency pattern related to machinery health, which can effectively overcome the effects of noise and condition variance issues in sampling signals. The effectiveness and the merits of the proposed TFM feature are confirmed by case study on gear wear diagnosis, bearing defect identification and defect severity evaluation. Results show the value and potential of the new feature in machinery fault pattern representation and classification.
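
    As a loose illustration of the general idea of combining a time-frequency distribution with manifold learning (a simplified stand-in, not the paper's TFM algorithm, applied to a synthetic signal), one might write:

```python
# Loose illustration (not the paper's TFM algorithm): compute a time-frequency
# distribution with the STFT and apply manifold learning (Isomap) to its time slices
# to obtain a low-dimensional time-frequency representation of a vibration signal.
import numpy as np
from scipy.signal import stft
from sklearn.manifold import Isomap

fs = 5000.0
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic "faulty bearing" signal: resonance bursts riding on a shaft harmonic plus noise
signal = (np.sin(2 * np.pi * 60 * t)
          + 0.5 * (np.sin(2 * np.pi * 7 * t) > 0.95) * np.sin(2 * np.pi * 900 * t)
          + 0.2 * np.random.default_rng(0).normal(size=t.size))

f, tt, Z = stft(signal, fs=fs, nperseg=256)      # time-frequency distribution (complex)
tfd = np.abs(Z).T                                # one row per time slice

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(tfd)
print(embedding.shape)                           # (n_time_slices, 2) low-dimensional feature
```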

  20. Field-scale validation of an automated soil nitrate extraction and measurement system

    NARCIS (Netherlands)

    Sibley, K.J.; Astatkie, T.; Brewster, G.; Struik, P.C.; Adsett, J.F.; Pruski, K.

    2009-01-01

    One of the many gaps that need to be addressed by precision agriculture technologies is the availability of an economic, automated, on-the-go mapping system that can be used to obtain intensive and accurate ‘real-time’ data on the levels of nitrate nitrogen (NO3–N) in the soil. A soil nitrate mapping

  1. Screening for Anabolic Steroids in Urine of Forensic Cases Using Fully Automated Solid Phase Extraction and LC–MS-MS

    DEFF Research Database (Denmark)

    Andersen, David Wederkinck; Linnet, Kristian

    2014-01-01

    A screening method for 18 frequently measured exogenous anabolic steroids and the testosterone/epitestosterone (T/E) ratio in forensic cases has been developed and validated. The method involves a fully automated sample preparation including enzyme treatment, addition of internal standards...... and solid phase extraction followed by analysis by liquid chromatography-tandem mass spectrometry (LC-MS-MS) using electrospray ionization with adduct formation for two compounds. Urine samples from 580 forensic cases were analyzed to determine the T/E ratio and occurrence of exogenous anabolic steroids....... Extraction recoveries ranged from 77 to 95%, matrix effects from 48 to 78%, overall process efficiencies from 40 to 54% and the lower limit of identification ranged from 2 to 40 ng/mL. In the 580 urine samples analyzed from routine forensic cases, 17 (2.9%) were found positive for one or more anabolic...

  2. Sequential automated fusion/extraction chromatography methodology for the dissolution of uranium in environmental samples for mass spectrometric determination

    International Nuclear Information System (INIS)

    An improved methodology has been developed, based on dissolution by automated fusion followed by extraction chromatography, for the detection and quantification of uranium in environmental matrices by mass spectrometry. A rapid fusion protocol (2/LiBr melts were used. The use of an M4 fusion unit also improved repeatability in sample preparation over muffle furnace fusion. Instrumental issues originating from the presence of high salt concentrations in the digestate after lithium metaborate fusion were also mitigated using an extraction chromatography (EXC) protocol aimed at removing lithium and interfering matrix constituents prior to the elution of uranium. The sequential methodology, which can be performed simultaneously on three samples, requires less than 20 min per sample for fusion and separation. It was successfully coupled to inductively coupled plasma mass spectrometry (ICP-MS), achieving detection limits below 100 pg kg-1 for 5-300 mg of sample.

  3. Screening for anabolic steroids in urine of forensic cases using fully automated solid phase extraction and LC-MS-MS.

    Science.gov (United States)

    Andersen, David W; Linnet, Kristian

    2014-01-01

    A screening method for 18 frequently measured exogenous anabolic steroids and the testosterone/epitestosterone (T/E) ratio in forensic cases has been developed and validated. The method involves a fully automated sample preparation including enzyme treatment, addition of internal standards and solid phase extraction followed by analysis by liquid chromatography-tandem mass spectrometry (LC-MS-MS) using electrospray ionization with adduct formation for two compounds. Urine samples from 580 forensic cases were analyzed to determine the T/E ratio and occurrence of exogenous anabolic steroids. Extraction recoveries ranged from 77 to 95%, matrix effects from 48 to 78%, overall process efficiencies from 40 to 54% and the lower limit of identification ranged from 2 to 40 ng/mL. In the 580 urine samples analyzed from routine forensic cases, 17 (2.9%) were found positive for one or more anabolic steroids. Only seven different steroids including testosterone were found in the material, suggesting that only a small number of common steroids are likely to occur in a forensic context. The steroids were often in high concentrations (>100 ng/mL), and a combination of steroids and/or other drugs of abuse were seen in the majority of cases. The method presented serves as a fast and automated screening procedure, proving the suitability of LC-MS-MS for analyzing anabolic steroids.

  4. Automated on-line liquid-liquid extraction system for temporal mass spectrometric analysis of dynamic samples.

    Science.gov (United States)

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L

    2015-09-24

    Most real samples cannot directly be infused to mass spectrometers because they could contaminate delicate parts of ion source and guides, or cause ion suppression. Conventional sample preparation procedures limit temporal resolution of analysis. We have developed an automated liquid-liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053-2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h(-1)). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. PMID:26423626
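
    The exponential fitting of the dissolution profiles can be illustrated with scipy's curve_fit; the first-order release model and the synthetic data below are assumptions for the sketch, not the authors' code.

```python
# Hedged sketch: fitting a first-order release model C(t) = C_inf * (1 - exp(-k*t))
# to a temporal extraction-MS profile. Data are synthetic; the rate constant k plays
# the role of the release rates reported in the abstract (e.g. ~0.43 1/h for ibuprofen).
import numpy as np
from scipy.optimize import curve_fit

def release(t, c_inf, k):
    return c_inf * (1.0 - np.exp(-k * t))

t = np.linspace(0, 10, 35)                                       # hours
rng = np.random.default_rng(0)
signal = release(t, 1.8, 0.43) + rng.normal(0, 0.05, t.size)     # simulated MS response

(c_inf, k), cov = curve_fit(release, t, signal, p0=(1.0, 0.1))
k_err = np.sqrt(np.diag(cov))[1]
print(f"release rate k = {k:.2f} +/- {k_err:.2f} 1/h")
```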

  5. Automated Manual Transmission Fault Diagnosis Based on Hybrid System Characteristics

    Institute of Scientific and Technical Information of China (English)

    彭建鑫; 刘海鸥; 陈慧岩

    2012-01-01

    Automated manual transmission (AMT) fault diagnosis based on hybrid system characteristics is studied. Hybrid automata are used to build a model of the AMT automatic transmission system, and the consistency between the behavior of system trajectories and the hybrid system model is analyzed. The definition of faults in the AMT automatic transmission system is analyzed, and the fault behavior of the AMT hybrid system is defined through the hybrid system model. The fault diagnosability of the hybrid system is studied on the basis of this fault behavior. The relationships between faults, fault diagnosability, fault diagnostic accuracy and measurable system state variables are explained through the system trajectories, and the concept of fault entropy is used to describe the degree of fault diagnosability. The behavior of the AMT hybrid system is analyzed and classified from the perspectives of system function, system model trajectories and control cycle to propose an AMT automatic transmission system fault diagnosis strategy. The fault diagnosis strategy, together with its diagnostic algorithm, was ported to a real vehicle platform, and extensive vehicle mileage testing proves the strategy's correctness and real-time performance.

  6. Robust Text Extraction for Automated Processing of Multi-Lingual Personal Identity Documents

    Directory of Open Access Journals (Sweden)

    Pushpa B R

    2016-04-01

    Full Text Available Text extraction is a technique to extract the textual portion from a non-textual background such as images. It plays an important role in deciphering valuable information from images. Variation in text size, font, orientation, alignment, contrast etc. makes the task of text extraction challenging. Existing text extraction methods focus on certain regions of interest, and characteristics like noise, blur, distortion and variations in fonts make text extraction difficult. This paper proposes a technique to extract textual characters from scanned personal identity document images. Current procedures keep track of user records manually and thus give way to inefficient practices and the need for abundant time and human resources. The proposed methodology digitizes personal identity documents and eliminates the need for a large portion of the manual work involved in existing data entry and verification procedures. The proposed method has been tested extensively with large datasets of varying sizes and image qualities. The results obtained indicate high accuracy in the extraction of important textual features from the document images.

  7. Toward automated parasitic extraction of silicon photonics using layout physical verifications

    Science.gov (United States)

    Ismail, Mohamed; El Shamy, Raghi S.; Madkour, Kareem; Hammouda, Sherif; Swillam, Mohamed A.

    2016-08-01

    A physical verification flow for the layout of silicon photonic circuits is suggested. Simple empirical models are developed to estimate the bend power loss and coupled power in photonic integrated circuits fabricated using standard SOI wafers. These models are utilized in the physical verification flow of the circuit layout to verify reliable fabrication using any electronic design automation tool. The models are accurate compared with electromagnetic solvers. The models are closed form and circumvent the need for any EM solver in the verification process, which dramatically reduces verification time.

  8. Automated Large Scale Parameter Extraction of Road-Side Trees Sampled by a Laser Mobile Mapping System

    Science.gov (United States)

    Lindenbergh, R. C.; Berthold, D.; Sirmacek, B.; Herrero-Huerta, M.; Wang, J.; Ebersbach, D.

    2015-08-01

    In urbanized Western Europe trees are considered an important component of the built-up environment. This also means that there is an increasing demand for tree inventories. Laser mobile mapping systems provide an efficient and accurate way to sample the 3D road surroundings, including notable roadside trees. Indeed, at, say, 50 km/h such systems collect point clouds consisting of half a million points per 100 m. Methods exist that extract tree parameters from relatively small patches of such data, but a remaining challenge is to operationally extract roadside tree parameters at regional level. For this purpose a workflow is presented as follows: the input point clouds are consecutively downsampled, retiled, classified, segmented into individual trees and upsampled to enable automated extraction of tree location, tree height, canopy diameter and trunk diameter at breast height (DBH). The workflow is implemented to work on a laser mobile mapping data set sampling 100 km of road in Sachsen, Germany, and is tested on a 7 km long stretch of road. Along this road, the method detected 315 trees that were considered well detected and 56 clusters of tree points where no individual trees could be identified. Using voxels, the data volume could be reduced by about 97 % in a default scenario. Processing the results of this scenario took ~2500 seconds, corresponding to about 10 km/h, which approaches but is still below the acquisition rate, estimated at 50 km/h.
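
    The roughly 97 % data reduction quoted above comes from voxel-based downsampling; a minimal numpy sketch of voxel-grid thinning (illustrative only, with a synthetic point cloud, not the authors' workflow implementation) is:

```python
# Hedged sketch of voxel-grid downsampling of a mobile-mapping point cloud:
# keep one representative point (the centroid) per occupied voxel.
import numpy as np

def voxel_downsample(points, voxel_size=0.1):
    """points: (N, 3) array of x, y, z coordinates; voxel_size in metres."""
    # Integer voxel index for every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average them
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

cloud = np.random.rand(500_000, 3) * np.array([100.0, 10.0, 15.0])  # synthetic road strip
thinned = voxel_downsample(cloud, voxel_size=0.25)
print(f"{cloud.shape[0]} -> {thinned.shape[0]} points")
```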

  9. Automated Control of the Organic and Inorganic Composition of Aloe vera Extracts Using (1)H NMR Spectroscopy.

    Science.gov (United States)

    Monakhova, Yulia B; Randel, Gabriele; Diehl, Bernd W K

    2016-09-01

    Recent classification of Aloe vera whole-leaf extract by the International Agency for Research and Cancer as a possible carcinogen to humans as well as the continuous adulteration of A. vera's authentic material have generated renewed interest in controlling A. vera. The existing NMR spectroscopic method for the analysis of A. vera, which is based on a routine developed at Spectral Service, was extended. Apart from aloverose, glucose, malic acid, lactic acid, citric acid, whole-leaf material (WLM), acetic acid, fumaric acid, sodium benzoate, and potassium sorbate, the quantification of Mg(2+), Ca(2+), and fructose is possible with the addition of a Cs-EDTA solution to sample. The proposed methodology was automated, which includes phasing, baseline-correction, deconvolution (based on the Lorentzian function), integration, quantification, and reporting. The NMR method was applied to 41 A. vera preparations in the form of liquid A. vera juice and solid A. vera powder. The advantages of the new NMR methodology over the previous method were discussed. Correlation between the new and standard NMR methodologies was significant for aloverose, glucose, malic acid, lactic acid, citric acid, and WLM (P < 0.0001, R(2) = 0.99). NMR was found to be suitable for the automated simultaneous quantitative determination of 13 parameters in A. vera. PMID:27413027

  10. Technical Note: Semi-automated effective width extraction from time-lapse RGB imagery of a remote, braided Greenlandic river

    Science.gov (United States)

    Gleason, C. J.; Smith, L. C.; Finnegan, D. C.; LeWinter, A. L.; Pitcher, L. H.; Chu, V. W.

    2015-06-01

    River systems in remote environments are often challenging to monitor and understand where traditional gauging apparatus are difficult to install or where safety concerns prohibit field measurements. In such cases, remote sensing, especially terrestrial time-lapse imaging platforms, offer a means to better understand these fluvial systems. One such environment is found at the proglacial Isortoq River in southwestern Greenland, a river with a constantly shifting floodplain and remote Arctic location that make gauging and in situ measurements all but impossible. In order to derive relevant hydraulic parameters for this river, two true color (RGB) cameras were installed in July 2011, and these cameras collected over 10 000 half hourly time-lapse images of the river by September of 2012. Existing approaches for extracting hydraulic parameters from RGB imagery require manual or supervised classification of images into water and non-water areas, a task that was impractical for the volume of data in this study. As such, automated image filters were developed that removed images with environmental obstacles (e.g., shadows, sun glint, snow) from the processing stream. Further image filtering was accomplished via a novel automated histogram similarity filtering process. This similarity filtering allowed successful (mean accuracy 79.6 %) supervised classification of filtered images from training data collected from just 10 % of those images. Effective width, a hydraulic parameter highly correlated with discharge in braided rivers, was extracted from these classified images, producing a hydrograph proxy for the Isortoq River between 2011 and 2012. This hydrograph proxy shows agreement with historic flooding observed in other parts of Greenland in July 2012 and offers promise that the imaging platform and processing methodology presented here will be useful for future monitoring studies of remote rivers.
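
    The exact histogram similarity measure is not given in the abstract; a hedged sketch of the general idea (compare each frame's grey-level histogram against a reference frame and drop dissimilar frames before classification) could be:

```python
# Hedged sketch of histogram-similarity filtering of time-lapse frames.
# Frames whose grey-level histogram correlates poorly with a reference
# histogram are dropped before supervised classification.
import numpy as np

def normalised_hist(image, bins=64):
    h, _ = np.histogram(image, bins=bins, range=(0, 255), density=True)
    return h

def is_similar(image, reference_hist, threshold=0.9):
    h = normalised_hist(image)
    # Pearson correlation between the two histograms
    corr = np.corrcoef(h, reference_hist)[0, 1]
    return corr >= threshold

reference = normalised_hist(np.random.randint(0, 256, (480, 640)))   # stand-in reference frame
frame = np.random.randint(0, 256, (480, 640))                        # stand-in candidate frame
print("keep frame" if is_similar(frame, reference) else "filter out frame")
```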

  11. The ESO-LV project - Automated parameter extraction for 16000 ESO/Uppsala galaxies

    NARCIS (Netherlands)

    Lauberts, Andris; Valentijn, Edwin A.

    1987-01-01

    A program to extract photometric and morphological parameters of the galaxies in the ESO/Uppsala survey (Lauberts and Valentijn, 1982) is discussed. The completeness and accuracy of the survey are evaluated and compared with other surveys. The parameters obtained in the program are listed.

  12. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data

    Science.gov (United States)

    Kumar, Pankaj; McElhinney, Conor P.; Lewis, Paul; McCarthy, Timothy

    2013-11-01

    Terrestrial mobile laser scanning systems provide rapid and cost effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. The knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all the road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road section environments validates our algorithm. These findings and knowledge provide valuable insights as well as a prototype road edge extraction tool-set, for both national road authorities and survey companies.
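
    The authors' algorithm combines two modified snake models applied to LiDAR-derived data; purely as a generic illustration of initialising and evolving a parametric active contour (using scikit-image's implementation rather than the modified models of the paper), one might write:

```python
# Generic parametric active contour (snake) sketch using scikit-image.
# This is NOT the authors' modified snake models; it only illustrates how a snake
# initialised near an expected edge (e.g. from navigation data) converges to it.
import numpy as np
from skimage import data
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = data.camera()              # stand-in raster; a rasterised LiDAR attribute in practice

# Initialise the snake as a straight polyline offset from the expected edge
r = np.linspace(100, 400, 200)
c = np.full_like(r, 280.0)
init = np.column_stack([r, c])

snake = active_contour(
    gaussian(image, sigma=3),
    init,
    boundary_condition="fixed",    # keep endpoints anchored, as for a road section
    alpha=0.015, beta=10.0, gamma=0.001,
)
print(snake.shape)                 # (200, 2) array of fitted edge coordinates
```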

  13. Automated Extraction of Geospatial Features from Satellite Imagery: Computer Vision Oriented Plane Surveying

    Directory of Open Access Journals (Sweden)

    Usman Babawuro

    2012-11-01

    Full Text Available The paper explores and assesses the potential uses of high resolution satellite imagery and digital image processing algorithms for the automatic detection and extraction of geospatial features, here farmlands, for the purpose of statutory plane surveying tasks. The satellite imagery was georectified to provide the planar surface necessary for morphometric assessments, followed by integrated image processing algorithms. Specifically, the Canny edge algorithm, followed by morphological closing and a Hough transform for extracting line features, was used. The algorithms were tested using QuickBird satellite imagery and in all cases we obtained encouraging results. This shows that computer vision and image processing using high resolution satellite imagery could be used for cadastral purposes, where property boundaries are needed and used for compensation purposes and other statutory surveying functions. The accuracy of the delineated boundaries, estimated from the error matrix, is 73.33%.
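
    The Canny / morphological closing / Hough transform chain described above can be sketched in a few lines of OpenCV; thresholds, kernel size and the synthetic image below are illustrative placeholders, not values from the paper.

```python
# Hedged sketch: Canny edges -> morphological closing -> probabilistic Hough lines,
# as a boundary-delineation chain for a georectified grey-scale image.
import cv2
import numpy as np

# Synthetic stand-in for a georectified satellite image with field-like parcels
image = np.full((400, 400), 90, dtype=np.uint8)
cv2.rectangle(image, (50, 60), (200, 180), 160, -1)      # "farmland" parcel 1
cv2.rectangle(image, (220, 200), (370, 350), 130, -1)    # "farmland" parcel 2

edges = cv2.Canny(image, threshold1=50, threshold2=150)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)    # bridge small gaps in the edges

lines = cv2.HoughLinesP(closed, rho=1, theta=np.pi / 180,
                        threshold=60, minLineLength=40, maxLineGap=10)
print(0 if lines is None else len(lines), "boundary line segments detected")
```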

  14. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    International Nuclear Information System (INIS)

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were −0.6 ± 2.3° and −1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment. (paper)

  15. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    Science.gov (United States)

    Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul

    2015-09-01

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were -0.6 ± 2.3° and -1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
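
    The point-based rigid registration step can be illustrated with a standard Kabsch/SVD alignment of matched landmark sets; this is a generic sketch on synthetic 2D landmarks, not the authors' implementation.

```python
# Hedged sketch: estimate the in-plane rotation between two sets of matched tumour
# landmarks (e.g. features extracted from two cine-MRI frames) via the Kabsch/SVD method.
import numpy as np

def rigid_rotation_deg(points_a, points_b):
    """points_a, points_b: (N, 2) matched landmark coordinates in one imaging plane."""
    a = points_a - points_a.mean(axis=0)
    b = points_b - points_b.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    rot = vt.T @ np.diag([1.0, d]) @ u.T
    return np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))

# Synthetic check: rotate a landmark set by -1.5 degrees and recover the angle
rng = np.random.default_rng(1)
frame0 = rng.normal(size=(30, 2)) * 10.0
theta = np.radians(-1.5)
r = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
frame1 = frame0 @ r.T
print(f"estimated rotation: {rigid_rotation_deg(frame0, frame1):.2f} deg")
```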

  16. Application of automated image analysis to the identification and extraction of recyclable plastic bottles

    Institute of Scientific and Technical Information of China (English)

    Edgar SCAVINO; Dzuraidah Abdul WAHAB; Aini HUSSAIN; Hassan BASRI; Mohd Marzuki MUSTAFA

    2009-01-01

    An experimental machine vision apparatus was used to identify and extract recyclable plastic bottles out of a conveyor belt. Color images were taken with a commercially available Webcam, and the recognition was performed by our homemade software, based on the shape and dimensions of object images. The software was able to manage multiple bottles in a single image and was additionally extended to cases involving touching bottles. The identification was fulfilled by comparing the set of measured features with an existing database while integrating various recognition techniques such as minimum distance in the feature space, self-organized maps, and neural networks. The recognition system was tested on a set of 50 different bottles and has so far provided an accuracy of about 97% in bottle identification. The extraction of the bottles was performed by means of a pneumatic arm, which was activated according to the plastic type; polyethylene-terephthalate (PET) bottles were left on the conveyor belt, while non-PET bottles were extracted. The software was designed to provide the best compromise between reliability and speed for real-time applications in view of the commercialization of the system at existing recycling plants.

  17. High-Level Analogue Fault Simulation Using Linear and Non-Linear Models

    Directory of Open Access Journals (Sweden)

    I. Bell

    1999-12-01

    Full Text Available A novel method for analogue high-level fault simulation (HLFS) using linear and non-linear high-level fault models is presented. Our approach uses automated fault model synthesis and automated model selection for fault simulation. A speed-up compared with transistor-level fault simulation can be achieved, whilst retaining both behavioural and fault coverage accuracy. The suggested method was verified in detail using short faults in a 10k state variable bandpass filter.

  18. Fault Tolerant Control Systems

    DEFF Research Database (Denmark)

    Bøgh, S.A.

    This thesis considered the development of fault tolerant control systems. The focus was on the category of automated processes that do not necessarily comprise a high number of identical sensors and actuators to maintain safe operation, but still have a potential for improving immunity to component failures. It is often feasible to increase availability for these control loops by designing the control system to perform on-line detection and reconfiguration in case of faults before the safety system makes a close-down of the process. A general development methodology is given in the thesis...... requirements for a dedicated software environment for fault tolerant control systems design. The second detailed study addressed the detection of a fault event and determination of the failed component. A variety of algorithms were compared, based on two fault scenarios in the speed governor actuator setup......

  19. An Automated Graphical User Interface based System for the Extraction of Retinal Blood Vessels using Kirsch’s Template

    Directory of Open Access Journals (Sweden)

    Joshita Majumdar

    2015-06-01

    Full Text Available The assessment of Blood Vessel networks plays an important role in a variety of medical disorders. The diagnosis of Diabetic Retinopathy (DR) and its repercussions including microaneurysms, haemorrhages, hard exudates and cotton wool spots is one such field. This study aims to develop an automated system for the extraction of blood vessels from retinal images by employing Kirsch’s Templates in a MATLAB based Graphical User Interface (GUI). Here, an RGB or Grey image of the retina (Fundus Photography) is used to obtain the traces of blood vessels. We have incorporated a range of threshold values for the blood vessel extraction, which provides the user with greater flexibility and ease. This paper also deals with the more generalized implementation of various MATLAB functions present in the image processing toolbox of MATLAB to create a basic image processing editor with different features like noise addition and removal, image cropping, resizing & rotation, histogram adjustment, and separate viewing of the red, green and blue components of a colour image along with brightness control, as used in a basic image editor. We have combined both Kirsch’s Templates and various MATLAB algorithms to obtain enhanced images which allow the ophthalmologist to edit and intensify the images as per his/her requirement for diagnosis. Even a non-technical person can manage to identify severe discrepancies because of its user-friendly appearance. The GUI contains very commonly used English language terms, viz. Load, Colour Contrast Panel, Image Clarity etc., that can be very easily understood. It is an attempt to incorporate the maximum number of image processing techniques under one GUI to obtain higher performance. Also it would provide a cost effective solution towards obtaining high definition and resolution images of blood vessel extracted Retina in economically backward regions where costly machines like OCT (Optical Coherence Tomography), MRI (Magnetic Resonance
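
    Kirsch's templates themselves are fixed 3x3 compass kernels; as an illustrative sketch of the template response in Python (standing in for the MATLAB GUI described above, with an arbitrary threshold and a random stand-in image), one could write:

```python
# Hedged sketch of the Kirsch compass-template edge response for vessel enhancement.
# The eight kernels are rotations of the base template; the per-pixel response is the
# maximum over the eight directions, then thresholded (threshold value is illustrative).
import numpy as np
from scipy.ndimage import convolve

BASE = np.array([[ 5,  5,  5],
                 [-3,  0, -3],
                 [-3, -3, -3]], dtype=float)

def kirsch_kernels():
    ring = [0, 1, 2, 5, 8, 7, 6, 3]        # clockwise ring of the 3x3 neighbourhood
    flat = BASE.flatten()
    kernels = []
    for shift in range(8):
        k = np.empty(9)
        k[4] = 0.0                          # centre weight is always zero
        for i, pos in enumerate(ring):
            k[pos] = flat[ring[(i - shift) % 8]]
        kernels.append(k.reshape(3, 3))
    return kernels

def kirsch_response(image):
    responses = [convolve(image.astype(float), k, mode="nearest") for k in kirsch_kernels()]
    return np.max(responses, axis=0)

green = np.random.rand(256, 256)            # stand-in for the fundus green channel
vessels = kirsch_response(green) > 2.0      # illustrative threshold
print(vessels.sum(), "candidate vessel pixels")
```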

  20. Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry.

    Science.gov (United States)

    Lacson, Ronilda; Harris, Kimberly; Brawarsky, Phyllis; Tosteson, Tor D; Onega, Tracy; Tosteson, Anna N A; Kaye, Abby; Gonzalez, Irina; Birdwell, Robyn; Haas, Jennifer S

    2015-10-01

    Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute's Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a "gold standard" based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings varies by data element and modality (e.g., suspicious calcification noted in 2.6% of screening mammograms, 12.1% of diagnostic mammograms, and 9.4% of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), ranged from 0.8 to 1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. PMID:25561069

  1. An energy minimization approach to automated extraction of regular building footprints from airborne LiDAR data

    Science.gov (United States)

    He, Y.; Zhang, C.; Fraser, C. S.

    2014-08-01

    This paper presents an automated approach to the extraction of building footprints from airborne LiDAR data based on energy minimization. Automated 3D building reconstruction in complex urban scenes has been a long-standing challenge in photogrammetry and computer vision. Building footprints constitute a fundamental component of a 3D building model and they are useful for a variety of applications. Airborne LiDAR provides large-scale elevation representation of urban scene and as such is an important data source for object reconstruction in spatial information systems. However, LiDAR points on building edges often exhibit a jagged pattern, partially due to either occlusion from neighbouring objects, such as overhanging trees, or to the nature of the data itself, including unavoidable noise and irregular point distributions. The explicit 3D reconstruction may thus result in irregular or incomplete building polygons. In the presented work, a vertex-driven Douglas-Peucker method is developed to generate polygonal hypotheses from points forming initial building outlines. The energy function is adopted to examine and evaluate each hypothesis and the optimal polygon is determined through energy minimization. The energy minimization also plays a key role in bridging gaps, where the building outlines are ambiguous due to insufficient LiDAR points. In formulating the energy function, hard constraints such as parallelism and perpendicularity of building edges are imposed, and local and global adjustments are applied. The developed approach has been extensively tested and evaluated on datasets with varying point cloud density over different terrain types. Results are presented and analysed. The successful reconstruction of building footprints, of varying structural complexity, along with a quantitative assessment employing accurate reference data, demonstrate the practical potential of the proposed approach.
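
    The vertex-driven variant is not reproduced here, but the classical Douglas-Peucker simplification it builds on can be sketched as follows (tolerance value and outline are illustrative):

```python
# Hedged sketch of classical Douglas-Peucker polyline simplification, the basis of the
# vertex-driven variant used for generating polygonal footprint hypotheses.
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    if np.allclose(a, b):
        return np.linalg.norm(p - a)
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return abs(cross) / np.linalg.norm(b - a)

def douglas_peucker(points, tolerance):
    """points: (N, 2) ordered outline vertices; tolerance: max allowed deviation (m)."""
    if len(points) < 3:
        return points
    d = np.array([point_line_distance(p, points[0], points[-1]) for p in points[1:-1]])
    i = int(np.argmax(d)) + 1
    if d[i - 1] > tolerance:
        left = douglas_peucker(points[: i + 1], tolerance)
        right = douglas_peucker(points[i:], tolerance)
        return np.vstack([left[:-1], right])        # drop the duplicated split vertex
    return np.vstack([points[0], points[-1]])

outline = np.array([[0, 0], [0.4, 0.05], [1, 0], [1.02, 0.5], [1, 1], [0, 1.03], [0, 0]])
print(douglas_peucker(outline, tolerance=0.1))      # jagged outline reduced to a near-rectangle
```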

  2. Automated extraction of DNA from blood and PCR setup using a Tecan Freedom EVO liquid handler for forensic genetic STR typing of reference samples.

    Science.gov (United States)

    Stangegaard, Michael; Frøslev, Tobias G; Frank-Hansen, Rune; Hansen, Anders J; Morling, Niels

    2011-04-01

    We have implemented and validated automated protocols for DNA extraction and PCR setup using a Tecan Freedom EVO liquid handler mounted with the Te-MagS magnetic separation device (Tecan, Männedorf, Switzerland). The protocols were validated for accredited forensic genetic work according to ISO 17025 using the Qiagen MagAttract DNA Mini M48 kit (Qiagen GmbH, Hilden, Germany) from fresh whole blood and blood from deceased individuals. The workflow was simplified by returning the DNA extracts to the original tubes minimizing the risk of misplacing samples. The tubes that originally contained the samples were washed with MilliQ water before the return of the DNA extracts. The PCR was setup in 96-well microtiter plates. The methods were validated for the kits: AmpFℓSTR Identifiler, SGM Plus and Yfiler (Applied Biosystems, Foster City, CA), GenePrint FFFL and PowerPlex Y (Promega, Madison, WI). The automated protocols allowed for extraction and addition of PCR master mix of 96 samples within 3.5h. In conclusion, we demonstrated that (1) DNA extraction with magnetic beads and (2) PCR setup for accredited, forensic genetic short tandem repeat typing can be implemented on a simple automated liquid handler leading to the reduction of manual work, and increased quality and throughput. PMID:21609694

  3. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    Science.gov (United States)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing with a dozen applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenge faced by thermal image pedestrian detectors, which employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage faced by the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs an algorithm and a principle of region growing pursuit tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while also extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has found extensive use in optical images, but has as yet no reported results in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture made to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of curvelet transform computation. The classification task is realized through the use of the well-known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six

  4. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    Science.gov (United States)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires the knowledge of scanning parameters and patients’ information included in a DICOM file as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROI) without explicit access to the radiology information system. The user-friendly application was developed in the MATLAB environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and by assigning a specific value to every ROI. The result is stored in DICOM format, for data and trend analysis. The developed GUI is easy to use, fast, and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate remote-access functionality to a PACS server.
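
    As an illustrative alternative to the MATLAB GUI described above, the same kind of per-slice metadata query can be sketched in Python with pydicom; the attribute list and file paths below are assumptions for the sketch, not the authors' configuration.

```python
# Hedged sketch: batch extraction of dosimetry-relevant DICOM metadata with pydicom.
# Attribute names are standard DICOM keywords; the folder and output file are hypothetical.
from pathlib import Path
import csv
import pydicom

FIELDS = ["SOPInstanceUID", "Modality", "SliceThickness", "PixelSpacing",
          "KVP", "ExposureTime", "XRayTubeCurrent", "PatientID"]

rows = []
for path in sorted(Path("ct_series").glob("*.dcm")):         # hypothetical DICOM folder
    ds = pydicom.dcmread(path, stop_before_pixels=True)      # header only, faster
    rows.append({f: str(ds.get(f, "")) for f in FIELDS})     # missing tags become ""

with open("series_metadata.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
print(f"wrote metadata for {len(rows)} slices")
```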

  5. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    International Nuclear Information System (INIS)

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms
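
    A minimal scipy sketch of the core idea, per-atom Voronoi cell volumes as a local concentration proxy, is given below; the atom positions are synthetic and the percentile cut is illustrative, not the optimised separation criterion of the paper.

```python
# Hedged sketch: per-atom Voronoi cell volume as a local solute concentration proxy.
# Small cell volumes flag candidate clustered atoms; unbounded cells on the dataset
# hull are skipped. Positions are synthetic (nm); the percentile cut is illustrative.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(42)
background = rng.uniform(0, 20, size=(300, 3))             # dispersed solute atoms
cluster = rng.normal(loc=10.0, scale=0.4, size=(200, 3))   # a dense solute cluster
solute = np.vstack([background, cluster])

vor = Voronoi(solute)                                      # tessellate the solute sublattice
volumes = np.full(len(solute), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:                   # unbounded cell at the hull
        continue
    volumes[i] = ConvexHull(vor.vertices[region]).volume

threshold = np.nanpercentile(volumes, 25)                  # illustrative, not an optimised cut
print(int(np.sum(volumes < threshold)), "candidate clustered solute atoms")
```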

  6. Automated extraction of BI-RADS final assessment categories from radiology reports with natural language processing.

    Science.gov (United States)

    Sippo, Dorothy A; Warden, Graham I; Andriole, Katherine P; Lacson, Ronilda; Ikuta, Ichiro; Birdwell, Robyn L; Khorasani, Ramin

    2013-10-01

    The objective of this study is to evaluate a natural language processing (NLP) algorithm that determines American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) final assessment categories from radiology reports. This HIPAA-compliant study was granted institutional review board approval with waiver of informed consent. This cross-sectional study involved 1,165 breast imaging reports in the electronic medical record (EMR) from a tertiary care academic breast imaging center from 2009. Reports included screening mammography, diagnostic mammography, breast ultrasound, combined diagnostic mammography and breast ultrasound, and breast magnetic resonance imaging studies. Over 220 reports were included from each study type. The recall (sensitivity) and precision (positive predictive value) of a NLP algorithm to collect BI-RADS final assessment categories stated in the report final text was evaluated against a manual human review standard reference. For all breast imaging reports, the NLP algorithm demonstrated a recall of 100.0 % (95 % confidence interval (CI), 99.7, 100.0 %) and a precision of 96.6 % (95 % CI, 95.4, 97.5 %) for correct identification of BI-RADS final assessment categories. The NLP algorithm demonstrated high recall and precision for extraction of BI-RADS final assessment categories from the free text of breast imaging reports. NLP may provide an accurate, scalable data extraction mechanism from reports within EMRs to create databases to track breast imaging performance measures and facilitate optimal breast cancer population management strategies. PMID:23868515
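
    The NLP algorithm itself is not reproduced in the abstract; as a deliberately simplified, hypothetical illustration of pulling a BI-RADS final assessment category out of free report text, one might start from a regular expression:

```python
# Deliberately simplified, hypothetical sketch of extracting a BI-RADS final assessment
# category (0-6) from report free text; a real NLP pipeline (as evaluated in the paper)
# also handles negation, multiple findings, and report section structure.
import re

BIRADS_PATTERN = re.compile(
    r"\bBI-?RADS(?:\s+(?:category|assessment))?\s*[:#]?\s*([0-6])\b",
    flags=re.IGNORECASE,
)

def extract_birads(report_text):
    """Return the list of BI-RADS category digits mentioned in a report."""
    return [int(m.group(1)) for m in BIRADS_PATTERN.finditer(report_text)]

example = ("IMPRESSION: Scattered fibroglandular densities. "
           "No suspicious mass or calcification. BI-RADS category 1: negative.")
print(extract_birads(example))   # [1]
```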

  7. Automated Neuroanatomical Relation Extraction: A Linguistically Motivated Approach with a PVT Connectivity Graph Case Study

    Science.gov (United States)

    Gökdeniz, Erinç; Özgür, Arzucan; Canbeyli, Reşit

    2016-01-01

    Identifying the relations among different regions of the brain is vital for a better understanding of how the brain functions. While a large number of studies have investigated the neuroanatomical and neurochemical connections among brain structures, their specific findings are found in publications scattered over a large number of years and different types of publications. Text mining techniques have provided the means to extract specific types of information from a large number of publications with the aim of presenting a larger, if not necessarily an exhaustive picture. By using natural language processing techniques, the present paper aims to identify connectivity relations among brain regions in general and relations relevant to the paraventricular nucleus of the thalamus (PVT) in particular. We introduce a linguistically motivated approach based on patterns defined over the constituency and dependency parse trees of sentences. Besides the presence of a relation between a pair of brain regions, the proposed method also identifies the directionality of the relation, which enables the creation and analysis of a directional brain region connectivity graph. The approach is evaluated over the manually annotated data sets of the WhiteText Project. In addition, as a case study, the method is applied to extract and analyze the connectivity graph of PVT, which is an important brain region that is considered to influence many functions ranging from arousal, motivation, and drug-seeking behavior to attention. The results of the PVT connectivity graph show that PVT may be a new target of research in mood assessment.
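
    The directional connectivity graph can be illustrated with networkx; the relations listed below are hypothetical placeholders standing in for extracted (source, target) pairs, not results from the paper.

```python
# Hedged sketch: assembling a directional brain-region connectivity graph from
# extracted (source, target) relations, as described for the PVT case study.
# The example relations are hypothetical placeholders.
import networkx as nx

extracted_relations = [
    ("prefrontal cortex", "PVT"),
    ("PVT", "nucleus accumbens"),
    ("PVT", "amygdala"),
    ("suprachiasmatic nucleus", "PVT"),
]

graph = nx.DiGraph()
graph.add_edges_from(extracted_relations)

print("regions projecting to PVT:", sorted(graph.predecessors("PVT")))
print("regions PVT projects to:  ", sorted(graph.successors("PVT")))
print("out-degree of PVT:", graph.out_degree("PVT"))
```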

  8. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, P., E-mail: peter.felfer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ceguerra, A.V., E-mail: anna.ceguerra@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ringer, S.P., E-mail: simon.ringer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Cairney, J.M., E-mail: julie.cairney@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia)

    2015-03-15

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms.

  9. Evaluation of Sample Stability and Automated DNA Extraction for Fetal Sex Determination Using Cell-Free Fetal DNA in Maternal Plasma

    Directory of Open Access Journals (Sweden)

    Elena Ordoñez

    2013-01-01

    Full Text Available Objective. The detection of paternally inherited sequences in maternal plasma, such as the SRY gene for fetal sexing or RHD for fetal blood group genotyping, is becoming part of daily routine in diagnostic laboratories. Due to the low percentage of fetal DNA, it is crucial to ensure sample stability and the efficiency of DNA extraction. We evaluated blood stability at 4°C for at least 24 hours and automated DNA extraction, for fetal sex determination in maternal plasma. Methods. A total of 158 blood samples were collected, using EDTA-K tubes, from women in their 1st trimester of pregnancy. Samples were kept at 4°C for at least 24 hours before processing. An automated DNA extraction was evaluated, and its efficiency was compared with a standard manual procedure. The SRY marker was used to quantify cfDNA by real-time PCR. Results. Although lower cfDNA amounts were obtained by automated DNA extraction (mean 107.35 GE/mL versus 259.43 GE/mL), the SRY sequence was successfully detected in all 108 samples from pregnancies with male fetuses. Conclusion. We successfully evaluated the suitability of standard blood tubes for the collection of maternal blood and assessed samples to be suitable for analysis at least 24 hours later. This would allow shipping to a central reference laboratory almost from anywhere in Europe.

  10. Automated Breast Cancer Diagnosis based on GVF-Snake Segmentation, Wavelet Features Extraction and Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Abderrahim Sebri

    2007-01-01

    Full Text Available Breast cancer accounts for the second most cancer diagnoses among women and the second most cancer deaths in the world. In fact, more than 11000 women die each year, all over the world, because of this disease. Automatic breast cancer diagnosis is a very important goal of medical informatics research. Some research has aimed to automate diagnosis at the mammographic stage, while other work has treated the problem at the cytological stage. In this work, we describe the current state of the ongoing breast cancer (BC) automated diagnosis research program. It is a software system that provides expert diagnosis of breast cancer based on three steps of cytological image analysis. The first step is based on segmentation using an active contour for cell tracking and isolation of the nucleus in the studied image. Then, from this nucleus, textural features are extracted using wavelet transforms to characterize the image by its texture, so that malign texture can be differentiated from benign on the assumption that tumoral texture is different from the texture of other kinds of tissues. Finally, the obtained features are introduced as the input vector of a Multi-Layer Perceptron (MLP) to classify the images into malign and benign ones.
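
    The wavelet texture-feature step can be sketched with PyWavelets, standing in for the paper's unspecified wavelet implementation; the sub-band energy features, wavelet choice and image patch below are illustrative assumptions.

```python
# Hedged sketch: wavelet-based texture features from a segmented nucleus image,
# in the spirit of the feature-extraction step described above.
import numpy as np
import pywt

def wavelet_texture_features(nucleus_patch, wavelet="db2", level=2):
    """Return sub-band energies of a 2-D multilevel wavelet decomposition."""
    coeffs = pywt.wavedec2(nucleus_patch.astype(float), wavelet=wavelet, level=level)
    features = [np.mean(coeffs[0] ** 2)]                    # approximation energy
    for cH, cV, cD in coeffs[1:]:                           # detail sub-bands per level
        features += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(features)

patch = np.random.rand(64, 64)                              # stand-in for a segmented nucleus
print(wavelet_texture_features(patch))                      # input vector for the MLP classifier
```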

  11. Automated 3D Particle Field Extraction and Tracking System Using Digital in-line Holography

    Directory of Open Access Journals (Sweden)

    Hesham Eldeeb

    2006-01-01

    Full Text Available Digital holography for 3D particle field extraction and tracking is an active research topic. It has great application in characterizing micro-scale structures in microelectromechanical systems (MEMS) with high resolution and accuracy. The in-line configuration is studied here as the fundamental structure of a digital holography system. The digital holographic approach not only eliminates wet chemical processing and mechanical scanning, but also enables the use of complex amplitude information inaccessible by optical reconstruction, thereby allowing flexible reconstruction algorithms to achieve optimization of specific information. However, owing to the inherently low pixel resolution of solid-state imaging sensors, digital holography gives poor depth resolution for images. This problem severely impairs the usefulness of digital holography especially in densely populated particle fields. This study describes a system that significantly improves particle axial-location accuracy by exploring the reconstructed complex amplitude information, compared with other numerical reconstruction schemes that merely reproduce traditional optical reconstruction. Theoretical analysis and experimental results demonstrate that the in-line configuration proves advantageous in enhancing system performance. Greater flexibility of the system, higher lateral resolution and lower speckle noise can be achieved

  12. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    Science.gov (United States)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density, and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
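
    The coplanar-facet detection step mentioned above is commonly implemented as a k-nearest-neighbour search followed by PCA on each local neighbourhood. The sketch below illustrates that generic step (not the authors' Matlab tool), assuming NumPy and SciPy; the neighbourhood size k and the toy point cloud are arbitrary choices:

```python
# Per-point normal and planarity estimation via k-NN + PCA (illustrative sketch).
import numpy as np
from scipy.spatial import cKDTree

def facet_normals(points, k=20):
    """Return per-point unit normals and a planarity score in [0, 1]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    planarity = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
        normals[i] = eigvec[:, 0]              # smallest-eigenvalue direction = normal
        # planar neighbourhoods have lambda_0 << lambda_1 ~ lambda_2
        planarity[i] = (eigval[1] - eigval[0]) / eigval[2]
    return normals, planarity

# Toy cloud: a gently tilted, slightly noisy plane.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (500, 3))
pts[:, 2] = 0.05 * pts[:, 0] + 0.01 * rng.standard_normal(500)
normals, planarity = facet_normals(pts)
print(planarity.mean())   # close to 1 for a planar cloud
```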

  13. Analysis of the influence of tectonics on the evolution valley network based on the SRTM DEM and the relationship of automatically extracted lineaments and the tectonic faults, Jemma River basin, Ethiopia

    Science.gov (United States)

    Kusák, Michal

    2016-04-01

    The Ethiopian Highlands are a good example of a high plateau landscape formed by a combination of tectonic uplift and episodic volcanism (Kazmin, 1975; Pik et al., 2003; Gani et al., 2009). Deeply incised gorges indicate active fluvial erosion, which leads to instabilities of over-steepened slopes. In this study we focus on the Jemma River basin, a left tributary of the Abay (Blue Nile), to assess the influence of neotectonics on the evolution of its river and valley network. Tectonic lineaments, the shape of the valley networks, the direction of river courses and the intensity of fluvial erosion were compared in six subregions which were delineated beforehand by means of morphometric analysis. The influence of tectonics on the valley network is low in the older deep and wide canyons and on the high plateau covered with Tertiary lava flows, while in the younger upper parts of the canyons it is high. Furthermore, the coincidence of the valley network with the tectonic lineaments differs among the subregions. Fluvial erosion along the main tectonic zone (NE-SW direction) opened the way for backward erosion to reach far distant areas in the E. This tectonic zone also separates older areas in the W from the subregions of youngest landscape evolution in the E, next to the Rift Valley. We studied the functions that automatically extract lineaments in the programs ArcGIS 10.1 and PCI Geomatica, and examined the values of their input parameters and their influence on the final shape and number of lineaments. A map of automatically extracted lineaments was created and compared with 1) the tectonic faults mapped by the Geological Survey of Ethiopia (1996); and 2) the lineaments based on visual interpretation by the author. The comparison of the lineaments extracted automatically in GIS with those interpreted visually by the author shows that both sets of lineaments share the same azimuth (NE-SW) - the same direction as the orientation of the rift. But the mapping of lineaments by automated

  14. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    Science.gov (United States)

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have been widely used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretation and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of a computer-assisted screening of OSF. The approach introduced here is to grade the histopathological tissue sections into normal, OSF without Dysplasia (OSFWD) and OSF with Dysplasia (OSFD), which would help the oral onco-pathologists to screen subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (Red, Green, Blue), resulting in an M×N×3 matrix of integers. Depending on the normal or OSF condition, the image has various granular structures, which are self-similar patterns at different scales termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN) to select the best classifier. Our results show that the combination of texture and HOS features coupled with the Fuzzy classifier resulted in 95.7% accuracy, with sensitivity and specificity of 94.5% and 98.8%, respectively. Finally, we have proposed a novel integrated index called the Oral Malignancy Index (OMI) using the HOS, LBP, and LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can
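
    One of the texture descriptors named above, the Local Binary Pattern, can be computed with scikit-image as in the sketch below; the radius, number of sampling points, and toy image are illustrative assumptions, not values from the paper:

```python
# Uniform LBP histogram of a grayscale histopathology image (illustrative only).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, radius=3, n_points=24):
    lbp = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2   # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(0)
tissue = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in image
features = lbp_histogram(tissue)
print(features.shape)   # (26,) texture feature vector per image
```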

  15. Automation of static and dynamic non-dispersive liquid phase microextraction. Part 1: Approaches based on extractant drop-, plug-, film- and microflow-formation.

    Science.gov (United States)

    Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján

    2016-02-01

    Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for the state of the art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of solvent and sample compared to classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review surveys comprehensively and in two parts the developments of automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies, techniques, and their operation advantages as well as their potentials are further described and discussed. In this first part, an introduction to LPME and their static and dynamic operation modes as well as their automation methodologies is given. The LPME techniques are classified according to the different approaches of protection of the extraction solvent using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent such as membranes and porous media are overviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. PMID:26772123

  16. Fuzzy Rulebase and Bpa Extracting Method for Distinguishing between Internal Fault and Inrush of 3-Phase Power Transformer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Tae; Lee, Seung Jae; Kang, Sang Hee; Choi, Myeon Song [Myongji University (Korea); Yoon, Sang Hyun; Lee, Tae Sung [Procom System (Korea)

    2001-07-01

    Four fuzzy criteria to distinguish internal faults from inrush in power transformer protection have been identified. They are based on the wave shape, the terminal voltage, and the fundamental and second-harmonic components of the differential current. A systematic way to determine the associated fuzzy membership functions is also proposed. (author). 9 refs., 9 figs.
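
    The second-harmonic criterion is conventionally expressed as the ratio of the 2nd-harmonic to the fundamental component of the differential current. The sketch below shows that generic numeric core only (my illustration, not the paper's fuzzy rulebase); the restraint threshold and toy waveforms are assumptions:

```python
# Second-harmonic restraint check: inrush currents carry a large 2nd-harmonic
# component relative to the fundamental, while internal faults usually do not.
import numpy as np

def harmonic_ratio(i_diff, fs, f0=50.0):
    """Ratio of 2nd-harmonic to fundamental magnitude of a differential current."""
    spectrum = np.abs(np.fft.rfft(i_diff))
    freqs = np.fft.rfftfreq(len(i_diff), d=1.0 / fs)
    fundamental = spectrum[np.argmin(np.abs(freqs - f0))]
    second = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
    return second / fundamental

fs, f0 = 5000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
inrush = np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)
fault = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)

for name, current in [("inrush", inrush), ("internal fault", fault)]:
    blocked = harmonic_ratio(current, fs) > 0.15   # common textbook threshold
    print(name, "-> block trip" if blocked else "-> trip")
```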

  17. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers, both for continuous processes described by analytical models and for discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  18. Submicrometric Magnetic Nanoporous Carbons Derived from Metal-Organic Frameworks Enabling Automated Electromagnet-Assisted Online Solid-Phase Extraction.

    Science.gov (United States)

    Frizzarin, Rejane M; Palomino Cabello, Carlos; Bauzà, Maria Del Mar; Portugal, Lindomar A; Maya, Fernando; Cerdà, Víctor; Estela, José M; Turnes Palomino, Gemma

    2016-07-19

    We present the first application of submicrometric magnetic nanoporous carbons (μMNPCs) as sorbents for automated solid-phase extraction (SPE). Small zeolitic imidazolate framework-67 crystals are obtained at room temperature and directly carbonized under an inert atmosphere to obtain submicrometric nanoporous carbons containing magnetic cobalt nanoparticles. The μMNPCs have a high contact area, high stability, and their preparation is simple and cost-effective. The prepared μMNPCs are exploited as sorbents in a microcolumn format in a sequential injection analysis (SIA) system with online spectrophotometric detection, which includes a specially designed three-dimensional (3D)-printed holder containing an automatically actuated electromagnet. The combined action of permanent magnets and an automatically actuated electromagnet enabled the movement of the solid bed of particles inside the microcolumn, preventing their aggregation, increasing the versatility of the system, and increasing the preconcentration efficiency. The method was optimized using a full factorial design and Doehlert Matrix. The developed system was applied to the determination of anionic surfactants, exploiting the retention of the ion-pairs formed with Methylene Blue on the μMNPC. Using sodium dodecyl sulfate as a model analyte, quantification was linear from 50 to 1000 μg L−1, and the detection limit was equal to 17.5 μg L−1, the coefficient of variation (n = 8; 100 μg L−1) was 2.7%, and the analysis throughput was 13 h−1. The developed approach was applied to the determination of anionic surfactants in water samples (natural water, groundwater, and wastewater), yielding recoveries of 93% to 110% (95% confidence level). PMID:27336802

  19. Rough Set Theory Based Approach for Fault Diagnosis Rule Extraction of Distribution System%基于粗糙集理论的配电网故障诊断规则提取方法

    Institute of Scientific and Technical Information of China (English)

    周永勇; 周湶; 刘佳宾

    2008-01-01

    As the first step of service restoration of a distribution system, rapid fault diagnosis is a significant task for reducing power outage time, decreasing outage loss, and subsequently improving service reliability and safety. This paper analyzes a fault diagnosis approach using rough set theory, in which the reduction of the decision table of the data set is the main computation-intensive task. Aiming at this reduction problem, a heuristic reduction algorithm based on attribute length and frequency is proposed. At the same time, a corresponding value-reduction method is proposed in order to complete the reduction and the extraction of diagnosis rules. Meanwhile, a Euclidean matching method is introduced to resolve conflicts among the extracted rules when some information is lacking. The principle of the whole algorithm is clear, and the diagnostic rules distilled from the reduction are concise. Moreover, it needs less calculation on the specific discernibility matrix, and thus avoids the corresponding NP-hard problem. The whole process is implemented in MATLAB. A simulation example shows that the method has a fast calculation speed and that the extracted rules reflect the characteristics of the fault in a concise form. The rule database, formed from different reductions of the decision table, can diagnose single faults and multiple faults efficiently, and gives satisfactory results even when the available information is incomplete. The proposed method has good error-tolerance capability and potential for on-line fault diagnosis.
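
    A minimal re-creation of the attribute-frequency style of heuristic reduction described above (illustrative only, not the paper's algorithm): attributes are added greedily according to how many still-conflicting object pairs they discern, assuming a consistent decision table; the toy protection-signal table is invented for the example:

```python
# Greedy, frequency-based search for a rough-set style reduct (sketch).
from itertools import combinations

def greedy_reduct(table, condition_attrs, decision_attr):
    # object pairs that must remain discernible (they have different decisions)
    pairs = [(i, j) for i, j in combinations(range(len(table)), 2)
             if table[i][decision_attr] != table[j][decision_attr]]
    reduct, remaining = [], list(pairs)
    while remaining:
        candidates = [a for a in condition_attrs if a not in reduct]
        scores = {a: sum(table[i][a] != table[j][a] for i, j in remaining)
                  for a in candidates}
        best = max(scores, key=scores.get)
        if scores[best] == 0:        # inconsistent table: nothing more to discern
            break
        reduct.append(best)
        remaining = [(i, j) for i, j in remaining
                     if table[i][best] == table[j][best]]
    return reduct

# Hypothetical decision table: protection-signal states -> faulted section.
table = [
    {"relay1": 1, "relay2": 0, "breaker1": 1, "fault": "sec_A"},
    {"relay1": 0, "relay2": 1, "breaker1": 1, "fault": "sec_B"},
    {"relay1": 0, "relay2": 0, "breaker1": 0, "fault": "none"},
    {"relay1": 1, "relay2": 1, "breaker1": 1, "fault": "sec_A"},
]
print(greedy_reduct(table, ["relay1", "relay2", "breaker1"], "fault"))
```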

  20. Soft Fault Diagnosis for Analog Circuits Based on Slope Fault Feature and BP Neural Networks

    Institute of Scientific and Technical Information of China (English)

    HU Mei; WANG Hong; HU Geng; YANG Shiyuan

    2007-01-01

    Fault diagnosis is very important for the development and maintenance of safe and reliable electronic circuits and systems. This paper describes an approach to soft fault diagnosis for analog circuits based on a slope fault feature and back-propagation neural networks (BPNN). The approach uses the voltage relation function between two nodes as the fault feature; for linear analog circuits this function is linear, so its slope is invariant and can serve as the fault feature. A unified fault feature for both hard faults (open or short faults) and soft faults (parametric faults) is therefore extracted. Unlike other NN-based diagnosis methods that use node voltages or the frequency response as fault features, the reported BPNN is trained on the extracted feature vectors; the slope features are calculated by simulating only once for each component, and the trained BPNN can then diagnose all soft faults of that component. Experiments show that the approach is promising.

  1. A method for improving the reliability of distributed feeder automation fault location%一种提高分布式馈线自动化故障判定可靠性方法

    Institute of Scientific and Technical Information of China (English)

    张伟; 徐士华

    2013-01-01

    To improve the reliability of distributed feeder automation fault location, a distribution network switch group model is established. Based on the switch group model, the failure process when a switch fails to open is studied, and fault handling principles based on logic operations are proposed. By supplementing the principle of single-shot reclosing, expansion of the fault isolation scope is effectively avoided when a switch fails to open. The case of a switch failing to close is also analyzed, and a secondary reclosing method is proposed to prevent the fault isolation scope from expanding. The fault handling process is studied for three cases of communication failure with the adjacent switch, and a uniform fault determination method is given. Failure of the switch protection signal is analyzed separately for incidental interference and permanent failure; the analysis shows that incidental interference has no effect on fault isolation, while a permanent failure expands the scope of fault isolation. The results of several case studies show the feasibility and effectiveness of the proposed approaches.

  2. Automated extraction of DNA from blood and PCR setup using a Tecan Freedom EVO liquid handler for forensic genetic STR typing of reference samples

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Frøslev, Tobias G; Frank-Hansen, Rune;

    2011-01-01

    We have implemented and validated automated protocols for DNA extraction and PCR setup using a Tecan Freedom EVO liquid handler mounted with the Te-MagS magnetic separation device (Tecan, Männedorf, Switzerland). The protocols were validated for accredited forensic genetic work according to ISO 17025 using the Qiagen MagAttract DNA Mini M48 kit (Qiagen GmbH, Hilden, Germany) from fresh whole blood and blood from deceased individuals. The workflow was simplified by returning the DNA extracts to the original tubes, minimizing the risk of misplacing samples. The tubes that originally contained the samples were washed with MilliQ water before the return of the DNA extracts. The PCR was set up in 96-well microtiter plates. The methods were validated for the kits: AmpFlSTR Identifiler, SGM Plus and Yfiler (Applied Biosystems, Foster City, CA), GenePrint FFFL and PowerPlex Y (Promega, Madison, WI...

  3. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry

    International Nuclear Information System (INIS)

    Highlights: • A fully automated flow-based modified-BCR extraction method was developed to evaluate the extractable As of soil. • The MSFIA–HG-AFS system included an UV photo-oxidation step for organic species degradation. • The accuracy and precision of the proposed method were found satisfactory. • The time analysis can be reduced up to eight times by using the proposed flow-based BCR method. • The labile As (F1 + F2) was <50% of total As in soil samples from As-contaminated-mining zones. - Abstract: A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content from agricultural soil based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or interchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L−1 for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013–0.800, 0.011–0.900 and 0.079–1.400 mg L−1 for F1, F2, and F3, respectively. The precision of the automated MSFIA–HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L−1 As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural soil samples from

  4. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Rosas-Castor, J.M. [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico); Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Portugal, L.; Ferrer, L. [Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Guzmán-Mar, J.L.; Hernández-Ramírez, A. [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico); Cerdà, V. [Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Hinojosa-Reyes, L., E-mail: laura.hinojosary@uanl.edu.mx [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico)

    2015-05-18

    Highlights: • A fully automated flow-based modified-BCR extraction method was developed to evaluate the extractable As of soil. • The MSFIA–HG-AFS system included an UV photo-oxidation step for organic species degradation. • The accuracy and precision of the proposed method were found satisfactory. • The time analysis can be reduced up to eight times by using the proposed flow-based BCR method. • The labile As (F1 + F2) was <50% of total As in soil samples from As-contaminated-mining zones. - Abstract: A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content from agricultural soil based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or interchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L−1 for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013–0.800, 0.011–0.900 and 0.079–1.400 mg L−1 for F1, F2, and F3, respectively. The precision of the automated MSFIA–HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L−1 As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural

  5. An intelligent distributed feeder automation fault judgment%一种智能分布式馈线自动化故障判定方法

    Institute of Scientific and Technical Information of China (English)

    张伟

    2013-01-01

    A fault judgment criterion independent of the operation mode is proposed to remove faults quickly and to avoid re-tuning device parameters when the operation mode changes. A distribution network switch group model is established; based on this model, and relying on fault currents and the direction of active power, the criterion is converted into a series of logical operations. A method for calculating the logical value of each switch and of each switch group is proposed, and the switch group logical values are used to compute the action logic value of every switch. Using this method, the open-loop and closed-loop operation modes of the distribution network are analyzed in detail. For temporary faults, a reclosing mechanism triggered by loss of voltage on one side of the switch is introduced that does not depend on communication, effectively avoiding expansion of the fault isolation scope when communication is abnormal. A grid-shaped distribution network with 4 power sources and 15 sectionalizing switches, subject to two simultaneous faults, is given as an example and demonstrates the feasibility of the proposed method.
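
    A highly simplified sketch of one logic rule of this general kind for a radial feeder (my illustration, not the paper's switch-group criterion): the faulted section is bounded by the last switch that registered fault current and the first downstream switch that did not.

```python
# Locate the faulted section on a radial feeder from fault-current indicators
# at each sectionalizing switch (illustrative sketch only).
def locate_faulted_section(switches):
    """switches: ordered (name, saw_fault_current) pairs from the source outward."""
    last_seen = None
    for i, (name, saw_fault) in enumerate(switches):
        if saw_fault:
            last_seen = i
        elif last_seen is not None:
            return switches[last_seen][0], name      # fault lies between these two
    if last_seen is not None:
        return switches[last_seen][0], "end of feeder"
    return None                                      # no fault current seen anywhere

# Hypothetical feeder: S1 and S2 saw fault current, S3 and S4 did not.
feeder = [("S1", True), ("S2", True), ("S3", False), ("S4", False)]
print(locate_faulted_section(feeder))                # ('S2', 'S3')
```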

  6. Study on the method of extracting rolling bearing fault signal%滚动轴承故障信号提取方法的研究

    Institute of Scientific and Technical Information of China (English)

    赖宋红

    2013-01-01

    Vibration analysis is the main method applied in the detection and fault diagnosis of rolling bearings. Vibration signals of the bearing are collected, signal processing methods are used to extract the characteristics of the signals under different working conditions, and pattern recognition methods then identify the bearing state from these characteristics; signal feature extraction and state recognition are the key steps. This paper studies the main methods for extracting features from rolling bearing vibration signals.

  7. An Evaluation of the SCSN Moment Tensor Solutions: Robustness of the M_w Magnitude Scale, Style of Faulting, and Automation of the Method

    OpenAIRE

    Clinton, John F.; Hauksson, Egill; Solanki, Kalpesh

    2006-01-01

    We have generated moment tensor solutions and moment magnitudes (M_w) for >1700 earthquakes of local magnitude (M_L) >3.0 that occurred from September 1999 to November 2005 in southern California. The method is running as an automated real-time component of the Southern California Seismic Network (SCSN), with solutions available within 12 min of event nucleation. For local events, the method can reliably obtain good-quality solutions for M_w with M_L >3.5, and for the moment tensor for events...

  8. Fault Tree Generation and Augmentation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Fault Management (FM) is one of the key components of system autonomy. In order to guarantee FM effectiveness and control the cost, tools are required to automate...

  9. A filter paper-based microdevice for low-cost, rapid, and automated DNA extraction and amplification from diverse sample types.

    Science.gov (United States)

    Gan, Wupeng; Zhuang, Bin; Zhang, Pengfei; Han, Junping; Li, Cai-Xia; Liu, Peng

    2014-10-01

    A plastic microfluidic device that integrates a filter disc as a DNA capture phase was successfully developed for low-cost, rapid and automated DNA extraction and PCR amplification from various raw samples. The microdevice was constructed by sandwiching a piece of Fusion 5 filter, as well as a PDMS (polydimethylsiloxane) membrane, between two PMMA (poly(methyl methacrylate)) layers. An automated DNA extraction from 1 μL of human whole blood can be finished on the chip in 7 minutes by sequentially aspirating NaOH, HCl, and water through the filter. The filter disc containing extracted DNA was then taken out directly for PCR. On-chip DNA purification from 0.25-1 μL of human whole blood yielded 8.1-21.8 ng of DNA, higher than those obtained using QIAamp® DNA Micro kits. To realize DNA extraction from raw samples, an additional sample loading chamber containing a filter net with an 80 μm mesh size was designed in front of the extraction chamber to accommodate sample materials. Real-world samples, including whole blood, dried blood stains on Whatman® 903 paper, dried blood stains on FTA™ cards, buccal swabs, saliva, and cigarette butts, can all be processed in the system in 8 minutes. In addition, multiplex amplification of 15 STR (short tandem repeat) loci and Sanger-based DNA sequencing of the 520 bp GJB2 gene were accomplished from the filters that contained extracted DNA from blood. To further prove the feasibility of integrating this extraction method with downstream analyses, "in situ" PCR amplifications were successfully performed in the DNA extraction chamber following DNA purification from blood and blood stains without DNA elution. Using a modified protocol to bond the PDMS and PMMA, our plastic PDMS devices withstood the PCR process without any leakage. This study represents a significant step towards the practical application of on-chip DNA extraction methods, as well as the development of fully integrated genetic analytical systems.

  10. Automated mini-column solid-phase extraction cleanup for high-throughput analysis of chemical contaminants in foods by low-pressure gas chromatography – tandem mass spectrometry

    Science.gov (United States)

    This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography – tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. ...

  11. Fault Estimation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2002-01-01

    This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  12. Automated Contingency Management for Advanced Propulsion Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Automated Contingency Management (ACM), or the ability to confidently and autonomously adapt to fault conditions with the goal of still achieving mission...

  13. Zipper Faults

    Science.gov (United States)

    Platt, J. P.; Passchier, C. W.

    2015-12-01

    Intersecting simultaneously active pairs of faults with different orientations and opposing slip sense ("conjugate faults") present geometrical and kinematic problems. Such faults rarely offset each other, even when they have displacements of many km. A simple solution to the problem is that the two faults merge, either zippering up or unzippering, depending on the relationship between the angle of intersection and the slip senses. A widely recognized example of this is the so-called blind front developed in some thrust belts, where a backthrust branches off a decollement surface at depth. The decollement progressively unzippers, so that its hanging wall becomes the hanging wall of the backthrust, and its footwall becomes the footwall of the active decollement. The opposite situation commonly arises in core complexes, where conjugate low-angle normal faults merge to form a single detachment; in this case the two faults zipper up. Analogous situations may arise for conjugate pairs of strike-slip faults. We present kinematic and geometrical analyses of the Garlock and San Andreas faults in California, the Najd fault system in Saudi Arabia, the North and East Anatolian faults, the Karakoram and Altyn Tagh faults in Tibet, and the Tonale and Guidicarie faults in the southern Alps, all of which appear to have undergone zippering over distances of several tens to hundreds of km. The zippering process may produce complex and significant patterns of strain and rotation in the surrounding rocks, particularly if the angle between the zippered faults is large. A zippering fault may be inactive during active movement on the intersecting faults, or it may have a slip rate that differs from either fault. Intersecting conjugate ductile shear zones behave in the same way on outcrop and micro-scales.

  14. Determination of amlodipine in human plasma using automated online solid-phase extraction HPLC-tandem mass spectrometry: application to a bioequivalence study of Chinese volunteers.

    Science.gov (United States)

    Shentu, Jianzhong; Fu, Lizhi; Zhou, Huili; Hu, Xing Jiang; Liu, Jian; Chen, Junchun; Wu, Guolan

    2012-11-01

    An automated method (XLC-MS/MS) that uses online solid-phase extraction coupled with HPLC-tandem mass spectrometry was reported here for the first time to quantify amlodipine in human plasma. Automated pre-purification of plasma was performed using 10 mm × 2 mm HySphere C8 EC-SE online solid-phase extraction cartridges. After being eluted from the cartridge, the analyte and the internal standard were separated by HPLC and detected by tandem mass spectrometry. Mass spectrometric detection was achieved in the multiple reaction monitoring mode using a quadrupole tandem mass spectrometer in the positive electrospray ionization mode. The XLC-MS/MS method was validated and yielded excellent specificity. The calibration curve ranged from 0.10 to 10.22 ng/mL, and both the intra- and inter-day precision and accuracy values were within 8%. This method proved to be less laborious and was faster per analysis (high-throughput) than offline sample preparation methods. This method has been successfully applied in clinical pharmacokinetic and bioequivalence analyses. PMID:22770846

  15. A fully automated method for simultaneous determination of aflatoxins and ochratoxin A in dried fruits by pressurized liquid extraction and online solid-phase extraction cleanup coupled to ultra-high-pressure liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca

    2015-04-01

    According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of the mycotoxins with high toxicity and wide spread, aflatoxins (AFs) and ochratoxin A (OTA) in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE), with aqueous methanol (30%) at 110 °C, of the slurried dried fruit and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AFs and OTA maximum levels imposed by EU regulation in dried fruit for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8, n = 3) meet the performance criteria required by EU regulation for the determination of the levels of mycotoxins in foodstuffs. The main advantage of the proposed method is full automation of the whole analytical procedure that reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results.

  16. Fault detection in reciprocating compressor valves under varying load conditions

    Science.gov (United States)

    Pichler, Kurt; Lughofer, Edwin; Pichler, Markus; Buchegger, Thomas; Klement, Erich Peter; Huschenbett, Matthias

    2016-03-01

    This paper presents a novel approach for detecting cracked or broken reciprocating compressor valves under varying load conditions. The main idea is that the time frequency representation of vibration measurement data will show typical patterns depending on the fault state. The problem is to detect these patterns reliably. For the detection task, we make a detour via the two dimensional autocorrelation. The autocorrelation emphasizes the patterns and reduces noise effects. This makes it easier to define appropriate features. After feature extraction, classification is done using logistic regression and support vector machines. The method's performance is validated by analyzing real world measurement data. The results will show a very high detection accuracy while keeping the false alarm rates at a very low level for different compressor loads, thus achieving a load-independent method. The proposed approach is, to our best knowledge, the first automated method for reciprocating compressor valve fault detection that can handle varying load conditions.
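
    A rough sketch of the chain described above - time-frequency representation, 2-D autocorrelation, feature extraction, and a classifier - assuming SciPy and scikit-learn are available; the window length, summary features, and toy signals are placeholders, not the authors' configuration:

```python
# Spectrogram -> 2-D autocorrelation -> crude features -> linear classifier.
import numpy as np
from scipy import signal
from sklearn.linear_model import LogisticRegression

def valve_features(vibration, fs):
    # time-frequency representation of the vibration signal
    _, _, sxx = signal.spectrogram(vibration, fs=fs, nperseg=256)
    # 2-D autocorrelation emphasises repetitive patterns and suppresses noise
    ac = signal.fftconvolve(sxx, sxx[::-1, ::-1], mode="same")
    ac /= ac.max()
    # a few simple summary features of the autocorrelation surface
    return np.array([ac.mean(), ac.std(), (ac > 0.5).mean()])

fs = 10_000
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / fs)
healthy = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)
faulty = signal.square(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)

X = np.vstack([valve_features(s, fs) for s in (healthy, faulty)] * 5)
y = np.array([0, 1] * 5)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:2]))
```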

  17. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    Norbert Pfeifer

    2008-11-01

    Full Text Available Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.

  18. Demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data for the majority of United States harvested cropland

    Science.gov (United States)

    Yan, L.; Roy, D. P.

    2014-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and for economic planning, and may be indicative of the degree of agricultural capital investment, mechanization, and labor intensity. To date, field objects have not been extracted from satellite data over large areas because of computational constraints, the complexity of the extraction task, and because consistently processed appropriate resolution data have not been available or affordable. A recently published automated methodology to extract agricultural crop fields from weekly 30 m Web Enabled Landsat data (WELD) time series was refined and applied to 14 states that cover 70% of harvested U.S. cropland (USDA 2012 Census). The methodology was applied to 2010 combined weekly Landsat 5 and 7 WELD data. The field extraction and quantitative validation results are presented for the following 14 states: Iowa, North Dakota, Illinois, Kansas, Minnesota, Nebraska, Texas, South Dakota, Missouri, Indiana, Ohio, Wisconsin, Oklahoma and Michigan (sorted by area of harvested cropland). These states include the top 11 U.S states by harvested cropland area. Implications and recommendations for systematic application to global coverage Landsat data are discussed.

  19. Feature extraction of rolling bearing's weak fault based on MED and FSK%基于MED及FSK的滚动轴承微弱故障特征提取

    Institute of Scientific and Technical Information of China (English)

    刘志川; 唐力伟; 曹立军

    2014-01-01

    In fault feature extraction for rolling bearings, the fault signal is usually very weak and buried in strong background noise. Spectral kurtosis, which has been used for fault feature extraction of rolling bearings, performs poorly under strong background noise. Minimum entropy deconvolution (MED) and the fast spectral kurtosis (FSK) algorithm were therefore combined for weak fault feature extraction of rolling bearings. MED was used to reduce the strong background noise in the rolling bearing vibration signals, the parameters of the resonance-demodulation band-pass filter were then chosen by FSK, and finally the fault feature was extracted through the energy-operator demodulated envelope spectrum. The effectiveness of the proposed method was verified with a simulated signal and experimental data.
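
    The final demodulation step mentioned above can be illustrated with an envelope spectrum obtained from a band-pass filter and the Hilbert transform; MED pre-filtering and the fast-kurtogram band selection are not reproduced here, and the band edges, fault frequency, and noise level are toy assumptions:

```python
# Envelope spectrum of a band-passed bearing signal (illustrative sketch).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs, band=(2000.0, 4000.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)                      # resonance band-pass filter
    env = np.abs(hilbert(xf))                   # demodulated envelope
    spec = np.abs(np.fft.rfft(env - env.mean()))
    return np.fft.rfftfreq(env.size, 1 / fs), spec

fs, fault_rate = 20_000, 87.0                   # toy sampling rate and fault frequency
t = np.arange(0, 1.0, 1 / fs)
resonance = np.sin(2 * np.pi * 3000 * t)        # structural resonance excited by impacts
impacts = (np.sin(2 * np.pi * fault_rate * t) > 0.95).astype(float)
x = resonance * (1 + impacts) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

freqs, spec = envelope_spectrum(x, fs)
low = (freqs > 10) & (freqs < 120)
print(freqs[low][np.argmax(spec[low])])         # peak close to the 87 Hz fault rate
```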

  20. Automated Building Extraction from High-Resolution Satellite Imagery in Urban Areas Using Structural, Contextual, and Spectral Information

    Directory of Open Access Journals (Sweden)

    Jin Xiaoying

    2005-01-01

    Full Text Available High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. First, a series of geodesic opening and closing operations are used to build a differential morphological profile (DMP that provides image structural information. Building hypotheses are generated and verified through shape analysis applied to the DMP. Second, shadows are extracted using the DMP to provide reliable contextual information to hypothesize position and size of adjacent buildings. Seed building rectangles are verified and grown on a finely segmented image. Next, bright buildings are extracted using spectral information. The extraction results from the different information sources are combined after independent extraction. Performance evaluation of the building extraction on an urban test site using IKONOS satellite imagery of the City of Columbia, Missouri, is reported. With the combination of structural, contextual, and spectral information, of the building areas are extracted with a quality percentage .
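
    The differential morphological profile (DMP) used above as the structural cue can be sketched with geodesic (reconstruction-based) openings and closings at increasing structuring-element sizes, for example with scikit-image; the radii and the toy image below are illustrative only:

```python
# Differential morphological profile from openings/closings by reconstruction.
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def differential_morphological_profile(image, radii=(2, 4, 8, 16)):
    opens, closes, prev_o, prev_c = [], [], image, image
    for r in radii:
        se = disk(r)
        # opening by reconstruction: erode, then reconstruct under the image
        o = reconstruction(erosion(image, se), image, method="dilation")
        # closing by reconstruction: dilate, then reconstruct above the image
        c = reconstruction(dilation(image, se), image, method="erosion")
        opens.append(prev_o - o)    # differential of the opening profile
        closes.append(c - prev_c)   # differential of the closing profile
        prev_o, prev_c = o, c
    return np.stack(opens), np.stack(closes)

img = np.zeros((60, 60))
img[20:40, 20:40] = 1.0             # a bright block standing in for a building
dmp_open, dmp_close = differential_morphological_profile(img)
print(dmp_open.shape)               # (4, 60, 60); the block appears at the scale where it vanishes
```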

  1. Diagnosis Method for Analog Circuit Hard fault and Soft Fault

    Directory of Open Access Journals (Sweden)

    Baoru Han

    2013-09-01

    Full Text Available Because the traditional BP neural network suffers from slow convergence, easily falls into local minima, and tends to oscillate during learning, this paper introduces a method for diagnosing hard and soft faults in tolerance analog circuits based on a BP neural network with an adaptive learning rate and an additional momentum term. Firstly, the tolerance analog circuit is simulated in OrCAD/PSpice, and fault waveform data are extracted automatically and accurately by a MATLAB program. Secondly, the BP neural network is trained with the adaptive learning rate and additional momentum algorithm and then applied to hard and soft fault diagnosis of analog circuits. With shorter training time, high precision, and global convergence, the method effectively reduces misjudged and missed faults and improves both the accuracy and the speed of fault diagnosis.
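
    The two training refinements named in the title - an additional momentum term and an adaptive learning rate - can be illustrated on a toy error surface as below; this is a sketch of the generic update rules, not the paper's network, and the rate-adjustment factors are common defaults rather than values from the paper:

```python
# Gradient step with momentum and a simple adaptive learning rate: accelerate
# while the error falls, reject the step and damp the rate when it rises.
import numpy as np

def train_step(w, grad_fn, loss_fn, state, lr_up=1.05, lr_down=0.7, momentum=0.9):
    velocity, lr, last_loss = state
    velocity = momentum * velocity - lr * grad_fn(w)   # momentum smooths the updates
    w_new = w + velocity
    loss = loss_fn(w_new)
    if loss <= last_loss:
        return w_new, (velocity, lr * lr_up, loss)     # error fell: keep step, speed up
    # error rose: reject the step, damp the learning rate, reset momentum
    return w, (np.zeros_like(velocity), lr * lr_down, last_loss)

# Quadratic stand-in for a network's error function.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
loss_fn = lambda w: 0.5 * w @ A @ w
grad_fn = lambda w: A @ w

w = np.array([5.0, -3.0])
state = (np.zeros(2), 0.1, loss_fn(w))
for _ in range(50):
    w, state = train_step(w, grad_fn, loss_fn, state)
print(round(loss_fn(w), 5), round(state[1], 3))   # loss reduced monotonically from 34.5
```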

  2. Development of an automated batch-type solid-liquid extraction apparatus and extraction of Zr, Hf, and Th by triisooctylamine from HCl solutions for chemistry of element 104, Rf

    Energy Technology Data Exchange (ETDEWEB)

    Kasamatsu, Yoshitaka; Kino, Aiko; Yokokita, Takuya [Osaka Univ. (Japan). Graduate School of Science; and others

    2015-07-01

    Solid-liquid extraction of the group 4 elements Zr and Hf, which are homologues of Rf (Z = 104), and Th, a pseudo homologue, by triisooctylamine (TIOA) from HCl solutions was performed by batch method. After examining the time required to reach extraction equilibrium for these elements in various concentrations of TIOA and HCl, we investigated in detail variations in the distribution coefficients (K{sub d}) with TIOA and HCl concentrations. The K{sub d} values of Zr and Hf increased with increasing the HCl and TIOA concentrations, suggesting an increase in the abundance of the anionic chloride complexes of Zr and Hf. On the other hand, the K{sub d} values of Th were low in all the HCl concentrations studied, implying that Th does not form anionic species dominantly. We developed a new automated batch-type solid-liquid extraction apparatus for repetitive experiments on transactinide elements. Using this apparatus, we performed solid-liquid extraction employing the radioactive nuclides {sup 89m}Zr and {sup 175}Hf produced by nuclear reactions and transported continuously from the nuclear reaction chamber by the He/KCl gas-jet system. It was found that the distribution behaviors in 7-11 M HCl are almost constant in the time range 10-120 s, and the K{sub d} values are consistent with those obtained in the above manual experiment. This result suggests that the chemical reactions in the extraction process reach equilibrium within 10 s for Zr and Hf under the present experimental conditions. It took about 35 s for the extraction using the apparatus. These results indicate the applicability of the present extraction using the developed apparatus to {sup 261}Rf (T{sub 1/2} = 68 s) experiments.

  3. Fault Management Techniques in Human Spaceflight Operations

    Science.gov (United States)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in current US human spaceflight programs Space Shuttle and International Space Station are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes along with products used to anticipate, mitigate and respond to failures are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

  4. A fully automated system for analysis of pesticides in water: on-line extraction followed by liquid chromatography-tandem photodiode array/postcolumn derivatization/fluorescence detection.

    Science.gov (United States)

    Patsias, J; Papadopoulou-Mourkidou, E

    1999-01-01

    A fully automated system for on-line solid phase extraction (SPE) followed by high-performance liquid chromatography (HPLC) with tandem detection with a photodiode array detector and a fluorescence detector (after postcolumn derivatization) was developed for analysis of many chemical classes of pesticides and their major conversion products in aquatic systems. An automated on-line-SPE system (Prospekt) operated with reversed-phase cartridges (PRP-1) extracts analytes from 100 mL acidified (pH = 3) filtered water sample. On-line HPLC analysis is performed with a 15 cm C18 analytical column eluted with a mobile phase of phosphate (pH = 3)-acetonitrile in 25 min linear gradient mode. Solutes are detected by tandem diode array/derivatization/fluorescence detection. The system is controlled and monitored by a single computer operated with Millenium software. Recoveries of most analytes in samples fortified at 1 microgram/L are > 90%, with relative standard deviation values of < 5%. For a few very polar analytes, mostly N-methylcarbamoyloximes (i.e., aldicarb sulfone, methomyl, and oxamyl), recoveries are < 20%. However, for these compounds, as well as for the rest of the N-methylcarbamates except for aldicarb sulfoxide and butoxycarboxim, the limits of detection (LODs) are 0.005-0.05 microgram/L. LODs for aldicarb sulfoxide and butoxycarboxim are 0.2 and 0.1 microgram, respectively. LODs for the rest of the analytes except 4-nitrophenol, bentazone, captan, decamethrin, and MCPA are 0.05-0.1 microgram/L. LODs for the latter compounds are 0.2-1.0 microgram/L. The system can be operated unattended. PMID:10444834

  5. Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project

    Science.gov (United States)

    Perotti, Jose M. (Compiler)

    2014-01-01

    This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  6. Automated Identification of the Heart Wall Throughout the Entire Cardiac Cycle Using Optimal Cardiac Phase for Extracted Features

    Science.gov (United States)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.
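
    The EM-based classification of the initial frame can be illustrated with a Gaussian mixture fitted to simple echo features; the sketch below uses scikit-learn with amplitude as a stand-in feature (not the authors' multi-feature set) to label samples as wall or lumen:

```python
# EM-fitted two-component Gaussian mixture separating wall and lumen samples.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy echo amplitudes: brighter heart-wall region and darker lumen
wall = rng.normal(0.8, 0.1, size=(500, 1))
lumen = rng.normal(0.2, 0.1, size=(500, 1))
samples = np.vstack([wall, lumen])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)  # EM fit
labels = gmm.predict(samples)
wall_component = np.argmax(gmm.means_.ravel())   # the brighter mean is the wall
print((labels[:500] == wall_component).mean())   # fraction of wall samples recovered
```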

  7. LFI: A Practical and General Library-Level Fault Injector

    OpenAIRE

    MARINESCU Paul; Candea, George

    2009-01-01

    Fault injection, a critical aspect of testing robust systems, is often overlooked in the development of general-purpose software. We believe this is due to the absence of easy-to-use tools and to the extensive manual labor required to perform fault injection tests. This paper introduces LFI (Library Fault Injector), a tool that automates the preparation of fault scenarios and their injection at the boundary between shared libraries and applications. LFI extends prior work by automatically pro...

  8. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  10. Fault detection and diagnosis for complex multivariable processes using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Weerasinghe, M

    1998-06-01

    the complex input-output mapping performed by a network, and are in general difficult to obtain. Statistical techniques and relationships between fuzzy systems and standard radial basis function networks have been exploited to prune a trained network and to extract qualitative rules that explain the network operation for fault diagnosis. Pruning the networks improved the fault classification, while offering simple qualitative rules on process behaviour. Automation of the pruning procedure introduced flexibility and ease of application of the methods. (author)

  11. Semi-automated extraction of microbial DNA from feces for qPCR and phylogenetic microarray analysis

    NARCIS (Netherlands)

    Nylund, L.; Heilig, G.H.J.; Salminen, S.; Vos, de W.M.; Satokari, R.M.

    2010-01-01

    The human gastrointestinal tract (GI-tract) harbors a complex microbial ecosystem, largely composed of so far uncultured species, which can be detected only by using techniques such as PCR and by different hybridization techniques including phylogenetic microarrays. Manual DNA extraction from feces

  12. Comparison of automated nucleic acid extraction methods for the detection of cytomegalovirus DNA in fluids and tissues

    Directory of Open Access Journals (Sweden)

    Jesse J. Waggoner

    2014-04-01

    Full Text Available Testing for cytomegalovirus (CMV) DNA is increasingly being used for specimen types other than plasma or whole blood. However, few studies have investigated the performance of different nucleic acid extraction protocols in such specimens. In this study, CMV extraction using the Cell-free 1000 and Pathogen Complex 400 protocols on the QIAsymphony Sample Processing (SP) system were compared using bronchoalveolar lavage fluid (BAL), tissue samples, and urine. The QIAsymphony Assay Set-up (AS) system was used to assemble reactions using artus CMV PCR reagents and amplification was carried out on the Rotor-Gene Q. Samples from 93 patients previously tested for CMV DNA and negative samples spiked with CMV AD-169 were used to evaluate assay performance. The Pathogen Complex 400 protocol yielded the following results: BAL, sensitivity 100% (33/33), specificity 87% (20/23); tissue, sensitivity 100% (25/25), specificity 100% (20/20); urine, sensitivity 100% (21/21), specificity 100% (20/20). Cell-free 1000 extraction gave comparable results for BAL and tissue; however, for urine, the sensitivity was 86% (18/21) and specimen quantitation was inaccurate. Comparative studies of different extraction protocols and DNA detection methods in body fluids and tissues are needed, as assays optimized for blood or plasma will not necessarily perform well on other specimen types.
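
    The sensitivity and specificity figures quoted above are simple ratios of concordant results; a minimal sketch of that arithmetic, using the BAL counts from the record (the helper functions are ours), is:

```python
# Minimal sketch: sensitivity/specificity from positive/negative agreement counts,
# using the BAL figures reported above (33/33 true positives, 20/23 true negatives).

def sensitivity(true_pos: int, total_pos: int) -> float:
    """Fraction of known-positive samples the assay detected."""
    return true_pos / total_pos

def specificity(true_neg: int, total_neg: int) -> float:
    """Fraction of known-negative samples the assay called negative."""
    return true_neg / total_neg

print(f"BAL sensitivity: {sensitivity(33, 33):.0%}")   # 100%
print(f"BAL specificity: {specificity(20, 23):.0%}")   # 87%
```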

  13. An automated flow injection system for metal determination by flame atomic absorption spectrometry involving on-line fabric disk sorptive extraction technique.

    Science.gov (United States)

    Anthemidis, A; Kazantzi, V; Samanidou, V; Kabir, A; Furton, K G

    2016-08-15

    A novel flow injection-fabric disk sorptive extraction (FI-FDSE) system was developed for automated determination of trace metals. The platform was based on a minicolumn packed with sol-gel coated fabric media in the form of disks, incorporated into an on-line solid-phase extraction system, coupled with flame atomic absorption spectrometry (FAAS). This configuration provides minor backpressure, resulting in high loading flow rates and shorter analytical cycles. The potential of this technique was demonstrated for trace lead and cadmium determination in environmental water samples. The applicability of different sol-gel coated FPSE media was investigated. The on-line formed complex of metal with ammonium pyrrolidine dithiocarbamate (APDC) was retained on the fabric surface and methyl isobutyl ketone (MIBK) was used to elute the analytes prior to atomization. For a 90 s preconcentration time, enrichment factors of 140 and 38 and detection limits (3σ) of 1.8 and 0.4 μg L(-1) were achieved for lead and cadmium determination, respectively, with a sampling frequency of 30 h(-1). The accuracy of the proposed method was estimated by analyzing standard reference materials and spiked water samples. PMID:27260436

  14. High quality DNA obtained with an automated DNA extraction method with 70+ year old formalin-fixed celloidin-embedded (FFCE) blocks from the indiana medical history museum.

    Science.gov (United States)

    Niland, Erin E; McGuire, Audrey; Cox, Mary H; Sandusky, George E

    2012-01-01

    DNA and RNA have been used as markers of tissue quality and integrity throughout the last few decades. In this research study, genomic quality DNA of kidney, liver, heart, lung, spleen, and brain were analyzed in tissues from post-mortem patients and surgical cancer cases spanning the past century. DNA extraction was performed on over 180 samples from: 70+ year old formalin-fixed celloidin-embedded (FFCE) tissues, formalin-fixed paraffin-embedded (FFPE) tissue samples from surgical cases and post-mortem cases from the 1970's, 1980's, 1990's, and 2000's, tissues fixed in 10% neutral buffered formalin/stored in 70% ethanol from the 1990's, 70+ year old tissues fixed in unbuffered formalin of various concentrations, and fresh tissue as a control. To extract DNA from FFCE samples and ethanol-soaked samples, a modified standard operating procedure was used in which all tissues were homogenized, digested with a proteinase K solution for a long period of time (24-48 hours), and DNA was extracted using the Autogen Flexstar automated extraction machine. To extract DNA from FFPE, all tissues were soaked in xylene to remove the paraffin from the tissue prior to digestion, and FFPE tissues were not homogenized. The results were as follows: celloidin-embedded and paraffin-embedded tissues yielded the highest DNA concentration and greatest DNA quality, while the formalin in various concentrations, and long term formalin/ethanol-stored tissue yielded both the lowest DNA concentration and quality of the tissues tested. The average DNA yield for the various fixatives was: 367.77 μg/mL FFCE, 590.7 μg/mL FFPE, 53.74 μg/mL formalin-fixed/70% ethanol-stored and 33.2 μg/mL unbuffered formalin tissues. The average OD readings for FFCE, FFPE, formalin-fixed/70% ethanol-stored tissues, and tissues fixed in unbuffered formalin were 1.86, 1.87, 1.43, and 1.48 respectively. The results show that usable DNA can be extracted from tissue fixed in formalin and embedded in celloidin or

  15. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra;

    2016-01-01

    therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment...... and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/....

  16. Automated chromatographic system with polarimetric detection laser applied in the control of fermentation processes and seaweed extracts characterization

    International Nuclear Information System (INIS)

    Applications and innovations of chromatographic and polarimetric systems are presented, in which methodologies were developed for measuring the input molasses and the resulting product of an alcohol fermentation process from a rich honey, and for evaluating a honey fermentation process used to obtain a drink native to the Yucatan region. The composition of optically active substances in seaweed, of interest to the pharmaceutical industry, was also assessed. The findings provide alternative measurements for raw materials and products of the sugar, beekeeping and pharmaceutical industries; liquid chromatography with automated polarimetric detection reduces measurement times to as little as 15 min, making it comparable to high-resolution chromatography while significantly reducing operating costs. The chromatography system with polarimetric detection (SCDP) includes new standard-size columns designed by the authors, which allow samples with volumes up to 1 ml to be processed and reduce measurement time to 15 min, decreasing the sample volume five-fold and halving the measurement time. The determination of substance concentrations from the peaks of the chromatograms obtained with the different columns was evaluated, and the uncertainty of the measurements was calculated. Improvements to a data acquisition program (ADQUIPOL v.2.0) and new programs for the preparation of chromatograms (CROMAPOL V.1.0 and V.1.2) provide important benefits, allowing a considerable saving of time in the processing of results; they can be applied to other chromatography systems with the appropriate adjustments. (Author)

  17. Automated extraction and assessment of functional features of areal measured microstructures using a segmentation-based evaluation method

    Science.gov (United States)

    Hartmann, Wito; Loderer, Andreas

    2014-10-01

    In addition to currently available surface parameters according to ISO 4287:2010 and ISO 25178-2:2012, which are defined particularly for stochastic surfaces, a universal evaluation procedure is provided for geometrical, well-defined, microstructured surfaces. Since several million features (like diameters, depths, etc.) are present on microstructured surfaces, segmentation techniques are used to automate the feature-based dimensional evaluation. By applying an additional extended 3D evaluation after the segmentation and classification procedure, the accuracy of the evaluation is improved compared to the direct evaluation of segments, and additional functional parameters can be derived. Advantages of the extended segmentation-based evaluation method include not only the ability to evaluate the manufacturing process statistically (e.g. by capability indices, according to ISO 21747:2007 and ISO 3534-2:2013) and to derive statistically reliable values for the correction of microstructuring processes, but also the direct re-use of the evaluated parameter (including its statistical distribution) in simulations for the calculation of probabilities with respect to the functionality of the microstructured surface. The practical suitability of this method is demonstrated using examples of microstructures for the improvement of sliding and ink transfer for printing machines.
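
    Capability indices such as Cp and Cpk, mentioned above in connection with ISO 21747, reduce to a few lines of arithmetic; the measured values and specification limits in the sketch below are invented for illustration only.

```python
# Minimal sketch: process capability indices (Cp, Cpk) for one measured microstructure feature.
# The simulated diameters and specification limits are assumptions, not data from the record.
import numpy as np

diameters = np.random.normal(loc=50.02, scale=0.05, size=200)   # e.g. measured diameters [um]
lsl, usl = 49.85, 50.15                                          # assumed specification limits [um]

mu, sigma = diameters.mean(), diameters.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                                   # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)                      # capability including centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```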

  18. Effective Semi-Automated Extraction of Intact Mitochondria from Solid Tissues Using Gentle Mechanical Homogenization and Pressure Cycling Technology

    OpenAIRE

    Carlson, G.; Freeman, E.; Ivanov, A.R.; A. Lazarev; Gross, V. S.

    2011-01-01

    Impaired mitochondrial function has been linked to many diseases, such as stroke, heart disease, cancer, Type II diabetes and Parkinson's disease. Mitochondria-enriched preparations are needed for proteomic and metabolomic studies that may provide crucial insights into tissue-specific mitochondrial function and dysfunction, and answer fundamental questions of cell energetic and oxidative stress. Mitochondria extractions from whole tissue samples are typically performed using Potter-Elvehjem h...

  19. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco;

    2016-01-01

    An original tool for parameter extraction of PSpice models has been released, enabling a simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...... of two IGBT modules rated at 1.7 kV / 1 kA and 1.7 kV / 1.4 kA....

  20. ANN BASED FAULT DIAGNOSIS OF ROLLING ELEMENT BEARING USING TIME-FREQUENCY DOMAIN FEATURE

    Directory of Open Access Journals (Sweden)

    D.H. PANDYA

    2012-06-01

    Full Text Available This paper presents a methodology for automating the fault diagnosis of ball bearings having localized defects (spalls) on the various bearing components. The system uses wavelet packet decomposition with the 'rbio5.5' real mother wavelet function for feature extraction from the vibration signal, recorded for various bearing fault conditions. The decomposition level is determined by the sampling frequency and the characteristic defect frequency. A maximum-energy to minimum-Shannon-entropy ratio criterion is used to select the best node of the wavelet packet tree. Two features, kurtosis and energy, are extracted from the wavelet packet coefficients of the selected node of the WPT. A total of 10 data sets at five different speeds are recorded for each bearing condition for fault classification. The extracted features are then used to train and test a multilayer perceptron neural network to classify the rolling element bearing condition as HB, ORD, IRD, BD and CD. The proposed artificial neural network classifier with multilayer perceptron has an overall fault classification rate of 97%.
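
    The wavelet-packet feature step described above can be sketched with a generic library. The snippet below is a rough illustration using PyWavelets with the 'rbio5.5' wavelet named in the record; the synthetic signal, sampling rate, decomposition level and the exact form of the energy-to-entropy score are our assumptions, not the authors' implementation.

```python
# Sketch: wavelet packet feature extraction for a bearing vibration record (assumed synthetic signal).
import numpy as np
import pywt
from scipy.stats import kurtosis

fs = 12_000                                  # assumed sampling frequency [Hz]
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(t.size)  # stand-in vibration signal

level = 3                                    # assumed decomposition level
wp = pywt.WaveletPacket(signal, wavelet='rbio5.5', mode='symmetric', maxlevel=level)

def energy_entropy_ratio(coeffs: np.ndarray) -> float:
    """Maximum-energy / minimum-Shannon-entropy style score for one node."""
    energy = np.sum(coeffs ** 2)
    p = coeffs ** 2 / energy
    entropy = -np.sum(p * np.log(p + 1e-12))
    return energy / entropy

# Pick the terminal node with the best score, then extract kurtosis and energy from it.
nodes = wp.get_level(level, order='freq')
best = max(nodes, key=lambda n: energy_entropy_ratio(n.data))
features = {'kurtosis': float(kurtosis(best.data)), 'energy': float(np.sum(best.data ** 2))}
print(best.path, features)
```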

  1. Fully automated analysis of beta-lactams in bovine milk by online solid phase extraction-liquid chromatography-electrospray-tandem mass spectrometry.

    Science.gov (United States)

    Kantiani, Lina; Farré, Marinella; Sibum, Martin; Postigo, Cristina; López de Alda, Miren; Barceló, Damiá

    2009-06-01

    A fully automated method for the detection of beta-lactam antibiotics, including six penicillins (amoxicillin, ampicillin, cloxacillin, dicloxacillin, oxacillin, and penicillin G) and four cephalosporins (cefazolin, ceftiofur, cefoperazone, and cefalexin) in bovine milk samples has been developed. The outlined method is based on online solid-phase extraction-liquid chromatography/electrospray-tandem mass spectrometry (SPE-LC/ESI-MS-MS). Target compounds were concentrated from 500 microL of centrifuged milk samples using an online SPE procedure with C18 HD cartridges. Target analytes were eluted with a gradient mobile phase (water + 0.1% formic acid/methanol + 0.1% formic acid) at a flow rate of 0.7 mL/min. Chromatographic separation was achieved within 10 min using a C-12 reversed phase analytical column. For unequivocal identification and confirmation, two multiple reaction monitoring (MRM) transitions were acquired for each analyte in the positive electrospray ionization mode (ESI(+)). Method limits of detection (LODs) in milk were well below the maximum residue limits (MRLs) set by the European Union for all compounds. Limits of quantification in milk were between 0.09 ng/mL and 1.44 ng/mL. The developed method was validated according to EU's requirements, and accuracy results ranged from 80 to 116%. Finally, the method was applied to the analysis of twenty real samples previously screened by the inhibition of microbial growth test Eclipse 100. This new developed method offers high sensitivity and accuracy of results, minimum sample pre-treatment, and uses for the first time an automated online SPE offering a high throughput analysis. Because of all these characteristics, the proposed method is applicable and could be deemed necessary within the field of food control and safety. PMID:19402673

  2. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Directory of Open Access Journals (Sweden)

    Roshan N. RAJAPAKSE

    2016-03-01

    Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross-sectional CT slices using a combination of the two aforementioned approaches. The proposed method is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks under consideration. Firstly, these templates were used to devise rules, which is characteristic of the knowledge-based method. Secondly, the same template sets were employed to perform template matching, related to the learning-methodologies approach. The proposed method was tested on two landmarks, the Dorsum sellae and the Pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined, template-set-centric approach proposed in this study.

  3. Monitoring and Fault Diagnosis for Batch Process Based on Feature Extraction in Fisher Subspace

    Institute of Scientific and Technical Information of China (English)

    赵旭; 阎威武; 邵惠鹤

    2006-01-01

    Multivariate statistical process control methods have been widely used in biochemical industries. Batch processes are usually monitored by the method of multi-way principal component analysis (MPCA). In this article, a new batch process monitoring and fault diagnosis method based on feature extraction in the Fisher subspace is proposed. The feature vector and the feature direction are extracted by projecting the high-dimensional process data onto the low-dimensional Fisher space. The similarity of the feature vector between the current and the reference batch is calculated for on-line process monitoring, and the contribution plot of weights in the feature direction is calculated for fault diagnosis. The approach overcomes the need for estimating or filling in the unknown portion of the process variable trajectories from the current time to the end of the batch. Simulation results on the benchmark model of the penicillin fermentation process demonstrate that, in comparison to the MPCA method, the proposed method is more accurate and efficient for process monitoring and fault diagnosis.

  4. New insights on Southern Coyote Creek Fault and Superstition Hills Fault

    Science.gov (United States)

    van Zandt, A. J.; Mellors, R. J.; Rockwell, T. K.; Burgess, M. K.; O'Hare, M.

    2007-12-01

    Recent field work has confirmed an extension of the southern Coyote Creek (CCF) branch of the San Jacinto fault in the western Salton trough. The fault marks the western edge of an area of subsidence caused by groundwater extraction, and field measurements suggest that recent strike-slip motion has occurred on this fault as well. We attempt to determine whether this fault connects at depth with the Superstition Hills fault (SHF) to the southeast by modeling observed surface deformation between the two faults measured by InSAR. Stacked ERS (descending) InSAR data from 1992 to 2000 is initially modeled using a finite fault in an elastic half-space. Observed deformation along the SHF and Elmore Ranch fault is modeled assuming shallow (< 5 km) creep. We test various models to explain surface deformation between the two faults.

  5. miRSel: Automated extraction of associations between microRNAs and genes from the biomedical literature

    Directory of Open Access Journals (Sweden)

    Zimmer Ralf

    2010-03-01

    Full Text Available Abstract Background MicroRNAs have been discovered as important regulators of gene expression. To identify the target genes of microRNAs, several databases and prediction algorithms have been developed. Only few experimentally confirmed microRNA targets are available in databases. Many of the microRNA targets stored in databases were derived from large-scale experiments that are considered not very reliable. We propose to use text mining of publication abstracts for extracting microRNA-gene associations including microRNA-target relations to complement current repositories. Results The microRNA-gene association database miRSel combines text-mining results with existing databases and computational predictions. Text mining enables the reliable extraction of microRNA, gene and protein occurrences as well as their relationships from texts. Thereby, we increased the number of human, mouse and rat miRNA-gene associations by at least three-fold as compared to e.g. TarBase, a resource for miRNA-gene associations. Conclusions Our database miRSel offers the currently largest collection of literature derived miRNA-gene associations. Comprehensive collections of miRNA-gene associations are important for the development of miRNA target prediction tools and the analysis of regulatory networks. miRSel is updated daily and can be queried using a web-based interface via microRNA identifiers, gene and protein names, PubMed queries as well as gene ontology (GO terms. miRSel is freely available online at http://services.bio.ifi.lmu.de/mirsel.

  6. Debug automation from pre-silicon to post-silicon

    CERN Document Server

    Dehbashi, Mehdi

    2015-01-01

    This book describes automated debugging approaches for the bugs and the faults which appear in different abstraction levels of a hardware system. The authors employ a transaction-based debug approach to systems at the transaction-level, asserting the correct relation of transactions. The automated debug approach for design bugs finds the potential fault candidates at RTL and gate-level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at gate-level. The various debug approaches described achieve high diagnosis accuracy and reduce the debugging time, shortening the IC development cycle and increasing the productivity of designers. Describes a unified framework for debug automation used at both pre-silicon and post-silicon stages; Provides approaches for debug automation of a hardware system at different levels of ...

  7. Intelligent Fault Diagnosis in Lead-zinc Smelting Process

    Institute of Scientific and Technical Information of China (English)

    Wei-Hua Gui; Chun-Hua Yang; Jing Teng

    2007-01-01

    According to the fault characteristics of the imperial smelting process (ISP), a novel intelligent integrated fault diagnostic system is developed. In the system, fuzzy neural networks are utilized to extract fault symptoms and an expert system is employed for effective fault diagnosis of the process. Furthermore, fuzzy abductive inference is introduced to diagnose multiple faults. The feasibility of the proposed system is demonstrated through a pilot plant case study.

  8. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1998-08-01

    In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

  9. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy, Helsinki (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1996-12-31

    In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

  10. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    Science.gov (United States)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as multiple signal categorization, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately. Further, it estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation. Consequently, the time domain signal is not affected as a result of frequency domain and inverse transformations.
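
    The three-stage method itself (signal-subspace categorization, matrix pencil, EMD and interior-point/genetic optimization) is too involved to reproduce here. As a much simpler stand-in, the sketch below fits a single exponentially damped sinusoid with SciPy's curve_fit to illustrate the kind of frequency and damping-exponent estimates the method targets; the model form and the synthetic data are assumptions, not the authors' algorithm.

```python
# Stand-in sketch: least-squares fit of one exponentially damped sinusoid (not the 3-stage method above).
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, amp, decay, freq, phase):
    """amp * exp(-decay * t) * cos(2*pi*freq*t + phase)"""
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

fs = 1_000
t = np.arange(0, 2.0, 1 / fs)
true = damped_sine(t, 1.0, 1.5, 12.0, 0.3)            # assumed 12 Hz mode, damping exponent 1.5 1/s
noisy = true + 0.05 * np.random.randn(t.size)

p0 = [0.8, 1.0, 10.0, 0.0]                            # rough initial guess
params, _ = curve_fit(damped_sine, t, noisy, p0=p0)
amp, decay, freq, phase = params
print(f"estimated frequency: {freq:.2f} Hz, damping exponent: {decay:.2f} 1/s")
```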

  11. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time, software- and hardware-oriented house automation research project, designed and implemented, capable of automating a house's electricity and providing a security system to detect the presence of unexpected behavior.

  12. Fault tolerance and reliability in integrated ship control

    DEFF Research Database (Denmark)

    Nielsen, Jens Frederik Dalsgaard; Izadi-Zamanabadi, Roozbeh; Schiøler, Henrik

    2002-01-01

    Various strategies for achieving fault tolerance in large scale control systems are discussed. The positive and negative impacts of distribution through network communication are presented. The ATOMOS framework for standardized reliable marine automation is presented along with the corresponding...

  13. Development and validation of an automated liquid-liquid extraction GC/MS method for the determination of THC, 11-OH-THC, and free THC-carboxylic acid (THC-COOH) from blood serum.

    Science.gov (United States)

    Purschke, Kirsten; Heinl, Sonja; Lerch, Oliver; Erdmann, Freidoon; Veit, Florian

    2016-06-01

    The analysis of Δ(9)-tetrahydrocannabinol (THC) and its metabolites 11-hydroxy-Δ(9)-tetrahydrocannabinol (11-OH-THC), and 11-nor-9-carboxy-Δ(9)-tetrahydrocannabinol (THC-COOH) from blood serum is a routine task in forensic toxicology laboratories. For examination of consumption habits, the concentration of the phase I metabolite THC-COOH is used. Recommendations for interpretation of analysis values in medical-psychological assessments (regranting of driver's licenses, Germany) include threshold values for the free, unconjugated THC-COOH. Using a fully automated two-step liquid-liquid extraction, THC, 11-OH-THC, and free, unconjugated THC-COOH were extracted from blood serum, silylated with N-methyl-N-(trimethylsilyl) trifluoroacetamide (MSTFA), and analyzed by GC/MS. The automation was carried out by an x-y-z sample robot equipped with modules for shaking, centrifugation, and solvent evaporation. This method was based on a previously developed manual sample preparation method. Validation guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh) were fulfilled for both methods; the focus of this article is the automated one. Limits of detection and quantification for THC were 0.3 and 0.6 μg/L, for 11-OH-THC were 0.1 and 0.8 μg/L, and for THC-COOH were 0.3 and 1.1 μg/L, when extracting only 0.5 mL of blood serum. Therefore, the required limit of quantification for THC of 1 μg/L in driving under the influence of cannabis cases in Germany (and other countries) can be reached and the method can be employed in that context. Real and external control samples were analyzed, and a round robin test was passed successfully. To date, the method is employed in the Institute of Legal Medicine in Giessen, Germany, in daily routine. Automation helps in avoiding errors during sample preparation and reduces the workload of the laboratory personnel. Due to its flexibility, the analysis system can be employed for other liquid-liquid extractions as

  15. Fault feature extraction of gearboxes based on multifractal detrended fluctuation analysis

    Institute of Scientific and Technical Information of China (English)

    林近山; 陈前

    2013-01-01

    Gearbox fault data are usually characterized by nonstationarity and multiple scaling behaviors, so a detrended fluctuation analysis (DFA) often fails to uncover their underlying dynamical mechanism. Multifractal DFA (MF-DFA) is an extension of DFA and is able to effectively reveal the dynamical mechanism hidden in nonstationary data with multiple scaling behaviors. To start with, MF-DFA was used to compute the multifractal singularity spectrum of gearbox fault data. Next, four characteristic parameters, the multifractal spectrum width, the maximum singularity exponent, the minimum singularity exponent and the singularity exponent corresponding to the extremum of the multifractal spectrum, all of which have clear physical meaning, were taken to express the underlying dynamical mechanism of gearbox fault data and employed as fault features. Consequently, a novel method for feature extraction from gearbox fault data was proposed based on MF-DFA. The proposed method, together with DFA, was then utilized to separate the normal, slightly worn, medium-worn and broken-tooth vibration data from a four-speed motorcycle gearbox. The results showed that the proposed method overcomes the deficiencies of DFA: it is sensitive to small changes in gearbox fault conditions, it can fully separate fault patterns that are close to each other, and it is a feasible method for feature extraction from gearbox fault data.
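
    A basic MF-DFA can be sketched in a few lines of NumPy. The version below uses forward (non-overlapping) segmentation and linear detrending only; the scale range, q values and test signal are assumptions. It returns the generalized Hurst exponents h(q), from which the singularity spectrum used in the record is normally derived.

```python
# Sketch of multifractal detrended fluctuation analysis (MF-DFA); a basic forward-segmentation
# variant with linear detrending. Scale range, q values and the test signal are assumptions.
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Return h(q), the generalized Hurst exponents, for signal x."""
    profile = np.cumsum(x - np.mean(x))
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = np.empty(n_seg)
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            i = np.arange(s)
            trend = np.polyval(np.polyfit(i, seg, order), i)
            f2[v] = np.mean((seg - trend) ** 2)       # detrended variance of segment v
        for k, q in enumerate(qs):
            if q == 0:
                Fq[k, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[k, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    # Slope of log F_q(s) versus log s gives h(q); the spectrum width follows from h(q).
    return np.array([np.polyfit(np.log(scales), np.log(Fq[k]), 1)[0] for k in range(len(qs))])

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                         # stand-in vibration record
scales = np.unique(np.logspace(4, 9, 12, base=2).astype(int))
qs = np.arange(-5, 6)
h = mfdfa(x, scales, qs)
print(dict(zip(qs.tolist(), np.round(h, 3))))         # white noise: h(q) close to 0.5
```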

  16. Automated solvent concentrator

    Science.gov (United States)

    Griffith, J. S.; Stuart, J. L.

    1976-01-01

    Designed for automated drug identification system (AUDRI), device increases concentration by 100. Sample is first filtered, removing particulate contaminants and reducing water content of sample. Sample is extracted from filtered residue by specific solvent. Concentrator provides input material to analysis subsystem.

  17. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    Full Text Available During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is also analyzed. Based on this, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful to meet the requirements of practical short-circuit calculation and the construction of a relaying protection system for a power grid with penetration of DFIGs.
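
    The notch-filtering step mentioned above, removing the double-grid-frequency oscillation from a synchronous-frame quantity, can be sketched with SciPy's iirnotch. The grid frequency, sampling rate, Q factor and toy signal below are assumptions rather than the paper's settings.

```python
# Sketch: removing the 2*f_grid ripple from a dq-frame quantity with a notch filter,
# as used for sequence separation. Grid frequency, sampling rate and Q are assumptions.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 5_000                       # assumed controller sampling rate [Hz]
f_grid = 50.0
f_notch = 2 * f_grid             # negative sequence appears at 2*f_grid in the synchronous frame

t = np.arange(0, 0.5, 1 / fs)
dq_signal = 1.0 + 0.3 * np.cos(2 * np.pi * f_notch * t)   # DC (positive seq.) + 100 Hz ripple

b, a = iirnotch(w0=f_notch, Q=30.0, fs=fs)
positive_seq = filtfilt(b, a, dq_signal)                  # ripple removed, DC component kept
negative_seq_ripple = dq_signal - positive_seq            # what the notch rejected
print(np.round(positive_seq[-5:], 3))
```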

  18. Determination of talinolol in human plasma using automated on-line solid phase extraction combined with atmospheric pressure chemical ionization tandem mass spectrometry.

    Science.gov (United States)

    Bourgogne, Emmanuel; Grivet, Chantal; Hopfgartner, Gérard

    2005-06-01

    A specific LC-MS/MS assay was developed for the automated determination of talinolol in human plasma, using an on-line solid phase extraction system (Prospekt 2) combined with atmospheric pressure chemical ionization (APCI) tandem mass spectrometry. The method involved simple precipitation of plasma proteins with perchloric acid (containing propranolol as the internal standard, IS) and injection of the supernatant onto a C8 End Capped (10 mm x 2 mm) cartridge without any evaporation step. Using the back-flush mode, the analytes were transferred onto an analytical column (XTerra C18, 50 mm x 4.6 mm) for chromatographic separation and mass spectrometric detection. One of the particularities of the assay is that the SPE cartridge is used as a column-switching device and not as an SPE cartridge. Therefore, the same SPE cartridge could be used more than 28 times, significantly reducing the analysis cost. APCI ionization was selected to overcome any potential matrix suppression effects because the analyte and IS co-eluted. The mean precision and accuracy in the concentration range 2.5-200 ng/mL were found to be 103% and 7.4%, respectively. The data were assessed from QC samples during the validation phase of the assay. The lower limit of quantification was 2.5 ng/mL, using a 250 microL plasma aliquot. The LC-MS/MS method provided the requisite selectivity, sensitivity, robustness, accuracy and precision to assess the pharmacokinetics of the compound in several hundred human plasma samples. PMID:15866498

  19. Bearing fault detection using motor current signal analysis based on wavelet packet decomposition and Hilbert envelope

    Directory of Open Access Journals (Sweden)

    Imaouchen Yacine

    2015-01-01

    Full Text Available To detect rolling element bearing defects, much research has focused on Motor Current Signal Analysis (MCSA) using spectral analysis and the wavelet transform. This paper presents a new approach for rolling element bearing diagnosis without slip estimation, based on wavelet packet decomposition (WPD) and the Hilbert transform. Specifically, the Hilbert transform first extracts the envelope of the motor current signal, which contains the bearing fault-related frequency information. Subsequently, the envelope signal is adaptively decomposed into a number of frequency bands by the WPD algorithm. Two criteria based on energy and correlation analyses have been investigated to automate the frequency band selection. Experimental studies have confirmed that the proposed approach is effective in diagnosing rolling element bearing faults for improved induction motor condition monitoring and damage assessment.
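
    A minimal sketch of the Hilbert-envelope step is shown below; the record's WPD band selection and the energy/correlation criteria are omitted, and the synthetic current signal and fault frequency are assumptions.

```python
# Sketch: Hilbert envelope of a motor-current-like signal followed by an envelope spectrum.
# Synthetic signal and fault frequency are assumptions; the WPD band-selection step is omitted.
import numpy as np
from scipy.signal import hilbert

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
f_fault = 87.0                                   # assumed bearing fault frequency [Hz]
carrier = np.sin(2 * np.pi * 50 * t)             # 50 Hz supply component
signal = (1 + 0.2 * np.sin(2 * np.pi * f_fault * t)) * carrier + 0.05 * np.random.randn(t.size)

envelope = np.abs(hilbert(signal))               # demodulate: the envelope carries the fault modulation
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(f"dominant envelope line at {freqs[np.argmax(env_spectrum)]:.1f} Hz")   # close to f_fault
```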

  20. Feature extraction and fusion for thruster faults of AUV with random disturbance

    Institute of Scientific and Technical Information of China (English)

    张铭钧; 殷宝吉; 刘维新; 王玉甲

    2015-01-01

    The correctness of fault diagnosis results for the thrusters of an AUV (autonomous underwater vehicle) is frequently affected by random disturbances caused by the internal noise of underwater sensors. To decrease this influence, two feature extraction methods, extracting fault features from the wavelet approximate component of the longitudinal velocity and from the changing rate of the control voltage, together with a feature fusion method with normalization, were proposed. After wavelet reconstruction of the scale coefficients from the wavelet decomposition of the longitudinal velocity, the wavelet approximate component was obtained. After differentiation of the control voltage, its changing rate was acquired. Two kinds of fault features were extracted from the wavelet approximate component and the changing rate, respectively, based on a modified Bayes classification algorithm. Following the fusion of the two kinds of fault features based on evidence theory, the fusion result was normalized. The effectiveness of the proposed methods was verified by pool experiments with an AUV prototype.

  1. Toward Expanding Tremor Observations in the Northern San Andreas Fault System in the 1990s

    Science.gov (United States)

    Damiao, L. G.; Dreger, D. S.; Nadeau, R. M.; Taira, T.; Guilhem, A.; Luna, B.; Zhang, H.

    2015-12-01

    The connection between tremor activity and active fault processes continues to expand our understanding of deep fault zone properties and deformation, the tectonic process, and the relationship of tremor to the occurrence of larger earthquakes. Compared to tremors in subduction zones, known tremor signals in California are roughly 5 to 10 times smaller in amplitude and duration. These characteristics, in addition to scarce geographic coverage, lack of continuous data (e.g., before mid-2001 at Parkfield), and absence of instrumentation sensitive enough to monitor these events, have stifled tremor detection. The continuous monitoring of these events over a relatively short time period in limited locations may lead to a parochial view of the tremor phenomena and its relationship to fault, tectonic, and earthquake processes. To help overcome this, we have embarked on a project to expand the geographic and temporal scope of tremor observation along the Northern SAF system using available continuous seismic recordings from a broad array of hundreds of surface seismic stations from multiple seismic networks. Available data for most of these stations also extend back into the mid-1990s. Processing and analysis of tremor signals from this large and low signal-to-noise dataset requires a heavily automated, data-science type approach and specialized techniques for identifying and extracting reliable data. We report here on the automated, envelope-based methodology we have developed. Finally, we compare our catalog results with pre-existing tremor catalogs in the Parkfield area.

  2. Method of Fault Area & Section Location for Non-solidly Earthed Distribution System

    Institute of Scientific and Technical Information of China (English)

    ZHENG Guping; JIANG Chao; LI Gang; QI Zheng; YANG Yihan

    2012-01-01

    Medium voltage distribution networks in China mainly use overhead lines, and most of them are small-current grounding (non-solidly earthed) systems, in which single-phase-to-earth faults account for over 80% of all failures in the power grid. Fault monitoring is one of the main functions of distribution automation, so the new generation of power distribution automation systems in China should thoroughly solve the problem of locating small-current grounding faults.

  3. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual effort in library routines, through the automation of collection, storage, administration, processing, preservation, communication, etc.

  4. Interactive Fault Localization Using Test Information

    Institute of Scientific and Technical Information of China (English)

    Dan Hao; Lu Zhang; Tao Xie; Hong Mei; Jia-Su Sun

    2009-01-01

    Debugging is a time-consuming task in software development. Although various automated approaches have been proposed, they are not effective enough. On the other hand, in manual debugging, developers have difficulty in choosing breakpoints. To address these problems and help developers locate faults effectively, we propose an interactive fault-localization framework, combining the benefits of automated approaches and manual debugging. Before the fault is found, this framework continuously recommends checking points based on statements' suspicions, which are calculated according to the execution information of test cases and the feedback information from the developer at earlier checking points. Then we propose a naive approach, which is an initial implementation of this framework. However, with this naive approach or manual debugging, developers' wrong estimation of whether the faulty statement is executed before the checking point (breakpoint) may make the debugging process fail. So we propose another robust approach based on this framework, handling cases where developers make mistakes during the fault-localization process. We performed two experimental studies and the results show that the two interactive approaches are quite effective compared with existing fault-localization approaches. Moreover, the robust approach can help developers find faults when they make wrong estimations at some checking points.
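
    The record does not give the paper's exact suspicion formula, so the sketch below uses the common Ochiai metric as a stand-in for ranking statements from test execution information; the coverage matrix and test outcomes are invented for illustration.

```python
# Stand-in sketch: ranking statements by Ochiai suspiciousness from a coverage matrix.
# Ochiai is a common spectrum-based metric, not necessarily the formula used in the paper;
# the coverage data below are invented.
import numpy as np

# coverage[t, s] == 1 if test t executes statement s; failed[t] is True if test t fails.
coverage = np.array([[1, 1, 0, 1],
                     [1, 0, 1, 1],
                     [0, 1, 1, 1],
                     [1, 1, 1, 0]])
failed = np.array([True, False, True, False])

fail_cov = coverage[failed].sum(axis=0)          # times each statement ran in failing tests
pass_cov = coverage[~failed].sum(axis=0)         # times each statement ran in passing tests
total_failed = failed.sum()

ochiai = fail_cov / np.sqrt(total_failed * (fail_cov + pass_cov) + 1e-12)
ranking = np.argsort(-ochiai)                    # most suspicious statements first
print("statements ranked by suspicion:", ranking, np.round(ochiai, 3))
```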

  5. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    Science.gov (United States)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact the water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that resulted in over 400,000 people being unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue within the Maumee watershed, the trend within the watershed has been to increase the installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed at developing an automated method in order to identify subsurface tile locations from high resolution aerial imagery by applying an object-based image analysis (OBIA) approach utilizing eCognition. This process was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km2 study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average

  6. Observer-based Fault Detection and Isolation for Nonlinear Systems

    DEFF Research Database (Denmark)

    Lootsma, T.F.

    With the rise in automation, the increase in fault detection and isolation & reconfiguration is inevitable. Interest in fault detection and isolation (FDI) for nonlinear systems has grown significantly in recent years. The design of FDI is motivated by the need for knowledge about occurring faults......-tolerance can be applied to ordinary industrial processes that are not categorized as high risk applications, but where high availability is desirable. The quality of fault-tolerant control is totally dependent on the quality of the underlying algorithms. They detect possible faults, and later reconfigure...... control software to handle the effects of the particular fault event. In the past mainly linear FDI methods were developed, but as most industrial plants show nonlinear behavior, nonlinear methods for fault diagnosis could probably perform better. This thesis considers the design of FDI for nonlinear...

  7. Radar Determination of Fault Slip and Location in Partially Decorrelated Images

    Science.gov (United States)

    Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Stough, Timothy; Pierce, Marlon; Wang, Jun

    2016-09-01

    Faced with the challenge of thousands of frames of radar interferometric images, automated feature extraction promises to spur data understanding and highlight geophysically active land regions for further study. We have developed techniques for automatically determining surface fault slip and location using deformation images from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), which is similar to satellite-based SAR but has more mission flexibility and higher resolution (pixels are approximately 7 m). This radar interferometry provides a highly sensitive method, clearly indicating faults slipping at levels of 10 mm or less. But interferometric images are subject to decorrelation between revisit times, creating spots of bad data in the image. Our method begins with freely available data products from the UAVSAR mission, chiefly unwrapped interferograms, coherence images, and flight metadata. The computer vision techniques we use assume no data gaps or holes; so a preliminary step detects and removes spots of bad data and fills these holes by interpolation and blurring. Detected and partially validated surface fractures from earthquake main shocks, aftershocks, and aseismic-induced slip are shown for faults in California, including El Mayor-Cucapah (M7.2, 2010), the Ocotillo aftershock (M5.7, 2010), and South Napa (M6.0, 2014). Aseismic slip is detected on the San Andreas Fault from the El Mayor-Cucapah earthquake, in regions of highly patterned partial decorrelation. Validation is performed by comparing slip estimates from two interferograms with published ground truth measurements.
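
    The hole-detection-and-filling preprocessing described above can be sketched generically with SciPy; the toy deformation field, the decorrelation mask and the nearest-neighbour interpolation plus Gaussian blur below are our assumptions, not the actual UAVSAR processing chain.

```python
# Sketch: filling decorrelated (NaN) holes in an unwrapped interferogram by interpolation,
# then light blurring. The synthetic deformation field and hole mask are invented.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
ny, nx = 200, 200
yy, xx = np.mgrid[0:ny, 0:nx]
deformation = 0.01 * (xx - nx / 2)                      # toy across-track ramp [m]
deformation[rng.random((ny, nx)) < 0.05] = np.nan       # 5% decorrelated pixels

good = ~np.isnan(deformation)
filled = griddata(np.argwhere(good), deformation[good],
                  (yy, xx), method='nearest')           # fill holes from surrounding pixels
smoothed = gaussian_filter(filled, sigma=1.0)           # mild blur before feature extraction
print(np.isnan(smoothed).any())                         # False: no holes remain
```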

  8. An Integrated Framework of Drivetrain Degradation Assessment and Fault Localization for Offshore Wind Turbines

    Directory of Open Access Journals (Sweden)

    Jay Lee

    2013-01-01

    Full Text Available As wind energy proliferates in onshore and offshore applications, it has become significantly important to predict wind turbine downtime and maintain operation uptime to ensure maximal yield. Two types of data systems have been widely adopted for monitoring turbine health condition: supervisory control and data acquisition (SCADA) and condition monitoring systems (CMS). Given that research and development have focused on advancing analytical techniques based on these systems independently, an intelligent model that associates information from both systems is necessary and beneficial. In this paper, a systematic framework is designed to integrate CMS and SCADA data and assess drivetrain degradation over its lifecycle. Information reference and advanced feature extraction techniques are employed to procure heterogeneous health indicators. A pattern recognition algorithm is used to model baseline behavior and measure the deviation of current behavior, where a self-organizing map (SOM) and minimum quantization error (MQE) method is selected to achieve degradation assessment. Eventually, the computation and ranking of component contributions to the detected degradation offers component-level fault localization. When validated and automated by various applications, the approach is able to incorporate diverse data resources and output actionable information to advise predictive maintenance with precise fault information. The approach is validated on a 3 MW offshore turbine, where an incipient fault is detected well before the existing system shut down the unit. A radar chart is used to illustrate the fault localization result.
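
    A SOM baseline with a minimum quantization error (MQE) index can be sketched with the third-party minisom package; the feature data, map size and training settings below are assumptions rather than the authors' configuration.

```python
# Sketch: SOM baseline model with minimum quantization error (MQE) as a degradation index.
# Uses the third-party 'minisom' package; data, map size and training settings are assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(500, 6))       # healthy-condition feature vectors
current = rng.normal(0.8, 1.2, size=(50, 6))         # later observations drifting from the baseline

som = MiniSom(8, 8, input_len=6, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(baseline, num_iteration=2000)

def mqe(som_model, samples):
    """Mean distance of each sample to its best-matching unit (larger -> more degraded)."""
    weights = som_model.get_weights()
    return np.array([np.linalg.norm(x - weights[som_model.winner(x)]) for x in samples])

print("baseline MQE:", mqe(som, baseline).mean().round(3))
print("current  MQE:", mqe(som, current).mean().round(3))   # noticeably larger for drifted data
```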

  9. Process automation

    International Nuclear Information System (INIS)

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  10. Direct DNA isolation from solid biological sources without pretreatments with proteinase-K and/or homogenization through automated DNA extraction.

    Science.gov (United States)

    Ki, Jang-Seu; Chang, Ki Byum; Roh, Hee June; Lee, Bong Youb; Yoon, Joon Yong; Jang, Gi Young

    2007-03-01

    Genomic DNA from solid biomaterials was directly isolated with an automated DNA extractor, which was based on magnetic bead technology with a bore-mediated grinding (BMG) system. The movement of the bore broke down the solid biomaterials, mixed crude lysates thoroughly with reagents to isolate the DNA, and carried the beads to the next step. The BMG system was suitable for the mechanical homogenization of the solid biomaterials and valid as an automated system for purifying the DNA from the solid biomaterials without the need for pretreatment or disruption procedures prior to the application of the solid biomaterials.

  11. Automatic Fault Classification of Rolling Element Bearing using Wavelet Packet Decomposition and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Manish Yadav

    2011-09-01

    Full Text Available In this work an automatic fault classification system is developed for bearing fault classification of a three-phase induction motor. The system uses wavelet packet decomposition with the 'db8' mother wavelet function for feature extraction from the vibration signal, recorded for various bearing fault conditions. The selection of the best node of the wavelet packet tree is performed by using the best-tree algorithm along with a minimum Shannon entropy criterion. Ten statistical features, such as peak value, root mean square (RMS) value, kurtosis and skewness, are extracted from the wavelet packet coefficients of the optimal node. The extracted features were then used to train and test a neural network fault classifier. The artificial neural network system was designed to classify the rolling element bearing conditions healthy bearing (HB), rolling element fault (REF), inner race fault (IRF) and outer race fault (ORF). The overall fault classification rate of the artificial neural network classifier is 98.33%.
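
    The time-domain statistical features mentioned above are straightforward to compute; since the record names only four of the ten, the remaining choices in the sketch below are assumptions.

```python
# Sketch: common time-domain statistical features for a vibration segment. The record lists peak,
# RMS, kurtosis and skewness among ten features; the other choices here are assumptions.
import numpy as np
from scipy.stats import kurtosis, skew

def statistical_features(x: np.ndarray) -> dict:
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": rms,
        "peak": peak,
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
        "crest_factor": peak / rms,
        "clearance_factor": peak / np.mean(np.sqrt(np.abs(x))) ** 2,
        "shape_factor": rms / np.mean(np.abs(x)),
        "impulse_factor": peak / np.mean(np.abs(x)),
    }

segment = np.random.randn(4096)            # stand-in for the coefficients of a wavelet packet node
print({k: round(float(v), 3) for k, v in statistical_features(segment).items()})
```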

  12. Bearing initial fault feature extraction via sparse representation based on dictionary learning

    Institute of Scientific and Technical Information of China (English)

    余发军; 周凤星; 严保康

    2016-01-01

    As initial faults occur in the rolling bearings of low-speed and heavy-duty machinery, the impulse component reflecting the fault feature in vibration signals is difficult to extract because it is relatively weak and easily corrupted by strong background noise. The authors attempted to extract the impulse component from a vibration signal with the sparse representation method. However, it is difficult to construct an accurate dictionary that matches the impulse component since the operating conditions of the bearing are not stable. Hence, a method for extracting the initial fault feature, based on dictionary learning, was proposed in this research. Firstly, an adaptive dictionary was obtained by an improved K-SVD dictionary-learning algorithm. Then, the Orthogonal Matching Pursuit (OMP) algorithm was utilized for sparse decomposition of the vibration signal, and the kurtosis values of the approximation signals of all iterations were calculated. Finally, the approximation signal corresponding to the maximal kurtosis value was reconstructed and analyzed with the envelope spectrum to diagnose the fault type. The test results on simulated data and bearing vibration signals demonstrate that the proposed method, which can extract the feature component more accurately than other methods, meets the demand of real-time bearing condition monitoring.
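
    K-SVD is not part of the standard scientific Python stack, so the sketch below substitutes scikit-learn's DictionaryLearning with OMP coding as a rough stand-in for the record's K-SVD/OMP pipeline; the segment length, dictionary size, simulated signal and kurtosis check are all assumptions.

```python
# Stand-in sketch: dictionary learning + OMP sparse coding of overlapping vibration segments,
# then kurtosis of the reconstruction as an impulsiveness indicator. scikit-learn's
# DictionaryLearning replaces K-SVD here; sizes and the test signal are assumptions.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
fs, n = 12_000, 6_000
signal = 0.2 * rng.standard_normal(n)
signal[::1200] += 2.0                            # periodic impulses imitating a bearing defect

patch = 128
segments = np.stack([signal[i:i + patch] for i in range(0, n - patch, patch // 2)])

dico = DictionaryLearning(n_components=32, transform_algorithm='omp',
                          transform_n_nonzero_coefs=5, max_iter=20, random_state=0)
codes = dico.fit_transform(segments)             # sparse codes per segment
recon = codes @ dico.components_                 # sparse approximation of each segment

print("kurtosis raw segments:", round(float(kurtosis(segments.ravel())), 2))
print("kurtosis sparse recon:", round(float(kurtosis(recon.ravel())), 2))  # typically larger: noise is suppressed
```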

  13. Design of Fault Diagnosis Observer for HAGC System on Strip Rolling Mill

    Institute of Scientific and Technical Information of China (English)

    DONG Min; LIU Cai

    2006-01-01

    By building a mathematical model for the HAGC (hydraulic automatic gauge control) system of a strip rolling mill, treating faults as unknown inputs induced by model uncertainty, and analyzing fault directions, an unknown-input fault diagnosis observer group was designed. Fault detection and isolation were realized by making the observer residuals robust to specific faults but sensitive to all other faults. Sufficient existence conditions and the design of the observers are given in detail. Diagnosis observer parameters were obtained for the servo valve, the cylinder, the roller and the rolling mill body, respectively. The effectiveness of this diagnosis method was proved by simulations with actual data.

  14. Automation concepts for large space power systems

    Science.gov (United States)

    Imamura, M. S.; Moser, R.; Aichele, D.; Lanier, R., Jr.

    1983-01-01

    A study was undertaken to develop a methodology for analyzing, selecting, and implementing automation functions for multi-hundred-kW photovoltaic power systems intended for a manned space station. The study involved identification of generic power system elements and their potential faults, definition of automation functions and their resulting benefits, and partitioning of automation functions between the power subsystem, the central spacecraft computer, and the ground. Automation to a varying degree was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are indefinite lifetime, modular growth, high performance flexibility, the need to accommodate different electrical user load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. Functions that are good candidates for automation via an expert-system approach include battery management and electrical consumables management.

  15. Space Station Initial Operational Concept (IOC) operations and safety view - Automation and robotics for Space Station

    Science.gov (United States)

    Bates, William V., Jr.

    1989-01-01

    The automation and robotics requirements for the Space Station Initial Operational Concept (IOC) are discussed. The number of tasks to be performed by an eight-person crew, the need for an automated or directed fault analysis capability, and ground support requirements are considered. Issues important in determining the role of automation for the IOC are listed.

  16. Feature Extraction Method for Rolling Element Bearing Fault Diagnosis: Spectrum Auto-correlation

    Institute of Scientific and Technical Information of China (English)

    明安波; 褚福磊; 张炜

    2012-01-01

    Impulse modulation is one of the most important signatures of a rolling element bearing with a localized defect: in the spectrum it appears as modulation sidebands whose energy is concentrated in the high-frequency resonance region, which is inconvenient for diagnosis. Since an impulse series has a similar form in the time and frequency domains, the auto-correlation concept from the time domain is introduced into the frequency domain, and spectrum auto-correlation (SAC) is applied to the spectra of the response signals of localized inner- and outer-race faults. The method effectively transfers the modulation sidebands located in the high-frequency resonance region to the low-frequency region, where they appear as harmonics of the fault characteristic frequency; for inner-race faults, the sidebands caused by load modulation are preserved around these harmonics. The method is applied to pitting faults on the outer and inner races of a 6220 rolling element bearing. Because no band-pass filter has to be designed to select the resonance band, the analysis procedure is simpler, and the extraction of inner-race fault features is more effective, more noise-robust and more reliable than envelope analysis, cepstrum analysis or time-domain auto-correlation, giving the method considerable value for engineering applications.
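
    The spectrum auto-correlation (SAC) idea itself is simple enough to show in a few lines: the amplitude spectrum is correlated with itself, so sidebands spaced by the fault characteristic frequency appear as peaks at that lag and its multiples. This is a schematic reading of the abstract, with the sampling rate and signal assumed.

```python
import numpy as np

def spectrum_autocorrelation(x, fs):
    spec = np.abs(np.fft.rfft(x - np.mean(x)))                    # amplitude spectrum
    spec = spec - spec.mean()
    sac = np.correlate(spec, spec, mode="full")[len(spec) - 1:]   # non-negative lags only
    df = fs / len(x)                                              # one lag step = df in Hz
    lags_hz = np.arange(len(sac)) * df
    return lags_hz, sac / sac[0]                                  # normalised to the zero lag

# Peaks at lags equal to the fault characteristic frequency and its
# harmonics indicate a localized race defect.
```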

  17. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    Science.gov (United States)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  18. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation, (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems...

  19. Programmable automation systems in PSA

    International Nuclear Information System (INIS)

    The Finnish safety authority (STUK) requires plant-specific PSAs, and quantitative safety goals are set on different levels. The reliability analysis is more problematic when critical safety functions are realized by programmable automation systems. Conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. However, it is important to analyze the contribution of programmable automation systems to plant safety, and PSA is the only method with a system-analytical view of safety. This report discusses the applicability of PSA methodology (fault tree analyses, failure modes and effects analyses) to the analysis of programmable automation systems. The problem of how to decompose programmable automation systems for reliability modeling purposes is discussed. In addition to the qualitative analysis and structural reliability modeling issues, the possibility of evaluating failure probabilities of programmable automation systems is considered. One solution to the quantification issue is the use of expert judgements, and the principles of applying expert judgements are discussed in the paper. A framework for applying expert judgements is outlined. Further, the impacts of subjective estimates on the interpretation of PSA results are discussed. (orig.) (13 refs.)

  20. Research on the Fault Coefficient in Complex Electrical Engineering

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2015-08-01

    Full Text Available Fault detection and isolation in a complex system are research hotspots and frontier problems in the reliability engineering field. Fault identification can be regarded as a procedure of excavating key characteristics from massive failure data, then classifying and identifying fault samples. In this paper, based on the fundamentals of feature extraction with the fault coefficient, the fault coefficient feature in complex electrical engineering is discussed in detail. For general fault types in a complex power system, even under strong white Gaussian stochastic interference, the fault coefficient feature is still accurate and reliable. The results of a comparative analysis of noise influence also demonstrate the strong anti-interference ability and great redundancy of the fault coefficient feature in complex electrical engineering.

  1. Quantification of five compounds with heterogeneous physicochemical properties (morphine, 6-monoacetylmorphine, cyamemazine, meprobamate and caffeine) in 11 fluids and tissues, using automated solid-phase extraction and gas chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Bévalot, Fabien; Bottinelli, Charline; Cartiser, Nathalie; Fanton, Laurent; Guitton, Jérôme

    2014-06-01

    An automated solid-phase extraction (SPE) protocol followed by gas chromatography coupled with tandem mass spectrometry was developed for quantification of caffeine, cyamemazine, meprobamate, morphine and 6-monoacetylmorphine (6-MAM) in 11 biological matrices [blood, urine, bile, vitreous humor, liver, kidney, lung and skeletal muscle, brain, adipose tissue and bone marrow (BM)]. The assay was validated for linearity, within- and between-day precision and accuracy, limits of quantification, selectivity, extraction recovery (ER), sample dilution and autosampler stability on BM. For the other matrices, partial validation was performed (limits of quantification, linearity, within-day precision, accuracy, selectivity and ER). The lower limits of quantification were 12.5 ng/mL(ng/g) for 6-MAM, morphine and cyamemazine, 100 ng/mL(ng/g) for meprobamate and 50 ng/mL(ng/g) for caffeine. Analysis of real-case samples demonstrated the performance of the assay in forensic toxicology to investigate challenging cases in which, for example, blood is not available or in which analysis in alternative matrices could be relevant. The SPE protocol was also assessed as an extraction procedure that could target other relevant analytes of interest. The extraction procedure was applied to 12 molecules of forensic interest with various physicochemical properties (alimemazine, alprazolam, amitriptyline, citalopram, cocaine, diazepam, levomepromazine, nordazepam, tramadol, venlafaxine, pentobarbital and phenobarbital). All drugs were able to be detected at therapeutic concentrations in blood and in the alternate matrices. PMID:24790060

  2. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    Science.gov (United States)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

    In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including an automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  3. Research on the Sparse Representation for Gearbox Compound Fault Features Using Wavelet Bases

    Directory of Open Access Journals (Sweden)

    Chunyan Luo

    2015-01-01

    Full Text Available The research on gearbox fault diagnosis has been gaining increasing attention in recent years, especially single fault diagnosis. In engineering practice, however, there is often more than one fault in a gearbox, which presents as a compound fault. Hence, gearbox compound fault diagnosis is equally important. Both bearing and gear faults in the gearbox tend to result in different kinds of transient impulse responses in the captured signal, and thus it is necessary to propose a potential approach for compound fault diagnosis. Sparse representation is one of the effective methods for feature extraction from strong background noise. Therefore, sparse representation under wavelet bases for compound fault feature extraction is developed in this paper. With the proposed method, the different transient features of both bearing and gear faults can be separated and extracted. Both a simulated study and a practical application in a gearbox with a compound fault verify the effectiveness of the proposed method.

  4. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  5. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
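
    As a small illustration of the graph-theoretic notions mentioned above (path sets and reachability), the following sketch holds a hypothetical Fault Graph as a networkx digraph; the AND/OR gate logic of a full Fault Graph evaluator and the cut-set search are not modelled here.

```python
# Illustrative sketch of a hypothetical two-pump cooling loop Fault Graph.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("pump_fails", "flow_lost"),
    ("valve_A_stuck", "flow_lost"),
    ("flow_lost", "core_overheats"),
    ("power_bus_fault", "pump_fails"),
])

# Reachability: every basic event that can propagate to the top event.
top = "core_overheats"
contributors = {n for n in g if n != top and nx.has_path(g, n, top)}

# Path sets: node sequences along which a failure travels to the top event.
paths = list(nx.all_simple_paths(g, "power_bus_fault", top))
print(contributors)
print(paths)
```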

  6. Fault tree handbook

    International Nuclear Information System (INIS)

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
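
    The quantitative side of fault tree evaluation can be illustrated with a toy evaluator: basic events carry failure probabilities and gates are nested AND/OR tuples. Independence of basic events is assumed and the example tree is hypothetical; real codes such as those referenced in the handbook do far more.

```python
def top_event_probability(node, p_basic):
    if isinstance(node, str):                      # leaf = basic event
        return p_basic[node]
    gate, children = node[0], node[1:]
    probs = [top_event_probability(c, p_basic) for c in children]
    if gate == "AND":                              # product of probabilities
        out = 1.0
        for p in probs:
            out *= p
        return out
    if gate == "OR":                               # 1 - prod(1 - p_i)
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

tree = ("OR", ("AND", "pump_A_fails", "pump_B_fails"), "common_power_loss")
p = {"pump_A_fails": 1e-3, "pump_B_fails": 1e-3, "common_power_loss": 1e-5}
print(top_event_probability(tree, p))              # roughly 1.1e-5
```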

  7. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out...... that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design...... of the nominal feedback controller....

  8. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland); Seppaenen, M. [North-Carelian Power Co (Finland); Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined, in order to find the actual fault point, with the data obtained from the fault indicators in the line branching points. As a third technique, in the absence of better fault location data, statistical information of line section fault frequencies can also be used. For combining the different fault location information, fuzzy logic is used. As a result, the probability weights for the fault being located in different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. Then the supply is restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter lists the practical experiences during the test use period. Also the benefits of this kind of automation are assessed and future developments are outlined
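
    The first fault-location technique described above (comparing the measured fault current with the computed one) and the combination of the three evidence sources can be caricatured as follows. The sketch replaces the fuzzy-logic combination of the FI/FL-model with simple normalised products, and every number in it is invented.

```python
import numpy as np

def distance_estimate(u_fault, i_fault, x_per_km):
    """Estimated distance to the fault from voltage/current phasors seen at the substation."""
    x_seen = np.imag(u_fault / i_fault)           # reactance seen towards the fault
    return x_seen / x_per_km                      # km

def section_weights(d_est, section_ends_km, indicator_passed, fault_rate, sigma=0.5):
    # Evidence 1: how well each line section matches the estimated distance.
    mid = np.array([(a + b) / 2.0 for a, b in section_ends_km])
    w_dist = np.exp(-((mid - d_est) / sigma) ** 2)
    # Evidence 2: fault indicators upstream of the fault have seen fault current.
    w_ind = np.where(np.array(indicator_passed), 1.0, 0.2)
    # Evidence 3: historical fault frequency of each section.
    w_stat = np.array(fault_rate, dtype=float)
    w = w_dist * w_ind * w_stat
    return w / w.sum()

d = distance_estimate(6350 + 0j, 3000 - 3000j, x_per_km=0.35)   # about 3 km
print(section_weights(d, [(0, 2), (2, 4), (4, 6)],
                      indicator_passed=[True, True, False],
                      fault_rate=[1.0, 2.0, 1.0]))
```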

  9. Automated extraction of DNA from reference samples from various types of biological materials on the Qiagen BioRobot EZ1 Workstation

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Jørgensen, Mads; Hansen, Anders Johannes;

    2009-01-01

    We have validated and implemented a protocol for DNA extraction from various types of biological materials using a Qiagen BioRobot EZ1 Workstation. The sample materials included whole blood, blood from deceased, buccal cells on Omni swabs and FTA Cards, blood on FTA Cards and cotton swabs......, and muscle biopsies. The DNA extraction was validated according to EN/ISO 17025 for the STR kits AmpFlSTR® Identifiler® and AmpFlSTR® Yfiler® (Applied Biosystems). Of 298 samples extracted, 11 (4%) did not yield acceptable results. In conclusion, we have demonstrated that extraction of DNA from various types...... of biological material can be performed quickly and without the use of hazardous chemicals, and that the DNA may be successfully STR typed according to the requirements of forensic genetic investigations accredited according to EN/ISO 17025...

  10. Automated headspace-solid-phase micro extraction-retention time locked-isotope dilution gas chromatography-mass spectrometry for the analysis of organotin compounds in water and sediment samples.

    Science.gov (United States)

    Devosa, Christophe; Vliegen, Maarten; Willaert, Bart; David, Frank; Moens, Luc; Sandra, Pat

    2005-06-24

    An automated method for the simultaneous determination of six important organotin compounds namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT) and triphenyltin (TPhT) in water and sediment samples is described. The method is based on derivatization with sodium tetraethylborate followed by automated headspace-solid-phase micro extraction (SPME) combined with GC-MS under retention time locked (RTL) conditions. Home-synthesized deuterated organotin analogues were used as internal standards. Two high abundant fragment ions corresponding to the main tin isotopes Sn118 and Sn120 were chosen; one for quantification and one as qualifier ion. The method was validated and excellent figures of merit were obtained. Limits of quantification (LOQs) are from 1.3 to 15 ng l(-1) (ppt) for water samples and from 1.0 to 6.3 microg kg(-1) (ppb) for sediment samples. Accuracy for sediment samples was tested on spiked real-life sediment samples and on a reference PACS-2 marine harbor sediment. The developed method was used in a case-study at the harbor of Antwerp where sediment samples in different areas were taken and subsequently screened for TBT contamination. Concentrations ranged from 15 microg kg(-1) in the port of Antwerp up to 43 mg kg(-1) near a ship repair unit. PMID:16038329

  11. Automation Security

    OpenAIRE

    Mirzoev, Dr. Timur

    2014-01-01

    Web-based Automated Process Control systems are a new type of application that uses the Internet to control industrial processes with access to real-time data. Supervisory control and data acquisition (SCADA) networks contain computers and applications that perform key functions in providing essential services and commodities (e.g., electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. As such, they are part of the nation's critical infrastructu...

  12. Graphical User Interface Aided Online Fault Diagnosis of Electric Motor - DC motor case study

    Directory of Open Access Journals (Sweden)

    POSTALCIOGLU OZGEN, S.

    2009-10-01

    Full Text Available This paper presents graphical user interface (GUI) aided online fault diagnosis for a DC motor. The aim of the research is to prevent system faults. Online fault diagnosis has been studied. The fault diagnosis design has two main levels: Level 1 comprises a traditional control loop; Level 2 contains knowledge-based fault diagnosis. The fault diagnosis technique contains a feature extraction module, a feature cluster module and a fault decision module. Wavelet analysis is used for the feature extraction module, and fuzzy clustering is applied for the feature cluster module. Fault effects on the system are examined using statistical analysis. In this study the fault diagnosis technique achieves fault detection, identification and halting of the system. When a fault is detected, a graphical user interface (GUI) is opened that shows the measurement value, fault time and fault type, giving the personnel some information about the system. As seen from the simulation results, faults can be detected and identified as soon as they appear. In summary, if a system has a fault diagnosis structure, dangerous situations can be avoided.

  13. Planetary Gearbox Fault Diagnosis Using Envelope Manifold Demodulation

    Directory of Open Access Journals (Sweden)

    Weigang Wen

    2016-01-01

    Full Text Available The important issue in planetary gear fault diagnosis is to extract the dependable fault characteristics from the noisy vibration signal of planetary gearbox. To address this critical problem, an envelope manifold demodulation method is proposed for planetary gear fault detection in the paper. This method combines complex wavelet, manifold learning, and frequency spectrogram to implement planetary gear fault characteristic extraction. The vibration signal of planetary gear is demodulated by wavelet enveloping. The envelope energy is adopted as an indicator to select meshing frequency band. Manifold learning is utilized to reduce the effect of noise within meshing frequency band. The fault characteristic frequency of the planetary gear is shown by spectrogram. The planetary gearbox model and test rig are established and experiments with planet gear faults are conducted for verification. All results of experiment analysis demonstrate its effectiveness and reliability.

  14. New low voltage (LV) distribution automation system

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, M.M.; Sulaiman, M. [National Technical Univ. College of Malaysia, Melaka (Malaysia)

    2007-07-01

    The challenge of supplying uninterrupted power from an electrical distribution system experiencing an electrical fault was discussed. Typically, a team of electricians is sent to the fault area to solve the problem. Since this is both time consuming and expensive, a new method called the distribution automation system (DAS) has been proposed to address this challenge. The DAS is aimed at low voltage (LV) distribution systems. Under this newly developed DAS, only the consumer where the fault occurs will be affected. The automated system identifies the exact location of the fault and isolates the consumer from the rest of the power distribution system. The consumer will be reconnected to the system only after fault clearance. The system operates and controls the equipment connected at the substation and the distribution line/zone/pole remotely. Linking is done by a power line communication (PLC) system with the help of a Remote Control Unit (RTU) and a Supervisory Control and Data Acquisition (SCADA) system, which improves the ability to monitor various equipment at the substation and at the consumer location. 5 refs., 1 tab., 18 figs.

  15. Faults, Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  16. Fault tolerant system design for uninterruptible power supplies

    Directory of Open Access Journals (Sweden)

    B. Y. Volochiy

    2012-02-01

    Full Text Available The problem of design for reliability of a fault tolerant system for uninterruptible power supplies is considered. The configuration of the fault tolerant system determines the structure of the uninterruptible power supply: a power supply built from modules of the same type, a sliding standby reserve for them, a duplicated overall reserve of the power supply with two accumulator batteries, and the control and diagnostics means. The developed tool for automated generation of analytical models of fault tolerant systems is presented, and its capabilities are illustrated by determining the requirements for the repair service and the accumulator batteries.
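
    A toy calculation in the spirit of the configuration described above: N identical power modules with one sliding standby spare survive as long as at most one module is down. The module count and failure probability are assumptions, and the real tool builds far richer analytical models.

```python
from math import comb

def k_of_n_reliability(n, k, p_fail):
    """Probability that at least k of n independent modules survive."""
    q = 1.0 - p_fail
    return sum(comb(n, m) * q ** m * p_fail ** (n - m) for m in range(k, n + 1))

# 6 modules with one sliding spare (5 needed), 2% module failure probability
print(k_of_n_reliability(6, 5, 0.02))   # about 0.994
```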

  17. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. A fault detection problem based on parametric faults is associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  18. Software fault tolerance

    OpenAIRE

    Kazinov, Tofik Hasanaga; Mostafa, Jalilian Shahrukh

    2009-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. This paper surveys various software fault tolerance techniques and methodologies. They fall into two groups: single-version and multi-version software fault tolerance techniques. It is expected that software fault tolerance research will benefit from this research...

  19. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.;

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design...... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  20. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Bagnoli, F.;

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design...... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  1. Performance based fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2002-01-01

    Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

  2. Diagnosis and Fault-tolerant Control, 2nd edition

    DEFF Research Database (Denmark)

    Blanke, Mogens; Kinnaert, Michel; Lunze, Jan;

    Fault-tolerant control aims at a graceful degradation of the behaviour of automated systems in case of faults. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults that bring about sudden shutdowns and loss of availability. The book...... to ensure fault tolerance. Design methods for diagnostic systems and fault-tolerant controllers are presented for processes that are described by analytical models, by discrete-event models or that can be dealt with as quantised systems. Five case studies on pilot processes show the applicability...... of the presented methods. The theoretical results are illustrated by two running examples used throughout the book. The second edition includes new material about reconfigurable control, diagnosis of nonlinear systems, and remote diagnosis. The application examples are extended by a steering-by-wire system...

  3. Fault Monitoring and Re-Configurable Control for a Ship Propulsion Plant

    DEFF Research Database (Denmark)

    Blanke, M.; Izadi-Zamanabadi, Roozbeh; Lootsma, T.F.

    1998-01-01

    Minor faults in ship propulsion and their associated automation systems can cause a dramatic reduction in ships' ability to propel and manoeuvre, and effective means are needed to prevent simple faults from developing into severe failures. The paper analyses the control system for a propulsion plant...

  4. An Automated Extraction Algorithm of Power Lines Based on Airborne Laser Scanning Data

    Institute of Scientific and Technical Information of China (English)

    尹辉增; 孙轩; 聂振钢

    2012-01-01

    An efficient automated algorithm for extracting power lines from airborne laser scanning (ALS) data is put forward. The algorithm adopts point cloud classification based on local height histogram distribution patterns, line extraction with a global direction feature in Hough space, mathematical estimation of hanging point positions, and local piecewise polynomial fitting. Four key problems are solved by the algorithm, namely point cloud classification, extraction of the planimetric position of power lines, extraction of power line hanging points, and power line fitting. Finally, the applicability of the algorithm is demonstrated with data from practical engineering projects.
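
    Two of the steps named in the abstract, rasterising candidate power-line points into a feature image and recovering their planimetric direction in Hough space, can be sketched as below. Classification, hanging-point estimation and the piecewise polynomial fit are omitted; the grid size, height threshold and input arrays are assumptions.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def power_line_directions(xyz, ground_z, min_height=8.0, cell=0.5):
    # xyz: (N, 3) ALS points; ground_z: local ground elevation (both assumed).
    pts = xyz[xyz[:, 2] - ground_z > min_height]        # keep high, wire-like returns
    ij = np.floor((pts[:, :2] - pts[:, :2].min(axis=0)) / cell).astype(int)
    img = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    img[ij[:, 0], ij[:, 1]] = True                      # binary occupancy (feature) image
    hspace, angles, dists = hough_line(img)
    _, line_angles, line_dists = hough_line_peaks(hspace, angles, dists)
    return line_angles, line_dists                      # one entry per detected line
```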

  5. Analysis of fault using microcomputer protection by symmetrical component method

    Directory of Open Access Journals (Sweden)

    Ashish Choubey

    2012-09-01

    Full Text Available To enhance power supply reliability at user terminals, a distribution system must avoid repeated interference from a fault and rapidly complete automatic fault identification, location and isolation, followed by network reconfiguration until supply to the non-faulted sections is restored; to this end, a microprocessor-based relay protection device has been developed. As fault component theory is widely used in microcomputer protection, and the fault component exists only in the fault component network, it is necessary to build up the fault component network when a short circuit fault emerges and to draw the current and voltage component phasor diagram at the fault point. In order to understand microcomputer protection based on the symmetrical component principle, the sequence currents and sequence voltages are obtained according to the concept of symmetrical components. Distribution lines supply power directly to users, so the reliability of their operation determines the quality and level of electricity supply. In recent decades, thanks to the tireless efforts of scientists and technicians, relay protection technology and the application level of its equipment have been greatly improved, but domestically produced protection devices still rely on outdated computer hardware, their software is difficult to maintain and short-lived, their interfaces to factory automation systems are weak, and their network communication cannot meet actual requirements; the protection principle configuration and the device manufacturing process also remain to be improved.

  6. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    Science.gov (United States)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    Study of fault feature extraction and diagnostic techniques for reciprocating compressors is one of the hot research topics in the field of reciprocating machinery fault diagnosis at present. A large number of feature extraction and classification methods have been widely applied in the related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is an urgent task. The typical mechanical faults of reciprocating compressors are presented in the paper, and existing data from an online monitoring system are used to extract 15 types of fault feature parameters in total; the inner sensitive connection between the faults and the feature parameters is clarified using the distance evaluation technique, and the sensitive characteristic parameters of the different faults are obtained. On this basis, a method based on the fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. An improved ability for early fault warning is proved by experiments and practical fault cases, and automatic classification of the fault alarm data by the SVM achieves good diagnostic accuracy.
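
    The two stages described above, ranking the monitored feature parameters by sensitivity with a distance evaluation score and training an SVM on the most sensitive ones, might look roughly like this; the data shapes, the score definition and the number of retained features are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def distance_evaluation_scores(X, y):
    """Between-class scatter / within-class scatter per feature (higher = more sensitive)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        within += Xc.var(axis=0)
        between += (Xc.mean(axis=0) - overall) ** 2
    return between / (within + 1e-12)

def train_sensitive_svm(X, y, n_keep=6):
    keep = np.argsort(distance_evaluation_scores(X, y))[::-1][:n_keep]
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:, keep], y)
    return clf, keep          # keep: indices of the retained sensitive parameters
```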

  7. Analytical Model-based Fault Detection and Isolation in Control Systems

    DEFF Research Database (Denmark)

    Vukic, Z.; Ozbolt, H.; Blanke, M.

    1998-01-01

    The paper gives an introduction and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is on analytical model-based fault-detection and fault diagnosis methods, often viewed as the classical or deterministic ones. Emphasis is placed on the algorithms suitable for ship automation, unmanned underwater vehicles, and other systems of automatic control....

  8. Fault-Tree Compiler

    Science.gov (United States)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Gates of five different types are allowed in a fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. A set of programs was created forming the basis for a reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.

  9. A Scrutiny of Automated Healthcare System with SFT

    Directory of Open Access Journals (Sweden)

    Jigna B. Prajapati

    2011-12-01

    Full Text Available In today's techno-savvy world, automated systems are an important and contemporary issue. Over the past decade automated systems have been widely used in industry, appliances, automobiles, undersea exploration, space and healthcare. The accuracy of robots makes any system more acceptable. Here we use robots that assist us in managing patient health. We always expect the system to work under any situation. The development of robotic software is a complex and error-prone process. Most complex systems contain software, and system failures activated by software faults can provide lessons for software development practices and software quality assurance. Faults must be identified and removed as early as possible. The interrelationship between software faults and failures is quite intricate, and obtaining a meaningful characterization of it is difficult. Towards this characterization, we have investigated and classified failures observed in a robotic healthcare system. In this paper, we describe the process used in our study for tracking faults. We present the different types of faults, their impact and a fault classification, together with a proposed way to manage them. We then propose fault tolerance techniques to cope with the different faults.

  10. Automated extraction of lysergic acid diethylamide (LSD) and N-demethyl-LSD from blood, serum, plasma, and urine samples using the Zymark RapidTrace with LC/MS/MS confirmation.

    Science.gov (United States)

    de Kanel, J; Vickery, W E; Waldner, B; Monahan, R M; Diamond, F X

    1998-05-01

    A forensic procedure for the quantitative confirmation of lysergic acid diethylamide (LSD) and the qualitative confirmation of its metabolite, N-demethyl-LSD, in blood, serum, plasma, and urine samples is presented. The Zymark RapidTrace was used to perform fully automated solid-phase extractions of all specimen types. After extract evaporation, confirmations were performed using liquid chromatography (LC) followed by positive electrospray ionization (ESI+) mass spectrometry/mass spectrometry (MS/MS) without derivatization. Quantitation of LSD was accomplished using LSD-d3 as an internal standard. The limit of quantitation (LOQ) for LSD was 0.05 ng/mL. The limit of detection (LOD) for both LSD and N-demethyl-LSD was 0.025 ng/mL. The recovery of LSD was greater than 95% at levels of 0.1 ng/mL and 2.0 ng/mL. For LSD at 1.0 ng/mL, the within-run and between-run (different day) relative standard deviation (RSD) was 2.2% and 4.4%, respectively.

  11. Development and Test of Methods for Fault Detection and Isolation

    DEFF Research Database (Denmark)

    Jørgensen, R.B.

    Almost all industrial systems are automated to ensure optimal production, both in relation to energy consumption and to the safety of equipment and humans. All working parts are individually subject to faults, which can lead to unacceptable economic loss or injury to people. This thesis deals with a

  12. Improving the software fault localization process through testability information

    NARCIS (Netherlands)

    Gonzalez-Sanchez, A.; Abreu, R.; Gross, H.; Van Gemund, A.

    2010-01-01

    When failures occur during software testing, automated software fault localization helps to diagnose their root causes and identify the defective components of a program to support debugging. Diagnosis is carried out by selecting test cases in such way that their pass or fail information will narrow

  13. Rolling element bearing fault diagnosis via fault characteristic order (FCO) analysis

    Science.gov (United States)

    Wang, Tianyang; Liang, Ming; Li, Jianyong; Cheng, Weidong

    2014-03-01

    Order tracking based on time-frequency representation (TFR) is one of the most effective methods for gear fault detection under time-varying rotational speed without using a tachometer. However, for a rolling element bearing, the signal components related to rotational speed usually cannot be directly extracted from the TFR. As such, we propose a new method to solve this problem. This method consists of four main steps: (a) signal filtering via fast spectral kurtosis (SK) analysis - this together with the short time Fourier transform (STFT) leads to a TFR of the filtered signal with clear fault-revealing trend lines, (b) extraction of instantaneous fault characteristic frequency (IFCF) from the TFR using an amplitude-sum based spectral peak search algorithm, (c) signal resampling based on the extracted IFCF to convert the non-stationary time-domain signal into the stationary fault phase angle (FPA) domain signal, and (d) transform of the FPA domain signal into the domain of the fault characteristic order (FCO) and identification of fault type from the FCO spectrum. The effectiveness of the proposed method has been validated by both simulated and experimental bearing vibration signals.
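
    Steps (c) and (d) of the method, resampling into the fault phase angle domain and reading the spectrum in fault characteristic orders, can be sketched with plain numpy once an IFCF estimate is available; the IFCF extraction from the TFR (steps a and b) is assumed to be given.

```python
import numpy as np

def fault_characteristic_order_spectrum(x, ifcf, fs, samples_per_cycle=64):
    # x: vibration signal, ifcf: instantaneous fault characteristic frequency in Hz
    # (same length as x), fs: sampling rate -- all assumed to be given.
    phase = np.cumsum(ifcf) / fs                   # fault "cycles" elapsed at each sample
    n_cycles = int(np.floor(phase[-1]))
    uniform_phase = np.arange(n_cycles * samples_per_cycle) / samples_per_cycle
    x_fpa = np.interp(uniform_phase, phase, x)     # signal in the fault-phase-angle domain
    spec = np.abs(np.fft.rfft(x_fpa - x_fpa.mean())) / len(x_fpa)
    orders = np.fft.rfftfreq(len(x_fpa), d=1.0 / samples_per_cycle)
    return orders, spec                            # a peak near order 1.0 flags the fault
```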

  14. Intelligent Automated Diagnosis of Client Device Bottlenecks in Private Clouds

    CERN Document Server

    Widanapathirana, C; Sekercioglu, Y A; Ivanovich, M; Fitzpatrick, P; 10.1109/UCC.2011.42

    2012-01-01

    We present an automated solution for rapid diagnosis of client device problems in private cloud environments: the Intelligent Automated Client Diagnostic (IACD) system. Clients are diagnosed with the aid of Transmission Control Protocol (TCP) packet traces, by (i) observation of anomalous artifacts occurring as a result of each fault and (ii) subsequent use of the inference capabilities of soft-margin Support Vector Machine (SVM) classifiers. The IACD system features a modular design and is extendible to new faults, with detection capability unaffected by the TCP variant used at the client. Experimental evaluation of the IACD system in a controlled environment demonstrated an overall diagnostic accuracy of 98%.

  15. MODIFIED LAPLACIAN EIGENMAP METHOD FOR FAULT DIAGNOSIS

    Institute of Scientific and Technical Information of China (English)

    JIANG Quansheng; JIA Minping; HU Jianzhong; XU Feiyun

    2008-01-01

    A novel method based on an improved Laplacian eigenmap algorithm for fault pattern classification is proposed. By modifying the Laplacian eigenmap algorithm to replace Euclidean distance with a kernel-based geometric distance in the neighbor graph construction, the method can preserve the consistency of local neighbor information and effectively extract the low-dimensional manifold features embedded in high-dimensional nonlinear data sets. A nonlinear dimensionality reduction algorithm based on the improved Laplacian eigenmap is used to learn high-dimensional fault signals directly and extract the intrinsic manifold features from them. The method largely preserves the global geometric structure information embedded in the signals and clearly improves the classification performance of fault pattern recognition. The experimental results on both simulated and engineering data indicate the feasibility and effectiveness of the new method.
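
    One way to mimic the modification described above is to build the neighbour graph from a Gaussian-kernel affinity and hand it to scikit-learn's SpectralEmbedding as a precomputed affinity. This is a stand-in for the paper's algorithm, not a reproduction of it; the neighbourhood size and kernel width are assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import SpectralEmbedding

def kernel_laplacian_eigenmap(X, n_components=3, n_neighbors=10, gamma=0.5):
    conn = kneighbors_graph(X, n_neighbors, include_self=False).toarray()
    conn = np.maximum(conn, conn.T)                        # symmetric kNN graph
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    W = np.exp(-gamma * d2) * conn                         # kernel weights on the graph
    emb = SpectralEmbedding(n_components=n_components, affinity="precomputed")
    return emb.fit_transform(W)                            # low-dimensional manifold features
```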

  16. Automated Budget System

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  17. HIGHER ORDER SPECTRAL ANALYSIS IN FAULT DIAGNOSIS OF ROTORS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The nonlinear properties of rotating machinery vibration signals are presented. The relationship between faults and quadratic phase coupling is discussed. The mechanism that gives rise to quadratic phase coupling is analyzed, and the coupling models are summarized. As a result, higher order spectra analysis is introduced into fault diagnosis of rotors. A brief review of the properties of higher order spectra is presented. Furthermore, the bicoherence spectrum is employed to extract the features that signify the machinery condition. Experiments show that bicoherence spectrum patterns of different faults are quite different, so it is proposed to identify the faults in rotors.
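
    A compact estimate of the (squared) bicoherence used above to detect quadratic phase coupling can be written directly with numpy: the signal is split into segments, the bispectrum is averaged over them and normalised. The segment length and windowing are assumptions.

```python
import numpy as np

def bicoherence(x, nseg=256):
    segs = x[:len(x) // nseg * nseg].reshape(-1, nseg)
    segs = segs - segs.mean(axis=1, keepdims=True)
    X = np.fft.rfft(segs * np.hanning(nseg), axis=1)        # segment spectra (K, F_full)
    F = X.shape[1] // 2                                      # keep f1 + f2 within range
    X1 = X[:, :F, None]                                      # X_k(f1)
    X2 = X[:, None, :F]                                      # X_k(f2)
    idx = np.arange(F)[:, None] + np.arange(F)[None, :]      # index of f1 + f2
    X12 = X[:, idx]                                          # X_k(f1 + f2)
    num = np.abs(np.mean(X1 * X2 * np.conj(X12), axis=0)) ** 2
    den = np.mean(np.abs(X1 * X2) ** 2, axis=0) * np.mean(np.abs(X12) ** 2, axis=0)
    return num / (den + 1e-20)                               # values between 0 and 1
```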

  18. Automated extraction of building footprints from mobile LIDAR point clouds

    Institute of Scientific and Technical Information of China (English)

    魏征; 杨必胜; 李清泉

    2012-01-01

    This paper presents a novel method for automated extraction of building footprints from mobile LIDAR point clouds. We first generate a georeferenced feature image of the mobile LIDAR point cloud using an inverse distance weighted (IDW) interpolation method, and adopt image segmentation together with contour extraction and tracing to extract building boundaries in the georeferenced feature image as the coarse level of building footprints in two-dimensional image space. Then, the coarse level of building footprints is further refined by applying planar segmentation to the point clouds extracted within the building boundaries. Finally, a triangulated irregular network (TIN) of the building facades is used to achieve the fine level of building footprints. A dataset of residential areas captured by Optech's LYNX mobile mapping system was tested to check the validity of the proposed method. Experimental results show that the proposed method provides a promising and valid solution for automatically extracting building footprints from mobile LIDAR point clouds.
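
    The first, coarse stage described above can be approximated by rasterising the point cloud into a simple feature image (here the height range per cell rather than the IDW image of the paper) and tracing building outlines as contours of that image; the cell size and thresholds are assumptions, and the plane-segmentation refinement is omitted.

```python
import numpy as np
from skimage import measure

def coarse_building_footprints(xyz, cell=0.5, min_height_range=2.5):
    origin = xyz[:, :2].min(axis=0)
    ij = np.floor((xyz[:, :2] - origin) / cell).astype(int)
    shape = ij.max(axis=0) + 1
    zmin = np.full(shape, np.inf)
    zmax = np.full(shape, -np.inf)
    for (i, j), z in zip(ij, xyz[:, 2]):
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    feature = np.where(np.isfinite(zmin), zmax - zmin, 0.0)   # height range per cell
    mask = (feature > min_height_range).astype(float)         # facade-like cells
    contours = measure.find_contours(mask, 0.5)               # row/col polylines
    return [c * cell + origin for c in contours]              # back to map coordinates
```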

  19. Automated Extraction of Building Facade Footprints from Mobile LiDAR Point Clouds

    Institute of Scientific and Technical Information of China (English)

    魏征; 董震; 李清泉; 杨必胜

    2012-01-01

    We present a novel method for automated extraction of building facade footprints from mobile LiDAR point clouds. The proposed method first generates a georeferenced feature image of the mobile LiDAR point cloud and then uses image segmentation to extract contour areas that contain building facade points, tree points and points of other objects in the georeferenced feature image. After all the points in each contour area are extracted, a classification based on eigenvalue analysis and profile analysis is adopted to identify building objects from the point clouds extracted in the contour areas. All the points of a building object are then segmented into different planes using the RANSAC algorithm. For each building, the points in the facade planes are used to calculate the direction, the start point, and the end point of the facade footprint using eigenvalue analysis. Finally, the footprints of the different facades of a building are refined, harmonized, and joined. The experimental results show that the proposed method provides a promising and valid solution for automatically extracting building facade footprints from mobile LiDAR point clouds.
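
    The eigenvalue analysis used above to single out planar, vertical facade points can be sketched as follows: the covariance of each point's neighbourhood is decomposed, and a point is kept when one eigenvalue is much smaller than the other two (planarity) and the associated normal is close to horizontal (a vertical plane). The neighbourhood size and thresholds are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def facade_point_mask(xyz, k=20, planarity_min=0.6, max_normal_z=0.2):
    nn = NearestNeighbors(n_neighbors=k).fit(xyz)
    _, idx = nn.kneighbors(xyz)
    keep = np.zeros(len(xyz), dtype=bool)
    for i, neigh in enumerate(idx):
        P = xyz[neigh] - xyz[neigh].mean(axis=0)
        w, v = np.linalg.eigh(P.T @ P / k)         # eigenvalues in ascending order
        l1, l2, l3 = w[2], w[1], w[0]              # l1 >= l2 >= l3
        planarity = (l2 - l3) / l1 if l1 > 0 else 0.0
        normal = v[:, 0]                           # eigenvector of the smallest eigenvalue
        keep[i] = planarity > planarity_min and abs(normal[2]) < max_normal_z
    return keep                                    # True where the point looks facade-like
```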

  20. Transformer Internal Faults Simulation

    Directory of Open Access Journals (Sweden)

    KOOCHAKI, A.

    2008-06-01

    Full Text Available This paper presents a novel method of modeling internal faults in a power transformer. The method leads to a model which is compatible with commercial phasor-based software packages. Consequently, it enables calculation of fault currents in any branch of the network due to a winding fault of a power transformer. These currents can be used for evaluation of protective relays' performance and can lead to better settings of the protective functions.

  1. Support Vector Machine for mechanical faults classification

    Institute of Scientific and Technical Information of China (English)

    JIANG Zhi-qiang; FU Han-guang; LI Ling-jun

    2005-01-01

    Support Vector Machine (SVM) is a machine learning algorithm based on Statistical Learning Theory (SLT), which can achieve good classification with only a few learning samples. SVM represents a new approach to pattern classification and has been shown to be particularly successful in many fields such as image identification and face recognition. It also provides us with a new method to develop intelligent fault diagnosis. This paper presents an SVM-based approach for fault diagnosis of rolling bearings. Experiments with bearing vibration signals were conducted; the vibration signals acquired from the bearings were used directly in the calculation, without preprocessing to extract features. Compared with the Artificial Neural Network (ANN) based method, the SVM-based method has desirable advantages. A multi-fault SVM classifier based on binary classifiers is also constructed for gear faults in this paper. Further experiments with gear fault samples showed that the multi-fault SVM classifier has good classification ability and high efficiency in mechanical systems. It is suitable for online diagnosis of mechanical systems.
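
    The "multi-fault SVM classifier based on binary classifiers" can be illustrated with scikit-learn's one-vs-one wrapper around binary SVMs (SVC in fact already uses a one-vs-one scheme internally; the wrapper just makes the construction explicit). The feature matrix, labels and kernel settings are assumptions.

```python
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

def multi_fault_svm(X, y):
    # X: (n_samples, n_features) fault features, y: integer fault labels (assumed)
    clf = OneVsOneClassifier(SVC(kernel="rbf", C=10.0, gamma="scale"))
    return clf.fit(X, y)
```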

  2. Detection of Staphylococcus aureus enterotoxin production genes from patient samples using an automated extraction platform and multiplex real-time PCR.

    Science.gov (United States)

    Chiefari, Amy K; Perry, Michael J; Kelly-Cirino, Cassandra; Egan, Christina T

    2015-12-01

    To minimize specimen volume, handling and testing time, we have developed two TaqMan(®) multiplex real-time PCR (rtPCR) assays to detect staphylococcal enterotoxins A-E and Toxic Shock Syndrome Toxin production genes directly from clinical patient stool specimens utilizing a novel lysis extraction process in parallel with the Roche MagNA Pure Compact. These assays are specific, sensitive and reliable for the detection of the staphylococcal enterotoxin encoding genes and the tst1 gene from known toxin producing strains of Staphylococcus aureus. Specificity was determined by testing a total of 47 microorganism strains, including 8 previously characterized staphylococcal enterotoxin producing strains against each rtPCR target. Sensitivity for these assays range from 1 to 25 cfu per rtPCR reaction for cultured isolates and 8-20 cfu per rtPCR for the clinical stool matrix.

  3. Enantioselective determination of methylphenidate and ritalinic acid in whole blood from forensic cases using automated solid-phase extraction and liquid chromatography-tandem mass spectrometry

    DEFF Research Database (Denmark)

    Thomsen, Ragnar; B. Rasmussen, Henrik; Linnet, Kristian;

    2012-01-01

    by solid-phase extraction. The LC–MS-MS method was linear in the range of 0.5 to 500 ng/g for the enantiomers of both analytes. For concentrations above the limit of quantification, coefficients of variation were 15% or less, and the accuracy was 89 to 94%. For 12 postmortem samples in which...... methylphenidate was not determined to be related to the cause of death, the femoral blood concentration of d-methylphenidate ranged from 5 to 58 ng/g, and from undetected to 48 ng/g for l-methylphenidate (median d/l-ratio 5.9). Ritalinic acid was present at concentrations 10–20 times higher with roughly equal...

  4. Fault Locating, Prediction and Protection (FLPPS)

    Energy Technology Data Exchange (ETDEWEB)

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

    One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently with deregulation and competition that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned with three Phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

  6. Automated Periodontal Diseases Classification System

    OpenAIRE

    Aliaa A. A. Youssif; Abeer Saad Gawish,; Mohammed Elsaid Moussa

    2012-01-01

    This paper presents an efficient and innovative system for automated classification of periodontal diseases. The strength of our technique lies in the fact that it incorporates knowledge from the patients' clinical data, along with the features automatically extracted from the Haematoxylin and Eosin (H&E) stained microscopic images. Our system uses image processing techniques based on color deconvolution, morphological operations, and watershed transforms for epithelium & connective tissue se...

  7. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; Illsley, Jeannette

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  8. Using Order Tracking Analysis Method to Detect the Angle Faults of Blades on Wind Turbine

    DEFF Research Database (Denmark)

    Li, Pengfei; Hu, Weihao; Liu, Juncheng;

    2016-01-01

    The angle faults of blades on wind turbines usually include the set angle fault and the pitch angle fault, and they account for a high proportion of all wind turbine faults. Compared with traditional fault detection methods, using the order tracking analysis method to detect angle faults has many advantages, such as easy implementation and high system reliability. Because the Power Spectral Density (PSD) and Fast Fourier Transform (FFT) methods cannot yield clear fault characteristic frequencies, this kind of fault should be detected by a more effective method. This paper proposes a novel method that uses order tracking analysis to analyze the input aerodynamic torque signal received by the hub. After this analysis, the fault characteristic frequency can be extracted from the analyzed signals and compared with the signals from normal operating conditions...
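
    As a hedged illustration of the core idea behind order tracking, resampling a time-domain signal onto a uniform shaft-angle grid so that rotation-locked components appear at fixed orders, the following Python sketch uses a synthetic swept-speed signal; the function name, the synthetic torque-like signal and all parameters are illustrative assumptions, not material from the paper.

    ```python
    import numpy as np

    def order_spectrum(signal, shaft_angle_rad, samples_per_rev=64):
        """Resample a time-domain signal to uniform shaft-angle increments and
        return its order spectrum (amplitude versus shaft order)."""
        revs = shaft_angle_rad / (2 * np.pi)                 # cumulative revolutions
        n_rev = int(np.floor(revs[-1]))
        rev_grid = np.arange(0, n_rev, 1.0 / samples_per_rev)
        resampled = np.interp(rev_grid, revs, signal)        # angle-domain resampling
        windowed = resampled * np.hanning(len(resampled))
        amp = 2 * np.abs(np.fft.rfft(windowed)) / len(resampled)
        orders = np.fft.rfftfreq(len(resampled), d=1.0 / samples_per_rev)
        return orders, amp

    # Synthetic torque-like signal: shaft sweeps 10 -> 20 Hz, component at order 3
    t = np.linspace(0.0, 10.0, 100_000)
    f_inst = 10.0 + t                                        # instantaneous shaft frequency (Hz)
    angle = 2 * np.pi * np.cumsum(f_inst) * (t[1] - t[0])    # shaft angle (rad)
    x = np.sin(3 * angle) + 0.1 * np.random.randn(len(t))
    orders, amp = order_spectrum(x, angle)
    print("dominant order:", round(orders[1 + np.argmax(amp[1:])], 2))   # expected ~3
    ```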

  9. NASA Space Flight Vehicle Fault Isolation Challenges

    Science.gov (United States)

    Bramon, Christopher; Inman, Sharon K.; Neeley, James R.; Jones, James V.; Tuttle, Loraine

    2016-01-01

    The Space Launch System (SLS) is the new NASA heavy lift launch vehicle and is scheduled for its first mission in 2017. The goal of the first mission, which will be uncrewed, is to demonstrate the integrated system performance of the SLS rocket and spacecraft before a crewed flight in 2021. SLS has many of the same logistics challenges as any other large scale program. Common logistics concerns for SLS include integration of discrete programs geographically separated, multiple prime contractors with distinct and different goals, schedule pressures and funding constraints. However, SLS also faces unique challenges. The new program is a confluence of new hardware and heritage, with heritage hardware constituting seventy-five percent of the program. This unique approach to design makes logistics concerns such as testability of the integrated flight vehicle especially problematic. The cost of fully automated diagnostics can be completely justified for a large fleet, but not so for a single flight vehicle. Fault detection is mandatory to assure the vehicle is capable of a safe launch, but fault isolation is another issue. SLS has considered various methods for fault isolation which can provide a reasonable balance between adequacy, timeliness and cost. This paper will address the analyses and decisions the NASA Logistics engineers are making to mitigate risk while providing a reasonable testability solution for fault isolation.

  10. Workflow Fault Tree Generation Through Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2014-01-01

    We present a framework for the automated generation of fault trees from models of real-world process workflows, expressed in a formalised subset of the popular Business Process Modelling and Notation (BPMN) language. To capture uncertainty and unreliability in workflows, we extend this formalism...... with probabilistic non-deterministic branching. We present an algorithm that allows for exhaustive generation of possible error states that could arise in execution of the model, where the generated error states allow for both fail-stop behaviour and continued system execution. We employ stochastic model checking...... to calculate the probabilities of reaching each non-error system state. Each generated error state is assigned a variable indicating its individual probability of occurrence. Our method can determine the probability of combined faults occurring, while accounting for the basic probabilistic structure...

  11. Chaos Synchronization Based Novel Real-Time Intelligent Fault Diagnosis for Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chin-Tsung Hsieh

    2014-01-01

    Full Text Available The traditional solar photovoltaic fault diagnosis system needs two to three sets of sensing elements to capture fault signals as fault features and many fault diagnosis methods cannot be applied with real time. The fault diagnosis method proposed in this study needs only one set of sensing elements to intercept the fault features of the system, which can be real-time-diagnosed by creating the fault data of only one set of sensors. The aforesaid two points reduce the cost and fault diagnosis time. It can improve the construction of the huge database. This study used Matlab to simulate the faults in the solar photovoltaic system. The maximum power point tracker (MPPT is used to keep a stable power supply to the system when the system has faults. The characteristic signal of system fault voltage is captured and recorded, and the dynamic error of the fault voltage signal is extracted by chaos synchronization. Then, the extension engineering is used to implement the fault diagnosis. Finally, the overall fault diagnosis system only needs to capture the voltage signal of the solar photovoltaic system, and the fault type can be diagnosed instantly.

  12. Determination of propoxur in environmental samples by automated solid-phase extraction followed by flow-injection analysis with tris(2,2'-bipyridyl)ruthenium(II) chemiluminescence detection

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Ruiz, Tomas [Department of Analytical Chemistry, Faculty of Chemistry, University of Murcia, 30071 Murcia (Spain)]. E-mail: tpr@um.es; Martinez-Lozano, Carmen [Department of Analytical Chemistry, Faculty of Chemistry, University of Murcia, 30071 Murcia (Spain); Garcia, Maria Dolores [Department of Analytical Chemistry, Faculty of Chemistry, University of Murcia, 30071 Murcia (Spain)

    2007-02-19

    A sensitive method for the analysis of propoxur in environmental samples has been developed. It involves an automated solid-phase extraction (SPE) procedure using a Gilson Aspec XLi and flow-injection analysis (FI) with chemiluminescence (CL) detection. The FI-CL system relies on the photolysis of propoxur by irradiation using a low-pressure mercury lamp (main spectral line 254 nm). The resultant methylamine is subsequently detected by CL using tris(2,2'-bipyridyl)ruthenium(III), which is on-line generated by photo-oxidation of the ruthenium(II) complex in the presence of peroxydisulfate. The linear concentration range of application was 0.05-5 µg mL⁻¹ of propoxur, with a detection limit of 5 ng mL⁻¹. The repeatability was 0.82% expressed as relative standard deviation (n = 10) and the reproducibility, studied on 5 consecutive days, was 2.1%. The sample throughput was 160 injections per hour. Propoxur residues below ng mL⁻¹ levels could be determined in environmental water samples when an SPE preconcentration device was coupled on-line with the FI system. This SPE-FI-CL arrangement provides a detection limit as low as 5 ng L⁻¹ using only 500 mL of sample. In the analysis of fruits and vegetables, the detection limit was about 10 µg kg⁻¹.

  13. Determination of propoxur in environmental samples by automated solid-phase extraction followed by flow-injection analysis with tris(2,2'-bipyridyl)ruthenium(II) chemiluminescence detection.

    Science.gov (United States)

    Pérez-Ruiz, Tomás; Martínez-Lozano, Carmen; García, María Dolores

    2007-02-19

    A sensitive method for the analysis of propoxur in environmental samples has been developed. It involves an automated solid-phase extraction (SPE) procedure using a Gilson Aspec XLi and flow-injection analysis (FI) with chemiluminescence (CL) detection. The FI-CL system relies on the photolysis of propoxur by irradiation using a low-pressure mercury lamp (main spectral line 254 nm). The resultant methylamine is subsequently detected by CL using tris(2,2'-bipyridyl)ruthenium(III), which is on-line generated by photo-oxidation of the ruthenium(II) complex in the presence of peroxydisulfate. The linear concentration range of application was 0.05-5 microg mL(-1) of propoxur, with a detection limit of 5 ng mL(-1). The repeatability was 0.82% expressed as relative standard deviation (n=10) and the reproducibility, studied on 5 consecutive days, was 2.1%. The sample throughput was 160 injections per hour. Propoxur residues below ng mL(-1) levels could be determined in environmental water samples when an SPE preconcentration device was coupled on-line with the FI system. This SPE-FI-CL arrangement provides a detection limit as low as 5 ng L(-1) using only 500 mL of sample. In the analysis of fruits and vegetables, the detection limit was about 10 microg kg(-1).

  14. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical composi- tions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions....... The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows...... for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...

  15. Automated synthetic scene generation

    Science.gov (United States)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  16. Characterization of leaky faults

    International Nuclear Information System (INIS)

    Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs

  17. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool that is developed to analyse the structural model of automated systems in order to identify redundant information that is hence utilized for Fault detection and Isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri...

  18. Fault kinematic and Mesozoic paleo-stress evolution of the Hoop fault complex, Barents Sea

    Science.gov (United States)

    Etchebes, Marie; Athmer, Wiebke; Stueland, Eirik; Robertson, Sarah C.; Bounaim, Aicha; Steckhan, Dirk; Hellem Boe, Trond; Brenna, Trond; Sonneland, Lars; Reidar Granli, John

    2016-04-01

    The Hoop fault complex is an extensional fault system characterized by a series of multiscale half- and full-grabens trending NNE-SSW, NE-SW and E-W, and transfer zones striking ENE-WSW. In a joint collaboration between OMV Norge and Schlumberger Stavanger Research, the tectonic history of the Hoop area was assessed. A dense fault network was extracted from 3D seismic data using a novel workflow for mapping large and complex fault systems. The characterization of the fault systems was performed by integrating observations from (1) fault plane analysis, (2) geometrical shapes and crosscutting relationships of the different fault sets, (3) time-thickness maps, and (4) by establishing the relative timing of the tectonic events on key seismic lines orthogonal to the main fault strike azimuths. At least four successive extensional tectonic events affecting the Hoop fault complex have been identified in the Mesozoic. The first tectonic event is characterized by an Upper Triassic extensional event with an E-W trending maximum horizontal paleo-stress direction (Phase 1). This event led to new accommodation space established as a set of full-grabens. The grabens were orthogonally crosscut during the Middle Jurassic by a set of NNE-SSW striking grabens and half-grabens (Phase 2). Phase 3 was inferred from a set of E-W striking reactivated normal faults sealed by the Upper Jurassic-Lower Cretaceous sequence. In the Lower Cretaceous, the general trend of the maximum horizontal paleo-stress axis of Phase 2 rotates clockwise from NNE-SSW to NE-SW (Phase 4). This stress rotation induced the reactivation of Phase 2 and Phase 3 normal fault sets, producing west-dipping half-grabens/tilt-block systems and transtensional fault zones. A comparison between our results and the Mesozoic regional-scale tectonic events published for the Atlantic-Arctic region agrees with our reconstructed paleo-stress history. This implies that the Hoop fault complex is the result of far-field forces

  19. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE

    Directory of Open Access Journals (Sweden)

    T. Fang

    2014-07-01

    Full Text Available A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with the large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 12% for standards, 4% for ambient samples) and a reasonably low limit of detection (0.31 nmol min−1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution and Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, road-side, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per air volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in southeast US. However, the greater heterogeneity in the intrinsic DTT activity (per PM mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.
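
    As a rough illustration of how a DTT-based oxidative potential value is typically obtained, the sketch below fits the depletion of DTT over time to obtain a consumption rate (nmol/min) and normalizes it per air volume and per PM mass; all numbers, names and the blank correction are illustrative assumptions rather than the SCAPE protocol.

    ```python
    import numpy as np

    def dtt_activity(time_min, dtt_nmol, blank_rate=0.0):
        """Depletion rate of DTT (nmol/min) from a time series of remaining DTT,
        corrected for a blank; the rate is the negative slope of a linear fit."""
        slope, _ = np.polyfit(time_min, dtt_nmol, 1)
        return -slope - blank_rate

    # Illustrative numbers only
    time_min = np.array([0, 10, 20, 30, 40])
    dtt_nmol = np.array([100.0, 92.5, 85.3, 77.8, 70.6])   # remaining DTT
    rate = dtt_activity(time_min, dtt_nmol, blank_rate=0.05)

    air_volume_m3 = 10.0      # air volume represented by the extract aliquot (assumed)
    pm_mass_ug = 150.0        # PM2.5 mass in the aliquot (assumed)
    print(f"DTT activity: {rate:.2f} nmol/min")
    print(f"volume-normalized: {rate / air_volume_m3:.3f} nmol/min/m3")
    print(f"mass-normalized:   {rate / pm_mass_ug * 1000:.3f} nmol/min/mg")
    ```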

  20. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    Science.gov (United States)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2015-01-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 15% for positive control, 4% for ambient samples) and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution & Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, roadside, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. The correlation may also suggest a mechanistic explanation (oxidative stress) for observed PM2.5 mass-health associations. The heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the

  1. Manufacturing and automation

    Directory of Open Access Journals (Sweden)

    Ernesto Córdoba Nieto

    2010-04-01

    Full Text Available The article presents concepts and definitions from different sources concerning automation. The work approaches automation by virtue of the author’s experience in manufacturing production; why and how automation projects are embarked upon is considered. Technological reflection regarding the progressive advances or stages of automation in the production area is stressed. Coriat and Freyssenet’s thoughts about and approaches to the problem of automation and its current state are taken and examined, especially that referring to the problem’s relationship with reconciling the level of automation with the flexibility and productivity demanded by competitive, worldwide manufacturing.

  2. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  3. Research on Fault Feature Extraction of Centrifugal Compressor Based on Wavelet Packet Method

    Institute of Scientific and Technical Information of China (English)

    史生霖; 陈长征; 李延斌; 张晶

    2013-01-01

    The fault diagnosis of centrifugal compressors is an important aspect of mechanical fault detection. The wavelet packet analysis method can extract the weak, non-stationary vibration signals emitted by the machine and fully reflect the fault information. Therefore, this method provides an accurate and efficient approach to the fault diagnosis of centrifugal compressors.
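
    The following minimal sketch, based on the PyWavelets library, shows one common way to turn a wavelet packet decomposition into a fault feature vector (relative energy per sub-band); the wavelet, decomposition level and synthetic signal are assumptions, not taken from the paper.

    ```python
    import numpy as np
    import pywt

    def wavelet_packet_energy_features(signal, wavelet="db4", level=3):
        """Decompose a vibration signal into 2**level wavelet packet sub-bands
        and return the normalized energy of each band as a fault feature vector."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="freq")          # sub-bands ordered by frequency
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        return energies / energies.sum()                   # relative energy per band

    # Synthetic vibration-like signal (illustrative only)
    fs = 2048
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 400 * t) \
        + 0.05 * np.random.randn(len(t))
    features = wavelet_packet_energy_features(x)
    print("relative band energies:", np.round(features, 3))
    ```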

  4. Fault Analysis in Cryptography

    CERN Document Server

    Joye, Marc

    2012-01-01

    In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

  5. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford; Richard Rusaw

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features based on technical examinations that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
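
    At the basic level described above, a fault signature is an asset type, a fault type and a set of fault features; a minimal sketch of how such a record might be represented and matched against observed symptoms is given below, with hypothetical example values that are not taken from the AFS Database.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FaultFeature:
        name: str            # observable symptom, e.g. a monitored parameter
        behavior: str        # how the symptom manifests (trend, threshold, pattern)

    @dataclass
    class AssetFaultSignature:
        asset_type: str                      # e.g. "emergency diesel generator"
        fault_type: str                      # e.g. "cooling system degradation"
        features: List[FaultFeature] = field(default_factory=list)

        def matches(self, observed: set) -> float:
            """Fraction of the signature's features present in the observed symptoms."""
            if not self.features:
                return 0.0
            hits = sum(1 for f in self.features if f.name in observed)
            return hits / len(self.features)

    # Hypothetical signature and observation
    sig = AssetFaultSignature(
        asset_type="emergency diesel generator",
        fault_type="cooling system degradation",
        features=[FaultFeature("coolant_temperature", "rising trend"),
                  FaultFeature("coolant_pressure", "below threshold")],
    )
    print(sig.matches({"coolant_temperature"}))   # 0.5
    ```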

  6. Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data

    Science.gov (United States)

    Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.

    2010-01-01

    Models of fault scarp morphology have been previously used to infer the relative age of different fault scarps in a fault zone using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages within high-resolution digital topography. Scarp degradation is modeled as a diffusive mass transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squared sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
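
    A hedged sketch of the underlying idea: a scarp degraded by linear diffusion approaches an error-function profile whose width grows with the morphologic age kappa*t, which can be estimated by least-squares fitting; this is a simplified stand-in for, not a reproduction of, the authors' normalization and signal-to-noise scheme, and all numbers are synthetic.

    ```python
    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    def scarp_profile(x, kt, half_offset, far_slope):
        """Elevation across a diffusively degraded scarp centred at x = 0;
        kt is the morphologic age (diffusivity * time, in m^2)."""
        return half_offset * erf(x / (2.0 * np.sqrt(kt))) + far_slope * x

    # Synthetic "observed" profile with a true morphologic age of 25 m^2
    x = np.linspace(-50.0, 50.0, 201)
    truth = scarp_profile(x, 25.0, 1.5, 0.02)
    observed = truth + 0.03 * np.random.randn(len(x))

    popt, _ = curve_fit(scarp_profile, x, observed, p0=[10.0, 1.0, 0.0],
                        bounds=([1e-3, 0.0, -1.0], [1e4, 100.0, 1.0]))
    kt_est, half_offset_est, _ = popt
    print(f"estimated morphologic age kt ~ {kt_est:.1f} m^2, "
          f"scarp height ~ {2 * half_offset_est:.2f} m")
    ```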

  7. Latest Progress of Fault Detection and Localization in Complex Electrical Engineering

    Science.gov (United States)

    Zhao, Zheng; Wang, Can; Zhang, Yagang; Sun, Yi

    2014-01-01

    In research on complex electrical engineering, efficient fault detection and localization schemes are essential to quickly detect and locate faults so that appropriate and timely corrective, mitigating and maintenance actions can be taken. In this paper, under the current measurement precision of PMU, we put forward a new type of fault detection and localization technology based on fault factor feature extraction. Extensive simulation experiments indicate that, despite disturbances of white Gaussian stochastic noise, the fault detection and localization results based on the fault factor feature extraction principle are still accurate and reliable, which also shows that the fault detection and localization technology has strong anti-interference ability and great redundancy.

  8. Wavelet Transform and Neural Networks in Fault Diagnosis of a Motor Rotor

    Institute of Scientific and Technical Information of China (English)

    RONG Ming-xing

    2012-01-01

    In motor fault diagnosis, detection of vibration and of stator current frequency components are the two main means. This article discusses the signal detection method based on vibration faults. Because the motor vibration signal is a non-stationary random signal, fault signals often contain many time-varying, bursty components. The traditional Fourier signal analysis cannot effectively extract the motor fault characteristics, and may even treat weak signals that are rich in failure information as noise. Therefore, we introduce the wavelet packet transform to extract the fault characteristic information of the signal. The obtained result was used as the neural network input signal, the L-M optimization method was used for training the neural network, and the BP network was then used for fault recognition. This paper uses Matlab software to simulate and confirm the validity and accuracy of the motor fault diagnosis method.

  9. Fuzzy fault diagnostic system based on fault tree analysis

    OpenAIRE

    Yang, Zong Xiao; Suzuki, Kazuhiko; Shimada, Yukiyasu; Sayama, Hayatoshi

    1995-01-01

    A method is presented for process fault diagnosis using information from fault tree analysis and uncertainty/imprecision of data. Fault tree analysis, which has been used as a method of system reliability/safety analysis, provides a procedure for identifying failures within a process. A fuzzy fault diagnostic system is constructed which uses the fuzzy fault tree analysis to represent a knowledge of the causal relationships in process operation and control system. The proposed method is applie...

  10. Development of a fully automated sequential injection solid-phase extraction procedure coupled to liquid chromatography to determine free 2-hydroxy-4-methoxybenzophenone and 2-hydroxy-4-methoxybenzophenone-5-sulphonic acid in human urine

    International Nuclear Information System (INIS)

    2-Hydroxy-4-methoxybenzophenone and 2-hydroxy-4-methoxybenzophenone-5-sulphonic acid, commonly known as benzophenone-3 (BZ3) and benzophenone-4 (BZ4), respectively, are substances widely used as UV filters in cosmetic products in order to absorb UV radiation and protect human skin from direct exposure to the deleterious wavelengths of sunlight. As with other UV filters, there is evidence of their percutaneous absorption. This work describes an analytical method developed to determine trace levels of free BZ3 and BZ4 in human urine. The methodology is based on a solid-phase extraction (SPE) procedure for clean-up and pre-concentration, followed by the monitoring of the UV filters by liquid chromatography-ultraviolet spectrophotometry detection (LC-UV). In order to improve not only the sensitivity and selectivity, but also the precision of the method, the principle of sequential injection analysis was used to automate the SPE process and to transfer the eluates from the SPE to the LC system. The application of a six-channel valve as an interface for the switching arrangements successfully allowed the on-line connection of SPE sample processing with LC analysis. The SPE process for BZ3 and BZ4 was performed using octadecyl (C18) and diethylaminopropyl (DEA) modified silica microcolumns, respectively, in which the analytes were retained and eluted selectively. Due to the matrix effects, the determination was based on standard addition quantification and was fully validated. The relative standard deviations of the results were 13% and 6% for BZ3 and BZ4, respectively, whereas the limits of detection were 60 and 30 ng mL-1, respectively. The method was satisfactorily applied to determine BZ3 and BZ4 in urine from volunteers that had applied a sunscreen cosmetic containing both UV filters.

  11. Configuration Management Automation (CMA)

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  12. Workflow automation architecture standard

    Energy Technology Data Exchange (ETDEWEB)

    Moshofsky, R.P.; Rohen, W.T. [Boeing Computer Services Co., Richland, WA (United States)

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  13. Development of an automated scoring system for plant comet assay

    Directory of Open Access Journals (Sweden)

    Bertrand Pourrut

    2015-05-01

    - nucleus density: increasing the density of nuclei is important to increase scoring reliability (Sharma et al., 2012). In conclusion, increasing plant nucleus extraction yield and automating the scoring of nuclei represent big challenges. However, our promising preliminary results open up the perspective of automated high-throughput scoring of plant nuclei.

  14. Fault Length Vs Fault Displacement Evaluation In The Case Of Cerro Prieto Pull-Apart Basin (Baja California, Mexico) Subsidence

    Science.gov (United States)

    Glowacka, E.; Sarychikhina, O.; Nava Pichardo, F. A.; Farfan, F.; Garcia Arthur, M. A.; Orozco, L.; Brassea, J.

    2013-05-01

    The Cerro Prieto pull-apart basin is located in the southern part of the San Andreas Fault system and is characterized by high seismicity, recent volcanism, tectonic deformation and hydrothermal activity (Lomnitz et al., 1970; Elders et al., 1984; Suárez-Vidal et al., 2008). Since production at the Cerro Prieto geothermal field started in 1973, a significant increase in subsidence has been observed (Glowacka and Nava, 1996; Glowacka et al., 1999), and a relation between fluid extraction rate and subsidence rate has been suggested (op. cit.). Analysis of existing deformation data (Glowacka et al., 1999, 2005; Sarychikhina, 2011) points to the fact that, although changes in extraction influence the subsidence rate, the tectonic faults control the spatial extent of the observed subsidence. Tectonic faults act as water barriers in the direction perpendicular to the fault and/or separate regions with different compaction, and as a result a significant part of the subsidence is released as vertical displacement of the ground surface along fault ruptures. These fault ruptures cause damage to roads and irrigation canals and cause water leakage. Since 1996, a network of geotechnical instruments has operated in the Mexicali Valley for continuous recording of deformation phenomena. To date, the network (REDECVAM: Mexicali Valley Crustal Strain Measurement Array) includes two crackmeters and eight tiltmeters installed on, or very close to, the main faults; all instruments have sampling intervals in the 1 to 20 minute range. Additionally, there are benchmarks for measuring vertical fault displacements, for which readings are recorded every 3 months. Since the crackmeter measures vertical displacement on the fault at one place only, the question arises: can we use the crackmeter data to evaluate the length of the fractured fault and how quickly it grows, so that we know where to expect fractures in canals or roads? We used the Wells and Coppersmith (1994) relations between

  15. Image segmentation for automated dental identification

    Science.gov (United States)

    Haj Said, Eyad; Nassar, Diaa Eldin M.; Ammar, Hany H.

    2006-02-01

    Dental features are one of the few biometric identifiers that qualify for postmortem identification; therefore, creation of an Automated Dental Identification System (ADIS) with goals and objectives similar to the Automated Fingerprint Identification System (AFIS) has received increased attention. As a part of ADIS, teeth segmentation from dental radiograph films is an essential step in the identification process. In this paper, we introduce a fully automated approach for teeth segmentation with the goal of extracting at least one tooth from the dental radiograph film. We evaluate our approach on a theoretical and empirical basis, and we compare its performance with the performance of other approaches introduced in the literature. The results show that our approach exhibits the lowest failure rate and the highest optimality among all fully automated approaches introduced in the literature.

  16. Transformer fault diagnosis using continuous sparse autoencoder

    OpenAIRE

    Wang, Lukun; Zhao, Xiaoying; Pei, Jiangnan; Tang, Gongyou

    2016-01-01

    This paper proposes a novel continuous sparse autoencoder (CSAE) which can be used in unsupervised feature learning. The CSAE adds Gaussian stochastic unit into activation function to extract features of nonlinear data. In this paper, CSAE is applied to solve the problem of transformer fault recognition. Firstly, based on dissolved gas analysis method, IEC three ratios are calculated by the concentrations of dissolved gases. Then IEC three ratios data is normalized to reduce data singularity ...

  17. Fault diagnosis of rolling bearings based on multifractal detrended fluctuation analysis and Mahalanobis distance criterion

    Science.gov (United States)

    Lin, Jinshan; Chen, Qian

    2013-07-01

    Vibration data of faulty rolling bearings are usually nonstationary and nonlinear, and contain fairly weak fault features. As a result, feature extraction of rolling bearing fault data is always an intractable problem and has attracted considerable attention for a long time. This paper introduces multifractal detrended fluctuation analysis (MF-DFA) to analyze bearing vibration data and proposes a novel method for fault diagnosis of rolling bearings based on MF-DFA and Mahalanobis distance criterion (MDC). MF-DFA, an extension of monofractal DFA, is a powerful tool for uncovering the nonlinear dynamical characteristics buried in nonstationary time series and can capture minor changes of complex system conditions. To begin with, by MF-DFA, multifractality of bearing fault data was quantified with the generalized Hurst exponent, the scaling exponent and the multifractal spectrum. Consequently, controlled by essentially different dynamical mechanisms, the multifractality of four heterogeneous bearing fault data is significantly different; by contrast, controlled by slightly different dynamical mechanisms, the multifractality of homogeneous bearing fault data with different fault diameters is significantly or slightly different depending on different types of bearing faults. Therefore, the multifractal spectrum, as a set of parameters describing multifractality of time series, can be employed to characterize different types and severity of bearing faults. Subsequently, five characteristic parameters sensitive to changes of bearing fault conditions were extracted from the multifractal spectrum and utilized to construct fault features of bearing fault data. Moreover, Hilbert transform based envelope analysis, empirical mode decomposition (EMD) and wavelet transform (WT) were utilized to study the same bearing fault data. Also, the kurtosis and the peak levels of the EMD or the WT component corresponding to the bearing tones in the frequency domain were carefully checked
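
    For readers unfamiliar with the procedure, a compact sketch of the core MF-DFA computation (profile, segment-wise polynomial detrending, q-order fluctuation function, generalized Hurst exponent) follows; it is a simplified illustration with assumed scales and q values, not the authors' implementation.

    ```python
    import numpy as np

    def mfdfa(x, scales, q_values, poly_order=1):
        """Return the generalized Hurst exponent h(q) of series x by MF-DFA."""
        profile = np.cumsum(x - np.mean(x))          # step 1: profile
        h = []
        for q in q_values:
            log_f = []
            for s in scales:
                n_seg = len(profile) // s
                f2 = []
                for v in range(n_seg):               # step 2: detrend each segment
                    seg = profile[v * s:(v + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, poly_order), t)
                    f2.append(np.mean((seg - trend) ** 2))
                f2 = np.asarray(f2)
                if q == 0:                           # step 3: q-order fluctuation
                    fq = np.exp(0.5 * np.mean(np.log(f2)))
                else:
                    fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
                log_f.append(np.log(fq))
            # step 4: h(q) is the slope of log F_q(s) versus log s
            h.append(np.polyfit(np.log(scales), log_f, 1)[0])
        return np.array(h)

    # White noise should give h(q) close to 0.5 for all q
    x = np.random.randn(4096)
    scales = np.array([16, 32, 64, 128, 256])
    q_values = [-4, -2, 2, 4]
    print(np.round(mfdfa(x, scales, q_values), 2))
    ```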

  18. Fault management for the Space Station Freedom control center

    Science.gov (United States)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.

  19. Teager Energy Spectrum for Fault Diagnosis of Rolling Element Bearings

    Science.gov (United States)

    Feng, Zhipeng; Wang, Tianjin; Zuo, Ming J.; Chu, Fulei; Yan, Shaoze

    2011-07-01

    Localized damage of rolling element bearings generates periodic impulses during running. The repeating frequency of impulses is a key indicator for diagnosing the localized damage of bearings. A new method, called Teager energy spectrum, is proposed to diagnose the faults of rolling element bearings. It exploits the unique advantages of Teager energy operator in detecting transient components in signals to extract periodic impulses of bearing faults, and uses the Fourier spectrum of Teager energy to identify the characteristic frequency of bearing faults. The effectiveness of the proposed method is validated by analyzing the experimental bearing vibration signals.
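
    A minimal sketch of the proposed idea, the discrete Teager energy operator followed by a Fourier spectrum of the Teager energy, is given below on a synthetic impulse-train signal; the resonance frequency, decay rate and repetition rate are illustrative assumptions, not values from the experiments.

    ```python
    import numpy as np

    def teager_energy(x):
        """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    def teager_energy_spectrum(x, fs):
        """Fourier spectrum of the Teager energy, used to reveal the repetition
        frequency of fault-induced impulses."""
        te = teager_energy(x)
        te = te - np.mean(te)
        spec = np.abs(np.fft.rfft(te * np.hanning(len(te))))
        freqs = np.fft.rfftfreq(len(te), d=1.0 / fs)
        return freqs, spec

    # Synthetic bearing-like signal: a 2 kHz resonance excited 100 times per second
    fs = 20_000
    t = np.arange(0.0, 1.0, 1.0 / fs)
    burst = np.exp(-400 * t[:400]) * np.sin(2 * np.pi * 2000 * t[:400])
    train = np.zeros(len(t))
    train[::fs // 100] = 1.0                               # one impulse every 1/100 s
    x = np.convolve(train, burst, mode="same") + 0.01 * np.random.randn(len(t))

    freqs, spec = teager_energy_spectrum(x, fs)
    print("dominant repetition frequency (Hz):",
          round(freqs[1 + np.argmax(spec[1:])], 1))        # expected near 100 Hz
    ```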

  20. Bevel Gearbox Fault Diagnosis using Vibration Measurements

    Directory of Open Access Journals (Sweden)

    Hartono Dennis

    2016-01-01

    Full Text Available The use of vibration measurement analysis has been proven to be effective for gearbox fault diagnosis. However, the complexity of vibration signals observed from a gearbox makes it difficult to accurately detect faults in the gearbox. This work is based on a comparative study of several time-frequency signal processing methods that can be used to extract information from transient vibration signals containing useful diagnostic information. Experiments were performed on a bevel gearbox test rig using vibration measurements obtained from accelerometers. Initially, the discrete wavelet transform was implemented for vibration signal analysis to extract the frequency content of the signal from the relevant frequency region. Several time-frequency signal processing methods were then incorporated to extract the fault features of vibration signals and their diagnostic performances were compared. It was shown that the Short Time Fourier Transform (STFT) could not offer a good time resolution to detect the periodicity of the faulty gear tooth due to the difficulty in choosing an appropriate window length to capture the impulse signal. The Continuous Wavelet Transform (CWT), on the other hand, was suitable for detection of vibration transients generated by localized faults from a gearbox due to its multi-scale property. However, both methods still require a thorough visual inspection. In contrast, it was shown from the experiments that the diagnostic method using cepstrum analysis could provide a direct indication of the faulty tooth without the need for a thorough visual inspection as required by CWT and STFT.
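
    Since the cepstrum is singled out above as the most direct indicator, the following sketch shows a real cepstrum applied to a synthetic gear-like signal in which a faulty tooth produces impacts 25 times per second; the signal parameters and sampling rate are assumptions chosen only to make the quefrency peak visible, not the test-rig conditions.

    ```python
    import numpy as np

    def real_cepstrum(x):
        """Real cepstrum: inverse FFT of the log magnitude spectrum."""
        spectrum = np.fft.fft(x * np.hanning(len(x)))
        return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

    # Synthetic gearbox-like signal at 10 kHz: a 300 Hz mesh tone plus a train of
    # short impacts, one per faulty-tooth passage, repeating 25 times per second.
    fs = 10_000
    t = np.arange(0.0, 1.0, 1.0 / fs)
    mesh = 0.5 * np.sin(2 * np.pi * 300 * t)
    impact = np.exp(-1500 * t[:100]) * np.sin(2 * np.pi * 2000 * t[:100])
    train = np.zeros(len(t))
    train[::fs // 25] = 1.0                                # one impact every 1/25 s
    x = mesh + np.convolve(train, impact, mode="same") + 0.002 * np.random.randn(len(t))

    ceps = real_cepstrum(x)
    quefrency = np.arange(len(x)) / fs
    lo, hi = int(0.010 * fs), int(0.100 * fs)              # search 10-100 ms
    peak_q = quefrency[lo + np.argmax(ceps[lo:hi])]
    print(f"peak quefrency ~ {peak_q * 1000:.1f} ms "
          f"-> repetition ~ {1.0 / peak_q:.1f} per second")
    # expected near 40 ms, i.e. the assumed 25 Hz faulty-tooth repetition
    ```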

  1. Shoe-String Automation

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  2. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award winning Feature Analyst®...

  3. Fault tolerant model predictive control of open channels

    OpenAIRE

    Horváth, Klaudia; Blesa Izquierdo, Joaquim; Duviella, Eric; Chuquet, Karine

    2014-01-01

    Automated control of water systems (irrigation canals, navigation canals, rivers etc.) relies on the measured data. The control action is calculated, in case of feedback controller, directly from the on-line measured data. If the measured data is corrupted, the calculated control action will have a different effect than it is desired. Therefore, it is crucial that the feedback controller receives good quality measurement data. On-line fault detection techniques can be applied in order to dete...

  4. Transformer Fault Diagnosis Based on Support Vector Machines%基于支持向量机的变压器故障诊断

    Institute of Scientific and Technical Information of China (English)

    刘义艳; 陈晨; 亢旭红; 巨永锋

    2011-01-01

    Due to the lack of typical fault samples for transformer fault diagnosis, a new fault diagnosis method based on support vector machines (SVMs) is presented. In this method, samples of the five characteristic gases dissolved in transformer oil are pre-selected by the K-means clustering (KMC) method as feature vectors, which are input into multi-class SVMs for training; the SVMs diagnosis model is then established to classify fault samples. The results of experiment and analysis show that with the KMC algorithm the diagnostic information is concentrated and the great time consumption of parameter determination is effectively reduced. Under the condition of few samples, the presented method can detect transformer faults with a high correct judgment rate and thereby achieves the goal of automated transformer fault diagnosis.
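
    A hedged sketch of the overall flow with scikit-learn, k-means condensation of gas samples into prototype feature vectors followed by a multi-class SVM, is shown below; the synthetic "dissolved gas" data, class centers and hyperparameters are placeholders, not DGA data or settings from the paper.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-in for dissolved-gas features (H2, CH4, C2H6, C2H4, C2H2)
    # for three hypothetical fault classes.
    X, y = [], []
    for label, center in enumerate([[5, 1, 1, 1, 0], [1, 5, 3, 1, 0], [1, 1, 1, 5, 3]]):
        X.append(center + rng.normal(scale=0.5, size=(200, 5)))
        y.append(np.full(200, label))
    X, y = np.vstack(X), np.concatenate(y)

    # Condense each class to a few representative prototypes with k-means,
    # then train a multi-class SVM (one-vs-one internally) on the prototypes.
    prototypes, proto_labels = [], []
    for label in np.unique(y):
        km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X[y == label])
        prototypes.append(km.cluster_centers_)
        proto_labels.append(np.full(10, label))
    Xp, yp = np.vstack(prototypes), np.concatenate(proto_labels)

    scaler = StandardScaler().fit(Xp)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(scaler.transform(Xp), yp)
    print("accuracy on the full synthetic set:",
          round(clf.score(scaler.transform(X), y), 3))
    ```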

  5. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    Science.gov (United States)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP, which efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a specialty.

  6. An Effective Strategy to Build Up a Balanced Test Suite for Spectrum-Based Fault Localization

    OpenAIRE

    Ning Li; Rui Wang; Yu-li Tian; Wei Zheng

    2016-01-01

    During past decades, many automated software faults diagnosis techniques including Spectrum-Based Fault Localization (SBFL) have been proposed to improve the efficiency of software debugging activity. In the field of SBFL, suspiciousness calculation is closely related to the number of failed and passed test cases. Studies have shown that the ratio of the number of failed and passed test case has more significant impact on the accuracy of SBFL than the total number of test cases, and a balance...
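
    The excerpt does not give the paper's suspiciousness formula, so the sketch below uses two widely cited SBFL formulas, Ochiai and Tarantula, computed from per-statement spectra (counts of failed and passed test cases that execute each statement); the spectra values are invented for illustration.

    ```python
    import math

    def ochiai(ef, ep, nf, np_):
        """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep))."""
        total_failed = ef + nf
        denom = math.sqrt(total_failed * (ef + ep))
        return ef / denom if denom else 0.0

    def tarantula(ef, ep, nf, np_):
        """Tarantula suspiciousness based on failed/passed coverage ratios."""
        total_failed, total_passed = ef + nf, ep + np_
        fail_ratio = ef / total_failed if total_failed else 0.0
        pass_ratio = ep / total_passed if total_passed else 0.0
        total = fail_ratio + pass_ratio
        return fail_ratio / total if total else 0.0

    # spectra per statement: (executed-by-failed, executed-by-passed,
    #                         not-executed-by-failed, not-executed-by-passed)
    spectra = {"s1": (3, 1, 0, 6), "s2": (1, 5, 2, 2), "s3": (0, 4, 3, 3)}
    for stmt, (ef, ep, nf, np_) in spectra.items():
        print(stmt, round(ochiai(ef, ep, nf, np_), 3),
              round(tarantula(ef, ep, nf, np_), 3))
    ```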

  7. Transient Fault Locating Method Based on Line Voltage and Zero-mode Current in Non-solidly Earthed Network

    Institute of Scientific and Technical Information of China (English)

    ZHANG Linli; XU Bingyin; XUE Yongduan; GAO Houlei

    2012-01-01

    Non-solidly earthed systems are widely used for medium voltage distribution networks at home and abroad. Fault point location, especially for the single phase-to-earth fault, is very difficult because the fault current is very weak and the fault arc is intermittent. Although several methods have been developed, the problem of fault location has not yet been resolved very well. A new fault location method based on transient components of the line voltage and 0-mode current is presented in this paper, which can realize fault section location by the feeder automation (FA) system. The line voltage signal can be obtained conveniently without requiring any additional equipment. This method is based on transient information and is not affected by the arc suppression coil.

  8. Expert systems for Space Station automation

    Science.gov (United States)

    Georgeff, M. P.; Firschein, O.

    1985-01-01

    The expert systems required for automating key functions of the Manned Space Station (MSS) are explored. It is necessary that the expert systems developed be flexible, degrade gracefully in the case of a failure, and be able to work with incomplete data. The AI systems will have to perform interpretation and diagnosis, design, prediction and induction, and monitoring and control functions. Both quantitative and qualitative reasoning capabilities need improvements, as do automatic verification techniques, explanation and learning capabilities, and the use of metaknowledge, i. e., knowledge about the knowledge contained in the knowledge base. Information retrieval, fault isolation and manufacturing process control demonstrations are needed to validate expert systems for the MSS.

  9. Active Fault Isolation in MIMO Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation. Here, a group of faults is isolated from other possible faults in the system. The group-wise fault isolation is based directly on the input/output signals applied for the fault detection. It is guaranteed that the fault group includes the fault that had occurred in the system. The second step is individual fault isolation in the fault group. Both types of isolation are obtained by applying dedicated...

  10. Fault Diagnosis for Electrical Distribution Systems using Structural Analysis

    DEFF Research Database (Denmark)

    Knüppel, Thyge; Blanke, Mogens; Østergaard, Jacob

    2014-01-01

    relations (ARR) are likely to change. The algorithms used for diagnosis may need to change accordingly, and finding efficient methods to ARR generation is essential to employ fault-tolerant methods in the grid. Structural analysis (SA) is based on graph-theoretical results, that offer to find analytic...... redundancies in large sets of equations only from the structure (topology) of the equations. A salient feature is automated generation of redundancy relations. The method is indeed feasible in electrical networks where circuit theory and network topology together formulate the constraints that define...... analysis of power systems, it demonstrates detection and isolation of failures in a network, and shows how typical faults are diagnosed. Nonlinear fault simulations illustrate the results....

  11. The effect of mechanical discontinuities on the growth of faults

    Science.gov (United States)

    Bonini, Lorenzo; Basili, Roberto; Bonanno, Emanuele; Toscani, Giovanni; Burrato, Pierfrancesco; Seno, Silvio; Valensise, Gianluca

    2016-04-01

    The growth of natural faults is controlled by several factors, including the nature of host rocks, the strain rate, the temperature, and the presence of fluids. In this work we focus on the mechanical characteristics of host rocks, and in particular on the role played by thin mechanical discontinuities on the upward propagation of faults and on associated secondary effects such as folding and fracturing. Our approach uses scaled, analogue models where natural rocks are simulated by wet clay (kaolin). A clay cake is placed above two rigid blocks in a hanging wall/footwall configuration on either side of a planar fault. Fault activity is simulated by motor-controlled movements of the hanging wall. We reproduce three types of faults: a 45°-dipping normal fault, a 45°-dipping reverse fault and a 30°-dipping reverse fault. These angles are selected as representative of most natural dip-slip faults. The analogues of the mechanical discontinuities are obtained by precutting the wet clay cake before starting the hanging wall movement. We monitor the experiments with high-resolution cameras and then obtain most of the data through the Digital Image Correlation method (D.I.C.). This technique accurately tracks the trajectories of the particles of the analogue material during the deformation process: this allows us to extract displacement field vectors plus the strain and shear rate distributions on the lateral side of the clay block, where the growth of new faults is best seen. Initially we run a series of isotropic experiments, i.e. experiments without discontinuities, to generate a reference model: then we introduce the discontinuities. For the extensional models they are cut at different dip angles, from horizontal to 45°-dipping, both synthetic and antithetic with respect to the master fault, whereas only horizontal discontinuities are introduced in the contractional models. Our experiments show that such discontinuities control: 1) the propagation rate of faults

  12. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  13. Sub-module Short Circuit Fault Diagnosis in Modular Multilevel Converter Based on Wavelet Transform and Adaptive Neuro Fuzzy Inference System

    DEFF Research Database (Denmark)

    Liu, Hui; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    by employing wavelet transform under different fault conditions. Then the fuzzy logic rules are automatically trained based on the fuzzified fault features to diagnose the different faults. Neither additional sensor nor the capacitor voltages are needed in the proposed method. The high accuracy, good...... for continuous operation and post-fault maintenance. In this article, a fault diagnosis technique is proposed for the short circuit fault in a modular multi-level converter sub-module using the wavelet transform and adaptive neuro fuzzy inference system. The fault features are extracted from output phase voltage...

  14. An intelligent fault identification method of rolling bearings based on LSSVM optimized by improved PSO

    Science.gov (United States)

    Xu, Hongbo; Chen, Guohua

    2013-02-01

    This paper presents an intelligent fault identification method of rolling bearings based on least squares support vector machine optimized by improved particle swarm optimization (IPSO-LSSVM). The method adopts a modified PSO algorithm to optimize the parameters of LSSVM, and then the optimized model could be established to identify the different fault patterns of rolling bearings. Firstly, original fault vibration signals are decomposed into some stationary intrinsic mode functions (IMFs) by empirical mode decomposition (EMD) method and the energy feature indexes extraction based on IMF energy entropy is analyzed in detail. Secondly, the extracted energy indexes serve as the fault feature vectors to be input to the IPSO-LSSVM classifier for identifying different fault patterns. Finally, a case study on rolling bearing fault identification demonstrates that the method can effectively enhance identification accuracy and convergence rate.
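
    Assuming the IMFs have already been obtained from an EMD implementation, the following sketch computes the IMF energy ratios and the EMD energy entropy that the abstract describes as inputs to the IPSO-LSSVM classifier; the IMFs shown are synthetic placeholders, not decompositions of real bearing data.

    ```python
    import numpy as np

    def imf_energy_features(imfs):
        """Given an array of IMFs (n_imfs x n_samples), return the normalized
        energy of each IMF and the overall EMD energy entropy."""
        energies = np.sum(np.square(imfs), axis=1)
        p = energies / np.sum(energies)                 # energy distribution
        entropy = -np.sum(p * np.log(p + 1e-12))        # EMD energy entropy
        return p, entropy

    # Illustrative IMFs (in practice produced by an EMD library from the raw signal)
    t = np.linspace(0, 1, 2048)
    imfs = np.vstack([np.sin(2 * np.pi * 120 * t),      # high-frequency component
                      0.5 * np.sin(2 * np.pi * 30 * t), # mid-frequency component
                      0.2 * t - 0.1])                   # residual-like trend
    p, entropy = imf_energy_features(imfs)
    print("energy ratios:", np.round(p, 3),
          "energy entropy:", round(float(entropy), 3))
    ```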

  15. Fault Management Assistant (FMA) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — S&K Aerospace (SKA) proposes to develop the Fault Management Assistant (FMA) to aid project managers and fault management engineers in developing better and...

  16. Automation for System Safety Analysis

    Science.gov (United States)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  17. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way of performing both upstream root cause analysis (RCA) and downstream effect or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
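
    The root cause and impact analyses described above can be illustrated with a small causal directed graph. The sketch below uses networkx; the node names and edges are hypothetical stand-ins for a real fault model, not the NASA ISHM implementation.

```python
import networkx as nx

# Hypothetical causal fault model: edges point from cause to effect.
model = nx.DiGraph()
model.add_edges_from([
    ("valve_stuck", "low_flow"),
    ("pump_degraded", "low_flow"),
    ("low_flow", "low_chamber_pressure"),
    ("low_chamber_pressure", "test_abort"),
])

observed = "low_chamber_pressure"            # event detection logic fired here

root_causes = nx.ancestors(model, observed)  # upstream root cause analysis
impacts = nx.descendants(model, observed)    # downstream impact analysis

print("candidate root causes:", root_causes)
print("predicted impacts:", impacts)
```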

  18. Improving Multiple Fault Diagnosability using Possible Conflicts

    Data.gov (United States)

    National Aeronautics and Space Administration — Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can...

  19. Automated Periodontal Diseases Classification System

    Directory of Open Access Journals (Sweden)

    Aliaa A. A. Youssif

    2012-01-01

    Full Text Available This paper presents an efficient and innovative system for automated classification of periodontal diseases. The strength of our technique lies in the fact that it incorporates knowledge from the patients' clinical data, along with the features automatically extracted from the Haematoxylin and Eosin (H&E) stained microscopic images. Our system uses image processing techniques based on color deconvolution, morphological operations, and watershed transforms for epithelium and connective tissue segmentation, nuclear segmentation, and extraction of the microscopic immunohistochemical features for the nuclei, dilated blood vessels and collagen fibers. Feedforward backpropagation artificial neural networks are used for the classification process. We report 100% classification accuracy in correctly identifying the different periodontal diseases observed in our 30-sample dataset.

  20. Investigating multiple fault rupture at the Salar del Carmen segment of the Atacama Fault System (northern Chile): Fault scarp morphology and knickpoint analysis

    Science.gov (United States)

    Ewiak, Oktawian; Victor, Pia; Oncken, Onno

    2015-02-01

    This study presents a new geomorphological approach to investigate the past activity and potential seismic hazard of upper crustal faults at the Salar del Carmen segment of the Atacama Fault System in the northern Chile forearc. Our contribution is based on the analysis of a large set of topographic profiles and allows extrapolating fault analysis from a few selected locations to distances of kilometers along strike of the fault. We detected subtle changes in the fault scarp geometry which may represent the number of paleoearthquakes experienced by the structure and extracted the cumulative and last incremental displacement along strike of the investigated scarps. We also tested the potential of knickpoints in channels crossing the fault scarps as markers for repeated fault rupture and proxies for seismic displacement. The number of paleoearthquakes derived from our analysis is 2-3, well in agreement with recent paleoseismological investigations, which suggest 2-3 earthquakes (Mw = 6.5-6.7) at the studied segments. Knickpoints record the number of events for about 55% of the analyzed profile pairs. Only a few knickpoints represent the full seismic displacement, while most retain only a fraction of the displacement. The along-strike displacement distributions suggest fault growth from the center toward the tips and linkage of individual ruptures. Our approach also improves the estimation of paleomagnitudes in case of multiple fault rupture by allowing us to quantify the last increment of displacement separately. Paleomagnitudes calculated from total segment length and the last increment of displacement (Mw = 6.5-7.1) are in agreement with paleoseismological results.

  1. The property of fault zone and fault activity of Shionohira Fault, Fukushima, Japan

    Science.gov (United States)

    Seshimo, K.; Aoki, K.; Tanaka, Y.; Niwa, M.; Kametaka, M.; Sakai, T.; Tanaka, Y.

    2015-12-01

    The April 11, 2011 Fukushima-ken Hamadori Earthquake (hereafter the 4.11 earthquake) formed co-seismic surface ruptures trending in the NNW-SSE direction in Iwaki City, Fukushima Prefecture, which were newly named the Shionohira Fault by Ishiyama et al. (2011). This earthquake was characterized by westward dipping normal slip faulting, with a maximum displacement of about 2 m (e.g., Kurosawa et al., 2012). To the south of the area, lineaments with the same trend were recognized even though no surface ruptures formed there during the earthquake. In an attempt to elucidate the differences between active and non-active segments of the fault, this report discusses the results of observations of fault outcrops along the Shionohira Fault as well as Coulomb stress calculations. Only a few outcrops expose basement rocks in both the hanging wall and footwall of the fault plane. Three of these outcrops (Kyodo-gawa, Shionohira and Betto) were selected for investigation. In addition, a fault outcrop (Nameishi-minami) located about 300 m south of the southern tip of the surface ruptures was investigated. The authors carried out observations of outcrops, polished slabs and thin sections, and performed X-ray diffraction (XRD) on the fault materials. Fault zones originating from schists were investigated at Kyodo-gawa and Betto, where a thick fault gouge was cut by a fault plane of the 4.11 earthquake in each outcrop. At Shionohira, the fault materials originating from schists were fault-bounded against (possibly Neogene) weakly deformed sandstone, and a thin fault gouge was found along the fault plane of the 4.11 earthquake. A small-scale fault zone with a thin fault gouge was observed at Nameishi-minami. According to the XRD analysis, smectite was detected in the gouges from Kyodo-gawa, Shionohira and Betto, but not in the gouge from Nameishi-minami.

  2. Causes of automotive turbocharger faults

    OpenAIRE

    Jan FILIPCZYK

    2013-01-01

    This paper presents the results of examinations of turbocharger damage. The analysis of the causes of faults in 100 engines with turbochargers of cars, buses and trucks has been carried out. The incidence and structure of turbocharged engine faults have been compared to the causes of faults of naturally aspirated engines. The cause of damage, the possibility of early detection, the time between overhauls and the impact on engine operation were analyzed for each case of fault as well. The re...

  3. Heat reveals faults

    Energy Technology Data Exchange (ETDEWEB)

    Weinreich, Bernhard [Solarschmiede GmbH, Muenchen (Germany). Engineering Dept.

    2010-07-01

    Gremlins cannot hide from the all-revealing view of a thermographic camera, whereby it makes no difference whether it is a roof-mounted system or a megawatt-sized farm. Just as diverse are the range of faults that, with the growing level of expertise, can now be detected and differentiated with even greater detail. (orig.)

  4. Network Power Fault Detection

    OpenAIRE

    Siviero, Claudio

    2013-01-01

    Network power fault detection. At least one first network device is instructed to temporarily disconnect from a power supply path of a network, and at least one characteristic of the power supply path of the network is measured at a second network device connected to the network while the at least one first network device is temporarily disconnected from the network.

  5. Tacting "To a Fault."

    Science.gov (United States)

    Baer, Donald M.

    1991-01-01

    This paper argues that behavior analysis is not technological to a fault, but rather has a faulty technology by being incomplete. The paper examines reinforcers and punishers that result from the outcomes of either (1) striving for better experimental control, or (2) inventing theories to explain why current control is imperfect. (JDD)

  6. Transformer fault diagnosis using continuous sparse autoencoder.

    Science.gov (United States)

    Wang, Lukun; Zhao, Xiaoying; Pei, Jiangnan; Tang, Gongyou

    2016-01-01

    This paper proposes a novel continuous sparse autoencoder (CSAE) which can be used for unsupervised feature learning. The CSAE adds a Gaussian stochastic unit to the activation function to extract features of nonlinear data. In this paper, the CSAE is applied to the problem of transformer fault recognition. Firstly, based on the dissolved gas analysis method, the IEC three ratios are calculated from the concentrations of dissolved gases. The three-ratio data are then normalized to reduce data singularity and improve training speed. Secondly, a deep belief network is established from two layers of CSAE and one layer of back propagation (BP) network. Thirdly, the CSAE layers are trained without supervision to obtain features, and the BP network is then trained with supervision to output the transformer fault class. Finally, experimental data from the IEC TC 10 dataset are used to illustrate the effectiveness of the presented approach. Comparative experiments clearly show that the CSAE can extract features from the original data and achieve a superior correct differentiation rate on transformer fault diagnosis. PMID:27119052
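
    A minimal sketch of the first step described above, computing the IEC three gas ratios from dissolved-gas concentrations. The sample concentrations are invented for illustration, and the normalization and CSAE/BP stages are omitted.

```python
def iec_three_ratios(ppm):
    """The three gas ratios used by the IEC ratio method, computed from
    dissolved-gas concentrations in ppm (keys are chemical formulas)."""
    r1 = ppm["C2H2"] / ppm["C2H4"]   # acetylene / ethylene
    r2 = ppm["CH4"] / ppm["H2"]      # methane / hydrogen
    r3 = ppm["C2H4"] / ppm["C2H6"]   # ethylene / ethane
    return r1, r2, r3

# Illustrative dissolved-gas sample (ppm values are made up).
sample = {"H2": 150.0, "CH4": 90.0, "C2H6": 40.0, "C2H4": 60.0, "C2H2": 5.0}
print(iec_three_ratios(sample))
```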

  7. Transformer fault diagnosis using continuous sparse autoencoder.

    Science.gov (United States)

    Wang, Lukun; Zhao, Xiaoying; Pei, Jiangnan; Tang, Gongyou

    2016-01-01

    This paper proposes a novel continuous sparse autoencoder (CSAE) which can be used for unsupervised feature learning. The CSAE adds a Gaussian stochastic unit to the activation function to extract features of nonlinear data. In this paper, the CSAE is applied to the problem of transformer fault recognition. Firstly, based on the dissolved gas analysis method, the IEC three ratios are calculated from the concentrations of dissolved gases. The three-ratio data are then normalized to reduce data singularity and improve training speed. Secondly, a deep belief network is established from two layers of CSAE and one layer of back propagation (BP) network. Thirdly, the CSAE layers are trained without supervision to obtain features, and the BP network is then trained with supervision to output the transformer fault class. Finally, experimental data from the IEC TC 10 dataset are used to illustrate the effectiveness of the presented approach. Comparative experiments clearly show that the CSAE can extract features from the original data and achieve a superior correct differentiation rate on transformer fault diagnosis.

  8. Fault-Related Sanctuaries

    Science.gov (United States)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered descriptions of natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). The arrangement of the cult sites and the content of the related myths suggest that specific points along the trace of active faults were noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  9. Fault detection and fault-tolerant control using sliding modes

    CERN Document Server

    Alwi, Halim; Tan, Chee Pin

    2011-01-01

    ""Fault Detection and Fault-tolerant Control Using Sliding Modes"" is the first text dedicated to showing the latest developments in the use of sliding-mode concepts for fault detection and isolation (FDI) and fault-tolerant control in dynamical engineering systems. It begins with an introduction to the basic concepts of sliding modes to provide a background to the field. This is followed by chapters that describe the use and design of sliding-mode observers for FDI using robust fault reconstruction. The development of a class of sliding-mode observers is described from first principles throug

  10. Study of Fault Diagnosis Method for Wind Turbine with Decision Classification Algorithms and Expert System

    Directory of Open Access Journals (Sweden)

    Feng Yongxin

    2012-09-01

    Full Text Available This paper studies a fault diagnosis method that combines decision classification algorithms with an expert system. A method of extracting diagnosis rules with the CTree software is given, and a fault diagnosis system based on CLIPS was developed. To verify the feasibility of the method, sample data were first obtained by simulating faults of a direct-drive wind turbine and of a gearbox; the diagnosis rules were then extracted with the CTree software; finally, the proposed fault diagnosis system and the extracted rules were used to diagnose the simulated faults. Test results showed misdiagnosis rates within 5% in both cases, verifying the feasibility of the method.

  11. Fault Diagnosis of Batch Reactor Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Sujatha Subramanian

    2014-01-01

    Full Text Available Fault diagnosis of a batch reactor gives early detection of faults and minimizes the risk of thermal runaway. It provides superior performance and helps to improve safety and consistency, and it has become more vital in this technical era. In this paper, a support vector machine (SVM) is used to estimate the heat release (Qr) of the batch reactor under both normal and faulty conditions. The signature of the residual, which is obtained from the difference between nominal and estimated faulty Qr values, characterizes the different natures of faults occurring in the batch reactor. Appropriate statistical and geometric features are extracted from the residual signature, and the total number of features is reduced using the SVM attribute selection filter and principal component analysis (PCA) techniques. Artificial neural network (ANN) classifiers such as the multilayer perceptron (MLP), radial basis function (RBF), and Bayes net are used to classify the different types of faults from the reduced features. The comparative study shows that the proposed method, using a limited number of features extracted from only one estimated parameter (Qr), is more efficient and faster at diagnosing the typical faults.
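
    A hedged sketch of the feature-reduction and ANN-classification stage described above, using scikit-learn's PCA followed by an MLP classifier. The array shapes and labels are synthetic placeholders, and the sketch is not the authors' exact MLP/RBF/Bayes net comparison.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical residual-signature features (rows) and fault-class labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))          # statistical/geometric features
y = rng.integers(0, 4, size=200)            # four fault classes (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(PCA(n_components=5),    # reduce the feature set
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```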

  12. Simultaneous-Fault Diagnosis of Gearboxes Using Probabilistic Committee Machine.

    Science.gov (United States)

    Zhong, Jian-Hua; Wong, Pak Kin; Yang, Zhi-Xin

    2016-01-01

    This study combines signal de-noising, feature extraction, two pairwise-coupled relevance vector machines (PCRVMs) and particle swarm optimization (PSO) for parameter optimization to form an intelligent diagnostic framework for gearbox fault detection. Firstly, the noises of sensor signals are de-noised by using the wavelet threshold method to lower the noise level. Then, the Hilbert-Huang transform (HHT) and energy pattern calculation are applied to extract the fault features from de-noised signals. After that, an eleven-dimension vector, which consists of the energies of nine intrinsic mode functions (IMFs), maximum value of HHT marginal spectrum and its corresponding frequency component, is obtained to represent the features of each gearbox fault. The two PCRVMs serve as two different fault detection committee members, and they are trained by using vibration and sound signals, respectively. The individual diagnostic result from each committee member is then combined by applying a new probabilistic ensemble method, which can improve the overall diagnostic accuracy and increase the number of detectable faults as compared to individual classifiers acting alone. The effectiveness of the proposed framework is experimentally verified by using test cases. The experimental results show the proposed framework is superior to existing single classifiers in terms of diagnostic accuracies for both single- and simultaneous-faults in the gearbox. PMID:26848665

  13. The extraction of near-field deformation features along the faulted zone based on PS-InSAR survey

    Institute of Scientific and Technical Information of China (English)

    李凌婧; 姚鑫; 张永双; 王桂杰; 郭长宝

    2015-01-01

    Based on PS-InSAR (Persistent Scatterer Interferometric Synthetic Aperture Radar) technology and using L-band data, the authors conducted a survey of near-field deformation around the Bamei-Daofu section of the Xianshuihe active fault from 2007 to 2011 and, based on analysis in combination with other materials, inferred some complex fault near-field deformation information: ① the deformation velocity of the northern section is larger than that of the southern section, and the velocities on the two sides of the fault differ somewhat: the velocity of the SW wall is larger than that of the NE wall, the velocity difference in the far field is more significant, and the velocity difference in the near field is feeble; ② in the area close to the active fault zone, the PS (Persistent Scatterer) point deformation velocities are mainly comparatively small negative and positive values, reflecting surface ascent and suggesting that these locations consist mainly of wetland, groundwater exposure points, banks and gullies. It is inferred that these phenomena result from surface bulging and deformation caused by climate warming, glacier melting and the rise of the groundwater level, from the uplift tendency of wetland caused by seasonal frost heaving, and from a certain expansibility of the cataclastic rock and soil near the fault zone; ③ the uplift deformation around the Zhonggu-Bamei section results from thrust movement near the Xianshuihe fault, and the ductile shear zone absorbs and coordinates the deformation of the entire block; ④ high-deformation PS blocks reflect slope gravity deformation, especially in the Daofu-Shonglinkou and Qianning basin-Longdengba sections, revealing the geohazard effects of the fault; ⑤ the precise PS-InSAR results show that the deformation of the fault is complex and differs remarkably between sections, periods and tectonic locations, so the movement cannot simply be considered an overall translation or elevation-subsidence with the fault zone as the boundary.

  14. Landscape response to normal fault growth and linkage in the Southern Apennines, Italy.

    Science.gov (United States)

    Roda-Boluda, Duna; Whittaker, Alex

    2016-04-01

    It is now well-established that landscape can record spatial and temporal variations in tectonic rates. However, decoding this information to extract detailed histories of fault growth is often a complex problem that requires careful integration of tectonic and geomorphic data sets. Here, we present new data addressing both normal fault evolution and coupled landscape response for two normal faults in the Southern Apennines: the Vallo di Diano and East Agri faults. By integrating published constraints with new data, we show that these faults have total throws of up to 2100 m, and Holocene throw rates of up to 1 mm/yr at their maximum. We demonstrate that geomorphology is effectively recording tectonics, with relief, channel and catchment slopes varying along fault strike as normal fault activity does. Therefore, valuable information about fault growth and interaction can be extracted from their geomorphic expression. We use the spatial distribution of knickpoints on the footwall channels to infer two episodes of base level change, which can be associated with distinct fault interaction events. From our detailed fault throw profiles, we reconstruct the amount of throw accumulated after each of these events, and the segments involved in each, and we use slip rate enhancement factors derived from fault interaction theory to estimate the magnitude of the tectonic perturbation in each case. From this approach, we are able to reconstruct pre-linkage throw rates, and we estimate that fault linkage events likely took place 0.7 ± 0.2 Ma and 1.9 ± 0.6 Ma in the Vallo di Diano fault, and 1.1 ± 0.1 and 2.3 ± 0.9 Ma in the East Agri fault. Our study suggests that both faults started their activity at 3.6 ± 0.5 Ma. These fault linkage scenarios are consistent with the knickpoint heights, and may relate to soft-linkage interaction with the Southern Apennines normal fault array, the existence of which has been the subject of considerable debate. Our combined geomorphic and

  15. Fault tolerant architecture for artificial olfactory system

    International Nuclear Information System (INIS)

    In this paper, to cover and mask the faults that occur in the sensing unit of an artificial olfactory system, a novel architecture is offered. The proposed architecture is able to tolerate failures in the sensors of the array and the faults that occur are masked. The proposed architecture for extracting the correct results from the output of the sensors can provide the quality of service for generated data from the sensor array. The results of various evaluations and analysis proved that the proposed architecture has acceptable performance in comparison with the classic form of the sensor array in gas identification. According to the results, achieving a high odor discrimination based on the suggested architecture is possible. (paper)

  16. Mechanical Fault Diagnosis Using Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    LI Ling-jun; ZHANG Zhou-suo; HE Zheng-jia

    2003-01-01

    The Support Vector Machine (SVM) is a machine learning algorithm based on Statistical Learning Theory (SLT), which can achieve good classification performance even with few learning samples. SVM represents a new approach to pattern classification and has been shown to be particularly successful in fields such as image identification and face recognition. It also provides a new method for developing intelligent fault diagnosis. This paper presents an SVM-based approach for fault diagnosis of rolling bearings. Experiments with vibration signals of bearings are conducted. The vibration signals acquired from the bearings are used directly in the calculation, without preprocessing to extract features. Compared with methods based on Artificial Neural Networks (ANN), the SVM-based method has desirable advantages. It is applicable to on-line diagnosis of mechanical systems.
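
    A minimal scikit-learn sketch of the SVM-based diagnosis described above, feeding fixed-length vibration segments directly to an SVC classifier without feature extraction. The synthetic signals stand in for real bearing data, and the kernel parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Toy stand-in for raw vibration segments: each row is one fixed-length segment.
rng = np.random.default_rng(2)
normal = rng.standard_normal((60, 256))
faulty = rng.standard_normal((60, 256)) + 0.5 * np.sin(np.linspace(0, 50, 256))

X = np.vstack([normal, faulty])
y = np.array([0] * 60 + [1] * 60)           # 0 = healthy, 1 = faulty

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```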

  17. Fault tolerant architecture for artificial olfactory system

    Science.gov (United States)

    Lotfivand, Nasser; Nizar Hamidon, Mohd; Abdolzadeh, Vida

    2015-05-01

    In this paper, to cover and mask the faults that occur in the sensing unit of an artificial olfactory system, a novel architecture is offered. The proposed architecture is able to tolerate failures in the sensors of the array and the faults that occur are masked. The proposed architecture for extracting the correct results from the output of the sensors can provide the quality of service for generated data from the sensor array. The results of various evaluations and analysis proved that the proposed architecture has acceptable performance in comparison with the classic form of the sensor array in gas identification. According to the results, achieving a high odor discrimination based on the suggested architecture is possible.

  18. Automatic software fault localization based on ar tificial bee colony

    Institute of Scientific and Technical Information of China (English)

    Linzhi Huang∗; Jun Ai

    2015-01-01

    Software debugging accounts for a vast majority of the financial and time costs in software development and maintenance. Thus, approaches to software fault localization that can help automate the debugging process have become a hot topic in the field of software engineering. Given the great demand for software fault localization, an approach based on the artificial bee colony (ABC) algorithm is proposed and integrated with other related techniques. In this process, the source program is initially instrumented after analyzing the dependence information. The test case sets are then compiled and run on the instrumented program, and the execution results are input to the ABC algorithm. The algorithm can determine the largest fitness value and best food source by calculating the average fitness of the employed bees in the iterative process. The program unit with the highest suspicion score corresponding to the best test case set is regarded as the final fault localization. Experiments are conducted with the TCAS program in the Siemens suite. Results demonstrate that the proposed fault localization method is effective and efficient. The ABC algorithm can efficiently avoid the local optimum and ensure the validity of the fault location to a larger extent.

  19. Analysis of fault using microcomputer protection by symmetrical component method

    Directory of Open Access Journals (Sweden)

    Mr. Ashish Choubey

    2012-09-01

    Full Text Available To enhance power supply reliability for user terminals in a distribution system, to avoid repeated interference by a fault, and to rapidly complete automatic identification, positioning, automatic fault isolation and network reconfiguration until supply is restored to the non-faulted sections, a microprocessor-based relay protection device has been developed. As fault-component theory is widely used in microcomputer protection, and the fault component exists in the fault-component network, it is necessary to build up the fault-component network when a short-circuit fault emerges and to draw the current and voltage component phasor diagram at the fault point. In order to understand microcomputer protection based on the symmetrical-component principle, we obtained the sequence currents and sequence voltages according to the concept of symmetrical components. Distribution lines supply power directly to users, so the reliability of their operation determines the quality and level of the electricity supply. In recent decades, thanks to the tireless efforts of scientists and technicians, the application level of relay protection technology and equipment has been greatly improved, but currently produced domestic computer hardware and protection devices are still outdated systems. Software development suffers from maintenance difficulties and short survival times. The factory automation system interface functions are weak, and the network communication cannot meet the actual requirements. The protection principle configuration and device manufacturing process remain to be improved.
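
    The symmetrical-component calculation mentioned above can be written compactly with the 120-degree rotation operator a = e^(j2π/3). The sketch below computes the zero-, positive- and negative-sequence components of three phase phasors; the example phasors are invented for illustration.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    """Zero-, positive- and negative-sequence components of phasors Va, Vb, Vc."""
    v0 = (va + vb + vc) / 3              # zero sequence
    v1 = (va + a * vb + a**2 * vc) / 3   # positive sequence
    v2 = (va + a**2 * vb + a * vc) / 3   # negative sequence
    return v0, v1, v2

# Example: an unbalanced set of per-unit phase voltage phasors.
va = 1.0 + 0j
vb = 0.8 * np.exp(-1j * np.deg2rad(125))
vc = 0.9 * np.exp(1j * np.deg2rad(118))
print(symmetrical_components(va, vb, vc))
```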

  20. Bearing fault diagnosis based on spectrum images of vibration signals

    International Nuclear Information System (INIS)

    Bearing fault diagnosis has been a challenge in the condition monitoring of rotating machinery, and it is receiving more and more attention. Conventional fault diagnosis methods usually extract features from the waveforms or spectra of vibration signals in order to correctly classify faults. In this paper, a novel feature in the form of images is presented, namely the analysis of spectrum images of vibration signals. The spectrum images are obtained simply by fast Fourier transformation. These images are processed with two-dimensional principal component analysis (2DPCA) to reduce their dimensions, and a minimum distance method is then applied to classify the bearing faults. The effectiveness of the proposed method is verified with experimental data. (paper)
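
    A simplified sketch of the idea described above: magnitude spectra are computed by FFT, reduced in dimension, and classified with a minimum-distance (nearest-centroid) rule. Ordinary PCA is substituted here for the paper's 2DPCA, and the vibration segments are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(3)

def spectrum(segment):
    """Magnitude spectrum of one vibration segment (one row of the 'spectrum image')."""
    return np.abs(np.fft.rfft(segment))

# Toy vibration segments for two bearing conditions.
healthy = rng.standard_normal((50, 512))
faulty = rng.standard_normal((50, 512)) + np.sin(2 * np.pi * 60 * np.arange(512) / 2048)

X = np.array([spectrum(s) for s in np.vstack([healthy, faulty])])
y = np.array([0] * 50 + [1] * 50)

Z = PCA(n_components=10).fit_transform(X)      # dimension reduction
clf = NearestCentroid().fit(Z, y)              # minimum-distance classification
print("training accuracy:", clf.score(Z, y))
```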

  1. Nonlinear fault diagnosis method based on kernel principal component analysis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu; Zhang Chunkai; Shao Huihe

    2005-01-01

    To ensure that a system runs in working order, detection and diagnosis of faults play an important role in industrial processes. This paper proposes a nonlinear fault diagnosis method based on kernel principal component analysis (KPCA). In the proposed method, using the essential information of the nonlinear system extracted by KPCA, a KPCA model of the nonlinear system under normal working conditions is constructed. New data are then projected onto the KPCA model. When new data are incompatible with the KPCA model, it can be concluded that the nonlinear system is out of its normal working condition. The proposed method was applied to fault diagnosis on rolling bearings. Simulation results show that the proposed method provides an effective means of fault detection and diagnosis for nonlinear systems.
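
    A hedged sketch of KPCA-based fault detection in the spirit of the method above: a KPCA model is fitted on normal-condition data, and new samples are flagged when a simple monitoring statistic on their projections exceeds an empirical control limit. This is a simplified stand-in for the usual T²/SPE statistics, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)

# Normal-condition training data and new data containing a fault (shifted mean).
X_normal = rng.standard_normal((300, 8))
X_new = np.vstack([rng.standard_normal((20, 8)),
                   rng.standard_normal((20, 8)) + 2.0])

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(X_normal)

scores_train = kpca.transform(X_normal)
mu, sigma = scores_train.mean(axis=0), scores_train.std(axis=0)

def statistic(X):
    """Squared standardized distance of the KPCA scores from the normal data."""
    z = (kpca.transform(X) - mu) / sigma
    return np.sum(z ** 2, axis=1)

threshold = np.percentile(statistic(X_normal), 99)   # empirical control limit
print("flagged samples:", np.where(statistic(X_new) > threshold)[0])
```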

  2. Sensor fault diagnosis with a probabilistic decision process

    Science.gov (United States)

    Sharifi, Reza; Langari, Reza

    2013-01-01

    In this paper a probabilistic approach to sensor fault diagnosis is presented. The proposed method is applicable to systems whose dynamics can be approximated with only a few active states, especially in process control, where the dynamics are usually relatively slow. Unlike most existing probabilistic approaches to fault diagnosis, which are based on Bayesian belief networks, in this approach the probabilistic model is extracted directly from a parity equation. The relevant parity equation can be found using a model of the system or through principal component analysis of data measured from the system. In addition, a sensor detectability index is introduced that specifies the level of detectability of sensor faults in a set of analytically redundant sensors. This index depends only on the internal relationships of the system variables and the noise level. The method is tested on a model of the Tennessee Eastman process, and the results show fast and reliable detection of faults in the detectable sensors.

  3. Road Features Extraction Using Terrestrial Mobile Laser Scanning System

    OpenAIRE

    Kumar, Pankaj

    2012-01-01

    In this thesis, we present the experimental research and key contributions we have made in the field of road feature extraction from LiDAR data. We detail the development of three automated algorithms for the extraction of road features from terrestrial mobile LiDAR data. LiDAR data is a rich source of 3D geo-referenced information whose volume and scale have inhibited the development of automated algorithms. Automated feature extraction algorithms enable the wider geospatia...

  4. Noise resistant time frequency analysis and application in fault diagnosis of rolling element bearings

    Science.gov (United States)

    Dong, Guangming; Chen, Jin

    2012-11-01

    Rolling element bearings are frequently used in rotating machinery, but they are also fragile mechanical parts. Hence, accurate condition monitoring and fault diagnosis for them play an important role in ensuring the reliable running of machinery. Timely diagnosis of early bearing faults is desirable, but the early fault signatures are easily submerged in noise. In this paper, the Wigner-Ville spectrum based on cyclic spectral density (CSWVS for short) is studied; it is able to represent cyclostationary signals while reducing the masking effect of additive stationary noise. Both simulations and experiments show that the CSWVS is a noise-resistant time-frequency analysis technique for extracting bearing fault patterns when bearing signals are under the influence of random noise and gear vibrations. The 3-D feature of the CSWVS proves useful in extracting the bearing fault pattern from gearbox vibration signals, where bearing signals are affected by gear meshing vibration and noise. In addition, the CSWVS exploits the second-order cyclostationarity of the vibration signals produced by a distributed bearing fault and clearly extracts its fault features, which cannot be extracted by envelope analysis. To quantitatively describe the extent of a bearing fault, the Renyi information encoded in the time-frequency diagram of the CSWVS is studied. It is shown to be a more sensitive index of bearing performance degradation than the spectral entropy (SE), the squared envelope spectrum entropy (SESE) and the Renyi information of the WVD and PWVD, especially when the SNR is low.

  5. Using the EMD method to determine fault criterion for medium-low pressure gas regulators

    Science.gov (United States)

    Hao, Xuejun; Liu, Qiang; Yang, Guobin; Du, Yi

    2015-11-01

    By extracting outlet pressure data from gas regulators, this paper uses the EMD toolbox of the MATLAB software, which can perform data decomposition and the Hilbert-Huang transform, to find the patterns associated with fault data. On this basis, a fault criterion for medium-low pressure gas regulators can be established.

  6. Research on Gear-broken Fault Diagnosis in a Tank Gearbox

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A fault diagnosis method for the working-position gear in a tank gearbox is put forward. It is based on simulating the fault of the working-position gear in an actual tank, extracting the envelope of the vibration signal by the Hilbert-transform amplitude demodulation method, and zooming in on the low-frequency band of the envelope signal.
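
    A minimal sketch of Hilbert-transform amplitude demodulation as described above: the analytic-signal envelope is computed with scipy.signal.hilbert and its low-frequency spectrum is inspected for the fault-related modulation. The simulated gear signal, sampling rate and modulation frequency are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                              # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)

# Toy gear-mesh signal: a carrier amplitude-modulated by a crack-induced
# impulse train at the shaft rotation frequency (20 Hz here).
carrier = np.sin(2 * np.pi * 1000 * t)
modulation = 1 + 0.5 * (np.sin(2 * np.pi * 20 * t) > 0.95)
signal = modulation * carrier + 0.1 * np.random.default_rng(5).standard_normal(t.size)

envelope = np.abs(hilbert(signal))       # amplitude demodulation
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

# The fault signature shows up as peaks at the modulation frequency and its harmonics.
print("dominant envelope frequency:", freqs[np.argmax(env_spectrum)])
```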

  7. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they are not always successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  8. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and support

  9. Work and Programmable Automation.

    Science.gov (United States)

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  10. Library Automation Style Guide.

    Science.gov (United States)

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  11. Automation in immunohematology.

    Science.gov (United States)

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-07-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and the archiving of results are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever increasing workload. This article discusses the various issues involved in the process.

  12. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and the archiving of results are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever increasing workload. This article discusses the various issues involved in the process.

  13. Automation in immunohematology.

    Science.gov (United States)

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-07-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and the archiving of results are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever increasing workload. This article discusses the various issues involved in the process. PMID:22988378

  14. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  15. Advances in inspection automation

    Science.gov (United States)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes the complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag and drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  16. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  17. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    Science.gov (United States)

    Padilla, Peter A.

    1991-03-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  18. Fault structure and deformation rates at the Lastros-Sfaka Graben, Crete

    Science.gov (United States)

    Mason, J.; Schneiderwind, S.; Pallikarakis, A.; Wiatr, T.; Mechernich, S.; Papanikolaou, I.; Reicherter, K.

    2016-06-01

    The Lastros and Sfaka faults have an antithetic relationship and form a ca. 2 km wide graben within the Ierapetra fault zone in eastern Crete. Both faults have impressive bedrock fault scarps many metres in height which form prominent features within the landscape. t-LiDAR investigations undertaken on the Lastros fault are used to accurately determine vertical displacements along a ca. 1.3 km long scanned segment. Analyses show that previous estimations of post glacial slip rate are too high because there are many areas along strike where the scarp is exhumed by natural erosion and/or anthropogenic activity. In areas not affected by erosion there is mean scarp height of 9.4 m. This leads to a slip rate of 0.69 ± 0.15 mm/a using 15 ± 3 ka for scarp exhumation. Using empirical calculations the expected earthquake magnitudes and displacement per event are discussed based on our observations. Trenching investigations on the Sfaka fault identify different generations of fissure fills. Retrodeformation analyses and 14C dating of the fill material indicate at least four events dating back to 16,055 ± 215 cal BP, with the last event having occurred soon after 6102 ± 113 cal BP. The Lastros fault is likely the controlling fault in the graben, and ruptures on the Lastros fault will sympathetically affect the Sfaka fault, which merges with the Lastros fault at a depth of 2.4 km. The extracted dates from the Sfaka fault fissure fills therefore either represent activity on the Lastros fault, assuming they formed coseismically, or accommodation events. Cross sections show that the finite throw is limited to around 300 m, and the derived slip rate for the Lastros fault therefore indicates that both faults are relatively young having initiated 435 ± 120 ka.

  19. A solution for applying IEC 61499 function blocks in the development of substation automation systems

    OpenAIRE

    Vlad, Valentin; Popa, Cezar D.; Turcu, Corneliu O.; Buzduga, Corneliu

    2015-01-01

    This paper presents a solution for applying IEC 61499 function blocks along with IEC 61850 specifications in modeling and implementing control applications for substations automation. The IEC 61499 artifacts are used for structuring the control logic, while the IEC 61850 concepts for communication and information exchange between the automation devices. The proposed control architecture was implemented and validated in a simple fault protection scenario with simulated power equipment.

  20. Managing Fault Management Development

    Science.gov (United States)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  1. Study of Intelligent Photovoltaic System Fault Diagnostic Scheme Based on Chaotic Signal Synchronization

    Directory of Open Access Journals (Sweden)

    Chin-Tsung Hsieh

    2013-01-01

    Full Text Available As a photovoltaic system consists of many equipment components, manual inspection is very costly. This study proposes photovoltaic system fault diagnosis based on chaotic signal synchronization. First, MATLAB was used to simulate the fault conditions of the solar system, and maximum power point tracking (MPPT) was used to ensure stable system power and to capture and record the system fault feature signals. The dynamic errors of the various fault signals were extracted by chaotic signal synchronization, and the dynamic error data of the various fault signals were recorded completely. In the photovoltaic system, the captured output voltage signal was used as the characteristic value for fault recognition, and extension theory was used to create the joint domain and classical domain of the various fault conditions according to the collected feature data. The matter-element model of extension engineering was constructed. Finally, the whole fault diagnosis system only needs to capture the voltage signal of the solar photovoltaic system in order to determine the exact fault condition effectively and rapidly. The proposed fault diagnoser can be implemented as an embedded system and combined with a ZigBee wireless network module in the future, thus reducing labor cost and building a complete portable renewable energy system fault diagnoser.

  2. Automated addition of Chelex solution to tubes containing trace items

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Thomas Møller; Hansen, Anders Johannes;

    2011-01-01

    Extraction of DNA from trace items for forensic genetic DNA typing using a manual Chelex based extraction protocol requires addition of Chelex solution to sample tubes containing trace items. Automated of addition of Chelex solution may be hampered by high viscosity of the solution and fast...

  3. Literature classification for semi-automated updating of biological knowledgebases

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Winther, Ole;

    2013-01-01

    abstracts yielded classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature. Conclusion: We here propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining and...

  4. A PC based time domain reflectometer for space station cable fault isolation

    Science.gov (United States)

    Pham, Michael; McClean, Marty; Hossain, Sabbir; Vo, Peter; Kouns, Ken

    1994-01-01

    Significant problems are faced by astronauts on orbit in the Space Station when trying to locate electrical faults in multi-segment avionics and communication cables. These problems necessitate the development of an automated portable device that will detect and locate cable faults using the pulse-echo technique known as Time Domain Reflectometry. A breadboard time domain reflectometer (TDR) circuit board was designed and developed at the NASA-JSC. The TDR board works in conjunction with a GRiD lap-top computer to automate the fault detection and isolation process. A software program was written to automatically display the nature and location of any possible faults. The breadboard system can isolate open circuit and short circuit faults within two feet in a typical space station cable configuration. Follow-on efforts planned for 1994 will produce a compact, portable prototype Space Station TDR capable of automated switching in multi-conductor cables for high fidelity evaluation. This device has many possible commercial applications, including commercial and military aircraft avionics, cable TV, telephone, communication, information and computer network systems. This paper describes the principle of time domain reflectometry and the methodology for on-orbit avionics utility distribution system repair, utilizing the newly developed device called the Space Station Time Domain Reflectometer (SSTDR).
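
    The pulse-echo principle described above reduces to a one-line distance estimate: the fault lies at half the measured round-trip delay times the propagation speed in the cable. A small sketch is given below; the velocity factor is an assumed typical value, not a Space Station cable specification.

```python
# Pulse-echo distance estimate used in time domain reflectometry:
# the fault distance is half the round-trip delay times the propagation speed.
C = 299_792_458.0          # speed of light in vacuum, m/s

def fault_distance(delay_s, velocity_factor=0.7):
    """Distance to the impedance discontinuity for a measured round-trip delay.

    velocity_factor is the cable's propagation velocity as a fraction of c
    (roughly 0.6-0.8 for typical cables; an assumed value here)."""
    return velocity_factor * C * delay_s / 2.0

# Example: a reflection observed 25 ns after the incident pulse.
print(f"estimated fault location: {fault_distance(25e-9):.2f} m")
```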

  5. Sparsity-based algorithm for detecting faults in rotating machines

    Science.gov (United States)

    He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.

    2016-05-01

    This paper addresses the detection of periodic transients in vibration signals so as to detect faults in rotating machines. For this purpose, we present a method to estimate periodic-group-sparse signals in noise. The method is based on the formulation of a convex optimization problem. A fast iterative algorithm is given for its solution. A simulated signal is formulated to verify the performance of the proposed approach for periodic feature extraction. The detection performance of comparative methods is compared with that of the proposed approach via RMSE values and receiver operating characteristic (ROC) curves. Finally, the proposed approach is applied to single fault diagnosis of a locomotive bearing and compound faults diagnosis of motor bearings. The processed results show that the proposed approach can effectively detect and extract the useful features of bearing outer race and inner race defect.
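
    As a toy illustration of the sparsity idea described above (not the authors' periodic-group-sparse convex program), the sketch below denoises a signal containing periodic transients by soft thresholding, which is the exact solution of the l1-regularized denoising problem; the signal, transient amplitude and threshold are invented.

```python
import numpy as np

def soft_threshold(y, lam):
    """Solve min_x 0.5*||y - x||^2 + lam*||x||_1 element-wise (soft thresholding)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(6)
n = 2000
x_true = np.zeros(n)
x_true[::200] = 8.0                      # periodic transients (every 200 samples)
y = x_true + rng.standard_normal(n)      # noisy observation

x_hat = soft_threshold(y, lam=3.0)       # sparse estimate: noise is zeroed out
print("samples flagged as transients:", np.flatnonzero(np.abs(x_hat) > 1.0))
```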

  6. Cyclostationary Analysis for Gearbox and Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhipeng Feng

    2015-01-01

    Full Text Available Gearbox and rolling element bearing vibration signals exhibit modulation and are therefore cyclostationary. The cyclic correlation and cyclic spectrum are thus well suited to analyzing their modulation characteristics and thereby extracting gearbox and bearing fault symptoms. In order to thoroughly understand the cyclostationarity of gearbox and bearing vibrations, explicit expressions of the cyclic correlation and cyclic spectrum for amplitude-modulation and frequency-modulation (AM-FM) signals are derived, and their properties are summarized. The theoretical derivations are illustrated and validated by analyses of experimental gearbox and bearing signals, and the modulation characteristics caused by gearbox and bearing faults are extracted. In faulty gearbox and bearing cases, more peaks appear in the zero-lag slice of the cyclic correlation and in the cyclic spectrum than in healthy cases. The gear and bearing faults are detected by checking the presence, or monitoring the magnitude change, of peaks in the cyclic correlation and cyclic spectrum, and are located according to the peak cyclic frequency locations or the sideband frequency spacing.

  7. Research on the Comprehensive Demodulation of Gear Tooth Crack Early Fault

    Institute of Scientific and Technical Information of China (English)

    CUI Lingli; DING Fang; GAO Lixin; ZHANG Jianyu

    2006-01-01

    The composition of a gear vibration signal is very complex. When a localized tooth defect such as a tooth crack is present, the engagement of the cracked tooth adds an impulsive change with comparatively low energy to the gear mesh signal and the background noise. This paper presents a new comprehensive demodulation method that combines amplitude envelope demodulation and phase demodulation to extract early gear tooth crack faults. A mathematical model of the gear vibration signal containing a crack fault is put forward. Simulation results based on this model show that the new comprehensive demodulation method is more effective at finding the fault and judging its severity than conventional amplitude demodulation alone.

  8. An underwater ship fault detection method based on Sonar image processing

    Science.gov (United States)

    Hong, Shi; Fang-jian, Shan; Bo, Cong; Wei, Qiu

    2016-02-01

    To address underwater ship fault detection while sailing on the ocean, especially in muddy seas with poor visibility, a fault detection method assisted by sonar image processing is proposed. Firstly, sonar image denoising is performed using a pulse-coupled neural network (PCNN) algorithm; secondly, edge features of the denoised image are extracted by morphological wavelet transform; finally, regions of interest are obtained using a tracking method, namely fault area mapping. The simulation results presented here prove the feasibility and effectiveness of sonar image processing in an underwater fault detection system.

  9. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure. The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management.

  10. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-26

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.
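    The core hybrid idea, a model of expected behaviour plus a statistical test on the residuals, can be sketched as follows. Here a data-driven regressor stands in for the building energy model, and the threshold rule is a simple multiple of the residual standard deviation; the feature names, the regressor and the threshold are illustrative assumptions, not the NREL algorithms evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_baseline(X_train, y_train):
    """Fit a surrogate of expected whole-building power from e.g. outdoor
    temperature, hour of day and an occupancy flag (assumed features)."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    return model

def detect_fault(model, X_new, y_measured, k=3.0, window=24):
    """Flag samples whose measured power deviates from the model prediction
    by more than k standard deviations of an assumed fault-free window."""
    residual = y_measured - model.predict(X_new)
    sigma = np.std(residual[:window])
    flags = np.abs(residual) > k * sigma
    return residual, flags
```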

  11. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Heaney, Michael; Jin, Xin; Robertson, Joseph; Cheung, Howard; Elmore, Ryan; Henze, Gregor

    2016-08-01

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  12. Application of new magnetic beads made in China in DNA extraction from forensic biological samples with an automation workstation

    Institute of Scientific and Technical Information of China (English)

    杨电; 刘宏; 刘超; 武清毓; 李越; 刘长晖; 马文丽

    2009-01-01

    Objective: To study the application of new magnetic beads made in China for DNA extraction from forensic biological samples with an automation workstation. Methods: DNA was extracted from common forensic biological samples using the QIAGEN BioRobot Universal System and the new magnetic beads made in China, and then typed with the Identifiler system on an ABI 3130XL Genetic Analyzer. 210 of these samples were also quantitated on an ABI 7500 Real-Time System. Results: A total of 9,100 genomic DNA samples were extracted from various forensic biological samples using the new magnetic beads and the automation workstation, and most of them were successfully typed for STR analysis. Among these samples, oral swabs and muscle had the highest STR typing success rate (100%), while touched-cell samples had the lowest (50.0%). Conclusion: The new magnetic beads made in China, combined with an automation workstation, can be applied to DNA extraction from most common forensic biological samples.

  13. USING MUTATION IN FAULT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    Chenglong Sun

    2016-05-01

    Full Text Available Fault localization is time-consuming and difficult, which makes it the bottleneck of the debugging process. To help facilitate this task, many fault localization techniques exist that help narrow down the region of suspicious code in a program. However, better accuracy in fault localization comes at a heavy computational cost, so techniques that locate faults effectively also tend to respond slowly. In this paper, we promote the use of pre-computing to shift the time-intensive computations to the idle periods of the coding phase, in order to speed up such techniques and achieve both low cost and high accuracy. We raise the research problems of finding suitable techniques that can be pre-computed and adapting them to the pre-computing paradigm in a continuous integration environment. Further, we use an existing fault localization technique to demonstrate our research exploration, and discuss the visions and challenges of the related methodologies.
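    For context, a common family of techniques that such pre-computing could target is spectrum-based fault localization; the sketch below computes the standard Ochiai suspiciousness score from a test coverage matrix. It is used here purely as an illustration of the kind of computation involved, not as the specific technique the paper evaluates.

```python
import numpy as np

def ochiai_suspiciousness(coverage, results):
    """coverage: (tests x statements) boolean matrix; results: True where a test failed.
    Returns one suspiciousness score per statement (higher = more suspicious)."""
    failed = np.asarray(results, dtype=bool)
    cov = np.asarray(coverage, dtype=bool)
    ef = cov[failed].sum(axis=0)              # failing tests that cover the statement
    ep = cov[~failed].sum(axis=0)             # passing tests that cover the statement
    total_failed = failed.sum()
    denom = np.sqrt(total_failed * (ef + ep))
    with np.errstate(divide="ignore", invalid="ignore"):
        score = np.where(denom > 0, ef / denom, 0.0)
    return score                              # rank statements by descending score
```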

  14. Fault Tolerant Wind Farm Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2013-01-01

    Most faults in wind turbines are dealt with best at the wind turbine control level. However, some faults are better dealt with at the wind farm control level, if the wind turbine is located in a wind farm. In this paper, a benchmark model for fault detection and isolation and fault tolerant control of wind turbines, implemented at the wind farm control level, is presented. The benchmark model includes a small wind farm of nine wind turbines, based on simple models of the wind turbines as well as the wind and the interactions between wind turbines in the wind farm. The model includes wind and power reference scenarios as well as three relevant fault scenarios. This benchmark model is used in an international competition dealing with wind farm fault detection and isolation and fault tolerant control.

  15. Fault Diagnosis for Rolling Bearing under Variable Conditions Based on Image Recognition

    Directory of Open Access Journals (Sweden)

    Bo Zhou

    2016-01-01

    Full Text Available Rolling bearing faults often lead to electromechanical system failure due to high rotating speeds and complex working conditions. Recently, a large number of fault diagnosis studies for rolling bearings based on vibration data have been reported. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper proposes a fault diagnosis method based on image recognition for rolling bearings to realize fault classification under variable working conditions. The proposed method includes the following steps. First, the vibration signal data are transformed into a two-dimensional image based on the recurrence plot (RP) technique. Next, a popular feature extraction method which has been widely used in the image field, the scale invariant feature transform (SIFT), is employed to extract fault features from the two-dimensional RP and subsequently generate a 128-dimensional feature vector. Third, due to the redundancy of the high-dimensional features, kernel principal component analysis is utilized to reduce the feature dimensionality. Finally, a probabilistic neural network classifier is used to perform fault diagnosis. Verification experiment results demonstrate the effectiveness of the proposed fault diagnosis method for rolling bearings under variable conditions, thereby providing a promising approach to fault diagnosis for rolling bearings.
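    The first step, turning a 1-D vibration segment into a recurrence plot image, can be sketched as below via time-delay embedding; this image would then feed the SIFT/KPCA/classifier stages. The embedding dimension, delay and threshold heuristic are placeholder assumptions, not the values used in the paper.

```python
import numpy as np

def recurrence_plot(signal, dim=3, delay=2, eps=None):
    """Binary recurrence plot of a 1-D signal via time-delay embedding."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * delay
    states = np.stack([signal[i * delay:i * delay + n] for i in range(dim)], axis=1)
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * dists.max()                 # simple heuristic threshold
    return (dists <= eps).astype(np.uint8)      # n x n recurrence image
```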

  16. Time-frequency atoms-driven support vector machine method for bearings incipient fault diagnosis

    Science.gov (United States)

    Liu, Ruonan; Yang, Boyuan; Zhang, Xiaoli; Wang, Shibin; Chen, Xuefeng

    2016-06-01

    Bearings play an essential role in the performance of mechanical systems, and fault diagnosis of a mechanical system is inseparably related to the diagnosis of its bearings. However, it is a challenge to detect weak faults from complex and non-stationary vibration signals with a large amount of noise, especially at an early stage. To improve the anti-noise ability and detect incipient faults, a novel fault detection method based on a short-time matching method and the Support Vector Machine (SVM) is proposed. In this paper, the mechanism of the roller bearing is discussed and an impact time-frequency dictionary is constructed targeting the multi-component characteristics and fault features of roller bearing fault vibration signals. Then, a short-time matching method is described, and the simulation results show excellent feature extraction at extremely low signal-to-noise ratio (SNR). After extracting the most relevant atoms as features, an SVM is trained for fault recognition. Finally, practical bearing experiments indicate that the proposed method is more effective and efficient than traditional methods in extracting the oscillatory characteristics of weak impact signals and diagnosing incipient faults.

  17. A New Automated Fingerprint Identification System

    Institute of Scientific and Technical Information of China (English)

    沈学宁; 程民德; et al.

    1989-01-01

    A new automated fingerprint identification system is proposed. In this system, based on some local properties of the digital image, the shape and minutiae features of a fingerprint can be extracted from the grey-level image without binarizing and thinning. In a query, a latent fingerprint can be matched with the filed fingerprints by shape and/or minutiae features. Matching by shape features is much faster than by minutiae.

  18. I-94 Automation FAQs

    Data.gov (United States)

    Department of Homeland Security — In order to increase efficiency, reduce operating costs and streamline the admissions process, U.S. Customs and Border Protection has automated Form I-94 at air and...

  19. Hydrometeorological Automated Data System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Office of Hydrologic Development of the National Weather Service operates HADS, the Hydrometeorological Automated Data System. This data set contains the last...

  20. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  1. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  2. Mine-hoist fault-condition detection based on the wavelet packet transform and kernel PCA

    Institute of Scientific and Technical Information of China (English)

    XIA Shi-xiong; NIU Qiang; ZHOU Yong; ZHANG Lei

    2008-01-01

    A new algorithm was developed to correctly identify fault conditions and accurately monitor fault development in a mine hoist. The new method is based on the Wavelet Packet Transform (WPT) and Kernel Principal Component Analysis (KPCA). For non-linear monitoring systems, the key to fault detection is extracting the main features. The wavelet packet transform is a signal processing technique with excellent time-frequency localization characteristics, suitable for analysing time-varying or transient signals. KPCA maps the original input features into a higher-dimensional feature space through a non-linear mapping, and the principal components are then found in that feature space. The KPCA transformation was applied to extract the main nonlinear features from experimental fault feature data after the wavelet packet transformation. The results show that the proposed method affords credible fault detection and identification.
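    A minimal sketch of the two stages, WPT band-energy features followed by kernel PCA, is given below using the PyWavelets and scikit-learn libraries. The wavelet, decomposition level and RBF kernel parameters are assumptions for illustration; the paper does not specify them here.

```python
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA

def wpt_energy_features(signal, wavelet="db4", level=3):
    """Normalised energy of each terminal wavelet-packet node (one feature per band)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    return energies / energies.sum()

def kpca_features(feature_matrix, n_components=3):
    """Nonlinear feature extraction: rows of feature_matrix are WPT feature vectors
    of individual signal segments."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1.0)
    return kpca.fit_transform(feature_matrix)
```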

  3. Dynamic Reconstruction-Based Fuzzy Neural Network Method for Fault Detection in Chaotic System

    Institute of Scientific and Technical Information of China (English)

    YANG Hongying; YE Hao; WANG Guizeng

    2008-01-01

    This paper presents a method for detecting weak fault signals in chaotic systems based on the chaotic dynamics reconstruction technique and the fuzzy neural system (FNS). The Grassberger-Procaccia algorithm and least squares regression were used to calculate the correlation dimension for the model order estimate. Based on the model order, an appropriately structured FNS model was designed to predict system faults. Through reasonable analysis of the predicted errors, the disturbed signal can be extracted efficiently and correctly from the chaotic background. Satisfactory results were obtained by using several kinds of simulative faults which were extracted from practical chaotic fault systems. Experimental results demonstrate that the proposed approach has good prediction accuracy and can deal with data having a -40 dB signal-to-noise ratio (SNR). The low SNR requirement makes the approach a powerful tool for early fault detection.

  4. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  5. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    Science.gov (United States)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. Tough operating conditions involving heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include the selection of sensitive measurement locations, the investigation of vibration transmission paths and weak feature extraction. One of these is how to effectively discover the weak characteristics of faulty components from the noisy signals of planetary gearboxes. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that faults can be diagnosed accurately. A planetary gearbox test rig was established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, were conducted; the vibration signals were collected under loaded conditions and various motor speeds. The proposed method was used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
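    For readers unfamiliar with stochastic resonance, the sketch below passes a noisy signal through the classical bistable system dx/dt = a*x - b*x^3 + s(t) with a simple Euler integration. In the paper the system parameters are tuned adaptively by an ant colony algorithm; here (a, b) are fixed placeholders, so this is only a building block, not the ASR method itself.

```python
import numpy as np

def bistable_sr(signal, fs, a=1.0, b=1.0):
    """Euler integration of the bistable stochastic-resonance system
    dx/dt = a*x - b*x**3 + s(t), driven by the noisy input signal."""
    h = 1.0 / fs
    x = np.zeros(len(signal))
    for i in range(len(signal) - 1):
        x[i + 1] = x[i] + h * (a * x[i] - b * x[i] ** 3 + signal[i])
    return x
```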

  6. FAULT DIAGNOSIS APPROACH FOR ROLLER BEARINGS BASED ON EMPIRICAL MODE DECOMPOSITION METHOD AND HILBERT TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Yu Dejie; Cheng Junsheng; Yang Yu

    2005-01-01

    Based on the empirical mode decomposition (EMD) method and the Hilbert spectrum, a method for fault diagnosis of roller bearings is proposed. Orthogonal wavelet bases are used to translate the vibration signals of a roller bearing into a time-scale representation; then, an envelope signal is obtained by envelope spectrum analysis of the wavelet coefficients at high scales. By applying the EMD method and the Hilbert transform to the envelope signal, the local Hilbert marginal spectrum is obtained, from which faults in a roller bearing can be diagnosed and fault patterns identified. Practical vibration signals measured from roller bearings with outer-race or inner-race faults were analyzed by the proposed method. The results show that the proposed method is superior to the traditional envelope spectrum method in extracting the fault characteristics of roller bearings.
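    As a reference point, the "traditional envelope spectrum" the paper compares against can be sketched as the amplitude spectrum of the Hilbert envelope, in which bearing fault characteristic frequencies appear as peaks. This is a generic demodulation sketch; the paper's method additionally applies wavelet filtering and EMD before this stage.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Envelope spectrum of a bearing vibration signal."""
    env = np.abs(hilbert(x))
    env = env - np.mean(env)                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spectrum                          # look for peaks at fault frequencies
```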

  7. Discrete wavelet transform-based fault diagnosis for driving system of pipeline detection robot arm

    Institute of Scientific and Technical Information of China (English)

    Deng Huiyu; Wang Xinli; Ma Peisun

    2005-01-01

    A real-time wavelet multi-resolution analysis (MRA)-based fault detection algorithm is proposed. The first-stage detail signals extracted from the original signals by MRA were used as the criteria for fault detection. By measuring sharp variations in the detailed MRA signals, faults in the motor driving system of a pipeline detection robot arm could be detected, and the fault type was then identified by comparing the sharp variations across the three phases. The effects of the faults were examined. The simulation results show that the algorithm is effective and robust and is promising for fault detection in a robot's joint driving system. The method is simple, rapid and can operate in real time.

  8. Causes of automotive turbocharger faults

    Directory of Open Access Journals (Sweden)

    Jan FILIPCZYK

    2013-01-01

    Full Text Available This paper presents the results of examinations of turbocharger damage. An analysis of the causes of faults in 100 turbocharged engines of cars, buses and trucks was carried out. The incidence and structure of turbocharged engine faults were compared with the causes of faults in naturally aspirated engines. For each fault case, the cause of damage, the possibility of early detection, the time between overhauls and the impact on engine operation were also examined. The results of the examinations made it possible to determine the most common causes of damage and how to prevent them.

  9. An Overview of Transmission Line Protection by Artificial Neural Network: Fault Detection, Fault Classification, Fault Location, and Fault Direction Discrimination

    Directory of Open Access Journals (Sweden)

    Anamika Yadav

    2014-01-01

    Full Text Available Contemporary power systems face serious issues of faults on high-voltage transmission lines, and instant isolation of faults is necessary to maintain system stability. A protective relay utilizes current and voltage signals to detect, classify, and locate faults in a transmission line. A trip signal is sent by the relay to a circuit breaker to disconnect the faulted line from the rest of the system in case of a disturbance, maintaining the stability of the remaining healthy system. This paper focuses on studies of fault detection, fault classification, fault location, fault phase selection, and fault direction discrimination using the artificial neural network approach. Artificial neural networks are valuable for power system applications as they can be trained with offline data. Efforts have been made in this study to incorporate and review approximately all important techniques and philosophies of transmission line protection reported in the literature until June 2014. This comprehensive and exhaustive survey will make it easier for new researchers to evaluate different ANN-based techniques with a set of references to all concerned contributions.

  10. Instant Sikuli test automation

    CERN Document Server

    Lau, Ben

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A concise guide written in an easy-to-follow style using the Starter guide approach. This book is aimed at automation and testing professionals who want to use Sikuli to automate GUIs. Some Python programming experience is assumed.

  11. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  12. Automated Lattice Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  13. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng

    2014-01-01

    Full Text Available A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE), the Laplacian score (LS), and support vector machines (SVMs) is proposed in this paper. Permutation entropy (PE) was recently proposed to measure the randomness and detect dynamical changes of a time series. However, owing to the complexity of mechanical systems, the randomness and dynamic changes of the vibration signal appear at different scales. Thus, MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal at different scales. An SVM is then utilized to accomplish the fault feature classification and thereby automate the diagnostic procedure. Meanwhile, in order to avoid a high feature dimension, the Laplacian score (LS) is used to refine the feature vector by ranking the features according to their importance and correlation with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is applied to experimental data. The analysis results indicate that the proposed method can identify the fault categories effectively.
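    A compact sketch of permutation entropy and its multiscale extension is shown below, following the usual Bandt-Pompe definition and simple coarse-graining. The order, delay and scale range are placeholder values; the paper's implementation details are not reproduced here.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D series (Bandt-Pompe)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(map(int, np.argsort(window)))] += 1
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log(probs)) / np.log(factorial(order))

def multiscale_pe(x, max_scale=10, order=3, delay=1):
    """Coarse-grain the series at each scale and compute PE, giving the MPE vector."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        m = len(x) // s
        coarse = np.mean(np.reshape(x[:m * s], (m, s)), axis=1)
        out.append(permutation_entropy(coarse, order, delay))
    return np.array(out)
```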

  14. Final Technical Report: PV Fault Detection Tool.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Christian Birk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

  15. Modeling and matching of landmarks for automation of Mars Rover localization

    Science.gov (United States)

    Wang, Jue

    The Mars Exploration Rover (MER) mission, begun in January 2004, has been extremely successful. However, decision-making for many operational tasks of the current MER mission and the 1997 Mars Pathfinder mission is performed on Earth through a predominantly manual, time-consuming process. Unmanned planetary rover navigation is ideally expected to reduce rover idle time, diminish the need for entering safe mode, and dynamically handle opportunistic science events without requiring communication with Earth. Successful automation of rover navigation and localization during extraterrestrial exploration requires that accurate position and attitude information can be received by the rover and that the rover has the support of simultaneous localization and mapping. An integrated approach with Bundle Adjustment (BA) and Visual Odometry (VO) can efficiently refine the rover position. However, during the MER mission, BA is done manually because of the difficulty of automating the selection of cross-site tie points. This dissertation proposes an automatic approach to select cross-site tie points from multiple rover sites based on methods of landmark extraction, landmark modeling, and landmark matching. The first step in this approach is to define important landmarks such as craters and rocks. Methods of automatic feature extraction and landmark modeling are then introduced. Complex models with orientation angles and simple models without those angles are compared; the results show that simple models can provide reasonably good results. Next, the sensitivity of different modeling parameters is analyzed. Based on this analysis, cross-site rocks are matched through two complementary stages: rock distribution pattern matching and rock model matching. In addition, a preliminary experiment on orbital and ground landmark matching is also briefly introduced. Finally, the reliability of the cross-site tie point selection is validated by fault detection, which

  16. Thruster fault identification method for autonomous underwater vehicle using peak region energy and least square grey relational grade

    Directory of Open Access Journals (Sweden)

    Mingjun Zhang

    2015-12-01

    Full Text Available A novel thruster fault identification method for autonomous underwater vehicle is presented in this article. It uses the proposed peak region energy method to extract fault feature and uses the proposed least square grey relational grade method to estimate fault degree. The peak region energy method is developed from fusion feature modulus maximum method. It applies the fusion feature modulus maximum method to get fusion feature and then regards the maximum of peak region energy in the convolution operation results of fusion feature as fault feature. The least square grey relational grade method is developed from grey relational analysis algorithm. It determines the fault degree interval by the grey relational analysis algorithm and then estimates fault degree in the interval by least square algorithm. Pool experiments of the experimental prototype are conducted to verify the effectiveness of the proposed methods. The experimental results show that the fault feature extracted by the peak region energy method is monotonic to fault degree while the one extracted by the fusion feature modulus maximum method is not. The least square grey relational grade method can further get an estimation result between adjacent standard fault degrees while the estimation result of the grey relational analysis algorithm is just one of the standard fault degrees.
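    A minimal sketch of the grey relational analysis step, computing a grey relational grade between a measured fault-feature sequence and a set of standard fault-degree templates, is given below. It covers only the classical grade calculation; the least-square refinement between adjacent standard degrees described in the abstract is not included, and the distinguishing coefficient rho is an assumed default.

```python
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """Classical grey relational grade of each candidate template against the
    reference sequence (one grade per candidate row; highest grade = best match)."""
    reference = np.asarray(reference, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    delta = np.abs(candidates - reference)            # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)
```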

  17. Detection of fault structures with airborne LiDAR point-cloud data

    Science.gov (United States)

    Chen, Jie; Du, Lei

    2015-08-01

    Airborne LiDAR (Light Detection And Ranging) is an aerial earth observation technology that can quickly produce high-precision DEMs (Digital Elevation Models) and directly capture ground surface information. Fault structures are one of the key expressions of crustal movement, and their quantitative description is central to crustal movement research. Airborne LiDAR point-cloud data are used here to detect and extract fault structures automatically based on their linear extension, elevation discontinuities and slope anomalies. Firstly, the LiDAR point-cloud data are processed to filter out buildings, vegetation and other non-surface information with the TIN (Triangulated Irregular Network) filtering method and the Burman model calibration method; a TIN and a DEM are then generated from the processed data. Secondly, linear fault structures are extracted with a dual-threshold method. Finally, a high-precision DOM (Digital Orthophoto Map) and other geological knowledge are used to check the accuracy of the fault structure extraction. An experiment was carried out in Beiya Village of Yunnan Province, China. The results reveal that airborne LiDAR point-cloud data can be used to extract linear fault structures accurately and automatically, to measure the height, width and slope of fault structures with high precision, and to detect faults in vegetated areas effectively.

  18. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    them with respect to biotite. In higher resolution images this could be performed using circularity and size thresholds, however this could not be easily implemented in an automated procedure since the thresholds must be varied by the interpreter almost for each image. In 1 x 1 m images the resolution is generally too low to distinguish cataclasite and pseudotachylyte, so most of the time fault rocks were treated together. For this analysis we developed a fully automated workflow that, after applying noise correction, classification and skeletonization algorithms, returns labeled edge images of fault segments together with vector polylines associated to edge properties. Vector and edge properties represent a useful format to perform further quantitative analysis, for instance for classifying fault segments based on structural criteria, detect continuous fault traces, and detect the kind of termination of faults/fractures. This approach allows to collect statistically relevant datasets useful for further quantitative structural analysis.

  19. Assessing oral bioaccessibility of trace elements in soils under worst-case scenarios by automated in-line dynamic extraction as a front end to inductively coupled plasma atomic emission spectrometry.

    Science.gov (United States)

    Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel

    2014-09-01

    A novel biomimetic extraction procedure that allows for the in-line handling of ≥400 mg solid substrates is herein proposed for automatic ascertainment of trace element (TE) bioaccessibility in soils under worst-case conditions as per recommendations of ISO norms. A unified bioaccessibility/BARGE method (UBM)-like physiologically based extraction test is evaluated for the first time in a dynamic format for accurate assessment of in-vitro bioaccessibility of Cr, Cu, Ni, Pb and Zn in forest and residential-garden soils by on-line coupling of a hybrid flow set-up to inductively coupled plasma atomic emission spectrometry. Three biologically relevant operational extraction modes mimicking (i) gastric juice extraction alone, (ii) a saliva and gastric juice composite in a unidirectional flow extraction format and (iii) a saliva and gastric juice composite in a recirculation mode were thoroughly investigated. The extraction profiles of the three configurations using digestive fluids were proven to fit a first-order reaction kinetic model for estimating the maximum TE bioaccessibility, that is, the actual worst-case scenario in human risk assessment protocols. A full factorial design was applied, in which the sample amount (400-800 mg), the extractant flow rate (0.5-1.5 mL min(-1)) and the extraction temperature (27-37°C) were selected as variables for the multivariate optimization studies aimed at maximum TE extractability. Two soils with varied physicochemical properties were analysed, and no significant differences were found at the 0.05 significance level between the sum of the leached TE concentrations in gastric juice plus the residual fraction and the total concentration of the assayed metals determined by microwave digestion. These results demonstrate the reliability and lack of bias (trueness) of the automatic biomimetic extraction approach using digestive juices.
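    The first-order kinetic fit used to estimate the maximum bioaccessible fraction can be sketched as below: the extraction profile is fitted to C(t) = C_max*(1 - exp(-k*t)) and the plateau C_max is taken as the maximum extractable amount. The times and concentrations in the example are made-up illustration data, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_max, k):
    """First-order leaching kinetics approaching a plateau c_max."""
    return c_max * (1.0 - np.exp(-k * t))

# Hypothetical extraction profile (time in minutes, cumulative leached amount)
t_min = np.array([2, 5, 10, 20, 40, 60], dtype=float)
leached = np.array([1.1, 2.3, 3.6, 4.6, 5.1, 5.2])
(c_max, k), _ = curve_fit(first_order, t_min, leached, p0=(5.0, 0.1))
# c_max estimates the maximum bioaccessible amount (worst-case scenario)
```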

  20. Fault deformation mechanisms and fault rocks in micritic limestones: Examples from Corinth rift normal faults

    Science.gov (United States)

    Bussolotto, M.; Benedicto, A.; Moen-Maurel, L.; Invernizzi, C.

    2015-08-01

    A multidisciplinary study investigates the influence of different parameters on fault rock architecture development along normal faults affecting non-porous carbonates of the southern margin of the Corinth rift. Here, several fault systems cut the same carbonate unit (Pindus), and the gradual, rapid uplift since the initiation of the rift has led to the exhumation of deep parts of the older faults. This exceptional context allows shallow active fault zones and old exhumed fault zones to be compared. Our approach includes field studies, micro-structural analyses (optical microscope and cathodoluminescence), geochemical analyses (δ13C, δ18O, trace elements) and fluid inclusion microthermometry of syn-kinematic calcite cements. Our main results, in a depth window ranging from 0 m to about 2500 m, are: i) all cements precipitated from meteoric fluids in a closed or open circulation system depending on depth; ii) depth (in terms of P/T conditions) determines the development of some structures and their sealing; iii) lithology (marly levels) influences the type of structures and their cohesive/non-cohesive nature; iv) early distributed deformation, rather than the final total displacement along the main fault plane, is responsible for the fault zone architecture; v) the petrophysical properties of each fault zone depend on the variable combination of these factors.

  1. Fault Tolerance in ZigBee Wireless Sensor Networks

    Science.gov (United States)

    Alena, Richard; Gilstrap, Ray; Baldwin, Jarren; Stone, Thom; Wilson, Pete

    2011-01-01

    Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 PRO Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. This technology is supported by System-on-a-Chip solutions, resulting in extremely small and low-power nodes. The Wireless Connections in Space Project addresses the aerospace flight domain for both flight-critical and non-critical avionics. WSNs provide the inherent fault tolerance required for aerospace applications utilizing such technology. The team from Ames Research Center has developed techniques for assessing the fault tolerance of ZigBee WSNs challenged by radio frequency (RF) interference or WSN node failure.

  2. Repeated extraction of DNA from FTA cards

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Ferrero, Laura; Børsting, Claus;

    2011-01-01

    Extraction of DNA using magnetic bead based techniques on automated DNA extraction instruments provides a fast, reliable and reproducible method for DNA extraction from various matrices. However, the yield of extracted DNA from FTA-cards is typically low. Here, we demonstrate that it is possible to repeatedly extract DNA from the processed FTA-disk. The method increases the yield from the nanogram range to the microgram range.

  3. Repeated extraction of DNA from FTA cards

    OpenAIRE

    Stangegaard, Michael; Ferrero, Laura; Børsting, Claus; Frank-Hansen, Rune; Hansen, Anders Johannes; Morling, Niels

    2011-01-01

    Extraction of DNA using magnetic bead based techniques on automated DNA extraction instruments provides a fast, reliable and reproducible method for DNA extraction from various matrices. However, the yield of extracted DNA from FTA-cards is typically low. Here, we demonstrate that it is possible to repeatedly extract DNA from the processed FTA-disk. The method increases the yield from the nanogram range to the microgram range.

  4. On-line early fault detection and diagnosis of municipal solid waste incinerators.

    Science.gov (United States)

    Zhao, Jinsong; Huang, Jianchao; Sun, Wei

    2008-11-01

    A fault detection and diagnosis framework is proposed in this paper for early fault detection and diagnosis (FDD) of municipal solid waste incinerators (MSWIs) in order to improve the safety and continuity of production. In this framework, principal component analysis (PCA), one of the multivariate statistical technologies, is used for detecting abnormal events, while rule-based reasoning performs the fault diagnosis and consequence prediction, and also generates recommendations for fault mitigation once an abnormal event is detected. A software package, SWIFT, was developed based on the proposed framework and has been applied in an actual industrial MSWI. The application shows that automated real-time abnormal situation management (ASM) of the MSWI can be achieved by using SWIFT, with an industrially acceptable rate of incorrect diagnoses and improved process continuity and environmental performance of the MSWI. PMID:18255276
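    The PCA-based detection stage can be sketched as follows: a PCA model is fitted on normal operating data and new samples are flagged when their squared prediction error (SPE) exceeds a control limit. The percentile-based limit used here is a simplifying assumption; an industrial tool such as the one in the paper would typically use theoretical control limits (e.g. for T² and SPE).

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_monitor(X_normal, n_components=5, quantile=99.0):
    """Fit PCA on normal data and derive a simple SPE control limit."""
    pca = PCA(n_components=n_components).fit(X_normal)
    recon = pca.inverse_transform(pca.transform(X_normal))
    spe = np.sum((X_normal - recon) ** 2, axis=1)
    return pca, np.percentile(spe, quantile)

def spe_alarm(pca, limit, X_new):
    """Return True for samples whose reconstruction error exceeds the limit."""
    recon = pca.inverse_transform(pca.transform(X_new))
    spe = np.sum((X_new - recon) ** 2, axis=1)
    return spe > limit
```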

  5. Wavelet neural network based fault diagnosis in nonlinear analog circuits

    Institute of Scientific and Technical Information of China (English)

    Yin Shirong; Chen Guangju; Xie Yongle

    2006-01-01

    The theory of diagnosing nonlinear analog circuits by means of transient response testing is studied. Wavelet analysis is used to extract the transient response signature of nonlinear circuits and compress the signature data. The best wavelet function is selected based on the between-category total scatter of the signatures. A fault dictionary of nonlinear circuits is constructed based on an improved back-propagation (BP) neural network. Experimental results demonstrate that the proposed method has high diagnostic sensitivity and fast fault identification and deducibility.

  6. Application of Rough Set Theory in Fault Diagnostic Rules Acquisition

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Rough set theory is a relatively new mathematical tool for dealing with vagueness and uncertainty. However, the original rough set theory only generates deterministic rules and handles data sets that contain no noise. The variable precision rough set model (VPRSM) was introduced to handle uncertain and noisy information. A method based on the VPRSM is proposed here for fault diagnosis feature extraction and rule acquisition in industrial applications. An example of fault diagnosis of rotary machinery shows that the method is very effective.

  7. Automated forward mechanical modeling of wrinkle ridges on Mars

    Science.gov (United States)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometries [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user input to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B

  8. A semi-automated approach to building text summarisation classifiers

    Directory of Open Access Journals (Sweden)

    Matias Garcia-Constantino

    2012-12-01

    Full Text Available An investigation into the extraction of useful information from the free text element of questionnaires, using a semi-automated summarisation extraction technique, is described. The summarisation technique utilises the concept of classification but with the support of domain/human experts during classifier construction. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool, is presented and evaluated using real questionnaire data. The results of this evaluation are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

  9. A Fault Diagnosis Method for Rotating Machinery Based on PCA and Morlet Kernel SVM

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available A novel method to solve the rotating machinery fault diagnosis problem is proposed. It is based on principal component analysis (PCA) to extract the characteristic features and on a Morlet kernel support vector machine (MSVM) to achieve the fault classification. Firstly, the gathered vibration signals are decomposed by empirical mode decomposition (EMD) to obtain the corresponding intrinsic mode functions (IMFs). The EMD energy entropy, which includes the dominant fault information, is defined as the characteristic feature. However, the extracted features remain high-dimensional and contain redundant information, so PCA is introduced to extract the characteristic features and reduce the dimension. The characteristic features are input into the MSVM to train and construct the running-state identification model, realizing identification of the running state of the rotating machinery. The running states of a normal bearing inner race and of several inner races with different degrees of fault were recognized; the results validate the effectiveness of the proposed algorithm.
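    Since scikit-learn's SVC accepts a user-supplied kernel function returning a Gram matrix, a Morlet-kernel SVM can be approximated as sketched below. The product-form Morlet wavelet kernel and the scale parameter `a` are common choices assumed here, not necessarily the exact kernel parameters of the paper, and the EMD/PCA feature extraction steps are omitted.

```python
import numpy as np
from sklearn.svm import SVC

def morlet_kernel(X, Y, a=1.0):
    """Morlet wavelet kernel:
    K(x, y) = prod_d cos(1.75*(x_d - y_d)/a) * exp(-(x_d - y_d)**2 / (2*a**2))."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    diff = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a ** 2)), axis=2)

clf = SVC(kernel=morlet_kernel)       # callable kernel -> custom Gram matrix
# clf.fit(pca_features_train, labels_train)
# predictions = clf.predict(pca_features_test)
```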

  10. Hard Fault Analysis of Trivium

    CERN Document Server

    Yupu, Hu; Yiwei, Zhang

    2009-01-01

    Fault analysis is a powerful attack on stream ciphers. Up to now, the major idea of fault analysis has been to simplify the cipher system by injecting soft faults; we call this soft fault analysis. As a hardware-oriented stream cipher, Trivium is weak under soft fault analysis. In this paper we consider another type of fault analysis of stream ciphers, which simplifies the cipher system by injecting hard faults; we call this hard fault analysis. We present the following results about such an attack on Trivium. In Case 1, with probability not smaller than 0.2396, the attacker can obtain 69 bits of the 80-bit key. In Case 2, with probability not smaller than 0.2291, the attacker can obtain all of the 80-bit key. In Case 3, with probability not smaller than 0.2291, the attacker can partially solve for the key. In Case 4, with non-negligible probability, the attacker can obtain a simplified cipher, with a smaller number of state bits and a slower non-linearization procedure. In Case 5, with non-negligible probability,...

  11. Detection and diagnosis of bearing faults using shift-invariant dictionary learning and hidden Markov model

    Science.gov (United States)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Ran

    2016-05-01

    Many existing signal processing methods select a predefined basis function in advance. This basis function selection relies on a priori knowledge about the target signal, which is often infeasible in engineering applications. Dictionary learning provides an ambitious direction: learning basis atoms from the data itself with the objective of finding the underlying structure embedded in the signal. As a special case of dictionary learning, shift-invariant dictionary learning (SIDL) reconstructs an input signal using basis atoms in all possible time shifts. The shift-invariance property is very suitable for extracting periodic impulses, which are a typical symptom of mechanical fault signals. After learning the basis atoms, a signal can be decomposed into a collection of latent components, each reconstructed by one basis atom and its corresponding time shifts. In this paper, SIDL is introduced as an adaptive feature extraction technique, and an effective approach based on SIDL and a hidden Markov model (HMM) is developed for machinery fault diagnosis. The SIDL-based feature extraction is applied to both simulated and experimental signals with specific notch sizes, showing that SIDL can successfully extract the double impulses in bearing signals. A second experiment considers artificial faults of different bearing fault types; feature extraction based on SIDL is performed on each signal, and an HMM is then used to identify the fault type. The results show that the proposed SIDL-HMM approach performs well in bearing fault diagnosis.

  12. Rotor broken-bar fault diagnosis of induction motor based on HHT of the startup electromagnetic torque

    Institute of Scientific and Technical Information of China (English)

    NIU Fa-liang; HUANG Jin; YANG Jia-qiang; CHEN Li-yuan; JIN Hai

    2006-01-01

    This paper presents a new method for rotor broken-bar fault diagnosis of induction motors. The asymmetry of the rotor caused by a broken-bar fault gives rise to an additional frequency component at 2sfs (where s is the slip and fs is the supply frequency) in the electromagnetic torque spectrum. The startup electromagnetic torque signal is decomposed into several intrinsic mode functions (IMFs) with empirical mode decomposition (EMD) based on the Hilbert-Huang Transform. Then, using the instantaneous frequency extraction principle of the Hilbert Transform, the rotor broken-bar fault characteristic frequency 2sfs can be exactly extracted from the IMF component which contains the rotor fault information. Moreover, the magnitude of this IMF can also give the number of broken rotor bars. Experimental results demonstrate that the proposed electromagnetic torque-based fault diagnosis method is feasible.
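    The instantaneous-frequency extraction step can be sketched with the Hilbert transform as below; in the paper the input IMF comes from EMD of the startup electromagnetic torque, and a component tracking 2*s*fs indicates broken rotor bars. The EMD stage itself (available, for example, in the PyEMD package) is omitted here, so this is only the Hilbert part of the Hilbert-Huang procedure.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous amplitude and frequency (Hz) of a single IMF."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2.0 * np.pi)
    return amplitude[:-1], freq      # trimmed to the same length as freq
```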

  13. Differential Fault Analysis of Rabbit

    Science.gov (United States)

    Kircanski, Aleksandar; Youssef, Amr M.

    Rabbit is a high-speed scalable stream cipher with a 128-bit key and a 64-bit initialization vector. It has passed all three stages of the ECRYPT stream cipher project and is a member of the eSTREAM software portfolio. In this paper, we present a practical fault analysis attack on Rabbit. The fault model in which we analyze the cipher is one in which the attacker is assumed to be able to fault a random bit of the internal state of the cipher but cannot control the exact location of injected faults. Our attack requires around 128 - 256 faults and a precomputed table of size 2^41.6 bytes, and recovers the complete internal state of Rabbit in about 2^38 steps.

  14. Restructuring of workflows to minimise errors via stochastic model checking: An automated evolutionary approach

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee

    2016-01-01

    This article presents a framework for the automated restructuring of stochastic workflows to reduce the impact of faults. The framework allows for the modelling of workflows by means of a formalised subset of the BPMN workflow language. We extend this modelling formalism to describe faults and to model resources associated with a workflow. The approach is fully automated, and only the modelling of the production workflows, the potential faults and the expression of the goals require manual input. We present the design of a software tool implementing this framework and explore the practical utility of this approach through an industrial case study in which the risk of production failures and their impact are reduced by restructuring the workflow.

  15. Fault tolerant control for switched linear systems

    CERN Document Server

    Du, Dongsheng; Shi, Peng

    2015-01-01

    This book presents up-to-date research and novel methodologies on fault diagnosis and fault tolerant control for switched linear systems. It provides a unified yet neat framework of filtering, fault detection, fault diagnosis and fault tolerant control of switched systems. It can therefore serve as a useful textbook for senior and/or graduate students who are interested in knowing the state-of-the-art of filtering, fault detection, fault diagnosis and fault tolerant control areas, as well as recent advances in switched linear systems.  

  16. Design Approach for Fault Recoverable ALU with Improved Fault Tolerance

    Directory of Open Access Journals (Sweden)

    Ankit K V

    2015-08-01

    Full Text Available A new design for a fault-tolerant and fault-recoverable ALU system is proposed in this paper. Reliability is one of the most critical factors that have to be considered during the design phase of any IC. In critical applications such as medical equipment and military systems, this factor plays a decisive role in determining the acceptance of the product. Inserting special modules into the main design to enhance reliability incurs a considerable area and power penalty, so a novel approach to this problem is to find ways of reusing the components already available in the digital system to implement recovery methodologies efficiently. Triple Modular Redundancy (TMR) has traditionally been used to protect digital logic from single event upsets (SEUs) by triplicating the critical components of the system to provide fault tolerance. The scan chain-based error recovery TMR (ScTMR) technique provides recovery for all internal faults; ScTMR uses a roll-forward approach and employs the scan chains implemented in the circuits for testability purposes to recover the system to a fault-free state. The proposed design incorporates a ScTMR controller over a TMR ALU system, making the system both fault tolerant and fault recoverable. Hence, the proposed design is more efficient and reliable than existing designs for use in critical applications.

  17. Quantification of 31 illicit and medicinal drugs and metabolites in whole blood by fully automated solid-phase extraction and ultra-performance liquid chromatography-tandem mass spectrometry

    DEFF Research Database (Denmark)

    Bjørk, Marie Kjærgaard; Simonsen, Kirsten Wiese; Andersen, David Wederkinck;

    2013-01-01

    Solid-phase extraction was performed using Strata X-C plates. Extraction time for 96 samples was less than 3 h. Chromatography was performed using an ACQUITY UPLC system (Waters Corporation, Milford, USA). Analytes were separated on a 100 mm × 2.1 mm, 1.7 μm Acquity UPLC CSH C18 column using a 6.5 min 0.1 % ammonia (25

  18. Materials Testing and Automation

    Science.gov (United States)

    Cooper, Wayne D.; Zweigoron, Ronald B.

    1980-07-01

    The advent of automation in materials testing has been in large part responsible for recent radical changes in the materials testing field: Tests virtually impossible to perform without a computer have become more straightforward to conduct. In addition, standardized tests may be performed with enhanced efficiency and repeatability. A typical automated system is described in terms of its primary subsystems — an analog station, a digital computer, and a processor interface. The processor interface links the analog functions with the digital computer; it includes data acquisition, command function generation, and test control functions. Features of automated testing are described with emphasis on calculated variable control, control of a variable that is computed by the processor and cannot be read directly from a transducer. Three calculated variable tests are described: a yield surface probe test, a thermomechanical fatigue test, and a constant-stress-intensity range crack-growth test. Future developments are discussed.

  19. Fast EEMD Based AM-Correntropy Matrix and Its Application on Roller Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yunxiao Fu

    2016-06-01

    Full Text Available Roller bearings play a significant role in industrial machinery. To improve roller bearing fault diagnosis under multiple rotating-speed conditions, this paper proposes a novel fault characteristic: the Amplitude Modulation (AM) based correntropy extracted from the Intrinsic Mode Functions (IMFs) obtained by Fast Ensemble Empirical Mode Decomposition (FEEMD), with a Least Squares Support Vector Machine (LSSVM) employed for intelligent fault identification. Firstly, the roller bearing vibration acceleration signal is decomposed by FEEMD to extract the IMFs. Secondly, the IMF correntropy matrix (IMFCM), used as the fault feature matrix, is calculated from the AM-correntropy model of the original vibration signal and the IMFs. Finally, the fault identification results for the roller bearing are obtained with the LSSVM. Bearing identification experiments under stationary rotating conditions verified that IMFCM yields more stable and higher diagnostic accuracy than conventional fault features such as energy moment, fuzzy entropy, and spectral kurtosis, and that IMFCM is more robust under cross-mixed roller bearing operating conditions. The diagnostic accuracy exceeded 84% for the cross-mixed operating condition, which is much higher than with the traditional features. In conclusion, FEEMD-IMFCM-LSSVM is shown to be a reliable technique for roller bearing fault diagnosis under constant or mixed operating conditions, and as such it has good prospects for broad application.
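
    A rough sketch of the FEEMD, correntropy-feature and classifier pipeline is given below. It assumes the third-party PyEMD package for the ensemble decomposition and substitutes a standard RBF support vector classifier from scikit-learn for the LSSVM used in the paper; the kernel width, feature layout and zero-padding are illustrative and do not reproduce the paper's IMFCM definition.

    ```python
    # Sketch only: EEMD decomposition (PyEMD), Gaussian-kernel correntropy of each
    # IMF against the raw signal as a feature, and an RBF SVC standing in for the
    # LSSVM of the paper.
    import numpy as np
    from PyEMD import EEMD
    from sklearn.svm import SVC

    def correntropy(x, y, sigma=1.0):
        """Gaussian-kernel correntropy between two equal-length signals."""
        return np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2)))

    def imf_correntropy_features(signal, max_imfs=6):
        imfs = EEMD(trials=20).eemd(signal)
        feats = [correntropy(signal, imf) for imf in imfs[:max_imfs]]
        feats += [0.0] * (max_imfs - len(feats))    # pad if fewer IMFs were produced
        return np.array(feats)

    def train(segments, labels):
        """segments: iterable of 1-D vibration arrays; labels: fault classes."""
        X = np.vstack([imf_correntropy_features(s) for s in segments])
        return SVC(kernel="rbf").fit(X, labels)
    ```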

  20. Fleet-Wide Prognostic and Health Management Suite: Asset Fault Signature Database

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford; Richard Rusaw

    2015-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: (1) Diagnostic Advisor, (2) Asset Fault Signature (AFS) Database, (3) Remaining Useful Life Advisor, and (4) Remaining Useful Life Database. The paper focuses on the AFS Database of the FW-PHM Suite, which is used to catalog asset fault signatures. A fault signature is a structured representation of the information that an expert would use to first detect and then verify the occurrence of a specific type of fault. The fault signatures developed to assess the health status of generator step-up transformers are described in the paper. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
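
    A fault signature, as described above, is essentially a machine-readable record of the symptoms an expert would look for. The sketch below shows one possible structure together with a naive matching score; the field names and example values are assumptions made for illustration and are not the FW-PHM Suite's actual schema.

    ```python
    # Illustrative structure for an asset fault signature record; not the
    # FW-PHM Suite schema. Field names and example values are invented.
    from dataclasses import dataclass, field

    @dataclass
    class FaultSignature:
        asset_type: str                     # e.g. "generator step-up transformer"
        fault_name: str
        symptoms: dict = field(default_factory=dict)       # parameter -> expected behaviour
        confirmatory_tests: list = field(default_factory=list)

    def match_score(sig: FaultSignature, observed: dict) -> float:
        """Fraction of catalogued symptoms present in the observed evidence."""
        hits = sum(1 for k, v in sig.symptoms.items() if observed.get(k) == v)
        return hits / max(len(sig.symptoms), 1)

    sig = FaultSignature(
        asset_type="generator step-up transformer",
        fault_name="core overheating",
        symptoms={"dissolved ethylene": "rising trend", "top-oil temperature": "above limit"},
        confirmatory_tests=["dissolved gas analysis", "thermographic scan"],
    )
    print(match_score(sig, {"dissolved ethylene": "rising trend"}))   # 0.5
    ```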

  1. Nonlinear Observer for Signal and Parameter Fault Detection in Ship Propulsion Control, In "New Directions in Nonlinear Observer Design" (H. Nijmeijer and T. I. Fossen, Eds.), pp. 375-397

    DEFF Research Database (Denmark)

    Blanke, M.; Izadi-Zamanabadi, Roozbeh

    Faults in ship propulsion and their associated automation systems can cause a dramatic reduction in a ship's ability to propel and maneuver, so effective means are needed to prevent faults from developing into failures. The chapter analyses the control system of a propulsion plant on a ferry. It is shown...

  2. Envelope order tracking for fault detection in rolling element bearings

    Science.gov (United States)

    Guo, Yu; Liu, Ting-Wei; Na, Jing; Fung, Rong-Fong

    2012-12-01

    An envelope order tracking analysis scheme is proposed in this paper for fault detection of rolling element bearings (REBs) under varying-speed running conditions. The method combines the advantages of order tracking, envelope analysis and spectral kurtosis. The fast kurtogram algorithm is used to obtain the optimal center frequency and bandwidth of the band-pass filter based on maximum spectral kurtosis, so the envelope containing the vibration features of an incipient REB fault can be extracted adaptively. The envelope is re-sampled by an even-angle sampling scheme, and thus the non-stationary signal in the time domain is represented as a quasi-stationary signal in the angular domain. As a result, the frequency-smearing problem is eliminated in the order spectrum and fault diagnosis of REBs under varying-speed running conditions of rotating machinery is achieved. Experiments are conducted to verify the validity of the proposed method.
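
    The processing chain described above can be sketched as follows: band-pass filtering around a resonance band (chosen by hand here rather than by the fast kurtogram), Hilbert-envelope extraction, even-angle resampling against the measured shaft angle, and an FFT to obtain the order spectrum. All parameter values are illustrative.

    ```python
    # Sketch of envelope order tracking; band limits, filter order and
    # samples-per-revolution are illustrative. shaft_angle_rad is the cumulative
    # shaft angle at every sample (from a tachometer or encoder).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_order_spectrum(x, fs, shaft_angle_rad, band=(2000.0, 4000.0),
                                samples_per_rev=64):
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, x)))       # envelope of the filtered signal
        # resample the envelope at equal shaft-angle increments (even-angle sampling)
        angles = np.arange(0.0, shaft_angle_rad[-1], 2 * np.pi / samples_per_rev)
        env_angle = np.interp(angles, shaft_angle_rad, env)
        spectrum = np.abs(np.fft.rfft(env_angle - env_angle.mean()))
        orders = np.fft.rfftfreq(len(env_angle), d=1.0 / samples_per_rev)
        return orders, spectrum                        # orders in events per revolution
    ```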

  3. Automating the CMS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  4. Automated phantom assay system

    International Nuclear Information System (INIS)

    This paper describes an automated phantom assay system developed for assaying phantoms spiked with minute quantities of radionuclides. The system includes a computer-controlled linear-translation table that positions the phantom at exact distances from a spectrometer. A multichannel analyzer (MCA) interfaces with a computer to collect gamma spectral data. Signals transmitted between the controller and MCA synchronize data collection and phantom positioning. Measured data are then stored on disk for subsequent analysis. The automated system allows continuous unattended operation and ensures reproducible results

  5. Automated gas chromatography

    Science.gov (United States)

    Mowry, Curtis D.; Blair, Dianna S.; Rodacy, Philip J.; Reber, Stephen D.

    1999-01-01

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute.

  6. Sparse representation based latent components analysis for machinery weak fault detection

    Science.gov (United States)

    Tang, Haifeng; Chen, Jin; Dong, Guangming

    2014-06-01

    Weak machinery fault detection is a difficult task for two main reasons: (1) at the early stage of fault development, the signature of the fault-related component is incomplete and quite different from that at the apparent failure stage; in most instances it appears almost identical to the normal operating state; (2) the fault feature is usually submerged and distorted by relatively strong background noise and macro-structural vibrations even once the fault component is fully developed, especially when the structures of the fault components and the interference are similar. To solve these problems we must penetrate the underlying structure of the signal. Sparse representation provides a class of algorithms for finding succinct representations of a signal that capture its higher-level features. With the purpose of extracting incomplete or heavily overwhelmed fault components, a sparse representation based latent components decomposition method is proposed in this paper. As a special case of sparse representation, the shift-invariant sparse coding algorithm provides an effective basis-function learning scheme for capturing the underlying structure of machinery fault signals by iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. Among these basis functions, the fault feature can be contained and extracted if the optimal latent component is filtered out. The proposed scheme is applied to vibration signals of both rolling bearings and gears. An accelerated bearing lifetime test validates the proposed method's ability to detect early faults, and experiments with faulty bearings and gears under heavy noise and interference show that the approach can effectively distinguish subtle differences between defects and interference. All experimental data are also analyzed by wavelet shrinkage and basis pursuit de-noising (BPDN) for comparison.
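
    The inner L1-regularized least-squares problem mentioned above can be solved, for a fixed dictionary, with a few lines of iterative soft thresholding (ISTA). The sketch below shows only that sub-problem; the alternating dictionary-update step and the shift-invariant structure of the full scheme are omitted, and the sizes and regularization weight are illustrative.

    ```python
    # Bare-bones ISTA for min_s 0.5*||x - D s||^2 + lam*||s||_1 with a fixed
    # dictionary D; dictionary learning and shift invariance are not included.
    import numpy as np

    def ista(D, x, lam=0.1, n_iter=200):
        L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
        s = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ s - x)
            z = s - grad / L
            s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return s

    rng = np.random.default_rng(0)
    D = rng.standard_normal((256, 64))                 # fixed dictionary (basis functions)
    x = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(256)   # signal dominated by one atom
    print(np.argmax(np.abs(ista(D, x))))               # the dominant atom is recovered
    ```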

  7. Absolute age determination of quaternary faults

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik [Korea Basic Science Institute, Seoul (Korea, Republic of)] (and others)

    2000-03-15

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that fault movement in the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault in the Yeongdeog area is around 70 Ma. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence; carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.

  8. Subaru FATS (fault tracking system)

    Science.gov (United States)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  9. Tools for automating spacecraft ground systems: The Intelligent Command and Control (ICC) approach

    Science.gov (United States)

    Stoffel, A. William; Mclean, David

    1996-01-01

    The practical application of scripting languages and World Wide Web tools to the support of spacecraft ground system automation, is reported on. The mission activities and the automation tools used at the Goddard Space Flight Center (MD) are reviewed. The use of the Tool Command Language (TCL) and the Practical Extraction and Report Language (PERL) scripting tools for automating mission operations is discussed together with the application of different tools for the Compton Gamma Ray Observatory ground system.

  10. Research of singular value decomposition based on slip matrix for rolling bearing fault diagnosis

    Science.gov (United States)

    Cong, Feiyun; Zhong, Wei; Tong, Shuiguang; Tang, Ning; Chen, Jin

    2015-05-01

    Rolling element bearings are at the heart of most rotating machines, providing the connection between rotor and stator, and it is important to identify an incipient fault before the bearing develops into serious failure. A Slip Matrix (SM) construction method based on Singular Value Decomposition (SVD) is proposed in this paper, and SM-based fault feature extraction and intelligent impulse detection are introduced as the key steps for rolling bearing fault diagnosis. A numerical simulation of a rolling bearing fault signal shows that the proposed method detects fault impulses well in a strong background-noise environment. An accelerated bearing life test was performed to acquire experimental fault data; with the bearing running from the normal state to failure, the initial fault signal can be picked out from the whole-life vibration data. This vibration signal is close to the natural fault signals acquired from rolling bearings applied in industrial fields. The analysis results show that the proposed method performs excellently in rolling bearing fault detection.
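
    For comparison, the standard Hankel-matrix SVD denoising idea can be sketched as follows: embed the signal in a trajectory matrix, keep the leading singular components, and reconstruct by averaging the anti-diagonals. The paper's Slip Matrix is a different, specialised construction; the code below is only the generic SVD analogue, with an arbitrary window length and rank.

    ```python
    # Generic Hankel/SVD low-rank reconstruction used to enhance repetitive
    # impulses; not the paper's Slip Matrix construction. Window and rank are
    # arbitrary illustration values.
    import numpy as np

    def hankel_svd_denoise(x, window=64, rank=4):
        n = len(x) - window + 1
        H = np.stack([x[i:i + window] for i in range(n)])    # n x window trajectory matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # keep leading components
        out = np.zeros(len(x))
        counts = np.zeros(len(x))
        for i in range(n):                                   # average the anti-diagonals
            out[i:i + window] += H_low[i]
            counts[i:i + window] += 1
        return out / counts
    ```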

  11. Reduction of Faults in Software Testing by Fault Domination"

    Institute of Scientific and Technical Information of China (English)

    XU Shiyi

    2007-01-01

    Although mutation testing is one of the practical ways of enhancing test effectiveness in software testing, it can be infeasible for large-scale software because the testing becomes time-consuming or even exceeds the available time. Therefore, the number of faults assumed to exist in the software under test should be reduced so that the time complexity of testing can be confined to a reasonable period. This paper uses the concepts of fault dominance and equivalence, which have long been employed in hardware testing, to reveal a novel way of reducing the number of faults assumed to hide in software systems. Once the number of assumed faults is decreased sharply, the effectiveness of mutation testing is greatly enhanced and it becomes a feasible way of testing software. Examples and experimental results are presented to illustrate the effectiveness and helpfulness of the proposed technique.

  12. Deep Fault Drilling Project—Alpine Fault, New Zealand

    Directory of Open Access Journals (Sweden)

    Rupert Sutherland

    2009-09-01

    Full Text Available The Alpine Fault, South Island, New Zealand, constitutes a globally significant natural laboratory for research into how active plate-bounding continental faults work and, in particular, how rocks exposed at the surface today relate to deep-seated processes of tectonic deformation, seismogenesis, and mineralization. The along-strike homogeneity of the hanging wall, rapid rate of dextral-reverse slip on an inclined fault plane, and relatively shallow depths to mechanical and chemical transitions make the Alpine Fault and the broader South Island plate boundary an important international site for multi-disciplinary research and a realistic target for an ambitious long-term program of scientific drilling investigations.

  13. Fault Monitoring and Fault Recovery Control for Position Moored Tanker

    DEFF Research Database (Denmark)

    Fang, Shaoji; Blanke, Mogens

    2011-01-01

    This paper addresses fault tolerant control for position mooring of a shuttle tanker operating in the North Sea. A complete framework for fault diagnosis is presented, but the loss of a sub-sea mooring line buoyancy element is given particular attention, since this fault could lead to mooring line breakage and a high-risk abortion of an oil-loading operation. With significant drift forces from waves, non-Gaussian elements dominate the forces and the residuals designed for fault diagnosis, so hypothesis testing needs to be designed using dedicated change detection for the type of distribution encountered. In addition to dedicated diagnosis, an optimal position algorithm is proposed to accommodate buoyancy element failure and keep the mooring system in a safe state. Furthermore, even in the case of line breakage, this optimal position strategy can be utilised to avoid breakage of a second mooring line...

  14. An Algebra of Fault Tolerance

    CERN Document Server

    Rao, Shrisha

    2009-01-01

    Every system of any significant size is created by composition from smaller sub-systems or components. It is thus fruitful to analyze the fault tolerance of a system as a function of its composition. In this paper, two basic types of system composition are described, and an algebra describing the fault tolerance of composed systems is derived. The set of systems forms a monoid under each of the two composition operators, and a semiring when both are considered together. A partial ordering relation between systems is used to compare their fault-tolerance behaviors.

  15. Drilling Automation Demonstrations in Subsurface Exploration for Astrobiology

    Science.gov (United States)

    Glass, Brian; Cannon, H.; Lee, P.; Hanagud, S.; Davis, K.

    2006-01-01

    This project proposes to study subsurface permafrost microbial habitats at a relevant Arctic Mars-analog site (Haughton Crater, Devon Island, Canada) while developing and maturing the subsurface drilling and drilling automation technologies that will be required by post-2010 missions. It builds on earlier drilling technology projects to add permafrost and ice-drilling capabilities to 5 m with a lightweight drill that will be automatically monitored and controlled in situ. Frozen cores obtained with this drill under sterilized protocols will be used to test three hypotheses pertaining to near-surface physical geology and ground H2O ice distribution, viewed as a habitat for microbial life in subsurface ice and ice-consolidated sediments. The automation technologies employed will demonstrate hands-off diagnostics and drill control, using novel vibrational dynamical analysis methods and model-based reasoning to monitor and identify drilling fault states before and during faults. Three field deployments, to a Mars-analog site with frozen impact-crater fallback breccia, will support the science goals, provide a rigorous test of drilling automation and lightweight permafrost drilling, and leverage past experience with the field site's particular logistics.

  16. Novel methods for earth fault management in medium voltage distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Nikander, A.; Jaerventausta, P. [Tampere Univ. of Technology (Finland)

    1998-08-01

    Customers have become less and less tolerant of even short interruptions of supply. Rapid autoreclosures are especially harmful for commercial and private customers whose equipment is disturbed by these sub-half-second interruptions. Mainly due to the increasing use of distribution automation (e.g. remote controlled switching devices, fault detectors, computational fault location), the average interruption period per customer has been reduced. At the same time, the amount of equipment sensitive to short voltage breaks or dips has increased. Therefore, reducing the number of interruptions has become a more essential target

  17. Searching for the Blind fault: Haiti Subsurface Imaging Project

    Science.gov (United States)

    Kocel, E.; Stewart, R.; Mann, P.; Dowla, N.

    2013-12-01

    The impact of the 12 January 2010 Haiti earthquake was catastrophic, causing serious damage to infrastructure and more than 200,000 deaths. Initially, the Haiti earthquake was assumed to have occurred by movement on the Enriquillo-Plantain Garden Fault Zone (EPGFZ), but recent scientific studies have shown that the primary rupture occurred on an unmapped blind thrust fault in the Léogâne fan (the Léogâne fault) near the EPGFZ (Figure 1a and 1b). The main purposes of this project are to characterize and analyze subsurface structures and associated hazards, to characterize the physical properties of the near surface, and to locate and understand the blind fault theorized to have caused the 2010 earthquake (the Léogâne fault). Surveys were conducted by a research group from the University of Houston in 2013 to address some of these goals, concentrated mainly on the Léogâne fan (Figure 1c) and Lake Enriquillo (Figure 1d). For the Léogâne surveys, multiple 2D seismic lines were deployed with approximately N-S orientation. We performed both P-wave and S-wave refraction analyses and time-migrated the P-wave data. The prominent change in both P-wave and S-wave velocities is interpreted as the effect of faulting. The CMP stacked section shows a multiple-discontinuity profile whose location coincides with the anomalies observed in the P-wave and S-wave refraction velocity profiles. Extracted reflection coefficients also support a reflective structure at these offsets. We interpret the anomalous structure as a north-dipping thrust fault, with a dip estimated at around 60°. Near-surface reflection seismic analysis provided deeper information indicating multiple layers with varying velocities, intersected by a number of faults. Gravity surveys were conducted along the main seismic line over the Léogâne fan, with additional surveys conducted from Jacmel to Léogâne and around the Port au Prince area. The estimated free-air gravity profile suggests that the

  18. Rolling Bearing Fault Detection Based on the Teager Energy Operator and Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Hongmei Liu

    2013-01-01

    Full Text Available This paper presents an approach to bearing fault diagnosis based on the Teager energy operator (TEO) and an Elman neural network. The TEO can estimate the total mechanical energy required to generate a signal, giving good time resolution and self-adaptability to transient signals, which is an advantage for detecting signal impact characteristics. To detect the impact characteristics of bearing-fault vibration signals, we used the TEO to extract the cyclical impacts caused by bearing failure and applied the wavelet packet to reduce the noise of the Teager energy signal. This approach also enabled the extraction of bearing fault feature frequencies, which were identified using the fast Fourier transform of the Teager energy. The feature frequencies of the inner and outer race faults, as well as the ratio of resonance frequency band energy to total energy in the Teager spectrum, were extracted as feature vectors. To avoid a frequency-leakage error, the weighted Teager spectrum around the fault frequency was also extracted as a feature vector. These vectors were then used to train the Elman neural network and improve the robustness of the diagnostic algorithm. Experimental results indicate that the proposed approach effectively detects bearing faults under variable conditions.
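
    The discrete Teager energy operator itself is a three-point formula, psi[n] = x[n]^2 - x[n-1]*x[n+1]. The sketch below computes it along with a simple spectrum of the Teager energy; the wavelet-packet denoising and Elman-network stages of the paper are not reproduced.

    ```python
    # Discrete Teager energy operator and a plain FFT of the Teager energy;
    # only the operator itself follows the paper, the rest is a minimal sketch.
    import numpy as np

    def teager_energy(x):
        x = np.asarray(x, dtype=float)
        psi = np.empty_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        psi[0], psi[-1] = psi[1], psi[-2]          # simple edge handling
        return psi

    def teager_spectrum(x, fs):
        psi = teager_energy(x)
        spectrum = np.abs(np.fft.rfft(psi - psi.mean()))
        freqs = np.fft.rfftfreq(len(psi), d=1.0 / fs)
        return freqs, spectrum                      # peaks near bearing fault frequencies
    ```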

  19. Hybrid Support Vector Machines-Based Multi-fault Classification

    Institute of Scientific and Technical Information of China (English)

    GAO Guo-hua; ZHANG Yong-zhong; ZHU Yu; DUAN Guang-huang

    2007-01-01

    Support Vector Machines (SVM) are a general machine-learning tool based on the structural risk minimization principle, a characteristic that is very significant for fault diagnostics when the number of fault samples is limited. Considering that SVM theory is originally designed for two-class classification, a hybrid SVM scheme is proposed in this paper for multi-fault classification of rotating machinery. Two SVM strategies, 1-v-1 (one versus one) and 1-v-r (one versus rest), are adopted at different classification levels. At the parallel classification level, using the 1-v-1 strategy, the fault features extracted by various signal analysis methods are fed into multiple parallel SVMs and local classification results are obtained. At the serial classification level, these local results are fused by one serial SVM based on the 1-v-r strategy. The hybrid SVM scheme introduced in this paper not only generalizes the performance of single binary SVMs but also improves the precision and reliability of the fault classification results. The test results show the availability and suitability of this new method.
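
    The two decompositions named above, 1-v-1 and 1-v-r, are available as generic wrappers in scikit-learn, which the sketch below applies to synthetic data as stand-ins; the paper's two-level fusion of parallel 1-v-1 machines by a serial 1-v-r machine is not reproduced.

    ```python
    # One-versus-one and one-versus-rest SVM decompositions on synthetic
    # four-class data; a generic stand-in, not the paper's hybrid fusion scheme.
    from sklearn.datasets import make_classification
    from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X[:200], y[:200])   # 1-v-1 level
    ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X[:200], y[:200])  # 1-v-r level
    print(ovo.score(X[200:], y[200:]), ovr.score(X[200:], y[200:]))
    ```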

  20. Towards Automation 2.0: A Neurocognitive Model for Environment Recognition, Decision-Making, and Action Execution

    OpenAIRE

    Zucker Gerhard; Dietrich Dietmar; Velik Rosemarie

    2011-01-01

    The ongoing penetration of building automation by information technology is by far not saturated. Today's systems need not only be reliable and fault tolerant, they also have to regard energy efficiency and flexibility in the overall consumption. Meeting the quality and comfort goals in building automation while at the same time optimizing towards energy, carbon footprint and cost-efficiency requires systems that are able to handle large amounts of information and negotiate system behaviour ...

  1. A Multi-level Approach for Complex Fault Isolation Based on Structured Residuals

    Institute of Scientific and Technical Information of China (English)

    叶鲁彬; 石向荣; 梁军

    2011-01-01

    In industrial processes there exist faults that have a complex effect on process variables; complex and simple faults are defined here according to the dimensions of their effects. Conventional approaches based on structured residuals cannot isolate complex faults. This paper presents a multi-level strategy for complex fault isolation. An extraction procedure is employed to reduce the complex faults to simple ones and assign them to several levels. On each level, faults are isolated through their different responses in the structured residuals: each residual is designed to be insensitive to one fault but sensitive to the others. Faults on different levels are shown to have different residual responses and therefore cannot be confused. An entire incidence matrix containing the residual response characteristics of all faults is obtained, based on which the faults can be isolated. The proposed method is applied to the Tennessee Eastman process example, and its effectiveness and advantages are demonstrated.
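
    The isolation logic behind an incidence matrix of structured residuals can be illustrated with a toy example: each residual is made insensitive to exactly one fault, and the observed pattern of fired residuals is matched against the matrix columns. The matrix, threshold and residual values below are invented for illustration.

    ```python
    # Toy structured-residual isolation: match the fired-residual pattern against
    # the columns of an invented incidence matrix (1 = residual responds to fault).
    import numpy as np

    #                  fault:  F1  F2  F3
    INCIDENCE = np.array([[0,  1,  1],     # residual r1 is insensitive to F1
                          [1,  0,  1],     # residual r2 is insensitive to F2
                          [1,  1,  0]])    # residual r3 is insensitive to F3

    def isolate(residuals, threshold=1.0):
        fired = (np.abs(residuals) > threshold).astype(int)
        return [f for f in range(INCIDENCE.shape[1])
                if np.array_equal(fired, INCIDENCE[:, f])]

    print(isolate(np.array([0.2, 3.1, 2.7])))   # pattern (0, 1, 1) isolates F1
    ```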

  2. Protokoller til Home Automation

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the home's status can be viewed from a computer or perhaps even from a mobile phone. While the applications mentioned are classic home automation, additional functionality has emerged...

  3. ELECTROPNEUMATIC AUTOMATION EDUCATIONAL LABORATORY

    OpenAIRE

    Dolgorukov, S. O.; National Aviation University; Roman, B. V.; National Aviation University

    2013-01-01

    The article reflects the current situation in education regarding the difficulties of learning mechatronics. A complex of laboratory test benches on electropneumatic automation is considered as a tool for advancing through this technical discipline. A course of laboratory work, developed to meet the requirement for an efficient and reliable way of acquiring practical skills, is regarded as the simplest way for students to learn the basics of mechatronics.

  4. Building Automation Systems.

    Science.gov (United States)

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  5. Test Construction: Automated

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2014-01-01

    Optimal test construction deals with automated assembly of tests for educational and psychological measurement. Items are selected from an item bank to meet a predefined set of test specifications. Several models for optimal test construction are presented, and two algorithms for optimal test assemb

  6. Test Construction: Automated

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2016-01-01

    Optimal test construction deals with automated assembly of tests for educational and psychological measurement. Items are selected from an item bank to meet a predefined set of test specifications. Several models for optimal test construction are presented, and two algorithms for optimal test assemb

  7. Automated Web Applications Testing

    Directory of Open Access Journals (Sweden)

    Alexandru Dan CĂPRIŢĂ

    2009-01-01

    Full Text Available Unit tests are a vital part of several software development practices and processes such as Test-First Programming, Extreme Programming and Test-Driven Development. This article briefly presents software quality and testing concepts as well as an introduction to an automated unit testing framework for PHP web based applications.

  8. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  9. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2015-01-01

    Full Text Available Myths about the automation of software testing are a recurring topic of discussion in the software validation industry. The first thought of a knowledgeable reader might be: why this old topic again, and what is new to discuss? Yet everyone agrees that test automation today is not what it was ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications now involves complex architectures and hybrid frameworks that facilitate the testing of applications developed with various platforms and technologies. Automation has undoubtedly advanced, but so have the myths associated with it, and the change in people's perspective and knowledge of automation has altered the terrain. This article reflects the author's points of view and experience regarding the transformation of the original myths into new versions and how they are derived, and also provides his thoughts on the new generation of myths.

  10. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  11. Automating spectral measurements

    Science.gov (United States)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.

  12. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  13. Statistical Learning in Automated Troubleshooting: Application to LTE Interference Mitigation

    CERN Document Server

    Tiwana, Moazzam Islam; Altman, Zwi

    2010-01-01

    This paper presents a method for automated healing as part of off-line automated troubleshooting. The method combines statistical learning with constraint optimization. The automated healing aims at locally optimizing radio resource management (RRM) or system parameters of cells with poor performance in an iterative manner. The statistical learning processes the data using Logistic Regression (LR) to extract closed form (functional) relations between Key Performance Indicators (KPIs) and Radio Resource Management (RRM) parameters. These functional relations are then processed by an optimization engine which proposes new parameter values. The advantage of the proposed formulation is the small number of iterations required by the automated healing method to converge, making it suitable for off-line implementation. The proposed method is applied to heal an Inter-Cell Interference Coordination (ICIC) process in a 3G Long Term Evolution (LTE) network which is based on soft-frequency reuse scheme. Numerical simulat...
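
    The statistical-learning step can be sketched as fitting a logistic model that relates one RRM parameter to the probability of a KPI violation and then reading a better parameter value off the fitted curve. The data, parameter name and acceptance threshold below are synthetic assumptions, not values from the paper.

    ```python
    # Logistic regression linking a synthetic RRM parameter to a KPI-violation
    # probability, then choosing the largest value that keeps the predicted
    # violation probability under 10%. Entirely synthetic illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    power_offset_db = rng.uniform(-6, 6, 400).reshape(-1, 1)      # candidate parameter
    p_bad = 1 / (1 + np.exp(-(power_offset_db.ravel() - 1.5)))    # assumed true relation
    kpi_violated = rng.random(400) < p_bad                        # True = poor KPI observed

    model = LogisticRegression().fit(power_offset_db, kpi_violated)

    grid = np.linspace(-6, 6, 121).reshape(-1, 1)
    acceptable = grid[model.predict_proba(grid)[:, 1] < 0.10]
    print("suggested offset:", acceptable.max() if acceptable.size else None)
    ```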

  14. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model of Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. as captured by the adversary-based faulty memory RAM of Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where

  15. The fault-tree compiler

    Science.gov (United States)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
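
    For independent basic events, the gate types listed above can be evaluated with a small recursive routine. The sketch below is a generic illustration, not the FORTRAN/Pascal tool described in the paper; the example tree and probabilities are invented.

    ```python
    # Generic recursive fault-tree evaluator for independent basic events,
    # covering AND, OR, XOR (two inputs), INVERT and M-of-N gates.
    from itertools import combinations

    def prob(node, basic):
        kind = node[0]
        if kind == "BASIC":
            return basic[node[1]]
        if kind == "INVERT":
            return 1.0 - prob(node[1], basic)
        if kind == "MOFN":                      # node = ("MOFN", m, [children])
            m, children = node[1], node[2]
            p = [prob(c, basic) for c in children]
            total = 0.0
            for k in range(m, len(p) + 1):      # at least m children fail
                for idx in combinations(range(len(p)), k):
                    term = 1.0
                    for i, pi in enumerate(p):
                        term *= pi if i in idx else (1.0 - pi)
                    total += term
            return total
        p = [prob(c, basic) for c in node[1]]
        if kind == "AND":
            out = 1.0
            for q in p:
                out *= q
            return out
        if kind == "OR":
            out = 1.0
            for q in p:
                out *= 1.0 - q
            return 1.0 - out
        if kind == "XOR":                       # exactly one of two inputs
            a, b = p
            return a * (1 - b) + b * (1 - a)

    tree = ("OR", [("AND", [("BASIC", "pump"), ("BASIC", "valve")]), ("BASIC", "sensor")])
    print(prob(tree, {"pump": 1e-3, "valve": 2e-3, "sensor": 1e-4}))
    ```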

  16. Fault Testing for Reversible Circuits

    CERN Document Server

    Patel, K N; Markov, I L; Patel, Ketan N.; Hayes, John P.; Markov, Igor L.

    2004-01-01

    Applications of reversible circuits can be found in the fields of low-power computation, cryptography, communications, digital signal processing, and the emerging field of quantum computation. Furthermore, prototype circuits for low-power applications are already being fabricated in CMOS. Regardless of the eventual technology adopted, testing is sure to be an important component in any robust implementation. We consider the test set generation problem. Reversibility affects the testing problem in fundamental ways, making it significantly simpler than for the irreversible case. For example, we show that any test set that detects all single stuck-at faults in a reversible circuit also detects all multiple stuck-at faults. We present efficient test set constructions for the standard stuck-at fault model as well as the usually intractable cell-fault model. We also give a practical test set generation algorithm, based on an integer linear programming formulation, that yields test sets approximately half the size o...

  17. On-line dynamic extraction and automated determination of readily bioavailable hexavalent chromium in solid substrates using micro-sequential injection bead-injection lab-on-valve hyphenated with electrothermal atomic absorption spectrometry

    DEFF Research Database (Denmark)

    Long, Xiangbao; Miró, Manuel; Hansen, Elo Harald

    2006-01-01

    A novel and miniaturized micro-sequential injection bead injection lab-on-valve (μSI-BI-LOV) fractionation system was developed for in-line microcolumn soil extraction under simulated environmental scenarios and accurate monitoring of the content of easily mobilisable hexavalent chromium in soil ...

  18. A Method to Estimate Friction Coefficient from Orientation Distribution of Meso-scale Faults: Applications to Faults in Forearc Sediment and Underplated Tectonic Mélange

    Science.gov (United States)

    Sato, K.

    2015-12-01

    Friction coefficients along faults control the brittle strength of the earth's upper crust, yet they are difficult to estimate, especially for ancient geological faults. Several previous studies tried to determine the friction coefficient of meso-scale faults from their orientation distribution as follows. Fault-slip analysis through stress tensor inversion gives the principal stress axes and a stress ratio, which allow a normalized Mohr's circle to be drawn. Assuming that a fault slips when the ratio of shear stress to normal stress on it (the slip tendency) exceeds the friction coefficient, one can find a linear boundary of the distribution of points corresponding to faults on the Mohr diagram; the slope of this boundary (the friction envelope) provides the friction coefficient. The difficulty with this method lies in recognizing the linear boundary of the distribution graphically and manually. This study automated the determination of the friction coefficient by considering fluctuations of fluid pressure and differential stress. These unknown factors are expected to produce differences in the density of the points representing faults on the Mohr diagram; since this density is controlled by the friction coefficient, the friction coefficient can be optimized so as to explain the observed density distribution. The method was applied to two examples of natural meso-scale faults. The first example is from the Pleistocene Kazusa Group, central Japan, which filled a forearc basin of the Sagami Trough. Stress inversion analysis showed WNW-ENE trending tensional stress with a low stress ratio, and the friction coefficient was determined to be around 0.66, a typical value for sandstone. The second example is from an underplated tectonic mélange in the Cretaceous to Paleogene Shimanto accretionary complex in southwest Japan along the Nankai Trough. The stress condition was determined to be an axial compression perpendicular to the foliation of the shale matrix. The friction
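
    The graphical step can be sketched numerically: each fault plane is placed on the normalized Mohr diagram from its direction cosines in the principal-stress frame, and, if every fault in the data set is assumed to have been reactivated, the lowest shear-to-normal-stress ratio approximates the slope of the friction envelope. The code below is only this crude proxy; the paper's density-based optimization over fluid-pressure and differential-stress fluctuations is not reproduced, and the principal stresses and fault normals are synthetic.

    ```python
    # Crude proxy for reading the friction envelope off a normalized Mohr diagram.
    # Principal stresses and fault-plane normals are synthetic; a real data set
    # would contain only faults that actually slipped.
    import numpy as np

    def mohr_point(normal, sigma=(1.0, 0.6, 0.2)):
        """Normal and shear traction on a plane whose unit normal is given in the
        principal-stress coordinate frame (sigma1 >= sigma2 >= sigma3)."""
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        t = np.asarray(sigma) * n                  # traction vector in principal axes
        sigma_n = float(n @ t)
        tau = float(np.sqrt(t @ t - sigma_n ** 2))
        return sigma_n, tau

    def friction_estimate(normals, sigma=(1.0, 0.6, 0.2)):
        ratios = [tau / sn for sn, tau in (mohr_point(n, sigma) for n in normals)]
        return min(ratios)                         # lower-envelope slope ~ friction coefficient

    reactivated = [(0.8, 0.1, 0.59), (0.6, 0.3, 0.74), (0.7, 0.5, 0.51)]  # hypothetical planes
    print(friction_estimate(reactivated))          # roughly 0.5 for these made-up planes
    ```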

  19. Extensible automated dispersive liquid–liquid microextraction

    Energy Technology Data Exchange (ETDEWEB)

    Li, Songqing; Hu, Lu; Chen, Ketao; Gao, Haixiang, E-mail: hxgao@cau.edu.cn

    2015-05-04

    Highlights: • An extensible automated dispersive liquid–liquid microextraction was developed. • A fully automatic SPE workstation with a modified operation program was used. • Ionic liquid-based in situ DLLME was used as the model method. • SPE columns packed with nonwoven polypropylene fiber were used for phase separation. • The approach was applied to the determination of benzoylurea insecticides in water. - Abstract: In this study, a convenient and extensible automated ionic liquid-based in situ dispersive liquid–liquid microextraction (automated IL-based in situ DLLME) was developed. 1-Octyl-3-methylimidazolium bis[(trifluoromethane)sulfonyl]imide ([C8MIM]NTf2) is formed through the reaction between [C8MIM]Cl and lithium bis[(trifluoromethane)sulfonyl]imide (LiNTf2) to extract the analytes. Using a fully automatic SPE workstation, special SPE columns packed with nonwoven polypropylene (NWPP) fiber, and a modified operation program, the procedures of the IL-based in situ DLLME, including the collection of a water sample, injection of an ion exchange solvent, phase separation of the emulsified solution, elution of the retained extraction phase, and collection of the eluent into vials, can be performed automatically. The developed approach, coupled with high-performance liquid chromatography–diode array detection (HPLC–DAD), was successfully applied to the detection and concentration determination of benzoylurea (BU) insecticides in water samples. Parameters affecting the extraction performance were investigated and optimized. Under the optimized conditions, the proposed method achieved extraction recoveries of 80% to 89% for water samples. The limits of detection (LODs) of the method were in the range of 0.16–0.45 ng mL⁻¹. The intra-column and inter-column relative standard deviations (RSDs) were <8.6%. Good linearity (r > 0.9986) was obtained over the calibration range from 2 to 500 ng mL⁻¹. The proposed

  20. Automated carotid artery intima layer regional segmentation

    Science.gov (United States)

    Meiburger, Kristen M.; Molinari, Filippo; Rajendra Acharya, U.; Saba, Luca; Rodrigues, Paulo; Liboni, William; Nicolaides, Andrew; Suri, Jasjit S.

    2011-07-01

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid ultrasound artery based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with the manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed the IMT bias of -0.035 ± 0.186 mm while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed uniform behavior over the entire database. CAILRS outperformed all the four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%), but higher than that of the other automated techniques.
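
    As a toy illustration of the mean-shift idea, the sketch below clusters the intensity profile of a single synthetic image column so that the transition from the darker lumen to the brighter intima-media complex can be located. This is not the CAILRS algorithm; the profile, bandwidth and interface rule are invented for illustration.

    ```python
    # Mean-shift clustering of a synthetic far-wall intensity column; a toy
    # stand-in for the classification step, not the CAILRS system itself.
    import numpy as np
    from sklearn.cluster import MeanShift

    rng = np.random.default_rng(3)
    column = np.concatenate([rng.normal(20, 3, 60),     # lumen (dark)
                             rng.normal(120, 8, 25),    # intima-media complex (bright)
                             rng.normal(70, 6, 25)])    # deeper tissue

    labels = MeanShift(bandwidth=25).fit_predict(column.reshape(-1, 1))
    interface = int(np.argmax(labels != labels[0]))     # first label change down the column
    print("approximate lumen-intima interface at pixel", interface)
    ```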