WorldWideScience

Sample records for surface identification algorithms

  1. Maritime over the Horizon Sensor Integration: High Frequency Surface-Wave-Radar and Automatic Identification System Data Integration Algorithm.

    Science.gov (United States)

    Nikolic, Dejan; Stojkovic, Nikola; Lekic, Nikola

    2018-04-09

    Obtaining the complete operational picture of the maritime situation in the Exclusive Economic Zone (EEZ), which lies over the horizon (OTH), requires the integration of data obtained from various sensors. These sensors include: high frequency surface-wave-radar (HFSWR), the satellite automatic identification system (SAIS) and the land automatic identification system (LAIS). The algorithm proposed in this paper utilizes radar tracks obtained from a network of HFSWRs, already processed by a multi-target tracking algorithm, and associates SAIS and LAIS data with the corresponding radar tracks, thus forming integrated data pairs. During the integration process, all HFSWR targets in the vicinity of an AIS report are evaluated, and the one with the highest matching factor is used for data association. Conversely, if multiple AIS reports lie in the vicinity of a single HFSWR track, the algorithm still forms only one data pair, consisting of the AIS report and HFSWR track with the highest mutual matching factor. During design and testing, special attention was given to the latency of AIS data, which can be very high in the EEZs of developing countries. The algorithm was designed, implemented and tested in a real working environment. The testing environment is located in the Gulf of Guinea and includes a network of two HFSWRs, several coastal sites with LAIS receivers, and SAIS data supplied by a commercial provider.
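
The best-match association step described above can be sketched as follows; the matching factor here is a hypothetical distance-based score with an illustrative 10 km gate, not the authors' exact formula, and the field names (`x`, `y`, `id`) are invented:

```python
import math

def matching_factor(track, ais, gate_km=10.0):
    """Hypothetical matching factor: 1 at zero separation, 0 at the gate edge."""
    d = math.dist((track["x"], track["y"]), (ais["x"], ais["y"]))
    return max(0.0, 1.0 - d / gate_km)

def associate(tracks, ais_reports):
    """Form at most one (track, AIS) pair per track, keeping the highest factor."""
    best = {}  # track id -> (factor, ais id)
    for a in ais_reports:
        for t in tracks:
            f = matching_factor(t, a)
            if f > 0 and f > best.get(t["id"], (0.0, None))[0]:
                best[t["id"]] = (f, a["id"])
    return {tid: aid for tid, (f, aid) in best.items()}

# two AIS reports near one radar track: only the closer one is paired
pairs = associate([{"id": 1, "x": 0.0, "y": 0.0}],
                  [{"id": "A", "x": 1.0, "y": 0.0},
                   {"id": "B", "x": 5.0, "y": 0.0}])
```

Even with both reports inside the gate, the track keeps only its highest-scoring partner, mirroring the one-pair-per-track rule in the abstract.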

  2. Star identification methods, techniques and algorithms

    CERN Document Server

    Zhang, Guangjun

    2017-01-01

    This book summarizes the research advances in star identification that the author’s team has made over the past 10 years, systematically introducing the principles of star identification, general methods, key techniques and practicable algorithms. It also offers examples of hardware implementation and performance evaluation for the star identification algorithms. Star identification is the key step for celestial navigation and greatly improves the performance of star sensors, and as such the book includes the fundamentals of star sensors and celestial navigation, the processing of the star catalog and star images, star identification using modified triangle algorithms, star identification using star patterns and using neural networks, rapid star tracking using star matching between adjacent frames, as well as hardware implementation and performance tests for star identification. It is not only valuable as a reference book for star sensor designers and researchers working in pattern recognition and othe...

  3. An Algorithm for Successive Identification of Reflections

    DEFF Research Database (Denmark)

    Hansen, Kim Vejlby; Larsen, Jan

    1994-01-01

    A new algorithm for successive identification of seismic reflections is proposed. Generally, the algorithm can be viewed as a curve matching method for images with specific structure. However, in the paper, the algorithm works on seismic signals assembled to constitute an image in which the inves...... on a synthetic CMP gather, whereas the other is based on a real recorded CMP gather. Initially, the algorithm requires an estimate of the wavelet, which can be obtained by any wavelet estimation method...

  4. The TROPOMI surface UV algorithm

    Science.gov (United States)

    Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna

    2018-02-01

    The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these is available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.
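
The count of 36 parameters follows from a simple product structure (6 quantities × 3 time bases × 2 sky conditions), which can be checked by enumeration; the short names below are illustrative labels, not official product field names:

```python
from itertools import product

quantities = ["UV305", "UV310", "UV324", "UV380", "erythemal", "vitaminD"]
time_bases = ["daily", "overpass", "noon"]
sky_conditions = ["cloudy", "clearsky"]

# every combination of quantity, time basis, and sky condition
uv_parameters = ["_".join(combo)
                 for combo in product(quantities, time_bases, sky_conditions)]
print(len(uv_parameters))  # 6 * 3 * 2 = 36
```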

  5. The TROPOMI surface UV algorithm

    Directory of Open Access Journals (Sweden)

    A. V. Lindfors

    2018-02-01

    Full Text Available The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these is available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.

  6. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    2012-11-15

    Nov 15, 2012 ... from electrons, muons and hadronic jets. These algorithms enable extended reach for the searches for MSSM Higgs, Z and other exotic particles. Keywords. CMS; tau; LHC; ECAL; HCAL. PACS No. 13.35.Dx. 1. Introduction. Tau is the heaviest known lepton (Mτ = 1.78 GeV) which decays into lighter leptons.

  7. Dynamic hierarchical algorithm for accelerated microfossil identification

    Science.gov (United States)

    Wong, Cindy M.; Joseph, Dileepan

    2015-02-01

    Marine microfossils provide a useful record of the Earth's resources and prehistory via biostratigraphy. To study hydrocarbon reservoirs and prehistoric climate, geoscientists visually identify the species of microfossils found in core samples. Because microfossil identification is labour-intensive, automation has been investigated since the 1980s. With the initial rule-based systems, users still had to examine each specimen under a microscope. While artificial neural network systems showed more promise for reducing expert labour, they also did not displace manual identification, for a variety of reasons which we aim to overcome. In our human-based computation approach, the most difficult step, namely taxon identification, is outsourced via a frontend website to human volunteers. A backend algorithm, called dynamic hierarchical identification, uses unsupervised, supervised, and dynamic learning to accelerate microfossil identification. Unsupervised learning clusters specimens so that volunteers need not identify every specimen during supervised learning. Dynamic learning means interim computation outputs prioritize subsequent human inputs. Using a dataset of microfossils identified by an expert, we evaluated correct and incorrect genus and species rates versus simulated time, where each specimen identification defines a moment. The proposed algorithm accelerated microfossil identification effectively, especially compared to benchmark results obtained using a k-nearest neighbour method.
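
The k-nearest-neighbour benchmark mentioned above can be sketched generically: each specimen is a feature vector, and a query is labelled by majority vote among its nearest labelled neighbours. The feature values and genus labels below are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labelled specimens (Euclidean distance)."""
    neighbours = sorted(train, key=lambda fv: math.dist(fv[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# toy labelled specimens: (feature vector, genus label)
train = [((0.0, 0.0), "genus-A"), ((0.1, 0.2), "genus-A"),
         ((1.0, 1.0), "genus-B"), ((0.9, 1.1), "genus-B")]
```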

  8. A Source Identification Algorithm for INTEGRAL

    Science.gov (United States)

    Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.

    2008-12-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the Transient Matrix. Finally, the performance of the network is assessed and discussed using the testing set and some illustrative source examples.

  9. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  10. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  11. ISINA: INTEGRAL Source Identification Network Algorithm

    Science.gov (United States)

    Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.

    2008-11-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally, the performance of the network is assessed and discussed using the testing set and some illustrative source examples. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain), Czech Republic and Poland, and the participation of Russia and the USA.

  12. Induction Motor Parameter Identification Using a Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Omar Avalos

    2016-04-01

    Full Text Available The efficient use of electrical energy is a topic that has attracted attention for its environmental consequences. On the other hand, induction motors represent the main component in most industries. They consume the highest energy percentages in industrial facilities. This energy consumption depends on the operation conditions of the induction motor imposed by its internal parameters. Since the internal parameters of an induction motor are not directly measurable, an identification process must be conducted to obtain them. In the identification process, the parameter estimation is transformed into a multidimensional optimization problem where the internal parameters of the induction motor are considered as decision variables. Under this approach, the complexity of the optimization problem tends to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Several algorithms based on evolutionary computation principles have been successfully applied to identify the optimal parameters of induction motors. However, most of them maintain an important limitation: they frequently obtain sub-optimal solutions as a result of an improper equilibrium between exploitation and exploration in their search strategies. This paper presents an algorithm for the optimal parameter identification of induction motors. To determine the parameters, the proposed method uses a recent evolutionary method called the gravitational search algorithm (GSA). Unlike most existing evolutionary algorithms, the GSA presents better performance in multimodal problems, avoiding critical flaws such as premature convergence to sub-optimal solutions. Numerical simulations have been conducted on several models to show the effectiveness of the proposed scheme.
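
A minimal sketch of the gravitational search step, assuming the standard GSA formulation (fitness-derived masses, a gravitational constant G that decays over iterations, randomized velocity updates); the sphere function below stands in for the induction-motor error surface that the paper minimizes, and all constants are illustrative:

```python
import random

def gsa_minimize(f, dim=2, agents=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal gravitational search: agents attract each other with
    fitness-derived masses; G decays so the swarm settles over time."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    V = [[0.0] * dim for _ in range(agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in X]
        fb, fw = min(fit), max(fit)
        if fb < best_f:
            best_f, best_x = fb, X[fit.index(fb)][:]
        # masses: better (lower) fitness -> larger normalized mass
        masses = [(fw - fi) / (fw - fb + 1e-12) for fi in fit]
        total = sum(masses) + 1e-12
        M = [m / total for m in masses]
        G = 100.0 * 0.98 ** t  # decaying gravitational constant
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                R = sum((X[j][d] - X[i][d]) ** 2 for d in range(dim)) ** 0.5
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / (R + 1e-12)
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f

best, best_cost = gsa_minimize(lambda p: sum(v * v for v in p))
```

For parameter identification, the objective would instead measure the mismatch between measured and simulated motor behaviour over the decision variables.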

  13. On flexible CAD of adaptive control and identification algorithms

    DEFF Research Database (Denmark)

    Christensen, Anders; Ravn, Ole

    1988-01-01

    SLLAB is a MATLAB-family software package for solving control and identification problems. This paper concerns the planning of a general-purpose subroutine structure for solving identification and adaptive control problems. A general-purpose identification algorithm is suggested, which allows a t...

  14. Parameters Identification of Photovoltaic Cells Based on Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Liao Hui

    2016-01-01

    Full Text Available For the complex nonlinear model of photovoltaic cells, traditional evolution strategies easily fall into local optima and require long identification times. A differential evolution algorithm is therefore proposed in this study to solve the parameter identification problem of the photovoltaic cell model, which is difficult to achieve with other identification algorithms. In this method, random data are selected as the initial generation; evolution to the next generation proceeds through the difference strategy of the algorithm, which achieves effective identification of the control parameters. It is shown that the method has good global optimization and fast convergence, and the simulation results show that differential evolution has high identification ability and is an effective method for identifying the parameters of photovoltaic cells, so that the identified parameters can be used widely elsewhere.
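
A minimal DE/rand/1/bin sketch of this identification scheme, under the usual formulation (scaled difference mutation, binomial crossover, greedy selection); the exponential toy model below stands in for the photovoltaic cell model, whose parameters would enter the objective in the same way, and all constants are illustrative:

```python
import math
import random

def de_fit(objective, bounds, pop=20, iters=150, F=0.7, CR=0.9, seed=3):
    """DE/rand/1/bin: mutate with a scaled difference vector, apply binomial
    crossover, then keep the trial only if it does not worsen the cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    cost = [objective(x) for x in P]
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == j_rand:
                    v = P[a][d] + F * (P[b][d] - P[c][d])
                else:
                    v = P[i][d]
                lo, hi = bounds[d]
                trial.append(min(hi, max(lo, v)))
            t_cost = objective(trial)
            if t_cost <= cost[i]:  # greedy selection
                P[i], cost[i] = trial, t_cost
    i_best = min(range(pop), key=lambda i: cost[i])
    return P[i_best], cost[i_best]

# toy stand-in for the cell model: y = p0 * exp(p1 * x), true parameters (2.0, -1.5)
data = [(k / 10, 2.0 * math.exp(-1.5 * k / 10)) for k in range(20)]
def objective(p):
    return sum((p[0] * math.exp(p[1] * x) - y) ** 2 for x, y in data)

params, err = de_fit(objective, bounds=[(0.0, 5.0), (-5.0, 0.0)])
```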

  15. System parameter identification information criteria and algorithms

    CERN Document Server

    Chen, Badong; Hu, Jinchun; Principe, Jose C

    2013-01-01

    Recently, criterion functions based on information theoretic measures (entropy, mutual information, information divergence) have attracted attention and become an emerging area of study in signal processing and system identification domain. This book presents a systematic framework for system identification and information processing, investigating system identification from an information theory point of view. The book is divided into six chapters, which cover the information needed to understand the theory and application of system parameter identification. The authors' research pr

  16. Time-Delay System Identification Using Genetic Algorithm

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Seested, Glen Thane

    2013-01-01

    Due to the unknown dead-time coefficient, time-delay system identification turns out to be a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, named First-Order-Plus-Dead-Time (FOPDT), by using the Genetic Algorithm (GA) technique. The qual...

  17. Identification of bilinear systems using differential evolution algorithm

    Indian Academy of Sciences (India)

    Abstract. In this work, a novel identification method based on the differential evolution algorithm has been applied to bilinear systems and its performance has been compared to that of a genetic algorithm. The Box–Jenkins system and different types of bilinear systems have been identified using differential evolution and genetic ...

  18. Parameter identification for structural dynamics based on interval analysis algorithm

    Science.gov (United States)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed memory least square method and the matrix inverse lemma, a set-membership identification technique is applied to obtain the best estimation of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical descriptions of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the center estimations of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated by three identification criteria, respectively.
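
The set-membership idea can be illustrated on a scalar model y = a·x + e with bounded noise |e| ≤ ε: every measurement implies a feasible interval for a, and intersecting them yields both a centre estimate and guaranteed error bounds. This is a one-parameter toy, not the paper's ARMA formulation:

```python
def set_membership_bounds(samples, eps):
    """Intersect the feasible intervals [(y - eps)/x, (y + eps)/x] implied by
    measurements y = a*x + e with |e| <= eps and x > 0."""
    lo, hi = float("-inf"), float("inf")
    for x, y in samples:
        lo = max(lo, (y - eps) / x)
        hi = min(hi, (y + eps) / x)
    return lo, hi  # guaranteed bounds; (lo + hi) / 2 is the centre estimate

# toy data generated with a = 2.0 and noise bounded by 0.1
samples = [(1.0, 2.05), (2.0, 3.95), (4.0, 8.04)]
lo, hi = set_membership_bounds(samples, eps=0.1)
```

Each new measurement can only shrink the interval, which mirrors the recursive, guaranteed-bounds character of the identification described above.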

  19. Robust Kernel Clustering Algorithm for Nonlinear System Identification

    Directory of Open Access Journals (Sweden)

    Mohamed Bouzbida

    2017-01-01

    Full Text Available In engineering fields, it is necessary to know the model of a real nonlinear system to ensure its control and supervision; in this context, fuzzy modeling, and especially the Takagi-Sugeno fuzzy model, has drawn the attention of several researchers in recent decades owing to its potential to approximate nonlinear behavior. To identify the parameters of a Takagi-Sugeno fuzzy model, several clustering algorithms have been developed, such as the Fuzzy C-Means (FCM) algorithm, the Possibilistic C-Means (PCM) algorithm, and the Possibilistic Fuzzy C-Means (PFCM) algorithm. This paper presents a new clustering algorithm for Takagi-Sugeno fuzzy model identification. Our proposed algorithm, called the Robust Kernel Possibilistic Fuzzy C-Means (RKPFCM) algorithm, is an extension of the PFCM algorithm based on the kernel method, where the Euclidean distance is replaced by the distance induced by a robust hyper-tangent kernel function. The proposed algorithm can solve the nonlinearly separable problems encountered by the FCM, PCM, and PFCM algorithms. An optimization method combining Particle Swarm Optimization (PSO) with the RKPFCM algorithm is then presented to overcome convergence to a local minimum of the objective function. Finally, validation results on examples are given to demonstrate the effectiveness, practicality, and robustness of our proposed algorithm in a stochastic environment.
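
A sketch of the kernel-induced distance with a hyper-tangent kernel K(x, v) = 1 − tanh(‖x − v‖²/σ²), and the resulting FCM-style membership update. This is a generic formulation under that kernel choice; the paper's full RKPFCM additionally carries possibilistic terms and the PSO refinement:

```python
import math

def hyper_tangent_kernel(x, v, sigma=1.0):
    """K(x, v) = 1 - tanh(||x - v||^2 / sigma^2): bounded, hence robust to outliers."""
    d2 = sum((xi - vi) ** 2 for xi, vi in zip(x, v))
    return 1.0 - math.tanh(d2 / sigma ** 2)

def kernel_distance2(x, v, sigma=1.0):
    """Squared feature-space distance: K(x,x) + K(v,v) - 2 K(x,v) = 2 (1 - K(x,v))."""
    return 2.0 * (1.0 - hyper_tangent_kernel(x, v, sigma))

def memberships(x, centres, m=2.0, sigma=1.0):
    """FCM-style fuzzy memberships of x in each cluster, via the kernel distance."""
    inv = [(1.0 / (kernel_distance2(x, c, sigma) + 1e-12)) ** (1.0 / (m - 1.0))
           for c in centres]
    total = sum(inv)
    return [v / total for v in inv]

# a point near the first of two cluster centres
u = memberships((0.1, 0.0), [(0.0, 0.0), (3.0, 3.0)])
```

Because the kernel distance saturates at 2 for far-away points, distant outliers cannot dominate the objective the way they do under the plain Euclidean distance.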

  20. Genetic Algorithm-Based Identification of Fractional-Order Systems

    Directory of Open Access Journals (Sweden)

    Shengxi Zhou

    2013-05-01

    Full Text Available Fractional calculus has become an increasingly popular tool for modeling the complex behaviors of physical systems from diverse domains. One of the key issues in applying fractional calculus to engineering problems is to achieve the parameter identification of fractional-order systems. A time-domain identification algorithm based on a genetic algorithm (GA) is proposed in this paper. The multi-variable parameter identification is converted into a parameter optimization by applying the GA to the identification of fractional-order systems. To evaluate the identification accuracy and stability, the time-domain output error considering the condition variation is designed as the fitness function for parameter optimization. The identification process is established under various noise levels and excitation levels. The effects of external excitation and the noise level on the identification accuracy are analyzed in detail. The simulation results show that the proposed method can identify the parameters of both commensurate and non-commensurate fractional-order systems from data with noise. It is also observed that the excitation signal is an important factor influencing the identification accuracy of fractional-order systems.

  1. A clinical algorithm for wound biofilm identification.

    Science.gov (United States)

    Metcalf, D G; Bowler, P G; Hurlow, J

    2016-03-01

    Recognition of the existence of biofilm in chronic wounds is increasing among wound care practitioners, and a growing body of evidence indicates that biofilm contributes significantly to wound recalcitrance. While clinical guidelines regarding the involvement of biofilm in human bacterial infections have been proposed, there remains uncertainty and a lack of guidance regarding biofilm presence in wounds. The intention of this report is to collate knowledge and evidence of the visual and indirect clinical indicators of wound biofilm, and to propose an algorithm designed to facilitate clinical recognition of biofilm and guide subsequent wound management practices.

  2. Merged Search Algorithms for Radio Frequency Identification Anticollision

    Directory of Open Access Journals (Sweden)

    Bih-Yaw Shih

    2012-01-01

    The arbitration algorithm for an RFID system is used to arbitrate among all the tags, avoiding collisions when multiple tags are present in the interrogation field of a transponder. A splitting algorithm called Binary Search Tree (BST) is well known for multi-tag arbitration. In the current study, a splitting-based scheme called Merged Search Tree is proposed to capture identification codes correctly for anticollision. Performance of the proposed algorithm is compared with the original BST according to the time and power consumed during the arbitration process. The results show that the proposed model can reduce search time and power consumption, achieving better arbitration performance.
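
The baseline binary-splitting arbitration against which schemes like Merged Search Tree are compared can be simulated directly: whenever a query prefix matches more than one tag (a collision), the reader narrows the prefix by one bit and retries both branches. Tag IDs are 8-bit here purely for illustration:

```python
def binary_tree_anticollision(tag_ids, bits=8):
    """Simulate binary-splitting arbitration: on a collision the reader
    recurses into the '0' and '1' extensions of the query prefix."""
    identified, queries = [], 0

    def query(prefix):
        nonlocal queries
        queries += 1
        matches = [t for t in tag_ids
                   if format(t, f"0{bits}b").startswith(prefix)]
        if len(matches) == 1:          # exactly one responder: identified
            identified.append(matches[0])
        elif len(matches) > 1:         # collision: split on the next bit
            query(prefix + "0")
            query(prefix + "1")

    query("")
    return identified, queries

tags = [0b00010110, 0b00010111, 0b11000001]
found, n_queries = binary_tree_anticollision(tags)
```

The query count grows with how deep the tree must split to separate similar IDs, which is exactly the time/power cost that improved splitting schemes aim to reduce.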

  3. A software tool for graphically assembling damage identification algorithms

    Science.gov (United States)

    Allen, David W.; Clough, Joshua A.; Sohn, Hoon; Farrar, Charles R.

    2003-08-01

    At Los Alamos National Laboratory (LANL), various algorithms for structural health monitoring problems have been explored in the last 5 to 6 years. The original DIAMOND (Damage Identification And MOdal aNalysis of Data) software was developed as a package of modal analysis tools with some frequency domain damage identification algorithms included. Since the conception of DIAMOND, the Structural Health Monitoring (SHM) paradigm at LANL has been cast in the framework of statistical pattern recognition, promoting data-driven damage detection approaches. To reflect this shift and to allow user-friendly analyses of data, a new piece of software, DIAMOND II, is under development. The Graphical User Interface (GUI) of the DIAMOND II software is based on the idea of GLASS (Graphical Linking and Assembly of Syntax Structure) technology, which is currently being implemented at LANL. GLASS is a Java-based GUI that allows drag and drop construction of algorithms from various categories of existing functions. On the platform of the underlying GLASS technology, DIAMOND II is simply a module specifically targeting damage identification applications. Users can assemble various routines, building their own algorithms or benchmark testing different damage identification approaches without writing a single line of code.

  4. Time-Delay System Identification Using Genetic Algorithm

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Seested, Glen Thane

    2013-01-01

    problem through an identification approach using the real coded Genetic Algorithm (GA). The desired FOPDT/SOPDT model is directly identified based on the measured system's input and output data. In order to evaluate the quality and performance of this GA-based approach, the proposed method is compared...

  5. Proportionate Minimum Error Entropy Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-08-01

    Full Text Available Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS) algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE) criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE) algorithm for sparse system identification, which may achieve much better performance than the MSE based methods especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures the mean square convergence. Simulation results confirm the excellent performance of the new algorithm.
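
A sketch of the PNLMS baseline named above, in which each tap's adaptation gain is proportional to its magnitude so the few active taps of a sparse system converge quickly. The paper's PMEE replaces the squared-error term with an entropy-based one, whose kernel-density gradient is omitted here; all constants and the toy system are illustrative:

```python
import random

def pnlms_identify(x, d, taps=4, mu=0.5, rho=0.01, delta=0.01):
    """Proportionate NLMS: the per-tap gains g make large coefficients
    adapt with larger effective step sizes."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = [x[n - k] for k in range(taps)]                  # regressor
        e = d[n] - sum(wk * uk for wk, uk in zip(w, u))      # a-priori error
        w_max = max(delta, max(abs(wk) for wk in w))
        g = [max(rho * w_max, abs(wk)) for wk in w]          # proportionate gains
        norm = sum(gk * uk * uk for gk, uk in zip(g, u)) + 1e-8
        w = [wk + mu * gk * uk * e / norm
             for wk, gk, uk in zip(w, g, u)]
    return w

rng = random.Random(0)
h = [0.0, 0.8, 0.0, 0.0]                                     # sparse unknown system
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = pnlms_identify(x, d)
```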

  6. Automatic identification of otological drilling faults: an intelligent recognition algorithm.

    Science.gov (United States)

    Cao, Tianyang; Li, Xisheng; Gao, Zhiqiang; Feng, Guodong; Shen, Peng

    2010-06-01

    This article presents an intelligent recognition algorithm that can recognize milling states of the otological drill by fusing multi-sensor information. An otological drill was modified by the addition of sensors. The algorithm was designed according to features of the milling process and is composed of a characteristic curve, an adaptive filter and a rule base. The characteristic curve can weaken the impact of the unstable normal milling process and preserve the features of drilling faults. The adaptive filter is capable of suppressing interference in the characteristic curve by fusing multi-sensor information. The rule base can identify drilling faults from the filtered data. The experiments were repeated on fresh porcine scapulas, including normal milling and two drilling faults. The algorithm achieved high identification rates. This study shows that the intelligent recognition algorithm can identify drilling faults under interference conditions.

  7. Biofilms and Wounds: An Identification Algorithm and Potential Treatment Options

    Science.gov (United States)

    Percival, Steven L.; Vuotto, Claudia; Donelli, Gianfranco; Lipsky, Benjamin A.

    2015-01-01

    Significance: The presence of a “pathogenic” or “highly virulent” biofilm is a fundamental risk factor that prevents a chronic wound from healing and increases the risk of the wound becoming clinically infected. There is presently no unequivocal gold standard method available for clinicians to confirm the presence of biofilms in a wound. Thus, to help support clinician practice, we devised an algorithm intended to demonstrate evidence of the presence of a biofilm in a wound to assist with wound management. Recent Advances: A variety of histological and microscopic methods applied to tissue biopsies are currently the most informative techniques available for demonstrating the presence of generic (not classified as pathogenic or commensal) biofilms and the effect they are having in promoting inflammation and downregulating cellular functions. Critical Issues: Even as we rely on microscopic techniques to visualize biofilms, they are entities which are patchy and dispersed rather than confluent, particularly on biotic surfaces. Consequently, detection of biofilms by microscopic techniques alone can lead to frequent false-negative results. Furthermore, visual identification using the naked eye of a pathogenic biofilm on a macroscopic level on the wound will not be possible, unlike with biofilms on abiotic surfaces. Future Direction: Lacking specific biomarkers to demonstrate microscopic, nonconfluent, virulent biofilms in wounds, the present focus on biofilm research should be placed on changing clinical practice. This is best done by utilizing an anti-biofilm toolbox approach, rather than speculating on unscientific approaches to identifying biofilms, with or without staining, in wounds with the naked eye. The approach to controlling biofilm should include initial wound cleansing, periodic debridement, followed by the application of appropriate antimicrobial wound dressings. This approach appears to be effective in removing pathogenic biofilms. PMID:26155381

  8. Particle identification algorithms for the PANDA Endcap Disc DIRC

    Science.gov (United States)

    Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.

    2017-12-01

    The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, applied both in offline analysis and in online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations, covering basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.
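
    The likelihood-based PID described above can be illustrated with a toy model: each particle hypothesis is given a probability density for a reconstructed observable, and a track is assigned the hypothesis with the highest likelihood. The Gaussian means and widths below are hypothetical illustration values chosen to give a 3-standard-deviation separation, not PANDA detector parameters.

```python
import math

# Toy Gaussian models of a PID observable under the pion and kaon hypotheses.
# Means and widths are hypothetical, chosen for a 3 sigma separation.
HYPOTHESES = {
    "pion": (1.00, 0.05),   # (mean, sigma)
    "kaon": (0.85, 0.05),
}

def log_likelihood(x, mean, sigma):
    """Gaussian log-likelihood of the observable value x."""
    return -0.5 * ((x - mean) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def identify(x):
    """Assign the hypothesis with the highest log-likelihood."""
    return max(HYPOTHESES, key=lambda h: log_likelihood(x, *HYPOTHESES[h]))

def separation_power(h1, h2):
    """Separation in standard deviations between two Gaussian hypotheses."""
    (m1, s1), (m2, s2) = HYPOTHESES[h1], HYPOTHESES[h2]
    return abs(m1 - m2) / math.sqrt(0.5 * (s1 ** 2 + s2 ** 2))
```

    With these illustrative widths, the pion/kaon separation power evaluates to exactly 3, matching the design goal quoted above.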

  9. An intelligent identification algorithm for the monoclonal picking instrument

    Science.gov (United States)

    Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun

    2017-11-01

    Colony selection is traditionally performed manually, which is inefficient and highly subjective; it is therefore important to develop an automatic monoclonal-picking instrument. The critical stage of automatic monoclonal picking and intelligent optimal selection is the identification algorithm. An auto-screening algorithm based on a Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. From the basic morphological features of a colony, the system computes a series of morphological parameters step by step. A maximal-margin classifier is then established and, based on analysis of the colony growth trend, the monoclonal colonies are selected. The experimental results showed that the auto-screening algorithm could distinguish regular colonies from the rest, meeting the requirements for the various parameters.
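
    The maximal-margin classification step can be sketched with a hinge-loss (linear SVM-style) classifier trained by sub-gradient descent. The two morphological features, their values, and the labels below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative morphological features per colony: [area, circularity], with
# label +1 for a regular (pickable) colony and -1 for an irregular one.
rng = np.random.default_rng(0)
regular = rng.normal([1.0, 0.9], 0.05, size=(50, 2))
irregular = rng.normal([0.5, 0.4], 0.05, size=(50, 2))
X = np.vstack([regular, irregular])
y = np.hstack([np.ones(50), -np.ones(50)])

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Maximal-margin (hinge-loss) linear classifier via sub-gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1.0:      # inside the margin: push out
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                             # outside the margin: only shrink w
                w -= lr * lam * w
    return w, b

w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```

    On well-separated synthetic clusters like these, the learned hyperplane classifies essentially all colonies correctly.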

  10. Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meenakshi Panda

    2014-01-01

    A distributed fault identification algorithm is proposed to find both hard and soft faulty sensor nodes in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common Byzantine faults, such as stuck-at-zero, stuck-at-one, and random data. In the proposed approach, each sensor node gathers observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node detects the presence of a faulty sensor node, it compares its observed data with that of its neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensor data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency, and message complexity.
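
    The neighbour-comparison stage can be sketched as follows; the topology, readings, and threshold are illustrative assumptions, and the paper's final step of diffusing fault information among neighbours is omitted.

```python
# Sketch of the first, local stage: each node compares its own reading with
# the mean of its neighbours' readings; a large deviation marks a probable
# fault.  (The paper refines this status by diffusing fault information.)
def probable_fault_status(readings, neighbors, threshold):
    status = {}
    for node, value in readings.items():
        neigh_vals = [readings[m] for m in neighbors[node]]
        mean = sum(neigh_vals) / len(neigh_vals)
        status[node] = "faulty" if abs(value - mean) > threshold else "fault-free"
    return status

# Five nodes in a fully connected neighbourhood; node 2 is stuck at zero.
readings = {0: 25.1, 1: 25.3, 2: 0.0, 3: 24.9, 4: 25.2}
neighbors = {n: [m for m in readings if m != n] for n in readings}
status = probable_fault_status(readings, neighbors, threshold=10.0)
```
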

  11. Comparing Different Fault Identification Algorithms in Distributed Power System

    Science.gov (United States)

    Alkaabi, Salim

    A power system is a large, complex system that delivers electrical power from the generation units to the consumers. As demand for electrical power has increased, distributed generation has been introduced into the power system. Faults may occur in the power system at any time and in different locations, and they can cause severe damage, potentially leading to full failure of the power system. The use of distributed generation makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. Because faults may lead the system to full failure, this is an important area of research. In this thesis, different fault location identification algorithms have been tested and compared while varying the amount of power injected from distributed generators. The algorithms were tested on the IEEE 34-node test feeder using MATLAB, and the results were compared to determine when these algorithms might fail and how reliable they are.

  12. LHCb - Novel Muon Identification Algorithms for the LHCb Upgrade

    CERN Multimedia

    Cogoni, Violetta

    2016-01-01

    The present LHCb Muon Identification procedure was optimised to guarantee high muon detection efficiency at the instantaneous luminosity $\\mathcal{L}$ of $2\\cdot10^{32}$~cm$^{-2}$~s$^{-1}$. In the current data-taking conditions, the luminosity is higher than foreseen and the low-energy background contribution to the visible rate in the muon system is larger than expected. The situation will worsen in Run III, when LHCb will operate at $\\mathcal{L} = 2\\cdot10^{33}$~cm$^{-2}$~s$^{-1}$: the high particle fluxes will degrade the muon detection efficiency, because of the increased dead time of the electronics, and in particular will worsen the muon identification capabilities, due to the increased background contribution, with deleterious consequences especially for analyses requiring high-purity signal. In this context, possible new algorithms for the muon identification will be illustrated. In particular, the performance on combinatorial background rejection will be shown, together with the ...

  13. A comparison of two open source LiDAR surface classification algorithms

    Science.gov (United States)

    With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are op...

  14. A comparison of two open source LiDAR surface classification algorithms

    Science.gov (United States)

    Wade T. Tinkham; Hongyu Huang; Alistair M.S. Smith; Rupesh Shrestha; Michael J. Falkowski; Andrew T. Hudak; Timothy E. Link; Nancy F. Glenn; Danny G. Marks

    2011-01-01

    With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are openly available, free to use, and are supported by published results....

  15. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mărăscu, V.; Dinescu, G. [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania); Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Chiţescu, I. [Faculty of Mathematics and Computer Science, University of Bucharest, 14 Academiei Street, Bucharest (Romania); Barna, V. [Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B., E-mail: mitub@infim.ro [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania)

    2016-03-25

    In this paper we propose a statistical approach for describing the self-assembly of sub-micron polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In the first step, greyscale images of the surface covered by the polystyrene beads are obtained. Next, an adaptive thresholding method is applied to obtain binary images, followed by automatic identification of the polystyrene bead dimensions using the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the two-dimensional Fast Fourier Transform (2-D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials whose surface features appear regularly distributed under SEM examination.
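
    The uniformity-analysis step can be sketched with a synthetic binary image standing in for a thresholded SEM micrograph: a regular arrangement of beads concentrates the squared modulus of the 2-D FFT into sharp periodic peaks.

```python
import numpy as np

# Synthetic binary image standing in for a thresholded SEM micrograph:
# "beads" placed every 8 pixels along both axes.
img = np.zeros((64, 64))
img[::8, ::8] = 1.0

# Squared modulus of the 2-D FFT; a regular lattice concentrates the energy
# into sharp peaks at multiples of 64 / 8 = 8 in both frequency axes.
power = np.abs(np.fft.fft2(img)) ** 2
power[0, 0] = 0.0                     # discard the DC component
peak = np.unravel_index(int(np.argmax(power)), power.shape)
```

    A disordered arrangement would instead spread this energy diffusely over the frequency plane, which is what makes the FFT a quick uniformity check.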

  16. WATERSHED ALGORITHM BASED SEGMENTATION FOR HANDWRITTEN TEXT IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    P. Mathivanan

    2014-02-01

    In this paper we develop a writer identification system involving four processing steps: preprocessing, segmentation, feature extraction, and writer identification using a neural network. In the preprocessing phase, the handwritten text is subjected to slant removal in preparation for segmentation and feature extraction, followed by noise removal and gray-level conversion. The preprocessed image is then segmented using the morphological watershed algorithm, where the text lines are segmented into single words and then into single letters. Features are extracted from the segmented image with the Daubechies 5/3 integer wavelet transform to reduce training complexity [1, 6]; this process is lossless and reversible [10], [14]. The extracted features are given as input to a neural network for writer identification, and a target image is selected for each training process in the two-layer neural network. The outputs trained on the different targets together support text identification. The method is a multilingual text analysis that provides simple and efficient text segmentation.

  17. Improved chemical identification from sensor arrays using intelligent algorithms

    Science.gov (United States)

    Roppel, Thaddeus A.; Wilson, Denise M.

    2001-02-01

    Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
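
    The "best-match" gap-filling idea can be sketched as follows: when a fault detector flags a sensor, its reading is replaced from the stored library pattern that best matches the healthy channels. The library patterns and readings below are hypothetical.

```python
import numpy as np

# Hypothetical calibration responses of a 4-sensor array to three analytes.
library = np.array([
    [0.2, 0.8, 0.5, 0.1],   # analyte A
    [0.9, 0.1, 0.4, 0.7],   # analyte B
    [0.3, 0.3, 0.9, 0.6],   # analyte C
])

def fill_faulted(reading, faulted):
    """Replace faulted channels using the best-matching library pattern."""
    healthy = [i for i in range(len(reading)) if i not in faulted]
    # distance to each stored pattern, computed over healthy channels only
    dists = np.linalg.norm(library[:, healthy] - reading[healthy], axis=1)
    best = library[int(np.argmin(dists))]
    repaired = reading.copy()
    repaired[list(faulted)] = best[list(faulted)]
    return repaired

# Sensor 2 is stuck low; its value is filled in from the best-matching pattern.
reading = np.array([0.21, 0.79, 0.0, 0.12])
repaired = fill_faulted(reading, {2})
```
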

  18. Mode Identification of Guided Ultrasonic Wave using Time- Frequency Algorithm

    International Nuclear Information System (INIS)

    Yoon, Byung Sik; Yang, Seung Han; Cho, Yong Sang; Kim, Yong Sik; Lee, Hee Jong

    2007-01-01

    Ultrasonic guided waves are waves whose propagation characteristics depend on structural thickness and shape, such as those in plates, tubes, rods, and embedded layers. If the angle of incidence or the frequency of the sound is adjusted properly, the reflected and refracted energy within the structure interferes constructively, thereby launching the guided wave. Because these waves penetrate the entire thickness of the tube and propagate parallel to the surface, a large portion of the material can be examined from a single transducer location. Although guided ultrasonic waves have these merits, many modes propagate through the thickness simultaneously, so it is not known which mode has been received, and most applications are limited by mode selection and mode identification. Mode identification is therefore a very important step in guided ultrasonic inspection. In this study, various time-frequency analysis methodologies are developed and compared as mode identification tools for guided ultrasonic signals. A high-power tone-burst ultrasonic system was set up for the generation and reception of guided waves, and artificial notches were fabricated on an aluminum plate for the mode identification experiments.
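
    A minimal time-frequency tool of the kind compared in the study is the short-time Fourier transform (spectrogram); the two-tone test signal below is synthetic and merely stands in for a received tone-burst waveform.

```python
import numpy as np

def stft(signal, win=64, hop=16):
    """Squared-magnitude short-time Fourier transform, shape (frames, win//2+1)."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)) ** 2)
    return np.array(frames)

# Synthetic received signal: 100 Hz in the first half, 200 Hz in the second,
# standing in for two guided-wave mode arrivals at different times.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.where(t < 0.5, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 200 * t))
S = stft(sig)   # frequency resolution is fs / win = 15.625 Hz per bin
```

    In the resulting map, each mode appears as a ridge localised in both time and frequency, which is exactly what makes the separate arrivals distinguishable.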

  19. Improved efficient proportionate affine projection algorithm based on l0-norm for sparse system identification

    Directory of Open Access Journals (Sweden)

    Haiquan Zhao

    2014-01-01

    A new improved memorised improved proportionate affine projection algorithm (IMIPAPA) is proposed to improve the convergence performance of sparse system identification; it incorporates the l0-norm as a measure of sparseness into the recently proposed MIPAPA algorithm. In addition, a simplified implementation of the IMIPAPA (SIMIPAPA) with low computational burden is presented that maintains the same convergence performance. The simulation results demonstrate that the IMIPAPA and SIMIPAPA algorithms outperform the MIPAPA algorithm for sparse system identification.

  20. Application of evolutionary algorithm for cast iron latent heat identification

    Directory of Open Access Journals (Sweden)

    J. Mendakiewicz

    2008-12-01

    In the paper, the cast iron latent heat is assumed in the form of two components corresponding to the solidification of the austenite and eutectic phases. The aim of the investigation is to estimate the values of the austenite and eutectic latent heats on the basis of the cooling curve at the central point of the casting domain. This cooling curve has been obtained both from the direct problem solution and from experiment. To solve this inverse problem, an evolutionary algorithm (EA) has been applied. The numerical computations have been done using the finite element method by means of the commercial software MSC MARC/MENTAT. In the final part of the paper, examples of identification are shown.
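
    The evolutionary identification loop can be sketched with a toy forward model in place of the FEM solver: candidate latent-heat pairs are scored against a reference cooling curve, and the best candidates survive with Gaussian mutation. The forward model and all numbers below are illustrative assumptions, not the MSC MARC/MENTAT computation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cooling_curve(L_aust, L_eut, t=np.linspace(0.0, 1.0, 50)):
    """Stand-in forward model: each latent-heat component slows the cooling
    around its own solidification interval (Gaussian bumps on a linear drop)."""
    return (1400.0 - 300.0 * t
            + L_aust * np.exp(-((t - 0.3) / 0.1) ** 2)
            + L_eut * np.exp(-((t - 0.7) / 0.1) ** 2))

target = cooling_curve(40.0, 90.0)       # "measured" reference cooling curve

def fitness(cand):
    """Sum of squared deviations from the reference curve (lower is better)."""
    return float(np.sum((cooling_curve(cand[0], cand[1]) - target) ** 2))

pop = rng.uniform(0.0, 150.0, size=(30, 2))            # initial random population
for _ in range(100):
    pop = pop[np.argsort([fitness(c) for c in pop])]   # rank by fitness
    parents = pop[:10]                                  # elitist selection
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0.0, 2.0, (20, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(c) for c in pop])]
```
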

  1. Identification tibia and fibula bone fracture location using scanline algorithm

    Science.gov (United States)

    Muchtar, M. A.; Simanjuntak, S. E.; Rahmat, R. F.; Mawengkang, H.; Zarlis, M.; Sitompul, O. S.; Winanto, I. D.; Andayani, U.; Syahputra, M. F.; Siregar, I.; Nasution, T. H.

    2018-03-01

    A fracture is a break in the continuity of bone, usually caused by stress, trauma, or weakened bone. The tibia and fibula are two separate long bones in the lower leg, closely linked at the knee and ankle. Tibia/fibula fractures often happen when more force is applied to the bone than it can withstand. One way to identify the location of a tibia/fibula fracture is to read the X-ray image manually, but visual examination takes time and allows identification errors due to noise in the image. In addition, reading an X-ray requires background highlighting to make the objects in the image appear more clearly. Therefore, a method is required to help radiologists identify the location of tibia/fibula fractures. We propose image-processing techniques for processing the cruris image and a scan-line algorithm for identifying the fracture location. The results show that our proposed method is able to identify the fracture location with an accuracy of up to 87.5%.
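
    The scan-line idea can be sketched on a synthetic binary bone mask: sweep horizontal lines across the mask, measure the bone width on each line, and flag rows where the width collapses as candidate fracture locations. The mask below is synthetic, not a real X-ray segmentation.

```python
import numpy as np

# Synthetic binary mask standing in for a segmented X-ray: a 10-pixel-wide
# bone shaft interrupted by a fracture gap at rows 48-51.
mask = np.zeros((100, 40), dtype=int)
mask[:, 15:25] = 1
mask[48:52, :] = 0

def fracture_rows(mask, min_width=1):
    """Scan each horizontal line; rows whose bone width collapses are flagged."""
    widths = mask.sum(axis=1)          # bone pixels per scan line
    return np.where(widths < min_width)[0]

rows = fracture_rows(mask)
```
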

  2. Surface-Heating Algorithm for Water at Nanoscale.

    Science.gov (United States)

    Y D, Sumith; Maroo, Shalabh C

    2015-09-17

    A novel surface-heating algorithm for water is developed for molecular dynamics simulations. The validated algorithm can simulate the transient behavior of the evaporation of water when heated from a surface, which has been lacking in the literature. In this work, the algorithm is used to study the evaporation of water droplets on a platinum surface at different temperatures. The resulting contact angles of the droplets are compared to existing theoretical, numerical, and experimental studies. The evaporation profile along the droplet's radius and height is deduced along with the temperature gradient within the drop, and the evaporation behavior conforms to the Kelvin-Clapeyron theory. The algorithm captures the realistic differential thermal gradient in water heated at the surface and is promising for studying various heating/cooling problems, such as thin film evaporation, Leidenfrost effect, and so forth. The simplicity of the algorithm allows it to be easily extended to other surfaces and integrated into various molecular simulation software and user codes.

  3. Algorithm for identification of undifferentiated peripheral inflammatory arthritis: a multinational collaboration through the 3e initiative

    NARCIS (Netherlands)

    Hazlewood, Glen; Aletaha, Daniel; Carmona, Loreto; Landewé, Robert B. M.; van der Heijde, Désirée M.; Bijlsma, Johannes W. J.; Bykerk, Vivian P.; Canhão, Helena; Catrina, Anca I.; Durez, Patrick; Edwards, Christopher J.; Leeb, Burkhard F.; Mjaavatten, Maria D.; Martinez-Osuna, Pindaro; Montecucco, Carlomaurizio; Ostergaard, Mikkel; Serra-Bonett, Natali; Xavier, Ricardo M.; Zochling, Jane; Machado, Pedro; Thevissen, Kristof; Vercoutere, Ward; Bombardier, Claire

    2011-01-01

    To develop an algorithm for identification of undifferentiated peripheral inflammatory arthritis (UPIA). An algorithm for identification of UPIA was developed by consensus during a roundtable meeting with an expert panel. It was informed by systematic reviews of the literature used to generate 10

  4. Firefly Algorithm for Polynomial Bézier Surface Parameterization

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
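
    A compact firefly algorithm can be sketched as follows: each firefly moves toward every brighter one with an attractiveness that decays with distance, plus a damped random step. For brevity it minimises a simple sphere function here rather than the Bézier parameterization error used in the paper; population size and coefficients are illustrative defaults.

```python
import numpy as np

rng = np.random.default_rng(2)

def firefly_minimize(f, dim=2, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.1):
    """Minimise f with a compact firefly algorithm; returns (best_x, best_f)."""
    X = rng.uniform(-2.0, 2.0, size=(n, dim))
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        I = np.array([f(x) for x in X])          # intensity: lower is brighter
        j = int(np.argmin(I))
        if I[j] < best_f:                        # track the best-so-far solution
            best_f, best_x = float(I[j]), X[j].copy()
        for i in range(n):
            for k in range(n):
                if I[k] < I[i]:                  # k is brighter: move i toward k
                    r2 = float(np.sum((X[i] - X[k]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] = X[i] + beta * (X[k] - X[i]) + alpha * rng.normal(0.0, 1.0, dim)
        alpha *= 0.97                            # damp the random walk over time
    return best_x, best_f

best_x, best_f = firefly_minimize(lambda x: float(np.sum(x ** 2)))
```

    For the parameterization problem in the paper, `f` would instead evaluate the least-squares error between the data points and the Bézier surface induced by a candidate parameter assignment.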

  5. Algorithms and tools for system identification using prior knowledge

    International Nuclear Information System (INIS)

    Lindskog, P.

    1994-01-01

    One of the hardest problems in system identification is that of model structure selection. In this thesis, two different kinds of a priori process knowledge are used to address this fundamental problem. Concentrating on linear model structures, the first prior exploited is knowledge about the system's dominating time constants and resonance frequencies. The idea is to generalize FIR modelling by replacing the usual delay operator with discrete so-called Laguerre or Kautz filters. The generalization is such that the stability, the linear regression structure and the approximation ability of the FIR model structure are retained, whereas the prior is used to reduce the number of parameters needed to arrive at a reasonable model. Tailored and efficient system identification algorithms for these model structures are detailed in this work, and the usefulness of the proposed methods is demonstrated through concrete simulation and application studies. The other approach is referred to as semi-physical modelling. The main idea is to use simple physical insight into the application, often in terms of a set of unstructured equations, to come up with suitable nonlinear transformations of the raw measurements, so as to allow for a good model structure. Semi-physical modelling is less ambitious than physical modelling in that no complete physical structure is sought, just combinations of inputs and outputs that can be subjected to more or less standard model structures, such as linear regressions. The suggested modelling procedure starts with a step in which symbolic computations are employed to determine a suitable model structure, that is, a set of regressors; we show how constructive methods from commutative and differential algebra can be applied for this. Subsequently, different numerical schemes for finding a subset of good regressors and for estimating the corresponding linear-in-the-parameters model are discussed. 107 refs, figs, tabs

  6. A Low Delay and Fast Converging Improved Proportionate Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Andy W. H. Khong

    2007-04-01

    A sparse system identification algorithm for network echo cancellation is presented. This new approach exploits both the fast convergence of the improved proportionate normalized least mean square (IPNLMS) algorithm and the efficient implementation of the multidelay adaptive filtering (MDF) algorithm, inheriting the beneficial properties of both. The proposed IPMDF algorithm is evaluated using impulse responses with various degrees of sparseness. Simulation results are also presented for both speech and white Gaussian noise input sequences. It has been shown that the IPMDF algorithm outperforms the MDF and IPNLMS algorithms for both sparse and dispersive echo path impulse responses. The computational complexity of the proposed algorithm is also discussed.

  7. A Low Delay and Fast Converging Improved Proportionate Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Benesty Jacob

    2007-01-01

    A sparse system identification algorithm for network echo cancellation is presented. This new approach exploits both the fast convergence of the improved proportionate normalized least mean square (IPNLMS) algorithm and the efficient implementation of the multidelay adaptive filtering (MDF) algorithm, inheriting the beneficial properties of both. The proposed IPMDF algorithm is evaluated using impulse responses with various degrees of sparseness. Simulation results are also presented for both speech and white Gaussian noise input sequences. It has been shown that the IPMDF algorithm outperforms the MDF and IPNLMS algorithms for both sparse and dispersive echo path impulse responses. The computational complexity of the proposed algorithm is also discussed.
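
    The IPNLMS update at the heart of the IPMDF approach can be sketched as follows: each tap receives a step-size gain mixing a uniform share with a share proportional to the tap magnitude, which accelerates convergence on sparse echo paths. The filter length, signals, and noiseless setup below are illustrative, and the frequency-domain MDF machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 32
h = np.zeros(L)
h[4], h[20] = 1.0, -0.5                 # sparse "echo path" to identify

x = rng.normal(0.0, 1.0, 4000)          # white input signal
w = np.zeros(L)                          # adaptive filter taps
mu, alpha, eps = 0.5, 0.0, 1e-6
for n in range(L - 1, len(x)):
    xv = x[n - L + 1:n + 1][::-1]       # regressor, most recent sample first
    d = h @ xv                           # desired signal (noiseless for clarity)
    e = d - w @ xv                       # a priori error
    # IPNLMS gains: uniform share plus magnitude-proportional share
    k = (1 - alpha) / (2 * L) + (1 + alpha) * np.abs(w) / (2 * np.sum(np.abs(w)) + eps)
    w = w + mu * e * k * xv / (xv @ (k * xv) + eps)

misalignment = np.linalg.norm(h - w) / np.linalg.norm(h)
```

    With `alpha = 0` the gains blend the NLMS and proportionate behaviours evenly; values closer to 1 weight large taps more strongly.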

  8. A voting-based star identification algorithm utilizing local and global distribution

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua

    2018-03-01

    A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global and local distributions of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm achieves a 99.81% identification rate with positional noise of 2 pixels standard deviation and magnitude noise of 0.322 Mv. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, a real sky test shows that the proposed algorithm performs well on real star images.

  9. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can achieve better results than classical algorithms. In this paper, four different quantum algorithms are used across the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it gives a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in a single operation, thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up over the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  10. Automated landmark identification for human cortical surface-based registration.

    Science.gov (United States)

    Anticevic, Alan; Repovs, Grega; Dierker, Donna L; Harwell, John W; Coalson, Timothy S; Barch, Deanna M; Van Essen, David C

    2012-02-01

    Volume-based registration (VBR) is the predominant method used in human neuroimaging to compensate for individual variability. However, surface-based registration (SBR) techniques have an inherent advantage over VBR because they respect the topology of the convoluted cortical sheet. There is evidence that existing SBR methods indeed confer a registration advantage over affine VBR. Landmark-SBR constrains registration using explicit landmarks to represent corresponding geographical locations on individual and atlas surfaces. The need for manual landmark identification has been an impediment to the widespread adoption of Landmark-SBR. To circumvent this obstacle, we have implemented and evaluated an automated landmark identification (ALI) algorithm for registration to the human PALS-B12 atlas. We compared ALI performance with that from two trained human raters and one expert anatomical rater (ENR). We employed both quantitative and qualitative quality assurance metrics, including a biologically meaningful analysis of hemispheric asymmetry. ALI performed well across all quality assurance tests, indicating that it yields robust and largely accurate results that require only modest manual correction (<10 min per subject). ALI largely circumvents human error and bias and enables high throughput analysis of large neuroimaging datasets for inter-subject registration to an atlas. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. A genetic algorithm approach in interface and surface structure optimization

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jian [Iowa State Univ., Ames, IA (United States)

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilted grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the Genetic Algorithm is very efficient at finding the lowest-energy structures in both cases. Not only can structures seen in experiments be reproduced, but many new structures can be predicted using the Genetic Algorithm. It is thus shown that the Genetic Algorithm is an extremely powerful tool for predicting material structures. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.

  12. A Novel Algorithm of Surface Eliminating in Undersurface Optoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Zhulina Yulia V

    2004-01-01

    This paper analyzes the task of optoacoustic imaging of objects located under a covering surface. We suggest a surface-elimination algorithm based on the fact that the intensity of the image, as a function of the spatial point, changes slowly inside local objects and suffers a discontinuity of its spatial gradients at their boundaries. The algorithm forms the two-dimensional curves along which the discontinuity of the signal derivatives is detected, and then divides the signal space into areas along these curves. The signals inside the areas with the maximum signal amplitudes and the maximum absolute gradient values on their edges are set to zero, and the remaining signals are used for the image restoration. This method permits reconstruction of the surface boundaries with a higher contrast than surface-detection techniques based on the maxima of the received signals. The algorithm does not require any prior knowledge of the signal statistics inside or outside the local objects, and it may be used for reconstructing any images from signals representing an integral over the object's volume. Simulation and real data are provided to validate the proposed method.
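
    The gradient-discontinuity idea can be sketched in one dimension: locate where the signal's derivative jumps (the surface boundaries) and zero the high-amplitude segment they enclose, keeping the weaker subsurface structure. The profile and threshold below are synthetic illustrations, not the paper's 2-D curve-forming procedure.

```python
import numpy as np

# Synthetic 1-D profile: a bright surface layer over a weaker buried object.
signal = np.zeros(100)
signal[10:30] = 5.0        # surface layer to be eliminated
signal[60:70] = 1.0        # subsurface object to keep

# Boundaries appear as discontinuities of the spatial derivative.
grad = np.abs(np.diff(signal))
edges = np.where(grad > 2.0)[0]     # only the strong surface edges exceed 2.0

# Zero the maximum-amplitude segment enclosed by the strong edges.
cleaned = signal.copy()
cleaned[edges[0] + 1:edges[1] + 1] = 0.0
```
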

  13. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    Science.gov (United States)

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  15. Identification of fast-steering mirror based on chicken swarm optimization algorithm

    Science.gov (United States)

    Ren, Wei; Deng, Chao; Zhang, Chao; Mao, Yao

    2017-06-01

    The transfer-function identification methods commonly used for fast steering mirrors have the drawback that estimating the initial parameter values is complicated in practice. We therefore propose using the chicken swarm algorithm to simplify the identification procedure and reduce its workload. The chicken swarm algorithm is a meta-heuristic, population-based intelligent algorithm; in the identification experiments it showed efficient global convergence with fast convergence speed and high convergence precision, especially when many transfer-function parameters must be identified and no initial parameter estimates are available. Compared with traditional identification methods, the proposed approach is therefore more convenient, supports an intelligent design of the fast-steering-mirror control system in engineering applications, and shortens the controller design time.
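
    The appeal of the approach, that no initial parameter estimate is needed, can be sketched with a simple population-based random search standing in for the chicken swarm optimizer (whose specific update rules are not reproduced here); the first-order model and all parameter ranges are illustrative assumptions:

    ```python
    import random

    def step_response(K, tau, n, dt=0.01):
        """Euler-discretized step response of the first-order model
        K / (tau*s + 1), a term often present in steering-mirror models."""
        y, out = 0.0, []
        for _ in range(n):
            y += dt * (K - y) / tau
            out.append(y)
        return out

    def identify(data, dt=0.01, iters=2000, seed=1):
        """Population-based random search over (K, tau): no initial guess
        is required, only a box of admissible parameter values. Returns
        the sampled pair minimizing the squared output error."""
        rng = random.Random(seed)
        n = len(data)
        best, best_err = (1.0, 1.0), float("inf")
        for _ in range(iters):
            K, tau = rng.uniform(0.1, 5.0), rng.uniform(0.05, 2.0)
            err = sum((a - b) ** 2
                      for a, b in zip(step_response(K, tau, n, dt), data))
            if err < best_err:
                best, best_err = (K, tau), err
        return best
    ```

    Fitting a response generated with K = 2, tau = 0.5 recovers parameters close to the truth without any hand-supplied starting point.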

  16. Identification of Fuzzy Inference Systems by Means of a Multiobjective Opposition-Based Space Search Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2013-01-01

    Full Text Available We introduce a new category of fuzzy inference systems with the aid of a multiobjective opposition-based space search algorithm (MOSSA). The proposed MOSSA is essentially a multiobjective space search algorithm improved by opposition-based learning, which employs a so-called opposite-numbers mechanism to speed up the convergence of the optimization algorithm. In the identification of the fuzzy inference system, the MOSSA is exploited to carry out the parametric identification of the fuzzy model as well as to realize its structural identification. Experimental results demonstrate the effectiveness of the proposed fuzzy models.
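
    The opposite-numbers mechanism mentioned above can be sketched in a few lines; this is a hedged illustration of standard opposition-based learning (function names are assumptions), not the full MOSSA:

    ```python
    import random

    def opposite(x, lo, hi):
        """Opposition-based learning: the 'opposite number' of x in [lo, hi]."""
        return lo + hi - x

    def obl_init(n, lo, hi, fitness, seed=0):
        """Draw a random population, add the opposite of every candidate,
        and keep the n fittest (lower fitness = better). Evaluating a guess
        together with its opposite is what speeds up early convergence."""
        rng = random.Random(seed)
        pop = [rng.uniform(lo, hi) for _ in range(n)]
        pop += [opposite(x, lo, hi) for x in pop]
        return sorted(pop, key=fitness)[:n]
    ```

    Either a candidate or its opposite is always at least as close to the optimum as a fresh uniform draw would be on average, which is the intuition behind the speed-up.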

  17. Global structural optimizations of surface systems with a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, Feng-Chuan [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Aln were performed. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms; simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations, and a novel model is found to have a lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  18. Verification of Single-Peptide Protein Identifications by the Application of Complementary Database Search Algorithms

    National Research Council Canada - National Science Library

    Rohrbough, James G; Breci, Linda; Merchant, Nirav; Miller, Susan; Haynes, Paul A

    2005-01-01

    .... One such technique, known as the Multi-Dimensional Protein Identification Technique, or MudPIT, involves the use of computer search algorithms that automate the process of identifying proteins...

  19. Particle Identification algorithm for the CLIC ILD and CLIC SiD detectors

    CERN Document Server

    Nardulli, J

    2011-01-01

    This note describes the algorithm presently used to determine the particle identification performance for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared in the CLIC Conceptual Design Report.

  20. Particle mis-identification rate algorithm for the CLIC ILD and CLIC SiD detectors

    CERN Document Server

    Nardulli, J

    2011-01-01

    This note describes the algorithm presently used to determine the particle mis-identification rate and gives results for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared for the CLIC Conceptual Design Report.

  1. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    Science.gov (United States)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions), and high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. In the second, AI algorithms were utilized to acquire the coefficients of optimal band combinations for extracting water extents; the implemented algorithms were the artificial neural network and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristics. The index-based methods showed different performances in different regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicate that the acquired band combinations can extract water extents in Landsat imagery accurately and stably.
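
    A representative member of the index-based family investigated above is the Normalized Difference Water Index (NDWI), computed from the green and near-infrared bands; the sketch below (threshold value is an illustrative assumption) shows how such an index yields a per-pixel water mask:

    ```python
    def ndwi(green, nir):
        """NDWI = (green - NIR) / (green + NIR). Water reflects green light
        and strongly absorbs near-infrared, so water pixels tend toward
        positive values while vegetation and soil tend negative."""
        total = green + nir
        return (green - nir) / total if total else 0.0

    def water_mask(green_band, nir_band, threshold=0.0):
        """Per-pixel water classification: True where NDWI > threshold."""
        return [[ndwi(g, n) > threshold for g, n in zip(g_row, n_row)]
                for g_row, n_row in zip(green_band, nir_band)]
    ```

    The AI approaches in the abstract can be read as replacing this fixed two-band ratio with an optimized combination of all bands.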

  2. Improved gravitational search algorithm for parameter identification of water turbine regulation system

    International Nuclear Information System (INIS)

    Chen, Zhihuan; Yuan, Xiaohui; Tian, Hao; Ji, Bin

    2014-01-01

    Highlights: • We propose an improved gravitational search algorithm (IGSA). • IGSA is applied to parameter identification of the water turbine regulation system (WTRS). • WTRS is modeled by considering the impact of turbine speed on torque and water flow. • A weighted objective function strategy is applied to parameter identification of the WTRS. - Abstract: Parameter identification of the water turbine regulation system (WTRS) is crucial for precise modeling of the hydropower generating unit (HGU) and provides support for the adaptive control and stability analysis of the power system. In this paper, an improved gravitational search algorithm (IGSA) is proposed and applied to solve the identification problem for the WTRS under load and no-load running conditions. The new algorithm, based on the standard gravitational search algorithm (GSA), accelerates convergence by combining the search strategy of particle swarm optimization with an elastic-ball method. Chaotic mutation, devised to step out of local optima with a certain probability, is also added to the algorithm to avoid premature convergence. Furthermore, a new kind of model tied to engineering practice is built and analyzed in the simulation tests. An illustrative example of WTRS parameter identification is used to verify the feasibility and effectiveness of the proposed IGSA, as compared with the standard GSA and particle swarm optimization in terms of identification accuracy and convergence speed. The simulation results show that IGSA performs best on all identification indicators

  3. DNA evolutionary algorithm (DNAEA) for source term identification in convection-diffusion equation

    International Nuclear Information System (INIS)

    Yang, X-H; Hu, X-X; Shen, Z-Y

    2008-01-01

    The source identification problem is transformed into an optimization problem in this paper. This is a complicated nonlinear optimization problem that is very intractable for traditional optimization methods, so a DNA evolutionary algorithm (DNAEA) is presented to solve it. In this algorithm, an initial population is generated by a chaos algorithm. As the search range shrinks, the DNAEA is gradually directed toward an optimal result by the excellent individuals it obtains. The position and intensity of the pollution source are found accurately with the DNAEA. Compared with a Gray-coded genetic algorithm and a pure random search algorithm, the DNAEA has faster convergence and higher calculation precision
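
    The chaos-based initialization mentioned above is commonly implemented with the logistic map; the sketch below is a hedged illustration (the paper's exact chaos algorithm and constants are not specified here), scaling the map's iterates into the search interval:

    ```python
    def chaotic_population(n, lo, hi, x0=0.37, r=4.0):
        """Iterate the logistic map x <- r*x*(1-x), whose r = 4 orbits
        wander over (0, 1) without settling, and scale each iterate into
        [lo, hi] to seed a well-spread initial population."""
        pop, x = [], x0
        for _ in range(n):
            x = r * x * (1.0 - x)
            pop.append(lo + (hi - lo) * x)
        return pop
    ```

    Compared with pseudo-random seeding, the chaotic orbit is deterministic and ergodic over the interval, which is why it is popular for population initialization.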

  4. Regularization algorithm within two-parameters for identification heat-coefficient in the parabolic equation

    International Nuclear Information System (INIS)

    Hinestroza Gutierrez, D.

    2006-08-01

    In this work a new and promising algorithm, based on the minimization of a special functional that depends on two regularization parameters, is considered for the identification of the heat conduction coefficient in the parabolic equation. The algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) and the other with the calculated solution. (author)

  5. Regularization algorithm within two-parameters for identification heat-coefficient in the parabolic equation

    International Nuclear Information System (INIS)

    Hinestroza Gutierrez, D.

    2006-12-01

    In this work a new and promising algorithm, based on the minimization of a special functional that depends on two regularization parameters, is considered for the identification of the heat conduction coefficient in the parabolic equation. The algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) and the other with the calculated solution. (author)

  6. Identification of vehicles moving on continuous bridges with rough surface

    Science.gov (United States)

    Jiang, R. J.; Au, F. T. K.; Cheung, Y. K.

    2004-07-01

    This paper describes the parameter identification of vehicles moving on multi-span continuous bridges taking into account the surface roughness. Each moving vehicle is modelled as a two-degree-of-freedom system that comprises five components: a lower mass and an upper mass, which are connected together by a damper and a spring, together with another spring to represent the contact stiffness between the tyres and the bridge deck. The corresponding parameters of these five components, namely, the equivalent values of the two masses, the damping coefficient, and the two spring stiffnesses together with the roughness parameters are identified based on dynamic simulation of the vehicle-bridge system. In the study, the accelerations at selected measurement stations are simulated from the dynamic analysis of a continuous beam under moving vehicles taking into account randomly generated bridge surface roughness, together with the addition of artificially generated measurement noise. The identification is realized through a robust multi-stage optimization scheme based on genetic algorithms, which searches for the best estimates of parameters by minimizing the errors between the measured accelerations and the reconstructed accelerations from the identified parameters. Starting from the very wide initial variable domains, this multi-stage optimization scheme reduces the variable search domains stage by stage using the identified results of the previous stage. A few test cases are carried out to verify the efficiency of the multi-stage optimization procedure. The identified parameters are also used to estimate the time-varying contact forces between the vehicles and the bridge.
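
    The stage-by-stage domain reduction described above can be sketched in one dimension; this is a hedged illustration with plain random sampling standing in for the genetic algorithm (the shrink factor and sample counts are illustrative assumptions):

    ```python
    import random

    def multi_stage_search(fitness, lo, hi, stages=3, samples=300,
                           shrink=0.25, seed=0):
        """Multi-stage optimization: each stage keeps the best estimate
        found so far, then re-centres a shrunken variable domain on it
        before the next stage, mirroring the scheme in the abstract."""
        rng = random.Random(seed)
        best = (lo + hi) / 2.0
        for _ in range(stages):
            best = min((rng.uniform(lo, hi) for _ in range(samples)),
                       key=fitness)
            half = (hi - lo) * shrink / 2.0
            lo, hi = best - half, best + half
        return best
    ```

    Starting from a very wide domain, each stage concentrates the sampling budget near the current best estimate, which is what makes wide initial variable domains affordable.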

  7. Surface solar irradiance from SCIAMACHY measurements: algorithm and validation

    Directory of Open Access Journals (Sweden)

    P. Wang

    2011-05-01

    Full Text Available Broadband surface solar irradiances (SSI are, for the first time, derived from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY satellite measurements. The retrieval algorithm, called FRESCO (Fast REtrieval Scheme for Clouds from the Oxygen A band SSI, is similar to the Heliosat method. In contrast to the standard Heliosat method, the cloud index is replaced by the effective cloud fraction derived from the FRESCO cloud algorithm. The MAGIC (Mesoscale Atmospheric Global Irradiance Code algorithm is used to calculate clear-sky SSI. The SCIAMACHY SSI product is validated against globally distributed BSRN (Baseline Surface Radiation Network measurements and compared with ISCCP-FD (International Satellite Cloud Climatology Project Flux Dataset surface shortwave downwelling fluxes (SDF. For one year of data in 2008, the mean difference between the instantaneous SCIAMACHY SSI and the hourly mean BSRN global irradiances is −4 W m−2 (−1 % with a standard deviation of 101 W m−2 (20 %. The mean difference between the globally monthly mean SCIAMACHY SSI and ISCCP-FD SDF is less than −12 W m−2 (−2 % for every month in 2006 and the standard deviation is 62 W m−2 (12 %. The correlation coefficient is 0.93 between SCIAMACHY SSI and BSRN global irradiances and is greater than 0.96 between SCIAMACHY SSI and ISCCP-FD SDF. The evaluation results suggest that the SCIAMACHY SSI product achieves similar mean bias error and root mean square error as the surface solar irradiances derived from polar orbiting satellites with higher spatial resolution.
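
    The Heliosat-style step in the abstract, attenuating a clear-sky irradiance by a cloud-derived index, can be sketched as follows. The simple 1 - c relation is an assumption for illustration only; the operational FRESCO/MAGIC chain is more elaborate:

    ```python
    def surface_solar_irradiance(clear_sky_ssi, eff_cloud_fraction):
        """Attenuate the clear-sky surface solar irradiance (W/m^2) by a
        clear-sky index derived from the effective cloud fraction; the
        floor keeps some diffuse irradiance under heavy overcast."""
        k_c = max(0.05, 1.0 - eff_cloud_fraction)
        return clear_sky_ssi * k_c
    ```

    With a clear-sky value of 800 W m−2 and an effective cloud fraction of 0.25, this sketch gives 600 W m−2 at the surface.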

  8. Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm

    International Nuclear Information System (INIS)

    Canedo Medeiros, Jose Antonio Carlos; Schirru, Roberto

    2008-01-01

    In order to help nuclear power plant operators reduce their cognitive load and increase the time available to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast, correct actions in due time. In the design of classification systems for the identification of nuclear power plant transients, several artificial intelligence techniques have been used, involving expert systems, neuro-fuzzy systems and genetic algorithms. In this work we explore the ability of the Particle Swarm Optimization (PSO) algorithm as a tool for optimizing a distance-based discrimination transient classification method, giving also an innovative solution for searching for the best set of prototypes for the identification of transients. The PSO algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem; compared with similar methods found in the literature, it showed better results
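
    The distance-based discrimination that the PSO tunes can be sketched as nearest-prototype classification; this is a hedged illustration (class names and vectors are invented), with the prototypes given rather than PSO-optimized as in the paper:

    ```python
    def classify_transient(signature, prototypes):
        """Assign an observed plant signature to the class of the nearest
        prototype vector (squared Euclidean distance). In the paper, the
        PSO's job is to search for the best prototype set."""
        def d2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(prototypes, key=lambda cls: d2(signature, prototypes[cls]))
    ```

    Once good prototypes are found, classification at run time is a single distance computation per class, which suits the operator-support setting.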

  9. Adaptive Kernel Canonical Correlation Analysis Algorithms for Nonparametric Identification of Wiener and Hammerstein Systems

    Directory of Open Access Journals (Sweden)

    Ignacio Santamaría

    2008-04-01

    Full Text Available This paper treats the identification of nonlinear systems that consist of a cascade of a linear channel and a nonlinearity, such as the well-known Wiener and Hammerstein systems. In particular, we follow a supervised identification approach that identifies both parts of the nonlinear system simultaneously. Given the correct restrictions on the identification problem, we show how kernel canonical correlation analysis (KCCA) emerges as the logical solution to this problem. We then extend the proposed identification algorithm to an adaptive version allowing it to deal with time-varying systems. In order to avoid overfitting problems, we discuss and compare three possible regularization techniques for both the batch and the adaptive versions of the proposed algorithm. Simulations are included to demonstrate the effectiveness of the presented algorithm.

  10. Fully automated algorithm for wound surface area assessment.

    Science.gov (United States)

    Deana, Alessandro Melo; de Jesus, Sérgio Henrique Costa; Sampaio, Brunna Pileggi Azevedo; Oliveira, Marcelo Tavares; Silva, Daniela Fátima Teixeira; França, Cristiane Miranda

    2013-01-01

    Worldwide, clinicians, dentists, nurses, researchers, and other health professionals need to monitor the wound healing progress and to quantify the rate of wound closure. The aim of this study is to demonstrate, step by step, a fully automated numerical method to estimate the size of the wound and the percentage damaged relative to the body surface area (BSA) in images, without the requirement for human intervention. We included the formula for BSA in rats in the algorithm. The methodology was validated in experimental wounds and human ulcers and was compared with the analysis of an experienced pathologist, with good agreement. Therefore, this algorithm is suitable for experimental wounds and burns and human ulcers, as they have a high contrast with adjacent normal skin. © 2013 by the Wound Healing Society.
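
    The core measurement can be sketched as thresholding plus calibrated pixel counting; this is a hedged illustration relying on the high contrast with adjacent normal skin that the authors mention (the threshold and pixel calibration are illustrative assumptions, not the paper's values):

    ```python
    def wound_area_mm2(image, threshold, pixel_area_mm2):
        """Count pixels whose intensity falls below a contrast threshold
        against normal skin and convert the count to physical area."""
        count = sum(1 for row in image for px in row if px < threshold)
        return count * pixel_area_mm2

    def percent_of_bsa(area_mm2, bsa_mm2):
        """Damaged fraction relative to body surface area (BSA), percent."""
        return 100.0 * area_mm2 / bsa_mm2
    ```

    In a 2 x 2 image with two dark pixels and 0.25 mm² per pixel, the estimated wound area is 0.5 mm², which is then expressed as a percentage of the BSA.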

  11. Word-length algorithm for language identification of under-resourced languages

    Directory of Open Access Journals (Sweden)

    Ali Selamat

    2016-10-01

    Full Text Available Language identification is widely used in machine learning, text mining, information retrieval, and speech processing. Available techniques for language identification require large amounts of training text that are not available for under-resourced languages, which form the bulk of the world's languages. The primary objective of this study is to propose a lexicon-based algorithm which is able to perform language identification using minimal training data. Because language identification is often the first step in many natural language processing tasks, it is also necessary to explore techniques that perform language identification in the shortest possible time; hence, the second objective of this research is to study the effect of the proposed algorithm on the run-time performance of language identification. Precision, recall, and F1 measures were used to determine the effectiveness of the proposed word-length algorithm using datasets drawn from the Universal Declaration of Human Rights in 15 languages. The experimental results show good accuracy on language identification at the document level and at the sentence level on the available dataset. The improved algorithm also showed a significant improvement in run-time performance compared with the spelling-checker approach.
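
    Why word length carries a language signal can be sketched with a lexicon-free toy variant; this hedged illustration compares word-length distributions only, whereas the published algorithm is lexicon-based:

    ```python
    def length_profile(text):
        """Normalized histogram of word lengths in a text."""
        words = [w for w in text.split() if w.isalpha()]
        hist = {}
        for w in words:
            hist[len(w)] = hist.get(len(w), 0) + 1
        total = sum(hist.values())
        return {k: v / total for k, v in hist.items()}

    def identify_language(text, profiles):
        """Pick the training language whose word-length distribution is
        closest (L1 distance) to the text's distribution."""
        p = length_profile(text)
        def dist(q):
            keys = set(p) | set(q)
            return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)
        return min(profiles, key=lambda lang: dist(profiles[lang]))
    ```

    Training such a profile needs only a handful of words per language, which is the point of the minimal-training-data objective.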

  12. Augmenting real data with synthetic data: an application in assessing radio-isotope identification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L [Los Alamos National Laboratory; Hamada, Michael [Los Alamos National Laboratory; Graves, Todd [Los Alamos National Laboratory; Myers, Steve [Los Alamos National Laboratory

    2008-01-01

    The performance of Radio-Isotope Identification (RIID) algorithms using gamma spectroscopy is increasingly important. For example, sensors at locations that screen for illicit nuclear material rely on isotope identification to resolve innocent nuisance alarms arising from naturally occurring radioactive material. Recent data collections for RIID testing consist of repeat measurements for each of several scenarios to test RIID algorithms. Efficient allocation of measurement resources requires an appropriate number of repeats for each scenario. To help allocate measurement resources in such data collections for RIID algorithm testing, we consider using only a few real repeats per scenario. In order to reduce uncertainty in the estimated RIID algorithm performance for each scenario, the potential merit of augmenting these real repeats with realistic synthetic repeats is also considered. Our results suggest that for the scenarios and algorithms considered, approximately 10 real repeats augmented with simulated repeats will result in an estimate having comparable uncertainty to the estimate based on using 60 real repeats.

  13. An Autonomous Star Identification Algorithm Based on the Directed Circularity Pattern

    Directory of Open Access Journals (Sweden)

    J. Xie

    2012-07-01

    Full Text Available The accuracy of the angular distance may decrease due to many factors, such as the parameters of the stellar camera not being calibrated on-orbit or low location accuracy of the star image points, which can lower the success rate of star identification. A robust directed circularity pattern algorithm is proposed in this paper, developed on the basis of the matching probability algorithm. The improved algorithm retains the matching probability strategy to identify the master star, and constructs a directed circularity pattern with the adjacent stars for unitary matching. The candidate matching group with the longest chain is selected as the final result. Simulation experiments indicate that the improved algorithm achieves higher identification success rates and reliability than the original algorithm, and experiments with real data verify these results.

  14. A General Zero Attraction Proportionate Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Yingsong Li

    2017-10-01

    Full Text Available A general zero attraction (GZA) proportionate normalized maximum correntropy criterion (GZA-PNMCC) algorithm is devised and presented on the basis of proportionate-type adaptive filter techniques and zero-attracting theory, to greatly improve the sparse-system estimation behavior of the classical MCC algorithm within the framework of sparse system identification. The newly developed GZA-PNMCC algorithm is obtained by introducing a parameter-adjusting function into the cost function of the typical proportionate normalized maximum correntropy criterion (PNMCC) to create a zero-attraction term. The developed optimization framework unifies the derivation of zero-attraction-based PNMCC algorithms. The GZA-PNMCC algorithm further exploits the sparsity of the impulse response in comparison with the proportionate-type NMCC algorithm, owing to the GZA zero attraction. The superior performance of the GZA-PNMCC algorithm for estimating a sparse system in a non-Gaussian noise environment is demonstrated by simulations.
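
    The zero-attraction idea the abstract builds on can be sketched with its simplest family member, zero-attracting LMS; this hedged illustration is not the full GZA-PNMCC recursion (no proportionate gains, no correntropy cost), but it shows how the sign term exploits sparsity:

    ```python
    def za_lms_step(w, x, d, mu=0.05, rho=1e-4):
        """One zero-attracting LMS update: standard LMS plus a small
        -rho*sign(w) term that pulls near-zero taps toward exactly zero,
        matching a sparse impulse response faster."""
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y
        sign = lambda v: (v > 0) - (v < 0)
        w_new = [wi + mu * e * xi - rho * sign(wi)
                 for wi, xi in zip(w, x)]
        return w_new, e
    ```

    Run against a sparse unknown system, the active tap converges to its true value while the zero-attractor keeps the inactive taps pinned near zero.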

  15. Neuro-fuzzy algorithm for neutronic power identification of TRIGA Mark III reactor

    International Nuclear Information System (INIS)

    Rojas R, E.; Benitez R, J. S.; Segovia de los Rios, J. A.; Rivero G, T.

    2009-10-01

    In this work, the design and implementation of an algorithm based on fuzzy logic systems and neural networks is presented as a method for neutronic power identification of the TRIGA Mark III reactor. The algorithm uses the point kinetics equations as a generator of training data; a cost function and a learning stage based on the gradient descent algorithm allow the parameters of the membership functions of a fuzzy system to be optimized. A series of criteria are also established as part of the initial conditions of the training algorithm. According to the simulations carried out, these criteria yield fast convergence of the estimated neutronic power from the first iterations. (Author)

  16. A simple algorithm for the identification of clinical COPD phenotypes

    DEFF Research Database (Denmark)

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim

    2017-01-01

    International Assessment (3CIA) initiative. Cluster analysis identified five subgroups of COPD patients with different clinical characteristics (especially regarding severity of respiratory disease and the presence of cardiovascular comorbidities and diabetes). The CART-based algorithm indicated...... that the variables relevant for patient grouping differed markedly between patients with isolated respiratory disease (FEV1, dyspnoea grade) and those with multi-morbidity (dyspnoea grade, age, FEV1 and body mass index). Application of this algorithm to the 3CIA cohorts confirmed that it identified subgroups...

  17. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    Science.gov (United States)

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach suffers from a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes possible the high-resolution shape analysis of large macromolecule surfaces. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
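
    The surface-based idea can be sketched as a multi-source Dijkstra run restricted to the surface mesh graph; this is a hedged illustration (the actual SES meshing and hull detection are outside its scope):

    ```python
    import heapq

    def travel_depth(adj, hull_vertices):
        """Multi-source Dijkstra over a surface mesh graph, seeded at the
        vertices lying on the convex hull (depth 0). Each vertex's Travel
        Depth estimate is its shortest on-surface distance to the hull; no
        volume samples are ever processed. `adj` maps a vertex to a list
        of (neighbor, edge_length) pairs."""
        depth = {v: float("inf") for v in adj}
        for v in hull_vertices:
            depth[v] = 0.0
        heap = [(0.0, v) for v in hull_vertices]
        heapq.heapify(heap)
        while heap:
            d, v = heapq.heappop(heap)
            if d > depth[v]:
                continue  # stale heap entry
            for u, w in adj[v]:
                if d + w < depth[u]:
                    depth[u] = d + w
                    heapq.heappush(heap, (d + w, u))
        return depth
    ```

    Restricting the graph to surface vertices is exactly what removes the interior and exterior samples that make the volume-based approach expensive.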

  18. Identification of partial blockages in pipelines using genetic algorithms

    Indian Academy of Sciences (India)

    A methodology to identify the partial blockages in a simple pipeline using genetic algorithms for non-harmonic flows is presented in this paper. A sinusoidal flow generated by the periodic on-and-off operation of a valve at the outlet is investigated in the time domain and it is observed that pressure variation at the valve is ...

  19. A simple algorithm for the identification of clinical COPD phenotypes

    NARCIS (Netherlands)

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim; Piquet, Jacques; ter Riet, Gerben; Garcia-Aymerich, Judith; Cosio, Borja; Bakke, Per; Puhan, Milo A.; Langhammer, Arnulf; Alfageme, Inmaculada; Almagro, Pere; Ancochea, Julio; Celli, Bartolome R.; Casanova, Ciro; de-Torres, Juan P.; Decramer, Marc; Echazarreta, Andrés; Esteban, Cristobal; Gomez Punter, Rosa Mar; Han, MeiLan K.; Johannessen, Ane; Kaiser, Bernhard; Lamprecht, Bernd; Lange, Peter; Leivseth, Linda; Marin, Jose M.; Martin, Francis; Martinez-Camblor, Pablo; Miravitlles, Marc; Oga, Toru; Sofia Ramírez, Ana; Sin, Don D.; Sobradillo, Patricia; Soler-Cataluña, Juan J.; Turner, Alice M.; Verdu Rivera, Francisco Javier; Soriano, Joan B.; Roche, Nicolas

    2017-01-01

    This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis resulting in the identification of

  20. Tau Reconstruction, Identification Algorithms and Performance in ATLAS

    DEFF Research Database (Denmark)

    Simonyan, M.

    2013-01-01

    identification of hadronically decaying tau leptons is achieved by using detailed information from tracking and calorimeter detector components. Variables describing the properties of calorimeter energy deposits and track reconstruction within tau candidates are combined in multi-variate discriminants...... by investigating single hadron calorimeter response, as well as kinematic distributions in Z → ττ events....

  1. Monitoring Antarctic ice sheet surface melting with TIMESAT algorithm

    Science.gov (United States)

    Ye, Y.; Cheng, X.; Li, X.; Liang, L.

    2011-12-01

    The Antarctic ice sheet contributes significantly to the global heat budget by controlling the exchange of heat, moisture, and momentum at the surface-atmosphere interface, which directly influences the global atmospheric circulation and climate change. Ice sheet melting increases snow moisture, which accelerates the disintegration and movement of the ice sheet; as a result, detecting Antarctic ice sheet melting is essential for global climate change research. In the past decades, various methods have been proposed for extracting snowmelt information from multi-channel satellite passive microwave data. Some methods are based on brightness temperature values or a composite index of them, and others are based on edge detection. TIMESAT (Time-series of Satellite sensor data) is an algorithm for extracting seasonality information from time series of satellite sensor data. With TIMESAT, the long brightness temperature time series (SSM/I 19H) is fitted by a double logistic function, and snow is classified into wet and dry snow with a generalized Gaussian model. The results were compared with those from a wavelet algorithm, and Antarctic automatic weather station data were used for ground verification; this shows that the algorithm is effective in ice sheet melting detection. The spatial distribution of melting areas (Fig. 1) shows that the majority of melting areas are located on the edge of the Antarctic ice shelf region, affected by land cover type, surface elevation and geographic location (latitude). In addition, Antarctic ice sheet melting varies with the seasons: it is particularly acute in summer, peaking in December and January and staying low in March. In summary, from 1988 to 2008, the Ross Ice Shelf and Ronne Ice Shelf have the greatest interannual variability in amount of melting, which largely determines the overall interannual variability in Antarctica. Other regions, especially the Larsen Ice Shelf and Wilkins Ice Shelf, which are in the Antarctic Peninsula
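
    The double logistic model that TIMESAT-style fits use for one melt season can be sketched directly; parameter names below are illustrative assumptions, not TIMESAT's own:

    ```python
    import math

    def double_logistic(t, base, amp, t_on, k_on, t_off, k_off):
        """A rising sigmoid (melt onset at t_on) minus a falling sigmoid
        (refreeze at t_off) on top of a dry-snow baseline: the shape fitted
        to a seasonal brightness temperature time series."""
        rise = 1.0 / (1.0 + math.exp(-k_on * (t - t_on)))
        fall = 1.0 / (1.0 + math.exp(-k_off * (t - t_off)))
        return base + amp * (rise - fall)
    ```

    Between onset and refreeze the curve sits near base + amp; outside the season it returns to the baseline, which is what lets the fit delineate the melt period.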

  2. Computer vision algorithm for diabetic foot injury identification and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R., E-mail: lsolis@uaz.edu.mx [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Diabetic foot is one of the most devastating consequences of diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratories designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms: the specialist completes a questionnaire based solely on observation and an invasive wound measurement, and issues a diagnosis from the questionnaire. In this sense, the diagnosis relies only on the criteria and experience of the specialist. For some variables, such as the lesion area or location, this dependency is not acceptable. Bio-engineering currently plays a key role in the diagnosis of different chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot: clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques that optimizes the results of diabetic foot lesion evaluation. Using advanced techniques for object segmentation and adjusting the sensitivity parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and their location in a non-invasive way. (Author)

  3. Computer vision algorithm for diabetic foot injury identification and evaluation

    International Nuclear Information System (INIS)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R.

    2016-10-01

    Diabetic foot is one of the most devastating consequences of diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratories designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms: the specialist completes a questionnaire based solely on observation and an invasive wound measurement, and issues a diagnosis from the questionnaire. In this sense, the diagnosis relies only on the criteria and experience of the specialist. For some variables, such as the lesion area or location, this dependency is not acceptable. Bio-engineering currently plays a key role in the diagnosis of different chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot: clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques that optimizes the results of diabetic foot lesion evaluation. Using advanced techniques for object segmentation and adjusting the sensitivity parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and their location in a non-invasive way. (Author)

  4. PyBact: an algorithm for bacterial identification.

    Science.gov (United States)

    Nantasenamat, Chanin; Preeyanon, Likit; Isarankura-Na-Ayudhya, Chartchalerm; Prachayasittikul, Virapong

    2011-01-01

    PyBact is a software package written in Python for bacterial identification. The code simulates the predefined behavior of bacterial species by generating a simulated data set based on the frequency table of biochemical tests from a diagnostic microbiology textbook. The generated data were used to construct predictive models by machine learning approaches, and results indicated that the classifiers could predict their respective bacterial classes with accuracy in excess of 99%.
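    The simulate-then-learn scheme can be illustrated with a toy version: Bernoulli test outcomes are drawn from a frequency table and scored with a naive-Bayes-style rule built directly from the same table. The species, tests, and frequencies below are hypothetical stand-ins for the textbook table PyBact uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical positivity frequencies (fraction of strains testing positive)
# for three biochemical tests (indole, citrate, lactose) per species.
freq = {
    "E. coli":       [0.98, 0.05, 0.95],
    "K. pneumoniae": [0.10, 0.95, 0.98],
    "P. mirabilis":  [0.02, 0.60, 0.02],
}

def simulate(n_per_species=200):
    """Draw simulated test panels: one Bernoulli outcome per test per strain."""
    X, y = [], []
    for label, p in freq.items():
        X.append(rng.random((n_per_species, len(p))) < np.array(p))
        y += [label] * n_per_species
    return np.vstack(X).astype(float), np.array(y)

def predict(x, eps=1e-3):
    """Naive-Bayes-style scoring straight from the frequency table."""
    best, best_lp = None, -np.inf
    for label, p in freq.items():
        p = np.clip(np.array(p), eps, 1 - eps)
        lp = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

X, y = simulate()
acc = np.mean([predict(x) == t for x, t in zip(X, y)])
```

    With only three well-separated tests the toy classifier already scores well; the paper's near-99% figure comes from a much richer test panel.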

  5. Performance study of LMS based adaptive algorithms for unknown system identification

    International Nuclear Information System (INIS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-01-01

    Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved LMS variants on their robustness and misalignment.
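    As a concrete instance of the ASI setup, a minimal NLMS identifier is sketched below: a random input drives an unknown FIR system, the output is contaminated by noise, and the adaptive filter tracks the system. Filter length, step size, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16                                               # filter length
h = rng.standard_normal(M)
h /= np.linalg.norm(h)                               # unknown system to identify

def nlms(x, d, M, mu=0.5, eps=1e-8):
    """Normalized LMS: LMS with the step scaled by instantaneous input power."""
    w = np.zeros(M)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]                 # tap-delay line [x[n], ..., x[n-M+1]]
        e = d[n] - w @ u                             # a-priori output error
        w += mu * e * u / (u @ u + eps)              # normalized update
    return w

N = 5000
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)   # noisy measured output
w = nlms(x, d, M)
misalignment = np.linalg.norm(w - h) / np.linalg.norm(h)
```

    The normalized misalignment is the figure of merit the abstract refers to; here it drops well below 10% of the system norm after a few thousand samples.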

  6. Parameter Identification of Steam Turbine Speed Governing System Using an Improved Gravitational Search Algorithm

    Science.gov (United States)

    Zhong, Jing-liang; Deng, Tong-tian; Wang, Jia-sheng

    2017-05-01

    Since most traditional parameter identification methods for the steam turbine speed governing system (STSGS) involve a heavy workload, poor fitness and long turnaround when carried out by hand, a novel improved gravitational search algorithm (VGSA) is proposed in this paper, building on an improved gravitational search algorithm (IGSA). In VGSA, the gravitational parameter is dynamically adjusted according to the current fitness, and the search space is progressively narrowed during the iteration process. The performance of the new method was verified by comparing its STSGS identification results with those of IGSA, using measured data from a 600 MW and a 300 MW thermal power unit. The results show that VGSA achieves higher precision and higher speed during identification, providing a new scheme for steam turbine speed governing system identification.

  7. A gradient based algorithm to solve inverse plane bimodular problems of identification

    Science.gov (United States)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  8. Bias-Compensated Normalized Maximum Correntropy Criterion Algorithm for System Identification with Noisy Input

    OpenAIRE

    Ma, Wentao; Zheng, Dongqiao; Li, Yuanhao; Zhang, Zhiyu; Chen, Badong

    2017-01-01

    This paper proposes a bias-compensated normalized maximum correntropy criterion (BCNMCC) algorithm characterized by its low steady-state misalignment for system identification with noisy input in an impulsive output noise environment. The normalized maximum correntropy criterion (NMCC) is derived from a correntropy based cost function, which is rather robust with respect to impulsive noises. To deal with the noisy input, we introduce a bias-compensated vector (BCV) to the NMCC algorithm, and th...

  9. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  10. Fixation identification: the optimum threshold for a dispersion algorithm.

    Science.gov (United States)

    Blignaut, Pieter

    2009-05-01

    It is hypothesized that the number, position, size, and duration of fixations are functions of the metric used for dispersion in a dispersion-based fixation detection algorithm, as well as of the threshold value. The sensitivity of the I-DT algorithm to the various independent variables was determined through the analysis of gaze data from chess players during a memory recall experiment. A procedure was followed in which scan paths were generated at distinct intervals across a range of threshold values for each of five different metrics of dispersion. The percentage of points of regard (PORs) used, the number of fixations returned, the spatial dispersion of PORs within fixations, and the difference between the scan paths were used as indicators to determine an optimum threshold value. It was found that a fixation radius of 1 degree provides a threshold that will ensure replicable results in terms of the number and position of fixations while utilizing about 90% of the gaze data captured.
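    The I-DT procedure the study evaluates can be sketched as follows, with dispersion measured as the sum of the x- and y-ranges (one of several metrics the paper compares); the thresholds and the gaze data below are illustrative.

```python
def idt_fixations(points, disp_threshold, min_duration):
    """I-DT: slide a window; where dispersion stays under the threshold for at
    least min_duration, grow it maximally and emit a fixation at the centroid.

    points: list of (t, x, y); dispersion = (max x - min x) + (max y - min y).
    """
    def dispersion(win):
        xs = [p[1] for p in win]
        ys = [p[2] for p in win]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i < len(points):
        j = i                                   # expand window to cover min_duration
        while j < len(points) and points[j][0] - points[i][0] < min_duration:
            j += 1
        if j == len(points):
            break                               # too little data left for a fixation
        if dispersion(points[i:j + 1]) <= disp_threshold:
            while j + 1 < len(points) and dispersion(points[i:j + 2]) <= disp_threshold:
                j += 1                          # grow while dispersion stays low
            win = points[i:j + 1]
            cx = sum(p[1] for p in win) / len(win)
            cy = sum(p[2] for p in win) / len(win)
            fixations.append((win[0][0], win[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1                              # no fixation here; drop first point

    return fixations

# two fixations (t in ms) separated by a single saccade sample
gaze = ([(t, 0.0, 0.0) for t in range(0, 110, 10)]
        + [(150, 5.0, 5.0)]
        + [(t, 10.0, 10.0) for t in range(200, 310, 10)])
fix = idt_fixations(gaze, disp_threshold=1.0, min_duration=50)
```

    On this toy trace the algorithm returns two fixations, spanning 0-100 ms and 200-300 ms, and discards the saccade sample between them.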

  11. ALGORITHMS FOR IDENTIFICATION OF CUES WITH AUTHORS’ TEXT INSERTIONS IN BELARUSIAN ELECTRONIC BOOKS

    Directory of Open Access Journals (Sweden)

    Y. S. Hetsevich

    2014-01-01

    The main stages of algorithms for identifying characters' gender in Belarusian electronic texts are described. The algorithms are based on punctuation marking and the detection of gender indicators, such as past tense verbs and nouns with gender attributes. Special dictionaries are developed for the indicators, making the algorithms more language-independent and allowing dictionaries to be created for cognate languages. Testing showed the following results: the harmonic mean for masculine gender detection is 92.2%, and for feminine gender detection 90.4%.

  12. E-Waste recycling: new algorithm for hyper spectral identification

    International Nuclear Information System (INIS)

    Picon-Ruiz, A.; Echazarra-Higuet, J.; Bereciartua-Perez, A.

    2010-01-01

    Waste Electrical and Electronic Equipment (WEEE) constitutes 4% of municipal waste in Europe and increases by 16-28% every five years. Nowadays, Europe produces 6.5 million tonnes of WEEE per year, and currently 90% goes to landfill. WEEE is growing 3 times faster than municipal waste, and this figure is expected to increase to 12 million tonnes by 2015. The aim of this paper is to apply a new technology to separate non-ferrous metal waste from WEEE, identifying materials by multi- and hyper-spectral imaging and inserting the technology into a recycling plant. This technology will overcome the shortcomings of current methods, which are unable to separate valuable materials that are very similar in colour, size or shape. For this reason, it is necessary to develop new algorithms able to distinguish among these materials and to meet the timing requirements. (Author). 22 refs.

  13. Cloud identification using genetic algorithms and massively parallel computation

    Science.gov (United States)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for the Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated, so these results are encouraging even though less impressive than the cloud experiment. Successful completion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of genetic algorithm (GA) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user

  14. Particle identification algorithms for the HARP forward spectrometer

    CERN Document Server

    Catanesi, M G; Radicioni, E; Edgecock, R; Ellis, M; Robbins, S; Soler, F J P; Go Xling, C; Bunyatov, S; Chelkov, G; Chukanov, A; Dedovitch, D; Gostkin, M; Guskov, A; Khartchenko, D; Klimov, O; Krasnoperov, A; Krumshtein, Z; Kustov, D; Nefedov, Y; Popov, B; Serdiouk, V; Tereshchenko, V; Zhemchugov, A; Di Capua, E; Vidal-Sitjes, G; Artamonov, A; Arce, P; Giani, S; Gilardoni, S; Gorbunov, P; Grant, A; Grossheim, A; Gruber, P; Ivanchenko, V; Kayis-Topaksu, A; Panman, J; Papadopoulos, I; Pasternak, J; Chernyaev, E; Tsukerman, I; Veenhof, R; Wiebusch, C; Zucchelli, P; Blondel, A; Borghi, S; Campanelli, M; Cervera-Villanueva, A; Morone, M C; Prior, G; Schroeter, R; Kato, I; Nakaya, T; Nishikawa, K; Ueda, S; Gastaldi, Ugo; Mills, G B; Graulich, J S; Grégoire, G; Bonesini, M; De Min, A; Ferri, F; Paganoni, M; Paleari, F; Kirsanov, M; Bagulya, A; Grichine, V; Polukhina, N; Palladino, V; Coney, L; Schmitz, D; Barr, G; De Santo, A; Pattison, C; Zuber, K; Bobisut, F; Gibin, D; Guglielmi, A; Laveder, M; Menegolli, A; Mezzetto, M; Dumarchez, J; Vannucci, F; Ammosov, V; Koreshev, V; Semak, A; Zaets, V; Dore, U; Orestano, D; Pastore, F; Tonazzo, A; Tortora, L; Booth, C; Buttar, C; Hodgson, P; Howlett, L; Bogomilov, M; Chizhov, M; Kolev, D; Tsenov, R; Piperov, S; Temnikov, P; Apollonio, M; Chimenti, P; Giannini, G; Santin, G; Hayato, Y; Ichikawa, A; Kobayashi, T; Burguet-Castell, J; Gómez-Cadenas, J J; Novella, P; Sorel, M; Tornero, A

    2007-01-01

    The particle identification (PID) methods used for the calculation of secondary pion yields with the HARP forward spectrometer are presented. Information from time of flight and Cherenkov detectors is combined using likelihood techniques. The efficiencies and purities associated with the different PID selection criteria are obtained from the data. For the proton–aluminium interactions at 12.9 GeV/c incident momentum, the PID efficiencies for positive pions are 86% in the momentum range below 2 GeV/c, 92% between 2 and 3 GeV/c and 98% in the momentum range above 3 GeV/c. The purity of the selection is better than 92% for all momenta. Special emphasis has been put on understanding the main error sources. The final PID uncertainty on the pion yield is 3.3%.
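    The likelihood combination used in such PID schemes can be illustrated with a toy two-hypothesis example: independent time-of-flight (velocity) and Cherenkov (photoelectron yield) measurements multiply into a joint likelihood per particle hypothesis. The response means and resolutions below are invented for illustration and are not HARP detector values.

```python
import math

def gaussian(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical detector responses for a 1.5 GeV/c track: expected velocity (beta)
# from time of flight, and expected Cherenkov photoelectron yield.
hypotheses = {
    "pion":   {"beta": 0.996, "npe": 15.0},
    "proton": {"beta": 0.848, "npe": 0.1},
}

def pid_posteriors(beta_meas, npe_meas, sigma_beta=0.004, sigma_npe=3.0):
    """Combine ToF and Cherenkov information as a product of likelihoods,
    then normalize (flat priors assumed)."""
    like = {}
    for name, h in hypotheses.items():
        like[name] = (gaussian(beta_meas, h["beta"], sigma_beta)
                      * gaussian(npe_meas, h["npe"], sigma_npe))
    total = sum(like.values())
    return {k: v / total for k, v in like.items()}

post = pid_posteriors(0.995, 14.0)   # a fast, light track: pion-like
```

    A measured velocity near the speed of light together with a large photoelectron yield makes the pion hypothesis overwhelmingly likely, mirroring how the likelihood selection separates pions from protons.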

  15. Methodical algorithm of harmonized identification of plant varieties morphological characteristics

    Directory of Open Access Journals (Sweden)

    Н. В. Павлюк

    2013-08-01

    The article sets out the requirements for preparing the table of characteristics in Test Guidelines for the examination of new varieties for distinctness, uniformity and stability. Variety identification is carried out by describing the morphological characteristics presented in the table; being standard, these characteristics must comply with UPOV requirements. An explanation of characteristics marked with an asterisk (*) is given, and their importance for the international harmonization of variety descriptions by UPOV member states is indicated. The article also explains the grouping characteristics and the documented conditions for revealing them. By convention, any characteristic begins with the identification of the plant or plant part, followed after a colon by the name of the organ or its part, or by the observed peculiarity. Requirements for formulating characteristic names are given: a name should be clear enough to understand without further qualifying conditions. The degree of manifestation of every characteristic is set for defining it and producing harmonized descriptions. Each degree of expression is assigned a corresponding numeric code, which facilitates data entry, the drafting of descriptions and their exchange; together these codes form the morphological code of the variety phenotype. It is emphasized that the characteristics are divided into qualitative, quantitative and pseudo-qualitative, and that their order in the table follows botanical or chronological order. It is noted that the Test Guidelines should contain all the characteristics suitable for the DUS examination, without restrictions on their inclusion in the Methods, and that every characteristic can be used from a complete list of characteristics.

  16. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    Energy Technology Data Exchange (ETDEWEB)

    Paff, Marc Gerrit, E-mail: mpaff@umich.edu; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-21

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
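    The feature-and-match chain described above (pulse-height distribution, then its cumulative distribution function, then a power spectral density, matched with a spectral angle mapper) can be sketched as below. The two reference "sources" are synthetic distributions standing in for measured reference spectra.

```python
import numpy as np

def psd_of_cdf(pulse_heights, bins=128):
    """Feature vector: power spectral density of the spectrum's CDF."""
    hist, _ = np.histogram(pulse_heights, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / max(hist.sum(), 1)
    return np.abs(np.fft.rfft(cdf)) ** 2

def spectral_angle(a, b):
    """Spectral angle mapper: angle between feature vectors (smaller = closer)."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(2)
# Hypothetical reference sources with distinct pulse-height distributions.
ref_a = psd_of_cdf(rng.exponential(0.2, 20000))    # soft, continuum-like spectrum
ref_b = psd_of_cdf(rng.normal(0.6, 0.05, 20000))   # peaked spectrum
meas = psd_of_cdf(rng.exponential(0.2, 5000))      # unknown measurement

best = min([("A", spectral_angle(meas, ref_a)),
            ("B", spectral_angle(meas, ref_b))],
           key=lambda kv: kv[1])
```

    The measurement, drawn from the same distribution as reference A, yields the smaller spectral angle and is matched to it.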

  17. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    Science.gov (United States)

    Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-01

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.

  18. Statistical algorithms for identification of astronomical X-ray sources

    Science.gov (United States)

    Ziaeepour, H.; Rosen, S.

    2008-01-01

    Observations by present and future X-ray telescopes include a large number of serendipitous sources of unknown types. They are a rich source of knowledge about X-ray dominated astronomical objects, their distribution, and their evolution. The large number of these sources does not permit individual spectroscopic follow-up and classification. Here we use Chandra Multi-Wavelength public data to investigate a number of statistical algorithms for classification of X-ray sources with optical imaging follow-up. We show that, up to statistical uncertainties, each class of X-ray sources has specific photometric characteristics that can be used for its classification. We assess the relative and absolute performance of classification methods and measured features by comparing the behaviour of physical quantities for statistically classified objects with what is obtained from spectroscopy. We find that, among the methods we have studied, the multi-dimensional probability distribution is the best for classifying both source type and redshift, but it needs a sufficiently large input (learning) data set. In the absence of such data, a mixture of various methods can give a better final result. We discuss some potential applications of the statistical classification and the enhancement of information obtained in this way. We also assess the effect of classification methods and input data sets on astronomical conclusions such as the distribution and properties of X-ray selected sources.

  19. Structural System Identification in the Time Domain using Evolutionary and Behaviorally Inspired Algorithms and their Hybrids

    Directory of Open Access Journals (Sweden)

    S. Sandesh

    2009-12-01

    In this study, parametric identification of structural properties such as stiffness and damping is carried out using acceleration responses in the time domain. The process consists of minimizing the difference between the experimentally measured and theoretically predicted acceleration responses. The unknown parameters of certain numerical models, viz., a ten-degree-of-freedom lumped-mass system, a nine-member truss and a non-uniform simply supported beam, are thus identified. Evolutionary and behaviorally inspired optimization algorithms are used for the minimization, and the performance of their hybrid combinations is also investigated. The Genetic Algorithm (GA) is a well known evolutionary algorithm used in system identification. Recently Particle Swarm Optimization (PSO), a behaviorally inspired algorithm, has emerged as a strong contender to GA in speed and accuracy. The discrete Ant Colony Optimization (ACO) method is yet another behaviorally inspired method studied here. The performance (speed and accuracy) of each algorithm alone and in hybrid combinations such as GA with PSO, ACO with PSO and ACO with GA is extensively investigated using the numerical examples, with the effects of noise added for realism. The GA+PSO hybrid algorithm was found to give the best performance in speed and accuracy compared to all others. The next best was pure PSO, followed by pure GA. ACO performed poorly in all the cases.

  20. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    Science.gov (United States)

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the third-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and the cubic kernels, respectively, of the classical Volterra model. The alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To demonstrate the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
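    The recursion at the heart of the alternating scheme reduces, in its simplest form, to standard recursive least squares run on a linear-in-parameters model. The sketch below shows that core update on a hypothetical three-parameter system; the Volterra/PARAFAC machinery and the alternating structure are omitted.

```python
import numpy as np

def rls(X, d, lam=0.99, delta=100.0):
    """Recursive least squares with forgetting factor lam.

    X: (N, M) regressor rows, d: (N,) desired output.
    """
    M = X.shape[1]
    w = np.zeros(M)
    P = delta * np.eye(M)                  # inverse correlation matrix estimate
    for u, dn in zip(X, d):
        k = P @ u / (lam + u @ P @ u)      # gain vector
        w += k * (dn - w @ u)              # update toward the a-priori error
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(3)
w_true = np.array([0.5, -1.0, 0.25])       # hypothetical system parameters
X = rng.standard_normal((2000, 3))
d = X @ w_true + 0.01 * rng.standard_normal(2000)
w_hat = rls(X, d)
```

    With a forgetting factor near one and low output noise, the recursion converges to the true parameters; the ARGLS analysis in the paper concerns when such convergence can be guaranteed for the alternating, generalized variant.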

  1. A Soft Parameter Function Penalized Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Yingsong Li

    2017-01-01

    A soft parameter function penalized normalized maximum correntropy criterion (SPF-NMCC) algorithm is proposed for sparse system identification. The proposed SPF-NMCC algorithm is derived on the basis of normalized adaptive filter theory, the maximum correntropy criterion (MCC) algorithm and zero-attracting techniques. A soft parameter function is incorporated into the cost function of the traditional normalized MCC (NMCC) algorithm to exploit the sparsity properties of sparse signals. The proposed SPF-NMCC algorithm is mathematically derived in detail. As a result, it provides an efficient zero-attractor term to effectively attract the zero taps and near-zero coefficients to zero and, hence, speed up convergence. Furthermore, its estimation behavior is evaluated by identifying a sparse system and a sparse acoustic echo channel. Computer simulation results indicate that the proposed SPF-NMCC algorithm achieves better performance than the MCC, NMCC and LMS (least mean square) algorithms and their zero-attraction forms in terms of both convergence speed and steady-state performance.

  2. Comparison of Clustering Algorithms for the Identification of Topics on Twitter

    Directory of Open Access Journals (Sweden)

    Marjori N. M. Klinczak

    2016-05-01

    Topic identification in social networks has become an important task when dealing with event detection, particularly when global communities are affected. To address this problem, text processing techniques and machine learning algorithms have been extensively used. In this paper we compare four clustering algorithms – k-means, k-medoids, DBSCAN and NMF (Non-negative Matrix Factorization) – for detecting topics in textual messages obtained from Twitter. The algorithms were applied to a database composed of tweets with hashtags related to the recent Nepal earthquake as the initial context. The results suggest that the NMF clustering algorithm presents superior results, providing simpler clusters that are also easier to interpret.
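    A minimal version of the k-means branch of such a comparison is sketched below, on a four-tweet toy corpus with hand-seeded centers; real runs would use a much larger vocabulary, TF-IDF weighting, and k-means++ or random restarts.

```python
import numpy as np

tweets = [
    "earthquake relief fund nepal",
    "nepal earthquake donations needed",
    "new phone launch event today",
    "phone launch live stream",
]

# Bag-of-words term-frequency vectors, length-normalized.
vocab = sorted({w for t in tweets for w in t.split()})
X = np.array([[t.split().count(w) for w in vocab] for t in tweets], float)
X /= np.linalg.norm(X, axis=1, keepdims=True)

def kmeans(X, centers, iters=10):
    """Plain Lloyd iterations from given initial centers."""
    centers = centers.copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(X, X[[0, 2]])   # seed one center per apparent topic (simplification)
```

    The two earthquake tweets share vocabulary and land in one cluster; the two product-launch tweets land in the other.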

  3. Parameter identification based on modified simulated annealing differential evolution algorithm for giant magnetostrictive actuator

    Science.gov (United States)

    Gao, Xiaohui; Liu, Yongguang

    2018-01-01

    There is a serious nonlinear relationship between input and output in the giant magnetostrictive actuator (GMA), so establishing a mathematical model and identifying its parameters are very important for studying its characteristics and improving control accuracy. The current-displacement model is first built based on Jiles-Atherton (J-A) model theory, the Ampere circuital theorem and a stress-magnetism coupling model. Laws relating the unknown parameters to the hysteresis loops are then studied to determine the data-taking scope. The modified simulated annealing differential evolution algorithm (MSADEA) is proposed, taking full advantage of the differential evolution algorithm's fast convergence and the simulated annealing algorithm's jumping property to enhance convergence speed and performance. Simulation and experiment results show that this algorithm is not only simple and efficient, but also has fast convergence speed and high identification accuracy.
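    The hybrid idea (differential evolution mutation and crossover, with a simulated-annealing acceptance step that can occasionally keep a worse trial) can be sketched as below on a hypothetical two-parameter identification problem. This is a generic SA-DE hybrid under our own cooling and acceptance choices, not the exact MSADEA of the paper.

```python
import numpy as np

def sa_de(f, bounds, pop=20, iters=200, F=0.6, CR=0.9, T0=1.0, seed=0):
    """DE/rand/1/bin with a simulated-annealing acceptance step: a worse trial
    may still replace its parent with probability exp(-delta/T), T cooling to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = lo + rng.random((pop, len(lo))) * (hi - lo)
    fX = np.array([f(x) for x in X])
    best_i = int(np.argmin(fX))
    best_x, best_f = X[best_i].copy(), fX[best_i]
    for t in range(iters):
        T = T0 * (1.0 - t / iters) + 1e-12            # linear cooling schedule
        for i in range(pop):
            a, b, c = rng.choice(pop, 3, replace=False)  # simplified: may pick i
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            cross = rng.random(len(lo)) < CR
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            delta = ft - fX[i]
            if delta < 0 or rng.random() < np.exp(-delta / T):
                X[i], fX[i] = trial, ft               # annealed acceptance
            if ft < best_f:
                best_x, best_f = trial.copy(), ft     # elitist bookkeeping
    return best_x, best_f

# identify parameters of a hypothetical response model y = p0 * exp(-p1 * t)
t_data = np.linspace(0.0, 1.0, 50)
y_data = 2.0 * np.exp(-3.0 * t_data)
loss = lambda p: float(np.sum((p[0] * np.exp(-p[1] * t_data) - y_data) ** 2))
p_best, f_best = sa_de(loss, [(0.0, 5.0), (0.0, 5.0)])
```

    Early in the run the high temperature lets the population jump out of local basins; as T shrinks the acceptance rule becomes greedy and the search refines around the optimum.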

  4. A Brightness-Referenced Star Identification Algorithm for APS Star Trackers

    Science.gov (United States)

    Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning

    2014-01-01

    Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes k-vector search theory and adds the imaged stars' intensities to narrow the search scope and thereby increase the efficiency of the matching process. Based on the imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If reference bright stars (stars brighter than third magnitude) appear, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog of 1500 stars, the results show that without false stars the efficiency of this new method is 4∼5 times that of the pyramid method and 35∼37 times that of the geometric method. PMID:25299950
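    The k-vector range search that underpins the matching step can be sketched independently of the star geometry: a line fitted over a sorted feature array (angular distances or magnitudes, say) gives near-constant-time retrieval of all entries in a query interval. The construction below is a simplified version of the technique, assuming non-constant data.

```python
import numpy as np

def build_kvector(values):
    """k-vector over a sorted array: fit the line z(i) = m*i + q through the
    sorted values, then record how many entries lie at or below each line height."""
    s = np.sort(np.asarray(values, float))
    n = len(s)
    m = (s[-1] - s[0]) / (n - 1)                 # assumes m > 0 (non-constant data)
    q = s[0]
    k = np.searchsorted(s, m * np.arange(n) + q, side="right")
    return s, m, q, k

def range_query(kvec, lo, hi):
    """All values in [lo, hi]: the k-vector bounds a short candidate slice,
    which a final exact filter trims."""
    s, m, q, k = kvec
    n = len(s)
    i0 = min(max(int(np.floor((lo - q) / m)) - 1, 0), n - 1)
    i1 = int(np.ceil((hi - q) / m)) + 1
    start = 0 if i0 == 0 else k[i0]              # everything below start is < lo
    end = n if i1 >= n else k[i1]                # everything at/above end is > hi
    cand = s[start:end]
    return cand[(cand >= lo) & (cand <= hi)]

rng = np.random.default_rng(6)
catalog = rng.random(1000)                       # hypothetical catalog features
kv = build_kvector(catalog)
hits = range_query(kv, 0.30, 0.40)
```

    Brightness referencing then amounts to running such queries over intervals narrowed by the imaged stars' intensities, shrinking the candidate set before geometric matching.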

  5. Response-only modal identification using random decrement algorithm with time-varying threshold level

    International Nuclear Information System (INIS)

    Lin, Chang Sheng; Tseng, Tse Chuan

    2014-01-01

    Modal identification from response data only is studied for structural systems under nonstationary ambient vibration. The topic of this paper is the estimation of modal parameters from nonstationary ambient vibration data by applying the random decrement algorithm with a time-varying threshold level. In the conventional random decrement algorithm, the threshold level for evaluating random decrement signatures is defined as the standard deviation of the response data of the reference channel. In practice, however, the random decrement signatures may be distorted by noise in the original response data. To improve the accuracy of identification, a modification of the sampling procedure in the random decrement algorithm is proposed for modal-parameter identification from nonstationary ambient response data. The time-varying threshold level is introduced for the acquisition of available sample time histories to perform averaging analysis, and is defined as the temporal root-mean-square function of the structural response, which can appropriately describe a wide variety of nonstationary behaviors in reality, such as the time-varying amplitude (variance) of a nonstationary process in a seismic record. Numerical simulations confirm the validity and robustness of the proposed modal-identification method from nonstationary ambient response data under noisy conditions.
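
    The core procedure can be sketched as follows: trigger points are taken where the response crosses a moving-RMS level (the time-varying threshold) rather than a fixed standard deviation, and the segments after each trigger are averaged into the signature. The synthetic signal and window sizes are toy values, not the paper's data.

```python
import math, random

random.seed(0)
n = 4000
# Toy nonstationary "response": a sinusoid with slowly varying amplitude plus noise.
x = [(1.0 + 0.5 * math.sin(0.001 * i)) * math.sin(0.2 * i)
     + 0.1 * random.gauss(0, 1) for i in range(n)]

def moving_rms(sig, win):
    # Temporal root-mean-square over a centered window: the time-varying threshold.
    out = []
    for i in range(len(sig)):
        seg = sig[max(0, i - win // 2): i + win // 2 + 1]
        out.append(math.sqrt(sum(v * v for v in seg) / len(seg)))
    return out

def random_decrement(sig, seg_len=100, win=200):
    thr = moving_rms(sig, win)
    signature = [0.0] * seg_len
    count = 0
    i = 1
    while i < len(sig) - seg_len:
        # Trigger on an upward crossing of the local RMS level.
        if sig[i] >= thr[i] and sig[i - 1] < thr[i - 1]:
            for k in range(seg_len):
                signature[k] += sig[i + k]
            count += 1
            i += seg_len  # keep averaged segments (mostly) disjoint
        i += 1
    return [s / count for s in signature], count

sig, n_seg = random_decrement(x)
print(n_seg)
```

    With a fixed threshold the decaying-amplitude stretches of the record would contribute few or biased triggers; the moving RMS keeps the trigger level proportional to the local response energy.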

  6. A NEW ALGORITHM FOR RADIOISOTOPE IDENTIFICATION OF SHIELDED AND MASKED SNM/RDD MATERIALS

    Energy Technology Data Exchange (ETDEWEB)

    Jeffcoat, R.

    2012-06-05

    Detection and identification of shielded and masked nuclear materials is crucial to national security, but vast borders and high volumes of traffic impose stringent requirements for practical detection systems. Such tools must be mobile, and hence low power, provide a low false-alarm rate, and be sufficiently robust to be operable by non-technical personnel. Currently fielded systems have not achieved all of these requirements simultaneously. Transport modeling such as that done in GADRAS is able to predict observed spectra with a high degree of fidelity; our research focuses on a radionuclide identification algorithm that inverts this modeling within the constraints imposed by a handheld device. Key components of this work include incorporation of uncertainty as a function of both the background radiation estimate and the hypothesized sources, dimensionality reduction, and nonnegative matrix factorization. We have partially evaluated the performance of our algorithm on a third-party data collection made with two different sodium iodide detection devices. Initial results indicate, with caveats, that our algorithm performs as well as or better than the on-board identification algorithms. The system developed was based on a probabilistic approach with an improved approach to variance modeling relative to past work. This system was chosen based on technical innovation and system performance over algorithms developed at two competing research institutions. One key outcome of this probabilistic approach was the development of an intuitive measure of confidence, useful enough that a classification algorithm was developed around alarming on high-confidence targets. This paper presents and discusses results of this novel approach to accurately identifying shielded or masked radioisotopes with radiation detection systems.
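
    One named ingredient, nonnegative matrix factorization, can be sketched with the classic Lee-Seung multiplicative updates on a tiny synthetic "spectra" matrix. This is generic NMF under invented data, not the paper's full pipeline: V ≈ WH with all factors kept nonnegative, as is natural for count spectra.

```python
# V: 4 synthetic "spectra" (rows) over 3 channels, close to rank 1.
V = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [3.0, 6.0, 9.0],
     [1.0, 2.0, 3.1]]

r, rows, cols = 2, len(V), len(V[0])
W = [[0.5 + 0.1 * (i + k) for k in range(r)] for i in range(rows)]
H = [[0.5 + 0.1 * (k + j) for j in range(cols)] for k in range(r)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

for _ in range(200):
    WH = matmul(W, H)
    # H <- H * (W^T V) / (W^T W H): multiplicative update keeps H nonnegative.
    WtV, WtWH = matmul(transpose(W), V), matmul(transpose(W), WH)
    H = [[H[k][j] * WtV[k][j] / (WtWH[k][j] + 1e-12) for j in range(cols)]
         for k in range(r)]
    WH = matmul(W, H)
    # W <- W * (V H^T) / (W H H^T).
    VHt, WHHt = matmul(V, transpose(H)), matmul(WH, transpose(H))
    W = [[W[i][k] * VHt[i][k] / (WHHt[i][k] + 1e-12) for k in range(r)]
         for i in range(rows)]

WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(rows) for j in range(cols))
print(round(err, 6))
```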

  7. Structural Damage Identification of Pipe Based on GA and SCE-UA Algorithm

    Directory of Open Access Journals (Sweden)

    Yaojin Bao

    2013-01-01

    Full Text Available The structure of an offshore platform is very large, prone to cracking under a variety of environmental factors including wind, waves, and ice, and threatened by unexpected events such as earthquakes, typhoons, tsunamis, and ship collisions. Thus, pipes, as a main part of the jacket offshore platform, often develop cracks. However, it is difficult to detect a crack because its location is unknown. In this paper, a genetic algorithm (GA) and the SCE-UA algorithm are each used to detect cracks. In the experiment, five damage cases of the pipe in the platform model are intelligently identified by GA and SCE-UA. The network inputs are the differences between the strain mode shapes. The results of the two algorithms for structural damage diagnosis show that both have high identification accuracy and good adaptability, and the error of the SCE-UA algorithm is smaller. The results also suggest that structural damage in pipes can be identified by intelligent algorithms.

  8. Online Identification of Photovoltaic Source Parameters by Using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Giovanni Petrone

    2017-12-01

    Full Text Available In this paper, an efficient method for the online identification of the photovoltaic single-diode model parameters is proposed. The combination of a genetic algorithm with explicit equations allows obtaining precise results without the direct measurement of the short-circuit current and open-circuit voltage that are typically used in offline identification methods. Since the proposed method requires only voltage and current values close to the maximum power point, it can be easily integrated into any photovoltaic system, and it operates online without compromising the power production. The proposed approach has been implemented and tested on an embedded system, and it exhibits good performance for monitoring/diagnosis applications.
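
    The fitting problem can be sketched as follows: the implicit single-diode equation is evaluated at a few (V, I) samples near the maximum power point, and the squared residual is minimized over the unknowns. A toy random search stands in for the paper's genetic algorithm, only (Iph, I0) are fitted (Rs, Rsh and the ideality factor are held at assumed values), and the module constants are invented.

```python
import math, random

random.seed(2)

# Assumed module constants: 36 cells, ideality factor 1.3, fixed Rs and Rsh.
n_Vt = 36 * 0.0257 * 1.3
Rs, Rsh = 0.3, 200.0

def residual(V, I, Iph, I0):
    # Implicit single-diode equation f(V, I; Iph, I0) = 0 at a sample point.
    return Iph - I0 * math.expm1((V + I * Rs) / n_Vt) - (V + I * Rs) / Rsh - I

def solve_I(V, Iph, I0):
    # Bisection on the implicit equation (f is strictly decreasing in I).
    lo, hi = 0.0, Iph
    for _ in range(60):
        mid = (lo + hi) / 2
        if residual(V, mid, Iph, I0) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Synthetic "measurements" near the maximum power point, from true parameters.
true_Iph, true_I0 = 5.0, 2e-9
samples = [(V, solve_I(V, true_Iph, true_I0)) for V in (20.0, 21.5, 23.0, 24.5)]

def cost(Iph, I0):
    return sum(residual(V, I, Iph, I0) ** 2 for V, I in samples)

# Toy random search standing in for the genetic algorithm of the paper;
# I0 is sampled log-uniformly since it spans orders of magnitude.
best = min(
    ((random.uniform(4.0, 6.0), 10 ** random.uniform(-10.0, -8.0))
     for _ in range(20000)),
    key=lambda p: cost(*p),
)
print(round(best[0], 3), best[1])
```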

  9. Homotopy Iteration Algorithm for Crack Parameters Identification with Composite Element Method

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2013-01-01

    Full Text Available An approach based on a homotopy iteration algorithm is proposed to identify crack parameters in beam structures. In the forward problem, a fully open crack model with the composite element method is employed for the vibration analysis. The dynamic responses of the cracked beam in the time domain are obtained from the Newmark direct integration method. In the inverse analysis, an identification approach based on the homotopy iteration algorithm is studied to identify the location and depth of the crack. The identification equation is derived by minimizing the error between the calculated acceleration response and the simulated measured one. The Newton iterative method with the homotopy equation is employed to track the correct path and improve the convergence of the crack parameters. Two numerical examples are conducted to illustrate the correctness and efficiency of the proposed method, and the effects of influencing parameters, such as measurement time duration, measurement points, division of the homotopy parameter, and measurement noise, are studied.
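
    The Newton-with-homotopy idea can be sketched on a 1-D root-finding example rather than the crack identification equations: H(x, t) = t·f(x) + (1 − t)·(x − x0) is solved while t is stepped from 0 to 1 (the "division of the homotopy parameter"), each Newton solve starting from the previous step's solution.

```python
def f(x):
    return x ** 3 - 2 * x - 5       # root near x = 2.0946

def df(x):
    return 3 * x ** 2 - 2

def homotopy_newton(x0, steps=20, newton_iters=8):
    x = x0
    for s in range(1, steps + 1):
        t = s / steps               # homotopy parameter marches from 0 to 1
        for _ in range(newton_iters):
            H = t * f(x) + (1 - t) * (x - x0)
            dH = t * df(x) + (1 - t)
            x -= H / dH             # Newton step on H(x, t) = 0
    return x

root = homotopy_newton(x0=0.5)
print(round(root, 6))
```

    At t = 0 the solution is just the starting guess x0; gradually deforming the problem toward f(x) = 0 keeps every Newton solve well inside its basin of convergence, which is the robustness the abstract attributes to the homotopy equation.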

  10. Application of decision tree algorithm for identification of rock forming minerals using energy dispersive spectrometry

    Science.gov (United States)

    Akkaş, Efe; Çubukçu, H. Evren; Artuner, Harun

    2014-05-01

    C5.0 Decision Tree algorithm. The predictions of the decision tree classifier, namely the matching of the test data with the appropriate mineral group, yield an overall accuracy of >90%. Besides, the algorithm successfully discriminated some mineral groups despite their similar elemental compositions, such as orthopyroxene ((Mg,Fe)2[Si2O6]) and olivine ((Mg,Fe)2[SiO4]). Furthermore, the effects of various operating conditions on the classifier were insignificant. These results demonstrate that the decision tree algorithm stands as an accurate, rapid and automated method for mineral classification/identification. Hence, the decision tree algorithm would be a promising component of an expert system focused on real-time, automated mineral identification using energy dispersive spectrometers, without being affected by the operating conditions. Keywords: mineral identification, energy dispersive spectrometry, decision tree algorithm.
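
    The olivine/orthopyroxene discrimination comes down to stoichiometry that a single tree split can capture: the cation ratio Si/(Mg+Fe) is ≈0.5 for olivine ((Mg,Fe)2SiO4) and ≈1.0 for orthopyroxene ((Mg,Fe)2Si2O6). The toy decision stump below uses idealized atomic fractions, not real EDS data or the paper's C5.0 tree.

```python
# Idealized compositions (atomic proportions), invented for illustration.
samples = [
    {"Si": 1, "Mg": 1.2, "Fe": 0.8, "label": "olivine"},
    {"Si": 1, "Mg": 1.6, "Fe": 0.4, "label": "olivine"},
    {"Si": 2, "Mg": 1.1, "Fe": 0.9, "label": "orthopyroxene"},
    {"Si": 2, "Mg": 1.7, "Fe": 0.3, "label": "orthopyroxene"},
]

def classify(s, threshold=0.75):
    # Single decision-tree split on the Si/(Mg+Fe) cation ratio.
    ratio = s["Si"] / (s["Mg"] + s["Fe"])
    return "olivine" if ratio < threshold else "orthopyroxene"

predictions = [classify(s) for s in samples]
print(predictions)
```

    A real tree learns such thresholds (and many more, over full spectra) from training data; the point here is only why elementally similar phases remain separable.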

  11. Novel Muon Identification Algorithms for the LHCb upgrade

    CERN Document Server

    Cogoni, V

    2017-01-01

    After the second Long Shutdown of the LHC scheduled for 2020, LHCb will operate at an instantaneous luminosity of $2·10^{33} cm^{−2}s^{−1}$ and at a centre-of-mass energy of 14 TeV. In this context, an overview of possible new algorithms for muon identification for the LHCb Upgrade is given. In particular, the performance on combinatorial background rejection is shown, together with extrapolations to upgrade conditions.

  12. Online identification algorithms for integrated dielectric electroactive polymer sensors and self-sensing concepts

    International Nuclear Information System (INIS)

    Hoffstadt, Thorben; Griese, Martin; Maas, Jürgen

    2014-01-01

    Transducers based on dielectric electroactive polymers (DEAP) use electrostatic pressure to convert electric energy into strain energy or vice versa. Besides this, they are also designed for sensor applications, monitoring the actual stretch state on the basis of the deformation-dependent capacitive–resistive behavior of the DEAP. In order to enable efficient and proper closed-loop control of these transducers, e.g. in positioning or energy harvesting applications, on the one hand, sensors based on DEAP material can be integrated into the transducers and evaluated externally, and on the other hand, the transducer itself can be used as a sensor in terms of self-sensing. For this purpose the characteristic electrical behavior of the transducer has to be evaluated in order to determine the mechanical state. Adequate online identification algorithms with sufficient accuracy and dynamics are also required, independent of the sensor concept utilized, in order to determine the electrical DEAP parameters in real time. Therefore, in this contribution, algorithms are developed in the frequency domain for identification of the capacitance as well as the electrode and polymer resistance of a DEAP, and are validated by measurements. These algorithms are designed for self-sensing applications, especially when the power electronics utilized is operated at a constant switching frequency and parasitic harmonic oscillations are induced besides the desired DC value. These oscillations can be used for the online identification, so an additional superimposed excitation is no longer necessary. For this purpose a dual active bridge (DAB) is introduced to drive the DEAP transducer. The capabilities of the real-time identification algorithm in combination with the DAB are finally presented in detail and discussed. (paper)

  13. Load power device and system for real-time execution of hierarchical load identification algorithms

    Science.gov (United States)

    Yang, Yi; Madane, Mayura Arun; Zambare, Prachi Suresh

    2017-11-14

    A load power device includes a power input; at least one power output for at least one load; and a plurality of sensors structured to sense voltage and current at the at least one power output. A processor is structured to provide real-time execution of: (a) a plurality of load identification algorithms, and (b) event detection and operating mode detection for the at least one load.

  14. An integer optimization algorithm for robust identification of non-linear gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Chemmangattuvalappil Nishanth

    2012-09-01

    Full Text Available Abstract Background Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision making processes. Advancement in high throughput experimental techniques has initiated innovative data driven analysis of gene regulatory networks. However, inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions. Furthermore, robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their prediction. Results We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formulation, we have addressed data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we have incorporated non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the trait of sparsity of biological interactions. Towards that, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function is targeted to minimize the network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental data-sets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters

  15. Tag Anti-collision Algorithm for RFID Systems with Minimum Overhead Information in the Identification Process

    Directory of Open Access Journals (Sweden)

    Usama S. Mohammed

    2011-04-01

    Full Text Available This paper describes a new tree-based anti-collision algorithm for Radio Frequency Identification (RFID) systems. The proposed technique is based on the fast parallel binary splitting (FPBS) technique. It follows a new identification path through the binary tree. The main advantage of the proposed protocol is the simple dialog between the reader and the tags. It needs only a one-bit tag response followed by a one-bit reader reply (a one-to-one bit dialog). The one-bit reader response represents the collision report (0: collision; 1: no collision) for the tags' one-bit message. Each tag achieves self transmission control by dynamically updating its relative replying order according to the received collision report. The proposed algorithm minimizes the overhead transmitted bits per tag identification. In the collision state, tags modify their replying order at the next bit level. Computer simulations have shown that the collision recovery scheme is very fast and simple, even with successive reading processes. Moreover, the proposed algorithm outperforms most recent techniques in most cases.
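
    The generic binary tree splitting scheme that FPBS builds on (not the FPBS protocol itself) can be simulated in a few lines: the reader interrogates ID prefixes and descends into both subtrees only where it hears a collision. Tag IDs below are arbitrary 8-bit values.

```python
def identify(tags, bits=8):
    identified, queries = [], 0
    stack = [""]                      # ID prefixes still to interrogate
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [t for t in tags if format(t, f"0{bits}b").startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])   # a single reply: tag resolved
        elif len(matching) > 1:              # collision: split into two subtrees
            stack.append(prefix + "1")
            stack.append(prefix + "0")
    return identified, queries

tags = [0b00010110, 0b00010111, 0b10100000, 0b11111111]
found, n_queries = identify(tags)
print(sorted(found), n_queries)
```

    The query count grows with how deep colliding IDs share prefixes; protocols like FPBS aim to cut the per-query payload down to the one-bit exchanges described in the abstract.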

  16. RSPOP: rough set-based pseudo outer-product fuzzy rule identification algorithm.

    Science.gov (United States)

    Ang, Kai Keng; Quek, Chai

    2005-01-01

    System modeling with neuro-fuzzy systems involves two contradictory requirements: interpretability versus accuracy. The pseudo outer-product (POP) rule identification algorithm used in the family of pseudo outer-product-based fuzzy neural networks (POPFNN) suffered from an exponential increase in the number of identified fuzzy rules and computational complexity arising from high-dimensional data. This decreases the interpretability of the POPFNN in linguistic fuzzy modeling. This article proposes a novel rough set-based pseudo outer-product (RSPOP) algorithm that integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm. The proposed algorithm not only performs feature selection through the reduction of attributes but also extends the reduction to rules without redundant attributes. As many possible reducts exist in a given rule set, an objective measure is developed for POPFNN to correctly identify the reducts that improve the inferred consequence. Experimental results are presented using published data sets and a real-world application involving highway traffic flow prediction to evaluate the effectiveness of using the proposed algorithm to identify fuzzy rules in the POPFNN using compositional rule of inference and singleton fuzzifier (POPFNN-CRI(S)) architecture. Results showed that the proposed rough set-based pseudo outer-product algorithm reduces computational complexity, improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules, and improves the accuracy of the POPFNN.

  17. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weakness of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately figure out the pollutant amounts and positions, whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5 %. For cases of multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. But, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5 %, which proves the established source identification model can be used to direct emergency responses.
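
    The two building blocks named above can be sketched together: the standard analytic solution of the 1-D unsteady advection-dispersion equation for an instantaneous point source, and the squared-error objective a genetic algorithm individual (mass, source position) would be scored on. All parameter values are illustrative, not taken from the paper.

```python
import math

u, D, k, A = 0.5, 5.0, 1e-5, 100.0   # velocity, dispersion, decay, cross-section

def conc(M, x_src, x, t):
    """Concentration at (x, t) from mass M released at x_src at t = 0."""
    if t <= 0:
        return 0.0
    return (M / (A * math.sqrt(4 * math.pi * D * t))
            * math.exp(-((x - x_src - u * t) ** 2) / (4 * D * t))
            * math.exp(-k * t))

# "Measured" series at a monitoring station, generated from a known true source.
true_M, true_x = 2000.0, 300.0
station, times = 800.0, [600, 900, 1200, 1500]
measured = [conc(true_M, true_x, station, t) for t in times]

def objective(M, x_src):
    # Sum of squared errors between modeled and measured concentrations;
    # a GA would minimize this over candidate (M, x_src) individuals.
    return sum((conc(M, x_src, station, t) - c) ** 2
               for t, c in zip(times, measured))

print(objective(true_M, true_x))       # zero at the true source
print(objective(1500.0, 350.0) > 0)    # wrong guesses score worse
```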

  18. Improved identification of clouds and ice/snow covered surfaces in SCIAMACHY observations

    Directory of Open Access Journals (Sweden)

    J. M. Krijger

    2011-10-01

    Full Text Available In the ultra-violet, visible and near infra-red wavelength range the presence of clouds can strongly affect satellite-based passive remote sensing observation of constituents in the troposphere, because clouds effectively shield the lower part of the atmosphere. Therefore, cloud detection algorithms are of crucial importance in satellite remote sensing. However, the detection of clouds over snow/ice surfaces is particularly difficult at visible wavelengths, as clouds and snow/ice are both white and highly reflective. The SCIAMACHY Polarisation Measurement Devices (PMD) Identification of Clouds and Ice/snow method (SPICI) uses the SCIAMACHY measurements in the wavelength range between 450 nm and 1.6 μm to make a distinction between clouds and ice/snow covered surfaces, specifically developed to identify cloud-free SCIAMACHY observations. For this purpose the on-board SCIAMACHY PMDs are used because they provide higher spatial resolution compared to the main spectrometer measurements. In this paper we expand on the original SPICI algorithm (Krijger et al., 2005a) to also adequately detect clouds over snow-covered forests, which is inherently difficult because of the similar spectral characteristics. Furthermore the SCIAMACHY measurements suffer from degradation with time. This must be corrected for adequate performance of SPICI over the full SCIAMACHY time range. Such a correction is described here. Finally the performance of the new SPICI algorithm is compared with various other datasets, such as those from FRESCO, MICROS and AATSR, focusing on the algorithm improvements.

  19. Identification for Active Vibration Control of Flexible Structure Based on Prony Algorithm

    Directory of Open Access Journals (Sweden)

    Xianjun Sheng

    2016-01-01

    Full Text Available Flexible structures have been widely used in many fields due to their advantages of light weight, small damping, and high flexibility. However, flexible structures vibrate during manipulation, which reduces the pointing precision of the system and causes fatigue of the machine. This paper therefore focuses on an identification method for active vibration control of flexible structures. The modal parameters and transfer function of the system are identified from the step response signal based on the Prony algorithm, while the vibration is attenuated by using an input shaping technique designed according to the identified parameters. The proposed approach is applied to the most common flexible structure, a piezoelectric cantilever beam actuated by a Macro Fiber Composite (MFC) patch. The experimental results demonstrate that the Prony algorithm is very effective and accurate for the dynamic modeling of flexible structures and that input shaping significantly reduces the vibration and improves the response speed of the system.
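
    The Prony idea can be sketched for a single mode: a sampled damped sinusoid obeys x[n+1] = z·x[n] with z = exp((−σ + jω_d)·dt), so a least-squares order-1 linear-prediction fit of z recovers both frequency and decay rate. The 50 Hz mode and sampling values below are toy numbers, and the signal is kept noise-free for clarity.

```python
import cmath, math

dt, f_hz, sigma = 0.001, 50.0, 8.0       # sample step, modal frequency, decay rate
wd = 2 * math.pi * f_hz
# Complex analytic signal of the decaying mode.
x = [cmath.exp((-sigma + 1j * wd) * n * dt) for n in range(500)]

# Order-1 linear prediction: z = argmin_z sum |x[n+1] - z*x[n]|^2.
num = sum(x[n + 1] * x[n].conjugate() for n in range(len(x) - 1))
den = sum(abs(x[n]) ** 2 for n in range(len(x) - 1))
z = num / den

freq_est = abs(cmath.log(z).imag) / (2 * math.pi * dt)   # Hz
decay_est = -cmath.log(z).real / dt                       # 1/s
print(round(freq_est, 3), round(decay_est, 3))
```

    With several modes and noise, the same idea generalizes to a higher-order linear-prediction fit whose characteristic-polynomial roots give one z per mode, which is the multi-mode Prony analysis the abstract refers to.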

  20. Parameters identification for photovoltaic module based on an improved artificial fish swarm algorithm.

    Science.gov (United States)

    Han, Wei; Wang, Hong-Hua; Chen, Ling

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulated collective behavior of real fish swarms, is proposed to extract the parameters of the PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated for various parameters of the PV module under different environmental conditions, and the results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision.

  1. Parameter Identification of the 2-Chlorophenol Oxidation Model Using Improved Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Guang-zhou Chen

    2015-01-01

    Full Text Available Parameter identification plays a crucial role in simulating and using a model. This paper first carries out a sensitivity analysis of the 2-chlorophenol oxidation model in supercritical water using the Monte Carlo method. Then, to address the nonlinearity of the model, two improved differential search (DS) algorithms are proposed to carry out the parameter identification of the model. One strategy is to adopt the Latin hypercube sampling method to replace the uniform distribution of the initial population; the other is to combine DS with the simplex method. The results of the sensitivity analysis reveal the sensitivity of every model parameter and the degree of difficulty in identifying it. Furthermore, the posterior probability distribution of the parameters and the collaborative relationship between any two parameters can be obtained. To verify the effectiveness of the improved algorithms, the optimization performance of the improved DS in kinetic parameter estimation is studied and compared with that of the basic DS algorithm, differential evolution, artificial bee colony optimization, and quantum-behaved particle swarm optimization. The experimental results demonstrate that DS with the Latin hypercube sampling method does not present better performance, while the hybrid method has the advantages of strong global and local search ability and is more effective than the other algorithms.
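
    The Latin hypercube initialization strategy mentioned above can be sketched as follows: each parameter's range is cut into pop_size strata, and each individual draws from a distinct stratum per dimension, guaranteeing stratified coverage that plain uniform initialization does not. Bounds and population size are illustrative.

```python
import random

random.seed(3)

def lhs_population(bounds, pop_size):
    dim = len(bounds)
    # One random permutation of stratum indices per dimension.
    strata = [random.sample(range(pop_size), pop_size) for _ in range(dim)]
    pop = []
    for i in range(pop_size):
        ind = []
        for d, (lo, hi) in enumerate(bounds):
            cell = strata[d][i]
            width = (hi - lo) / pop_size
            # Uniform draw inside this individual's assigned stratum.
            ind.append(lo + (cell + random.random()) * width)
        pop.append(ind)
    return pop

pop = lhs_population([(0.0, 1.0), (-10.0, 10.0)], pop_size=10)
# In each dimension there is exactly one sample per stratum:
col0 = sorted(int(v * 10) for v, _ in pop)
print(col0)
```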

  2. Genetic algorithms approach to the problem of the automated vehicle identification equipment location

    Energy Technology Data Exchange (ETDEWEB)

    Teodorovic, D.; Van Aerde, M.; Zhu, F.; Dion, F. [Virginia Polytechnic Instutute and State University, Dept. of Civil and Environmental Engineering, Blacksburg, VA (United States)

    2002-12-31

    Automated Vehicle Identification technology allows vehicles equipped with special tags to be detected at specific points in the transportation network, without any action by the driver, as they pass under a reading station. Benefits of such systems are found in the real-time measurement of traffic patterns, traffic operations and control, reduction of traffic congestion at transportation facilities, transportation planning studies, information and control, electronic toll collection, vehicle identification and other related functions. The objective of this paper is to develop a heuristic model for the optimal location of automated vehicle identification equipment using genetic algorithms. A model is proposed and tested for the case of a relatively small hypothetical transportation network. Testing the model showed promising results. Other metaheuristic approaches, such as simulated annealing and tabu search, have been identified as the most important directions for future research. 4 refs., 1 tab., 11 figs.

  3. Integration of Architectural and Cytologic Driven Image Algorithms for Prostate Adenocarcinoma Identification

    Science.gov (United States)

    Hipp, Jason; Monaco, James; Kunju, L. Priya; Cheng, Jerome; Yagi, Yukako; Rodriguez-Canales, Jaime; Emmert-Buck, Michael R.; Hewitt, Stephen; Feldman, Michael D.; Tomaszewski, John E.; Toner, Mehmet; Tompkins, Ronald G.; Flotte, Thomas; Lucas, David; Gilbertson, John R.; Madabhushi, Anant; Balis, Ulysses

    2012-01-01

    Introduction: The advent of digital slides offers new opportunities within the practice of pathology, such as the use of image analysis techniques to facilitate computer aided diagnosis (CAD) solutions. Use of CAD holds promise to enable new levels of decision support and allow for additional layers of quality assurance and consistency in rendered diagnoses. However, the development and testing of prostate cancer CAD solutions requires a ground truth map of the cancer to enable the generation of receiver operator characteristic (ROC) curves. This requires a pathologist to annotate, or paint, each of the malignant glands in prostate cancer with image editing software - a time consuming and exhaustive process. Recently, two CAD algorithms have been described: probabilistic pairwise Markov models (PPMM) and spatially-invariant vector quantization (SIVQ). Briefly, SIVQ operates as a highly sensitive and specific pattern matching algorithm, making it optimal for the identification of any epithelial morphology, whereas PPMM operates as a highly sensitive detector of malignant perturbations in glandular lumenal architecture. Methods: By recapitulating algorithmically how a pathologist reviews prostate tissue sections, we created an algorithmic cascade of the PPMM and SIVQ algorithms, as previously described by Doyle et al. [1], where PPMM identifies the glands with abnormal lumenal architecture and this area is then screened by SIVQ to identify the epithelium. Results: The performance of this algorithm cascade was assessed qualitatively (with the use of heatmaps) and quantitatively (with the use of ROC curves) and demonstrates greater performance in the identification of malignant prostatic epithelium. Conclusion: This ability to semi-autonomously paint nearly all the malignant epithelium of prostate cancer has immediate applications to future prostate cancer CAD development as a validated ground truth generator.
In addition, such an approach has potential applications as a

  4. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, each possessed its own merits and claimed to outperform the others. However, such claims are prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on the proposed performance indices, to conduct a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and for each performance index a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but may have weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TF identification algorithms.
Most importantly, these proposed indices can be easily applied to measure the performance of new

  5. Automatic vertebral identification using surface-based registration

    Science.gov (United States)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.
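
    The selection rule described above can be sketched in miniature: register the physical point set to each candidate vertebral surface, then keep the candidate whose point-to-surface distances have the lowest standard deviation. In this 2-D toy, point clouds stand in for triangulated surfaces, a crude centroid alignment stands in for the actual surface registration, and the vertebra names and shapes are invented.

```python
import math

def register_translate(points, cloud):
    # Crude translation-only "registration": align centroids.
    cp = [sum(c) / len(points) for c in zip(*points)]
    cc = [sum(c) / len(cloud) for c in zip(*cloud)]
    return [tuple(v - a + b for v, a, b in zip(p, cp, cc)) for p in points]

def error_std(points, cloud):
    reg = register_translate(points, cloud)
    d = [min(math.dist(p, q) for q in cloud) for p in reg]
    mean = sum(d) / len(d)
    return math.sqrt(sum((v - mean) ** 2 for v in d) / len(d))

# Candidate "surfaces": three vertebral profiles with different shapes.
surfaces = {
    "L3": [(x, 0.0) for x in range(-3, 4)],           # flat profile
    "L4": [(x, abs(x)) for x in range(-3, 4)],        # vee profile
    "L5": [(x, 2 * abs(x)) for x in range(-3, 4)],    # steeper vee
}
# Physical points digitized from the L4-shaped vertebra, in another frame.
physical = [(x + 5, abs(x) + 7) for x in (-2, -1, 0, 1, 2)]

best = min(surfaces, key=lambda n: error_std(physical, surfaces[n]))
print(best)
```

    The matching shape yields nearly uniform residual distances (low spread), while mismatched shapes leave a shape-dependent error pattern with a larger standard deviation, which is why the spread rather than the mean distance drives the selection.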

  6. Identification of boundary heat flux on the continuous casting surface

    Directory of Open Access Journals (Sweden)

    E. Majchrzak

    2008-12-01

    Full Text Available This paper presents a numerical solution of the inverse problem of identifying the heat flux on a continuous casting surface. The additional information results from measured surface or interior temperature histories. In particular, the sequential function specification method using future time steps is applied. At the stage of numerical computations, the first scheme of the boundary element method for parabolic equations is used. Because the problem is strongly non-linear, an additional procedure 'linearizing' the task, called the artificial heat source method, is introduced. The final part of the paper presents examples of computations.

  7. Surface roughness optimization in machining of AZ31 magnesium alloy using ABC algorithm

    Directory of Open Access Journals (Sweden)

    Abhijith

    2018-01-01

    Full Text Available Magnesium alloys serve as excellent substitutes for materials traditionally used for engine block heads in automobiles and gear housings in the aircraft industry. AZ31 is a magnesium alloy that finds applications in orthopedic implants and cardiovascular stents. Surface roughness is an important parameter in the present manufacturing sector. In this work, optimization techniques based on swarm intelligence, namely the firefly algorithm (FA), particle swarm optimization (PSO) and the artificial bee colony algorithm (ABC), have been implemented to optimize the machining parameters, namely cutting speed, feed rate and depth of cut, in order to achieve minimum surface roughness. The parameter Ra has been considered for evaluating the surface roughness. Comparing the performance of the ABC algorithm with FA and PSO, which are widely used optimization algorithms in machining studies, the results show that ABC produces better optimization than FA and PSO for the surface roughness of AZ31.
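    The ABC procedure favored in this record can be sketched in a few dozen lines. The roughness model below is an invented quadratic surrogate standing in for the experimental Ra response; the parameter ranges and ABC constants are likewise assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ra(x):
    # Invented quadratic surrogate for surface roughness Ra as a function
    # of cutting speed, feed rate and depth of cut (not the paper's model).
    v, f, d = x
    return 0.2 + 0.8 * (f - 0.1) ** 2 + 0.3 * (d - 0.5) ** 2 + 0.001 * (v - 120.0) ** 2

lo = np.array([60.0, 0.05, 0.2])    # assumed parameter ranges
hi = np.array([180.0, 0.30, 1.0])

def abc_minimize(obj, lo, hi, n_sources=15, limit=20, iters=200):
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_sources, dim))
    f = np.array([obj(s) for s in x])
    trials = np.zeros(n_sources, dtype=int)

    def neighbor(i):
        k = rng.integers(n_sources - 1)
        k = k + (k >= i)                    # random partner != i
        j = rng.integers(dim)               # perturb a single coordinate
        v = x[i].copy()
        v[j] += rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        return np.clip(v, lo, hi)

    def greedy(i, v):
        fv = obj(v)
        if fv < f[i]:                       # keep the better food source
            x[i], f[i] = v, fv
            trials[i] = 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):          # employed-bee phase
            greedy(i, neighbor(i))
        fit = 1.0 / (1.0 + f)               # onlooker phase: favor good sources
        for i in rng.choice(n_sources, size=n_sources, p=fit / fit.sum()):
            greedy(i, neighbor(i))
        for i in np.flatnonzero(trials > limit):   # scout phase
            x[i] = rng.uniform(lo, hi)
            f[i] = obj(x[i])
            trials[i] = 0
    k = int(np.argmin(f))
    return x[k], float(f[k])

x_best, ra_best = abc_minimize(ra, lo, hi)
```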

  8. Identification of current-carrying part of a random resistor network: electrical approaches vs. graph theory algorithms

    Science.gov (United States)

    Tarasevich, Yu Yu; Burmistrov, A. S.; Goltseva, V. A.; Gordeev, I. I.; Serbin, V. I.; Sizova, A. A.; Vodolazskaya, I. V.; Zholobov, D. A.

    2018-01-01

    A set of current-carrying bonds of a random resistor network (RRN) is called the (effective) backbone. The (geometrical) backbone can be defined as the union of all self-avoiding walks between two given points on a network or between its opposite borders. These two definitions provide two different approaches to backbone identification. On the one hand, one can treat an arbitrary network as an RRN and calculate the potentials and currents in it. On the other hand, one can apply search algorithms on graphs to the network. Each of these approaches is known to have both advantages and drawbacks. We have implemented several different algorithms for backbone identification and applied them to systems of different sizes and concentrations of conducting bonds. Our analysis suggests that a universal algorithm suitable for every problem is hardly possible to offer; most likely, each particular task needs a specific algorithm.
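    The electrical approach can be illustrated on a tiny square lattice: solve Kirchhoff's equations for the node potentials and keep the bonds that carry nonzero current. The toy network below (unit resistors, one dangling bond) is purely illustrative:

```python
import numpy as np

# Toy 3x3 grid of unit resistors, node index = row*3 + col. The left
# column is held at V = 1, the right column at V = 0. Three bonds are
# removed so that node 4 hangs as a dead end that carries no current.
edges = [(r * 3 + c, r * 3 + c + 1) for r in range(3) for c in range(2)]
edges += [(r * 3 + c, (r + 1) * 3 + c) for r in range(2) for c in range(3)]
for cut in [(4, 5), (1, 4), (4, 7)]:
    edges.remove(cut)

n = 9
left, right = {0, 3, 6}, {2, 5, 8}
G = np.zeros((n, n))
for a, b in edges:                          # graph Laplacian, unit conductance
    G[a, a] += 1.0; G[b, b] += 1.0
    G[a, b] -= 1.0; G[b, a] -= 1.0

A, rhs = G.copy(), np.zeros(n)
for v in left | right:                      # Dirichlet rows for the electrodes
    A[v] = 0.0
    A[v, v] = 1.0
    rhs[v] = 1.0 if v in left else 0.0
V = np.linalg.solve(A, rhs)

# A bond belongs to the (effective) backbone iff it carries current,
# i.e. the potential difference across it is nonzero.
backbone = [(a, b) for a, b in edges if abs(V[a] - V[b]) > 1e-9]
```

The dangling bond (3, 4) is geometrically connected to both electrodes but carries no current, so it is correctly excluded.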

  9. A scalable algorithm for structure identification of complex gene regulatory network from temporal expression data.

    Science.gov (United States)

    Gui, Shupeng; Rice, Andrew P; Chen, Rui; Wu, Liang; Liu, Ji; Miao, Hongyu

    2017-01-31

    Gene regulatory interactions are of fundamental importance to various biological functions and processes. However, only a few previous computational studies have claimed success in revealing genome-wide regulatory landscapes from temporal gene expression data, especially for complex eukaryotes like human. Moreover, recent work suggests that these methods still suffer from the curse of dimensionality if the network size increases to 100 or higher. Here we present a novel scalable algorithm for identifying genome-wide gene regulatory network (GRN) structures, and we have verified its performance by extensive simulation studies based on the DREAM challenge benchmark data. The highlight of our method is that its superior performance does not degenerate even for a network size on the order of 10^4, and it is thus readily applicable to large-scale complex networks. Such a breakthrough is achieved by considering both prior biological knowledge and multiple topological properties (i.e., sparsity and hub gene structure) of complex networks in the regularized formulation. We also validate and illustrate the application of our algorithm in practice using time-course gene expression data from a study on human respiratory epithelial cells in response to influenza A virus (IAV) infection, as well as ChIP-seq data from ENCODE on transcription factor (TF) and target gene interactions. An interesting finding, owing to the proposed algorithm, is that the biggest hub structures (e.g., the top ten) in the GRN all center on transcription factors in the context of epithelial cell infection by IAV. The proposed algorithm is the first scalable method for large complex network structure identification. The GRN structure identified by our algorithm could reveal possible biological links and help researchers choose which gene functions to investigate in a biological event. The algorithm described in this article is implemented in MATLAB®, and the source code is freely available.

  10. Identification and detection of gaseous effluents from hyperspectral imagery using invariant algorithms

    Science.gov (United States)

    O'Donnell, Erin M.; Messinger, David W.; Salvaggio, Carl; Schott, John R.

    2004-08-01

    The ability to detect and identify effluent gases is, and will continue to be, of great importance. This would aid not only in the regulation of pollutants but also in treaty enforcement and in monitoring the production of weapons. Given these applications, a way to remotely investigate a gaseous emission is highly desirable. This research utilizes hyperspectral imagery in the infrared region of the electromagnetic spectrum to evaluate an invariant method of detecting and identifying gases within a scene. The image is evaluated on a pixel-by-pixel basis and is studied at the subpixel level. A library of target gas spectra is generated using a simple slab radiance model, which results in a more robust description of gas spectra representative of real-world observations. This library is the subspace utilized by the detection and identification algorithms, and it is evaluated for the set of basis vectors that best spans it. The Lee algorithm, which implements the Maximum Distance Method (MaxD), is used to determine the set of basis vectors. A Generalized Likelihood Ratio Test (GLRT) determines whether or not a pixel contains the target, which can be either a single species or a combination of gases. Synthetically generated scenes are used for this research. This work evaluates whether the Lee invariant algorithm is effective for the gas detection and identification problem.
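    The detection step can be sketched with a generic subspace GLRT: compare the energy of a measured spectrum inside the gas-library subspace with the energy outside it. The spectra below are synthetic Gaussian bumps, not real gas signatures, and the statistic is a simplified stand-in for the test used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "spectral" axis and Gaussian-bump library members (stand-ins
# for slab-radiance gas spectra; not real signatures).
w = np.linspace(0.0, 1.0, 120)

def bump(center, width):
    return np.exp(-0.5 * ((w - center) / width) ** 2)

S = np.column_stack([bump(0.30, 0.03), bump(0.55, 0.05), bump(0.80, 0.04)])
P = S @ np.linalg.pinv(S)                  # projector onto span(S)

def glrt(y):
    """Simplified subspace statistic: in-subspace vs. residual energy."""
    num = y @ P @ y
    den = y @ (np.eye(len(w)) - P) @ y
    return num / den

noise = 0.05
y_gas = bump(0.55, 0.05) + noise * rng.normal(size=len(w))  # target present
y_bkg = noise * rng.normal(size=len(w))                     # noise only
```

Thresholding the statistic separates the two cases; a multi-gas target is handled by the same projector since any combination of library spectra lies in span(S).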

  11. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms.

    Science.gov (United States)

    Liu, Rensong; Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.

  12. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    Directory of Open Access Journals (Sweden)

    Rensong Liu

    2017-01-01

    Full Text Available Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.
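    The CSP step at the heart of this record admits a compact sketch: whiten the pooled covariance, then take the extreme eigenvectors of the whitened class covariance. The regularization that makes it R-CSP is omitted, and the "trials" below are synthetic stand-ins for band-passed MI epochs:

```python
import numpy as np

rng = np.random.default_rng(1)

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Plain CSP via whitening plus eigendecomposition. The R-CSP of the
    paper additionally regularizes the covariance estimates; that step is
    omitted in this sketch."""
    def mean_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(d ** -0.5) @ U.T        # whitens the pooled covariance
    lam, V = np.linalg.eigh(P @ Ca @ P)     # eigenvalues ascending in [0, 1]
    W = P @ V
    picks = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return W[:, picks]                      # channels x (2 * n_pairs)

def log_var_features(trials, W):
    return np.array([np.log(np.var(W.T @ x, axis=1)) for x in trials])

# Synthetic 4-channel "epochs": class A is strong on channel 0, class B on
# channel 1 (illustrative stand-ins for band-passed MI trials).
def make_trials(n, hot_channel):
    out = []
    for _ in range(n):
        x = rng.normal(size=(4, 200))
        x[hot_channel] *= 5.0
        out.append(x)
    return out

trials_a, trials_b = make_trials(20, 0), make_trials(20, 1)
W = csp_filters(trials_a, trials_b)
sep = log_var_features(trials_a, W).mean(0) - log_var_features(trials_b, W).mean(0)
```

The first filter maximizes class-B variance relative to class A and the last does the reverse, so the log-variance features separate the classes with opposite signs.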

  13. Assessment of diverse algorithms applied on MODIS Aqua and Terra data over land surfaces in Europe

    Science.gov (United States)

    Glantz, P.; Tesche, M.

    2012-04-01

    Besides the increase of greenhouse gases (e.g., carbon dioxide, methane and nitrous oxide), human activities (for instance, fossil fuel and biomass burning) have led to a perturbation of the atmospheric content of aerosol particles. Aerosols exhibit high spatial and temporal variability in the atmosphere; therefore, aerosol investigation for climate research and environmental control requires the identification of source regions, their strength and the aerosol type, which can be retrieved from space-borne observations. The aim of the present study is to validate and evaluate the AOT (aerosol optical thickness) and Ångström exponent, obtained with the SAER (Satellite AErosol Retrieval) algorithm for MODIS (MODerate resolution Imaging Spectroradiometer) Aqua and Terra calibrated level 1 data (1 km horizontal resolution at ground), against AERONET (AErosol RObotic NETwork) observations and MODIS Collection 5 (c005) standard product retrievals (10 km), respectively, over land surfaces in Europe for three seasons: early spring (period 1), mid spring (period 2) and summer (period 3). For several of the cases analyzed here, the Aqua and Terra satellites passed the investigation area twice during a day; thus, besides a variation in sun elevation, the satellite aerosol retrievals have also been performed on a daily basis with a significant variation in the satellite-viewing geometry. An inter-comparison of the two algorithms has also been performed. The validation with AERONET shows that the MODIS c005 retrieved AOT is, for the wavelengths 0.469 and 0.500 µm, on the whole within the expected uncertainty for one standard deviation of the MODIS retrievals over Europe (Δτ = ±0.05 ± 0.15τ). The SAER-estimated AOT for the wavelength 0.443 µm also agrees reasonably well with AERONET. Thus, the majority of the SAER AOT values are within the MODIS expected uncertainty range, although somewhat larger RMSD (root mean square deviation) occurs compared to the results obtained with the

  14. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  15. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    Science.gov (United States)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, illegal constructions have been appearing widely in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. First, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result the illegal constructions can be marked. The effectiveness of the proposed algorithm is verified on a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
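    After ground removal, the clustering step is a plain region-growing pass. A naive O(n²) sketch on synthetic above-ground clusters (a real pipeline would use a KD-tree for the radius queries):

```python
import numpy as np

rng = np.random.default_rng(3)

def region_grow(points, radius):
    """Naive region growing: a point joins a cluster when it lies within
    `radius` of any cluster member (O(n^2); a real pipeline would use a
    KD-tree for the radius queries)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero((dist <= radius) & (labels == -1)):
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two synthetic above-ground "building" clusters 10 m apart, standing in
# for the ground-removed point cloud.
a = rng.normal([0.0, 0.0, 3.0], 0.3, size=(50, 3))
b = rng.normal([10.0, 0.0, 3.0], 0.3, size=(50, 3))
points = np.vstack([a, b])
labels = region_grow(points, radius=1.5)
```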

  16. Parameter identification based on modified simulated annealing differential evolution algorithm for giant magnetostrictive actuator

    Directory of Open Access Journals (Sweden)

    Xiaohui Gao

    2018-01-01

    Full Text Available There is a strongly nonlinear relationship between input and output in the giant magnetostrictive actuator (GMA), so establishing a mathematical model and identifying its parameters is very important for studying its characteristics and improving control accuracy. The current-displacement model is first built based on Jiles-Atherton (J-A) model theory, the Ampere loop theorem and a stress-magnetism coupling model. Laws relating the unknown parameters to the hysteresis loops are then studied to determine the data-taking scope. The modified simulated annealing differential evolution algorithm (MSADEA) is proposed by taking full advantage of the differential evolution algorithm's fast convergence and the simulated annealing algorithm's jumping property to enhance convergence speed and performance. Simulation and experiment results show that this algorithm is not only simple and efficient, but also has fast convergence speed and high identification accuracy.
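    One plausible way to blend simulated annealing into differential evolution, as the abstract describes, is to let a worse DE trial vector replace its parent with Metropolis probability under a cooling temperature. The exact MSADEA scheme is not given in the abstract, so the hybrid below, and the toy exponential model it identifies, are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy identification target: recover (a, b) of y = a*exp(-b*t) from data.
t = np.linspace(0.0, 2.0, 40)
y_meas = 2.0 * np.exp(-1.5 * t)

def sse(p):
    a, b = p
    return float(np.sum((a * np.exp(-b * t) - y_meas) ** 2))

def sade(obj, lo, hi, n=20, iters=300, F=0.6, CR=0.9, T0=1.0, alpha=0.98):
    dim = len(lo)
    pop = rng.uniform(lo, hi, (n, dim))
    cost = np.array([obj(p) for p in pop])
    k = int(np.argmin(cost))
    best, best_cost = pop[k].copy(), cost[k]
    T = T0
    for _ in range(iters):
        for i in range(n):
            r1, r2, r3 = rng.choice(n, 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])       # DE/rand/1
            cross = rng.random(dim) < CR                      # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = obj(trial)
            dE = f_trial - cost[i]
            # Metropolis rule: a worse trial may still replace its parent
            # while the temperature is high (assumed SA ingredient).
            if dE < 0 or rng.random() < np.exp(-dE / T):
                pop[i], cost[i] = trial, f_trial
            if f_trial < best_cost:
                best, best_cost = trial.copy(), f_trial
        T *= alpha                                            # cooling schedule
    return best, best_cost

p_hat, err = sade(sse, lo=np.array([0.1, 0.1]), hi=np.array([5.0, 5.0]))
```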

  17. An algorithm and program for finding sequence specific oligo-nucleotide probes for species identification

    Directory of Open Access Journals (Sweden)

    Tautz Diethard

    2002-03-01

    Full Text Available Abstract Background The identification of species or species groups with specific oligo-nucleotides as molecular signatures is becoming increasingly popular for bacterial samples. However, it also shows great promise for other small organisms that are taxonomically difficult to track. Results We have devised an algorithm that aims to find the optimal probes for any given set of sequences. The program requires only a crude alignment of these sequences as input and is optimized to handle very large datasets. The algorithm is designed such that the position of mismatches in the probes influences the selection, and it makes provision for single-nucleotide outloops. Program implementations are available for Linux and Windows.
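    The core idea, signature oligos present in every target sequence but absent from all others, can be reduced to a set operation on k-mers. A toy sketch that ignores the paper's mismatch scoring and outloop handling (all sequences below are invented):

```python
def kmers(s, k):
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def signature_probes(targets, background, k=8):
    """Toy signature-oligo search: k-mers shared by every target sequence
    and absent from every background sequence. (The published algorithm
    additionally scores mismatch positions and single-nucleotide outloops.)"""
    shared = set.intersection(*(kmers(s, k) for s in targets))
    seen = set.union(*(kmers(s, k) for s in background))
    return shared - seen

# Invented example sequences: both targets embed the motif ACGTACGT.
targets = ["TTACGTACGTAA", "GGACGTACGTCC"]
background = ["TTTTTTTTTTTT", "GGGGCCCCAAAA"]
probes = signature_probes(targets, background)
```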

  18. Enriched Imperialist Competitive Algorithm for system identification of magneto-rheological dampers

    Science.gov (United States)

    Talatahari, Siamak; Rahbari, Nima Mohajer

    2015-10-01

    In the current research, the imperialist competitive algorithm is substantially enhanced and a new optimization method, dubbed the Enriched Imperialist Competitive Algorithm (EICA), is introduced to deal with highly non-linear optimization problems. To examine its functionality and efficacy closely, the proposed metaheuristic optimization approach is employed for the parameter identification of two different types of hysteretic Bouc-Wen models simulating the non-linear behavior of MR dampers. Two types of experimental data are used in the optimization problems to examine the robustness of the proposed EICA. The obtained results demonstrate the high adaptability of EICA to such non-linear and hysteretic identification problems.

  19. An eigensystem realization algorithm using data correlations (ERA/DC) for modal parameter identification

    Science.gov (United States)

    Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.

    1987-01-01

    A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), reduces bias errors due to noise corruption significantly without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution obtained via singular value truncation, the results from the two methods are of similar quality.

  20. Estimating index of refraction for material identification in comparison to existing temperature emissivity separation algorithms

    Science.gov (United States)

    Martin, Jacob A.; Gross, Kevin C.

    2016-05-01

    As off-nadir viewing platforms become increasingly prevalent in remote sensing, material identification techniques must be robust to changing viewing geometries. Current identification strategies generally rely on estimating reflectivity or emissivity, both of which vary with viewing angle. Presented here is a technique, leveraging polarimetric and hyperspectral imaging (P-HSI), to estimate the index of refraction, which is invariant to viewing geometry. Results from a quartz window show that the index of refraction can be retrieved to within 0.08 rms error from 875-1250 cm⁻¹ for an amorphous material. Results from a silicon carbide (SiC) wafer, which has much sharper features than quartz glass, show the index of refraction can be retrieved to within 0.07 rms error. The results from each of these datasets show an improvement when compared with a maximum smoothness TES algorithm.

  1. A dynamic programming algorithm for identification of triplex-forming sequences.

    Science.gov (United States)

    Lexa, Matej; Martínek, Tomáš; Burgetová, Ivana; Kopeček, Daniel; Brázdová, Marie

    2011-09-15

    Current methods for identification of potential triplex-forming sequences in genomes and similar sequence sets rely primarily on detecting homopurine and homopyrimidine tracts. Procedures capable of detecting sequences supporting imperfect, but structurally feasible intramolecular triplex structures are needed for better sequence analysis. We modified an algorithm for detection of approximate palindromes, so as to account for the special nature of triplex DNA structures. From available literature, we conclude that approximate triplexes tolerate two classes of errors. One, analogical to mismatches in duplex DNA, involves nucleotides in triplets that do not readily form Hoogsteen bonds. The other class involves geometrically incompatible neighboring triplets hindering proper alignment of strands for optimal hydrogen bonding and stacking. We tested the statistical properties of the algorithm, as well as its correctness when confronted with known triplex sequences. The proposed algorithm satisfactorily detects sequences with intramolecular triplex-forming potential. Its complexity is directly comparable to palindrome searching. Our implementation of the algorithm is available at http://www.fi.muni.cz/lexa/triplex as source code and a web-based search tool. The source code compiles into a library providing searching capability to other programs, as well as into a stand-alone command-line application based on this library. lexa@fi.muni.cz Supplementary data are available at Bioinformatics online.

  2. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    Science.gov (United States)

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study compares the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
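    Ranking by percentage of matching codes, the heart of the optimized algorithm, is straightforward to sketch. The single-letter codes and records below are invented for illustration and do not reproduce WinID3's coding:

```python
def match_score(postmortem, antemortem):
    """Fraction of mutually charted teeth whose codes agree (toy version
    of percentage-based ranking)."""
    shared = set(postmortem) & set(antemortem)
    if not shared:
        return 0.0
    hits = sum(postmortem[tooth] == antemortem[tooth] for tooth in shared)
    return hits / len(shared)

# Invented one-letter codes (V = virgin, R = restored, M = missing) on a
# handful of tooth numbers.
pm = {"8": "V", "9": "R", "14": "M", "19": "R"}
am_db = {
    "case-a": {"8": "V", "9": "R", "14": "M", "19": "V"},
    "case-b": {"8": "M", "9": "V", "14": "M"},
    "case-c": {"8": "V", "9": "R", "14": "M", "19": "R"},
}
ranked = sorted(am_db, key=lambda k: match_score(pm, am_db[k]), reverse=True)
```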

  3. Adaptive Algorithm For Identification Of The Environment Parameters In Contact Tasks

    International Nuclear Information System (INIS)

    Tuneski, Atanasko; Babunski, Darko

    2003-01-01

    An adaptive algorithm for identification of the unknown parameters of the dynamic environment in contact tasks is proposed in this paper, using the augmented least-squares estimation method. An approximate digital simulator of the continuous environment dynamics is derived, i.e., a discrete transfer function is found which has approximately the same characteristics as the continuous environment dynamics. This task is solved with a method named hold equivalence. The general model of the environment dynamics is given, and the case when the environment dynamics is represented by second-order models with parameter uncertainties is considered. (Author)
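    The hold-equivalence (zero-order-hold) step has a standard closed form: the discrete model follows from one matrix exponential of an augmented system. A sketch with an illustrative second-order environment model (values assumed, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Environment modeled as a second-order system m*x'' + b*x' + k*x = f,
# written as x' = A x + B f (illustrative values, not the paper's).
m_, b_, k_ = 1.0, 4.0, 20.0
A = np.array([[0.0, 1.0], [-k_ / m_, -b_ / m_]])
B = np.array([[0.0], [1.0 / m_]])
T = 0.01                                   # sampling period

# Hold equivalence (zero-order hold): the input is held constant over each
# sample, so the exact discrete model follows from one augmented matrix
# exponential: expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].
aug = np.zeros((3, 3))
aug[:2, :2] = A * T
aug[:2, 2:] = B * T
E = expm(aug)
Ad, Bd = E[:2, :2], E[:2, 2:]
```

For invertible A, Bd equals A⁻¹(Ad − I)B; the augmented form avoids the explicit inverse.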

  4. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that selects the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provide a heuristic to establish influence zones of the centroids, allowing a 'don't know' answer for unknown transients, that is, transients outside the training set. (author)

  5. Mapping Surface Broadband Albedo from Satellite Observations: A Review of Literatures on Algorithms and Products

    Directory of Open Access Journals (Sweden)

    Ying Qu

    2015-01-01

    Full Text Available Surface albedo is one of the key controlling geophysical parameters in surface energy budget studies, and its temporal and spatial variation is closely related to global climate change and regional weather systems due to the albedo feedback mechanism. As an efficient tool for monitoring the surfaces of the Earth, remote sensing has been widely used in recent decades for deriving long-term surface broadband albedo with various geostationary and polar-orbit satellite platforms. Moreover, the algorithms for estimating surface broadband albedo from satellite observations, including narrow-to-broadband conversions, bidirectional reflectance distribution function (BRDF) angular modeling, direct-estimation algorithms and algorithms for estimating albedo from geostationary satellite data, have been developed and improved. In this paper, we present a comprehensive literature review of algorithms and products for mapping surface broadband albedo with satellite observations and provide a discussion of different algorithms and products in a historical perspective based on citation analysis of the published literature. This paper shows that observation technologies and the accuracy requirements of applications are important, and that long-term, globally complete (covering land, ocean, and sea-ice surfaces), gap-free surface broadband albedo products with higher spatial and temporal resolution are required for climate change, surface energy budget, and hydrological studies.

  6. A new surface fractal dimension for displacement mode shape-based damage identification of plate-type structures

    Science.gov (United States)

    Shi, Binkai; Qiao, Pizhong

    2018-03-01

    Vibration-based nondestructive testing is an area of growing interest and worthy of exploring new and innovative approaches. The displacement mode shape is often chosen to identify damage because of its locally detailed characteristics and low sensitivity to surrounding noise. However, the requirement for a baseline mode shape in most vibration-based damage identification methods limits the application of such a strategy. In this study, a new surface fractal dimension called the edge perimeter dimension (EPD) is formulated, from which an EPD-based window dimension locus (EPD-WDL) algorithm for irregularity or damage identification in plate-type structures is established. An analytical notch-type damage model of simply-supported plates is proposed to evaluate the notch effect on plate vibration performance, and a sub-domain of notch cases with less effect is selected to investigate the robustness of the proposed damage identification algorithm. Then, fundamental aspects of the EPD-WDL algorithm in terms of notch localization, notch quantification, and noise immunity are assessed. A mathematical solution called isomorphism is implemented to remove false peaks caused by inflexions of mode shapes when applying the EPD-WDL algorithm to higher mode shapes. The effectiveness and practicability of the EPD-WDL algorithm are demonstrated by an experimental procedure on damage identification of an artificially notched aluminum cantilever plate using a measurement system consisting of a piezoelectric lead zirconate titanate (PZT) actuator and a scanning laser Doppler vibrometer (SLDV). As demonstrated in both the analytical and experimental evaluations, the new surface fractal dimension technique is capable of effectively identifying damage in plate-type structures.

  7. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
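    The algorithm's update rule, and the utilization it implies, can be reproduced in a few lines: each processing element on a ring advances its local simulated time by an exponential (Poisson) deviate only when it holds a local minimum of the time horizon, so the utilization equals the density of local minima (ring size and step counts below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(11)

# Ring of L processing elements; tau holds each PE's local simulated time.
# A PE advances by an exponential deviate only when it is a local minimum
# of the time horizon, so the utilization of the parallel scheme equals
# the density of local minima.
L, steps = 1000, 2000
tau = np.zeros(L)
utilization = []
for step in range(steps):
    left, right = np.roll(tau, 1), np.roll(tau, -1)
    active = (tau <= left) & (tau <= right)      # local minima may update
    tau = tau + active * rng.exponential(1.0, L)
    if step >= steps // 2:                        # sample the steady state
        utilization.append(active.mean())
mean_util = float(np.mean(utilization))
```

The measured steady-state utilization is a nonzero constant independent of the starting condition, which is the density-of-local-minima argument for asymptotic scalability.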

  8. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    Science.gov (United States)

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

  9. Evaluation of sensor placement algorithms for on-orbit identification of space platforms

    Science.gov (United States)

    Glassburn, Robin S.; Smith, Suzanne Weaver

    1994-01-01

    Anticipating the construction of the international space station, on-orbit modal identification of space platforms through optimally placed accelerometers is an area of recent activity. Unwanted vibrations in the platform could affect the results of experiments which are planned. Therefore, it is important that sensors (accelerometers) be strategically placed to identify the amount and extent of these unwanted vibrations, and to validate the mathematical models used to predict the loads and dynamic response. Due to cost, installation, and data management issues, only a limited number of sensors will be available for placement. This work evaluates and compares four representative sensor placement algorithms for modal identification. Most of the sensor placement work to date has employed only numerical simulations for comparison. This work uses experimental data from a fully-instrumented truss structure which was one of a series of structures designed for research in dynamic scale model ground testing of large space structures at NASA Langley Research Center. Results from this comparison show that for this cantilevered structure, the algorithm based on Guyan reduction is rated slightly better than that based on Effective Independence.
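    The Effective Independence method compared above is easy to sketch: iteratively delete the candidate location contributing least to the linear independence of the target modes. The mode shapes below are synthetic sine shapes, not the truss data:

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """Backward elimination: repeatedly delete the candidate DOF whose
    leverage in the Fisher information matrix is smallest, i.e. the one
    contributing least to the linear independence of the target modes."""
    keep = np.arange(phi.shape[0])
    while len(keep) > n_sensors:
        P = phi[keep]
        # Effective Independence value of each remaining candidate row.
        Ed = np.einsum('ij,ji->i', P @ np.linalg.inv(P.T @ P), P.T)
        keep = np.delete(keep, int(np.argmin(Ed)))
    return keep

# Synthetic cantilever-like mode shapes at 10 candidate locations
# (sine shapes for illustration; not the paper's truss data).
xs = np.linspace(0.1, 1.0, 10)
phi = np.column_stack([np.sin(0.5 * np.pi * xs),
                       np.sin(1.5 * np.pi * xs),
                       np.sin(2.5 * np.pi * xs)])
sensors = effective_independence(phi, 4)
```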

  10. Multi-Scale Parameter Identification of Lithium-Ion Battery Electric Models Using a PSO-LM Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Jing Shen

    2017-03-01

    Full Text Available This paper proposes a multi-scale parameter identification algorithm for the lithium-ion battery (LIB) electric model by using a combination of particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. Two-dimensional Poisson equations with unknown parameters are used to describe the potential and current density distribution (PDD) of the positive and negative electrodes in the LIB electric model. The model parameters are difficult to determine in simulation due to the nonlinear complexity of the model. In the proposed identification algorithm, PSO is used for the coarse-scale parameter identification and the LM algorithm is applied for the fine-scale parameter identification. The experimental results show that the multi-scale identification not only improves the convergence rate and effectively escapes from the stagnation of PSO, but also overcomes the local minimum entrapment drawback of the LM algorithm. The terminal voltage curves from the PDD model with the identified parameter values are in good agreement with those from the experiments at different discharge/charge rates.
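    The two-stage idea, a global coarse search handed off to a local LM refinement, can be sketched on a toy curve-fitting problem. The exponential "discharge curve" and all constants below are assumptions, not the paper's PDD model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Toy stand-in for the electric model: fit (a, b, c) of the curve
# v(t) = a - b*exp(-c*t) to synthetic terminal-voltage data.
t = np.linspace(0.0, 5.0, 60)
v_meas = 3.7 - 0.5 * np.exp(-1.2 * t)

def residuals(p):
    a, b, c = p
    return a - b * np.exp(-c * t) - v_meas

def cost(p):
    return float(np.sum(residuals(p) ** 2))

# Stage 1: coarse-scale search with a bare-bones PSO.
lo, hi = np.array([2.0, 0.1, 0.1]), np.array([5.0, 2.0, 5.0])
n, iters = 20, 60
x = rng.uniform(lo, hi, (n, 3))
vel = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
for _ in range(iters):
    g = pbest[int(np.argmin(pcost))]
    vel = 0.7 * vel + 1.5 * rng.random((n, 3)) * (pbest - x) \
                    + 1.5 * rng.random((n, 3)) * (g - x)
    x = np.clip(x + vel, lo, hi)
    cur = np.array([cost(p) for p in x])
    improved = cur < pcost
    pbest[improved], pcost[improved] = x[improved], cur[improved]
coarse = pbest[int(np.argmin(pcost))]

# Stage 2: fine-scale refinement with Levenberg-Marquardt starting from
# the PSO estimate (scipy's least_squares, method='lm').
fine = least_squares(residuals, coarse, method='lm').x
```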

  11. A Novel Algorithm for Validating Peptide Identification from a Shotgun Proteomics Search Engine

    Science.gov (United States)

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Zheng, Mu; Jennings, Jennifer L.; Hoek, Kristen L.; Allos, Tara; Howard, Leigh M.; Edwards, Kathryn M.; Weil, P. Anthony; Link, Andrew J.

    2013-01-01

    Liquid chromatography coupled with tandem mass spectrometry has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC/MS/MS experiment are assigned to peptides by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate, using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data-refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm based on the resolution and mass accuracy of the mass spectrometer employed in the LC/MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines. PMID:23402659

  12. GPR identification of voids inside concrete based on the support vector machine algorithm

    International Nuclear Information System (INIS)

    Xie, Xiongyao; Li, Pan; Qin, Hui; Liu, Lanbo; Nobes, David C

    2013-01-01

    Voids inside reinforced concrete, which affect structural safety, are identified from ground penetrating radar (GPR) images using a completely automatic method based on the support vector machine (SVM) algorithm. The entire process comprises four steps: (1) an initial SVM model is built by training on synthetic GPR data generated by finite-difference time-domain simulation, after data preprocessing, segmentation and feature extraction. (2) The classification accuracy of different kernel functions is compared with the cross-validation method, and the penalty factor (c) of the SVM and the coefficient (σ²) of the kernel functions are optimized by using the grid algorithm and the genetic algorithm. (3) To test the success of classification, this model is then verified and validated by applying it to another set of synthetic GPR data. The result shows a high success rate for classification. (4) This classifier model is finally applied to a set of real GPR data to identify and classify voids. Compared with its application to synthetic data, the result is less than ideal until the original model is improved. In general, this study shows that the SVM exhibits promising performance in the GPR identification of voids inside reinforced concrete. Nevertheless, the recognition of the shape and distribution of voids may need further improvement. (paper)
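    Step (2), optimizing the penalty factor c and kernel coefficient σ² by grid search with cross-validation, can be sketched with scikit-learn. Synthetic features stand in for the real radargram features, and note that scikit-learn parameterizes the RBF kernel by gamma = 1/(2σ²):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Synthetic stand-in for the extracted GPR features (the paper's real
# features come from preprocessed, segmented radargrams).
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)

# RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)); `gamma`
# corresponds to 1/(2 sigma^2) and `C` to the penalty factor c.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```

    The paper additionally uses a genetic algorithm for the same hyperparameter search; the grid variant above is the simpler of the two.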

  13. Braking distance algorithm for autonomous cars using road surface recognition

    Science.gov (United States)

    Kavitha, C.; Ashok, B.; Nanthagopal, K.; Desai, Rohan; Rastogi, Nisha; Shetty, Siddhanth

    2017-11-01

    India has yet to accept semi- or fully-autonomous cars, and one of the reasons is loss of control on bad roads. Better handling on such roads requires advanced braking, which can be achieved by adding electronics to the conventional braking system. In recent years, automation in braking systems has brought various benefits such as traction control and anti-lock braking systems. This research work describes and experimentally evaluates a method for recognizing the road surface profile and calculating braking distance. An ultrasonic surface-recognition sensor mounted underneath the car sends a high-frequency wave onto the road surface; a receiver within the sensor measures the time taken for the wave to rebound and thus the distance from the point where the sensor is mounted. A displacement graph is plotted based on the output of the sensor. A relationship can be derived between the displacement plot and a roughness index, from which the friction coefficient can be derived in Matlab for continuous calculation throughout the distance travelled. Since this is a non-contact type of profiling, it is non-destructive. The friction coefficient values received in real time are used to calculate the optimum braking distance. This system, when installed on normal cars, can also be used to create a database of road surfaces, especially in cities, which can be shared with other cars. This will help in navigation as well as in making the cars more efficient.
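    The abstract does not give the braking-distance formula; a minimal sketch using the standard kinematic relation d = v²/(2 μ g), with the friction coefficient μ supplied by the surface-recognition step, might look like:

```python
def braking_distance(speed_kmh: float, mu: float, g: float = 9.81) -> float:
    """Idealized stopping distance (m) from d = v^2 / (2 * mu * g);
    a real controller would add reaction time, load transfer and
    brake-system delays on top of this."""
    v = speed_kmh / 3.6                 # km/h -> m/s
    return v ** 2 / (2.0 * mu * g)

# Lower friction (wet or rough surface) lengthens the optimum distance
print(round(braking_distance(60.0, 0.7), 1))   # dry asphalt
print(round(braking_distance(60.0, 0.4), 1))   # wet / poor surface
```

    The μ values here are illustrative; in the described system they would come from the roughness-index-to-friction mapping computed in Matlab.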

  14. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment (e.g. x := sin(theta)), Read (e.g. Read A, B, C) and Print (e.g. Print x, y, z).]

  15. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  16. [An operational remote sensing algorithm of land surface evapotranspiration based on NOAA PAL dataset].

    Science.gov (United States)

    Hou, Ying-Yu; He, Yan-Bo; Wang, Jian-Lin; Tian, Guo-Liang

    2009-10-01

    Based on the time series of 10-day composite NOAA Pathfinder AVHRR Land (PAL) data (8 km x 8 km), and by using the land surface energy balance equation and the "VI-Ts" (vegetation index-land surface temperature) method, a new algorithm for land surface evapotranspiration (ET) was constructed. The new algorithm does not need support from meteorological observation data: all of its parameters and variables are directly inverted or derived from remote sensing data. A widely accepted remote sensing ET model, the SEBS model, was chosen to validate the new algorithm. The validation test showed that the ET and its seasonal variation trend estimated by the SEBS model and by the new algorithm agreed well, suggesting that the ET estimated by the new algorithm is reliable and able to reflect the actual land surface ET. The new remote sensing ET algorithm is practical and operational, and offers a new approach to studying the spatiotemporal variation of ET at continental and global scales based on long-term time series of satellite remote sensing images.

  17. Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface

    International Nuclear Information System (INIS)

    Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho

    2005-01-01

    While the SIMPLE algorithm is most widely used for simulations of flow phenomena in industrial equipment and manufacturing processes, it is less often adopted for simulations of free surface flow. Although the SIMPLE algorithm itself is free from time-step limitations, the free surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme must solve simultaneous equations within its procedure. If the computation time of the SIMPLE algorithm can be reduced when it is applied to unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free surface flow. The broken water column problem is adopted for validation of the modified algorithm (MoSIMPLE) and for comparison with the conventional SIMPLE algorithm.

  18. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits

    Directory of Open Access Journals (Sweden)

    Lieberman Rebecca M

    2008-04-01

    Full Text Available Abstract Background Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. Methods This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed by chart review the hypoglycemia cases identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was documented blood glucose 3.9 mmol/l or emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. Results We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86–92 for
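    The study's key metric, positive predictive value with a confidence interval, is straightforward to compute. A small sketch with a Wilson score interval (the abstract does not state which interval method the authors used, and the counts below are hypothetical values chosen to be consistent with the reported 89% PPV):

```python
import math

def ppv_wilson(tp: int, flagged: int, z: float = 1.96):
    """Positive predictive value of a coding algorithm with a Wilson
    score confidence interval (z = 1.96 for 95% confidence)."""
    p = tp / flagged
    denom = 1.0 + z ** 2 / flagged
    centre = (p + z ** 2 / (2 * flagged)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / flagged
                                   + z ** 2 / (4 * flagged ** 2))
    return p, centre - half, centre + half

# Hypothetical counts: 390 confirmed hypoglycemia visits out of 436
# visits flagged by the refined algorithm
p, lo, hi = ppv_wilson(390, 436)
print(round(p, 2), round(lo, 2), round(hi, 2))
```

    PPV is simply TP / (TP + FP) over flagged visits; the interval narrows as the number of flagged visits grows.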

  19. An Algorithm for Online Inertia Identification and Load Torque Observation via Adaptive Kalman Observer-Recursive Least Squares

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2018-03-01

    Full Text Available In this paper, an online parameter identification algorithm to iteratively compute the numerical values of inertia and load torque is proposed. Since inertia and load torque are strongly coupled variables due to the degenerate-rank problem, it is hard to estimate relatively accurate values for them in cases where the load torque varies or where relatively accurate prior knowledge of the inertia is not available. This paper eliminates this problem and realizes ideal online inertia identification regardless of load condition and initial error. The algorithm integrates a full-order Kalman observer and recursive least squares, and introduces adaptive controllers to enhance robustness. It performs better when iteratively computing load torque and moment of inertia. A theoretical sensitivity analysis of the proposed algorithm is conducted. Compared to traditional methods, the validity of the proposed algorithm is demonstrated by simulation and experimental results.
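    The RLS half of the method can be sketched on the rigid-body mechanical equation J dω/dt = Te − TL, which is linear in θ = [1/J, TL/J]. The Kalman observer and adaptive gains of the paper are omitted; this is only the RLS core, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
J_true, TL_true, Ts = 0.05, 1.2, 1e-3    # inertia, load torque, sample time

# Simulate J*dw/dt = Te - TL with a varying electromagnetic torque Te
# so that both unknowns are identifiable (constant Te would not excite
# the coupled parameters separately).
N = 2000
Te = 2.0 + np.sin(2 * np.pi * 5.0 * Ts * np.arange(N))
w = np.zeros(N)
for k in range(N - 1):
    w[k + 1] = w[k] + Ts / J_true * (Te[k] - TL_true)
w += 1e-4 * rng.standard_normal(N)       # speed-measurement noise

# RLS on  dw[k] = phi[k] @ theta,  phi = [Ts*Te[k], -Ts],
# theta = [1/J, TL/J]
theta = np.zeros(2)
P = 1e6 * np.eye(2)
lam = 0.999                               # forgetting factor
for k in range(N - 1):
    phi = np.array([Ts * Te[k], -Ts])
    err = (w[k + 1] - w[k]) - phi @ theta
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * err
    P = (P - np.outer(K, phi @ P)) / lam

J_est = 1.0 / theta[0]
TL_est = theta[1] / theta[0]
print(round(J_est, 4), round(TL_est, 3))
```

    The forgetting factor lets the estimator track a slowly varying load torque, which is the scenario the paper targets.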

  20. A MUSIC-Based Algorithm for Blind User Identification in Multiuser DS-CDMA

    Directory of Open Access Journals (Sweden)

    M. Reza Soleymani

    2005-04-01

    Full Text Available A blind scheme based on multiple-signal classification (MUSIC algorithm for user identification in a synchronous multiuser code-division multiple-access (CDMA system is suggested. The scheme is blind in the sense that it does not require prior knowledge of the spreading codes. Spreading codes and users' power are acquired by the scheme. Eigenvalue decomposition (EVD is performed on the received signal, and then all the valid possible signature sequences are projected onto the subspaces. However, as a result of this process, some false solutions are also produced and the ambiguity seems unresolvable. Our approach is to apply a transformation derived from the results of the subspace decomposition on the received signal and then to inspect their statistics. It is shown that the second-order statistics of the transformed signal provides a reliable means for removing the false solutions.
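    The subspace test at the heart of the scheme can be sketched as follows, with a hypothetical Walsh-Hadamard signature pool; the paper's additional second-order statistics for rejecting false solutions are not reproduced:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(2)
L = 8
codes = hadamard(L)[:6].astype(float)     # candidate signature pool
active, amps = [0, 1], [1.0, 0.7]         # two active users

# Received chip matrix for 4000 symbol periods (synchronous CDMA):
# each active user's random +/-1 symbols spread by its code, plus noise
n_sym = 4000
r = sum(a * np.outer(codes[k], rng.choice([-1.0, 1.0], n_sym))
        for k, a in zip(active, amps))
r += 0.1 * rng.standard_normal((L, n_sym))

# EVD of the sample covariance; the L-2 smallest eigenvectors span the
# noise subspace (2 = number of active users)
R = r @ r.T / n_sym
w, V = np.linalg.eigh(R)
En = V[:, :L - len(active)]

# MUSIC criterion: an active code is (nearly) orthogonal to the noise
# subspace, so its normalized projection onto it is near zero
score = np.linalg.norm(En.T @ codes.T, axis=0) / np.linalg.norm(codes, axis=1)
detected = np.flatnonzero(score < 0.3)
print(detected)
```

    Inactive candidates project almost fully onto the noise subspace (score near 1), while active signatures score near the noise floor, so a simple threshold separates them.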

  1. Models for Evolutionary Algorithms and Their Applications in System Identification and Control Optimization

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    of handling problems with non-linear constraints, multiple objectives, and dynamic components – properties that frequently appear in real-world problems. This thesis presents research in three fundamental areas of EC; fitness function design, methods for parameter control, and techniques for multimodal...... optimization. In addition to general investigations in these areas, I introduce a number of algorithms and demonstrate their potential on real-world problems in system identification and control. Furthermore, I investigate dynamic optimization problems in the context of the three fundamental areas as well...... as control, which is a field where real-world dynamic problems appear. Regarding fitness function design, smoothness of the fitness landscape is of primary concern, because a too rugged landscape may disrupt the search and lead to premature convergence at local optima. Rugged fitness landscapes typically...

  2. WH-EA: An Evolutionary Algorithm for Wiener-Hammerstein System Identification

    Directory of Open Access Journals (Sweden)

    J. Zambrano

    2018-01-01

    Full Text Available Current methods to identify Wiener-Hammerstein systems using the Best Linear Approximation (BLA) involve at least two steps. First, the BLA is divided into the front and back linear dynamics of the Wiener-Hammerstein model. Second, a refitting procedure of all parameters is carried out to reduce modelling errors. In this paper, a novel approach to identify Wiener-Hammerstein systems in a single step is proposed. This approach is based on a customized evolutionary algorithm (WH-EA) able to look for the best BLA split while capturing the process static nonlinearity with high precision. Furthermore, to correct possible errors in the BLA estimation, the locations of poles and zeros are subtly modified within an adequate search space to allow fine-tuning of the model. The performance of the proposed approach is analysed using a demonstration example and a nonlinear system identification benchmark.

  3. Output-only modal dynamic identification of frames by a refined FDD algorithm at seismic input and high damping

    Science.gov (United States)

    Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio

    2016-02-01

    The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully at given seismic input, taken as base excitation, including both strong motion data and single and multiple input ground motions. Rather than investigating the role of seismic response signals in the Time Domain, this paper considers the identification analysis in the Frequency Domain. Results turn out very consistent with the target values, with quite limited errors in the modal estimates, including the damping ratios, which range from the order of 1% to 10%. Both seismic excitation and high damping values, which become critical even for well-spaced modes, violate traditional FDD assumptions: this demonstrates the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames at seismic input is feasible, even at concomitantly high damping.
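    The core of any FDD variant is a singular value decomposition of the output cross-spectral density matrix at each frequency line; peaks of the first singular value mark the modal frequencies. A minimal two-channel sketch (synthetic signals, none of the paper's rFDD refinements):

```python
import numpy as np
from scipy.signal import csd, find_peaks

fs = 256.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(3)

# Two-channel synthetic response with modes at 8 Hz (shape [1, 1]) and
# 20 Hz (shape [1, -1]); broadband noise stands in for the excitation.
q1 = np.sin(2 * np.pi * 8.0 * t + rng.uniform(0, 2 * np.pi))
q2 = np.sin(2 * np.pi * 20.0 * t + rng.uniform(0, 2 * np.pi))
y = np.vstack([q1 + q2, q1 - q2]) + 0.2 * rng.standard_normal((2, t.size))

# Cross-spectral density matrix G(f) between every channel pair
nper = 1024
f, _ = csd(y[0], y[0], fs=fs, nperseg=nper)
G = np.empty((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nper)

# FDD core: first singular value of G(f) peaks at modal frequencies;
# the corresponding singular vector estimates the mode shape.
s1 = np.linalg.svd(G, compute_uv=False)[:, 0]
pk, _ = find_peaks(s1)
top = pk[np.argsort(s1[pk])[-2:]]
ident = np.sort(f[top])
print(ident)
```

    The refinements in the paper address what this basic sketch cannot: closely spaced modes, high damping, and non-stationary seismic excitation.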

  4. Direct instrumental identification of catalytically active surface sites

    Science.gov (United States)

    Pfisterer, Jonas H. K.; Liang, Yunchang; Schneider, Oliver; Bandarenka, Aliaksandr S.

    2017-09-01

    The activity of heterogeneous catalysts—which are involved in some 80 per cent of processes in the chemical and energy industries—is determined by the electronic structure of specific surface sites that offer optimal binding of reaction intermediates. Directly identifying and monitoring these sites during a reaction should therefore provide insight that might aid the targeted development of heterogeneous catalysts and electrocatalysts (those that participate in electrochemical reactions) for practical applications. The invention of the scanning tunnelling microscope (STM) and the electrochemical STM promised to deliver such imaging capabilities, and both have indeed contributed greatly to our atomistic understanding of heterogeneous catalysis. But although the STM has been used to probe and initiate surface reactions, and has even enabled local measurements of reactivity in some systems, it is not generally thought to be suited to the direct identification of catalytically active surface sites under reaction conditions. Here we demonstrate, however, that common STMs can readily map the catalytic activity of surfaces with high spatial resolution: we show that by monitoring relative changes in the tunnelling current noise, active sites can be distinguished in an almost quantitative fashion according to their ability to catalyse the hydrogen-evolution reaction or the oxygen-reduction reaction. These data allow us to evaluate directly the importance and relative contribution to overall catalyst activity of different defects and sites at the boundaries between two materials. With its ability to deliver such information and its ready applicability to different systems, we anticipate that our method will aid the rational design of heterogeneous catalysts.

  5. Research on the target coverage algorithms for 3D curved surface

    International Nuclear Information System (INIS)

    Sun, Shunyuan; Sun, Li; Chen, Shu

    2016-01-01

    To solve the target coverage problem in three-dimensional space, a deployment strategy for the target points is put forward, and the differential evolution (DE) algorithm is used to optimize the location coordinates of the sensor nodes so that all target points on the 3-D surface are covered with a minimal number of sensor nodes. First, the three-dimensional perception model of the sensor nodes is built, and the blind area that arises when sensor nodes sense target points on a 3-D surface is characterized. The feasibility of solving the target coverage problem on a 3-D surface with the DE algorithm is then proved theoretically, which also reflects the fault tolerance of the algorithm.
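    A minimal version of the DE optimization, with a hypothetical surface and a minimax-radius objective for two sensor nodes (the paper's 3-D perception model and blind-area handling are not reproduced):

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)

# Hypothetical target points scattered on a surface patch z = f(x, y)
def height(x, y):
    return 0.2 * np.sin(3.0 * x) + 0.1 * y ** 2

targets = rng.uniform(0.0, 1.0, (40, 2))
targets3 = np.column_stack([targets, height(targets[:, 0], targets[:, 1])])

def worst_gap(p):
    """Largest target-to-nearest-sensor distance for two sensor nodes
    whose (x, y) positions are packed in p; sensors sit on the surface."""
    s = p.reshape(2, 2)
    s3 = np.column_stack([s, height(s[:, 0], s[:, 1])])
    d = np.linalg.norm(targets3[:, None, :] - s3[None, :, :], axis=2)
    return d.min(axis=1).max()

# DE searches sensor coordinates minimizing the required sensing radius
res = differential_evolution(worst_gap, bounds=[(0.0, 1.0)] * 4, seed=4)
print(round(res.fun, 3))
```

    If the minimized radius exceeds the sensors' actual sensing range, more nodes are needed; repeating the search with an increasing node count yields the minimal deployment.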

  6. Damage identification on spatial Timoshenko arches by means of genetic algorithms

    Science.gov (United States)

    Greco, A.; D'Urso, D.; Cannizzaro, F.; Pluchino, A.

    2018-05-01

    In this paper a procedure for the dynamic identification of damage in spatial Timoshenko arches is presented. The proposed approach is based on the calculation of an arbitrary number of exact eigen-properties of a damaged spatial arch by means of the Wittrick and Williams algorithm. The proposed damage model considers a reduction of the volume in a part of the arch, and is therefore suitable, unlike most models in the dedicated literature, not only for concentrated cracks but also for diffuse damaged zones which may involve a loss of mass. Different damage scenarios can be taken into account, with variable location, intensity and extension of the damage as well as number of damaged segments. An optimization procedure, aiming at identifying which damage configuration minimizes the difference between its eigen-properties and a set of measured modal quantities for the structure, is implemented making use of genetic algorithms. In this context, an initial random population of chromosomes, representing different damage distributions along the arch, is forced to evolve towards the fittest solution. Several applications with different single or multiple damaged zones and boundary conditions confirm the validity and applicability of the proposed procedure, even in the presence of instrumental errors on the measured data.

  7. Technical note: Efficient online source identification algorithm for integration within a contamination event management system

    Science.gov (United States)

    Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai

    2017-07-01

    Drinking water distribution networks are part of critical infrastructure and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors able to detect malicious substances in the network and on early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, little information exists about the integration within a comprehensive real-time event detection and management system. In the following, the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.

  8. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and, particularly, shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms either are not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
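    The two-band-in, SST-out regression structure can be sketched with scikit-learn. Everything below is a hypothetical stand-in: synthetic brightness temperatures and a split-window-style relation replace the paper's MODIS/buoy training data, and the network size is invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Synthetic band 31/32 brightness temperatures (K) and an SST generated
# by a split-window-style relation plus noise.
t31 = rng.uniform(285.0, 300.0, 2000)
t32 = t31 - rng.uniform(0.2, 1.5, 2000)          # band 32 slightly cooler
sst = t31 + 2.0 * (t31 - t32) + 0.5 + 0.1 * rng.standard_normal(2000)

X = np.column_stack([t31, t32])
y_mean = sst[:1500].mean()

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=2000, random_state=0))
net.fit(X[:1500], sst[:1500] - y_mean)           # train on 1500 pixels

pred = net.predict(X[1500:]) + y_mean            # held-out prediction
ss_res = np.sum((pred - sst[1500:]) ** 2)
ss_tot = np.sum((sst[1500:] - sst[1500:].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                       # explained variation
print(round(r2, 2))
```

    The explained-variation figure computed here plays the role of the 82-90% statistic the paper reports against in situ observations.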

  9. The sensitivity of characteristics of cyclone activity to identification procedures in tracking algorithms

    Directory of Open Access Journals (Sweden)

    Irina Rudeva

    2014-12-01

    Full Text Available The IMILAST project (‘Intercomparison of Mid-Latitude Storm Diagnostics’) was set up to compare low-level cyclone climatologies derived from a number of objective identification algorithms. This paper is a contribution to that effort in which we determine the sensitivity of three key aspects of Northern Hemisphere cyclone behaviour [namely the number of cyclones, their intensity (defined here in terms of the central pressure) and their deepening rates] to specific features of the automatic cyclone identification. The sensitivity is assessed with respect to three such features which may be thought to influence the resulting climatology (namely performance in areas of complicated orography, the time of detection of a cyclone, and the representation of rapidly propagating cyclones). We make use of 13 tracking methods in this analysis. We find that the filtering of cyclones in regions where the topography exceeds 1500 m can significantly change the total number of cyclones detected by a scheme, but has little impact on the cyclone intensity distribution. More dramatically, late identification of cyclones (simulated by truncating the first 12 hours of the cyclone life cycle) leads to a large reduction in cyclone numbers over both continents and oceans (up to 80 and 40%, respectively). Finally, the potential splitting of trajectories at times of fastest propagation has a negligible climatological effect on the geographical distribution of cyclone numbers. Overall, it has been found that the averaged deepening rates and averaged cyclone central pressure are rather insensitive to the specifics of the tracking procedure, being more sensitive to the data set used (as shown in previous studies) and the geographical location of a cyclone.

  10. Algorithms for singularities and real structures of weak Del Pezzo surfaces

    KAUST Repository

    Lubbes, Niels

    2014-08-01

    In this paper, we consider the classification of singularities [P. Du Val, On isolated singularities of surfaces which do not affect the conditions of adjunction. I, II, III, Proc. Camb. Philos. Soc. 30 (1934) 453-491] and real structures [C. T. C. Wall, Real forms of smooth del Pezzo surfaces, J. Reine Angew. Math. 1987(375/376) (1987) 47-66, ISSN 0075-4102] of weak Del Pezzo surfaces from an algorithmic point of view. It is well-known that the singularities of weak Del Pezzo surfaces correspond to root subsystems. We present an algorithm which computes the classification of these root subsystems. We represent equivalence classes of root subsystems by unique labels. These labels allow us to construct examples of weak Del Pezzo surfaces with the corresponding singularity configuration. Equivalence classes of real structures of weak Del Pezzo surfaces are also represented by root subsystems. We present an algorithm which computes the classification of real structures. This leads to an alternative proof of the known classification for Del Pezzo surfaces and extends this classification to singular weak Del Pezzo surfaces. As an application we classify families of real conics on cyclides. © World Scientific Publishing Company.

  11. Damage identification in beams by a response surface based technique

    Directory of Open Access Journals (Sweden)

    Teidj S.

    2014-01-01

    Full Text Available In this work, identification of damage in uniform homogeneous metallic beams was considered through the propagation of non-dispersive elastic torsional waves. The proposed damage detection procedure consisted of the following sequence. Given a localized torque excitation having the form of a short half-sine pulse, the first step was calculating the transient solution of the resulting torsional wave. This torque could be generated in practice by means of asymmetric laser irradiation of the beam surface. Then, a localized defect, assumed to be characterized by an abrupt reduction of the beam section area with a given height and extent, was placed at a known location of the beam. Next, the response in terms of transverse section rotation rate was obtained for a point situated after the defect, where the sensor was positioned. The sensor could in practice be implemented using laser vibrometry. A parametric study was then conducted by using a full factorial design-of-experiments table and numerical simulations based on a finite difference characteristic scheme. This enabled the derivation of a response surface model that was shown to represent adequately the response of the system in terms of the following factors: defect extent and severity. The final step was performing the inverse problem solution in order to identify the defect characteristics from measurement.
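    The response-surface step, fitting a polynomial in the two damage factors to the simulated responses from a full factorial design, can be sketched as follows. All numbers are invented for illustration; the paper's responses come from finite-difference simulations:

```python
import numpy as np

# Hypothetical 3-level full factorial over the two damage factors
extent = np.array([0.02, 0.05, 0.08])        # defect extent
severity = np.array([0.1, 0.3, 0.5])         # relative section reduction
E, S = np.meshgrid(extent, severity)
e, s = E.ravel(), S.ravel()

# Simulated "response" (rotation-rate feature) standing in for the
# finite-difference runs; here the true dependence is itself quadratic.
y = 1.0 + 8.0 * e + 2.5 * s + 30.0 * e * s + 0.5 * s ** 2

# Fit the quadratic response surface
#   y ~ b0 + b1*e + b2*s + b3*e*s + b4*e^2 + b5*s^2
A = np.column_stack([np.ones_like(e), e, s, e * s, e ** 2, s ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))
```

    The identification step then inverts this fitted surface: given a measured response, the factor pair whose surface prediction best matches it is sought.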

  12. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    Science.gov (United States)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).
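    The minimum zone criterion minimizes the peak-to-peak deviation from the reference geometry, unlike least squares, which minimizes the sum of squared deviations. Sketching it on a simple straightness profile (a line standing in for the helicoid, and scipy's differential evolution standing in for the paper's GA):

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)

# Hypothetical measured profile: nominal line plus form error and noise
x = np.linspace(0.0, 30.0, 200)
y = (0.4 + 0.01 * x + 0.003 * np.sin(1.3 * x)
     + 0.0005 * rng.standard_normal(x.size))

def zone_width(p):
    """Minimum-zone objective: peak-to-peak deviation from the
    reference line y = p[0] + p[1]*x."""
    r = y - (p[0] + p[1] * x)
    return r.max() - r.min()

res = differential_evolution(zone_width,
                             bounds=[(-1.0, 1.0), (-0.1, 0.1)], seed=6)
lsq = np.polyfit(x, y, 1)[::-1]            # [intercept, slope]
print(round(res.fun, 5), round(zone_width(lsq), 5))
```

    By construction the minimum-zone width is never larger than the zone width of the least-squares reference, which is why the minimum zone condition gives the tighter (standard-conforming) error value.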

  13. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  14. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  15. A Comparative and Experimental Study on Gradient and Genetic Optimization Algorithms for Parameter Identification of Linear MIMO Models of a Drilling Vessel

    Directory of Open Access Journals (Sweden)

    Bańka Stanisław

    2015-12-01

    Full Text Available The paper presents algorithms for parameter identification of linear vessel models valid for the current operating point of a ship. Advantages and disadvantages of gradient and genetic algorithms in identifying the model parameters are discussed. The study is supported by the presentation of identification results for a nonlinear model of a drilling vessel.

  16. Surface deformation recovery algorithm for reflector antennas based on geometric optics.

    Science.gov (United States)

    Huang, Jianhui; Jin, Huiliang; Ye, Qian; Meng, Guoxiang

    2017-10-02

    Surface deformations of large reflector antennas depend strongly on elevation angle. This paper adopts a scheme able to conduct measurements at any elevation angle: carrying an emission source, an unmanned aerial vehicle (UAV) scans the antenna on a near-field plane while the antenna stays stationary. Near-field amplitude is measured in the scheme. To recover the deformation from the measured amplitude, this paper proposes a novel algorithm based on the derived deformation-amplitude equation, which reveals the relation between the surface deformation and the near-field amplitude. With this algorithm, a precise deformation recovery can be achieved at a low frequency (<1 GHz) from a single near-field amplitude measurement. Simulation results show the high accuracy and adaptability of the algorithm.

  17. Algorithm for Automated Mapping of Land Surface Temperature Using LANDSAT 8 Satellite Data

    OpenAIRE

    Ugur Avdan; Gordana Jovanovska

    2016-01-01

    Land surface temperature is an important factor in many areas, such as global climate change, hydrological, geo-/biophysical, and urban land use/land cover. As the latest launched satellite from the LANDSAT family, LANDSAT 8 has opened new possibilities for understanding the events on the Earth with remote sensing. This study presents an algorithm for the automatic mapping of land surface temperature from LANDSAT 8 data. The tool was developed using the LANDSAT 8 thermal infrared sensor Band ...

  18. Algorithm for Automated Mapping of Land Surface Temperature Using LANDSAT 8 Satellite Data

    Directory of Open Access Journals (Sweden)

    Ugur Avdan

    2016-01-01

    Full Text Available Land surface temperature is an important factor in many areas, such as global climate change, hydrological, geo-/biophysical, and urban land use/land cover. As the latest launched satellite from the LANDSAT family, LANDSAT 8 has opened new possibilities for understanding the events on the Earth with remote sensing. This study presents an algorithm for the automatic mapping of land surface temperature from LANDSAT 8 data. The tool was developed using the LANDSAT 8 thermal infrared sensor Band 10 data. Different methods and formulas were used in the algorithm that successfully retrieves the land surface temperature to help us study the thermal environment of the ground surface. To verify the algorithm, the land surface temperature and the near-air temperature were compared. The results showed that, for the first case, the standard deviation was 2.4°C, and for the second case, it was 2.7°C. For future studies, the tool should be refined with in situ measurements of land surface temperature.
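
    The retrieval chain described above (Band 10 digital number → top-of-atmosphere radiance → brightness temperature → emissivity-corrected LST) can be sketched as follows. The rescaling factors and calibration constants are the typical Band 10 values published in Landsat 8 scene metadata (they vary slightly per scene), and the digital number and emissivity are made-up inputs.

```python
import math

# Thermal constants for Landsat 8 TIRS Band 10, as published in the scene
# metadata (MTL file); the radiance rescaling factors below are typical values.
ML, AL = 3.342e-4, 0.1            # radiance multiplicative / additive factors
K1, K2 = 774.8853, 1321.0789      # Band 10 calibration constants

def brightness_temperature(dn):
    """Digital number -> TOA spectral radiance -> at-sensor brightness temperature [K]."""
    L = ML * dn + AL
    return K2 / math.log(K1 / L + 1.0)

def land_surface_temperature(dn, emissivity):
    """Emissivity-corrected LST [K] via the single-channel correction
    LST = BT / (1 + (lambda * BT / rho) * ln(emissivity))."""
    bt = brightness_temperature(dn)
    lam = 10.895e-6                # Band 10 effective wavelength [m]
    rho = 1.438e-2                 # h*c/k_B [m K]
    return bt / (1.0 + (lam * bt / rho) * math.log(emissivity))

lst = land_surface_temperature(dn=30000, emissivity=0.96)
print(round(lst - 273.15, 2))      # LST in degrees Celsius
```

    In a mapping tool the same two functions are simply applied per pixel, with the emissivity raster usually derived from NDVI-based vegetation proportion.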

  19. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Sabzi

    2018-03-01

    Full Text Available Accurate classification of fruit varieties in processing factories and during post-harvesting applications is a challenge that has been widely studied. This paper presents a novel approach to automatic fruit identification applied to three common varieties of oranges (Citrus sinensis L.), namely Bam, Payvandi and Thomson. A total of 300 color images were used for the experiments, 100 samples for each orange variety, which are publicly available. After segmentation, 263 parameters, including texture, color and shape features, were extracted from each sample using image processing. Among them, the 6 most effective features were automatically selected by using a hybrid approach consisting of an artificial neural network and particle swarm optimization algorithm (ANN-PSO). Then, three different classifiers were applied and compared: hybrid artificial neural network – artificial bee colony (ANN-ABC); hybrid artificial neural network – harmony search (ANN-HS); and k-nearest neighbors (kNN). The experimental results show that the hybrid approaches outperform kNN. The average correct classification rate of ANN-HS was 94.28%, while ANN-ABC achieved 96.70% accuracy with the available data, contrasting with the 70.9% baseline accuracy of kNN. Thus, this new proposed methodology provides a fast and accurate way to classify multiple fruit varieties, which can be easily implemented in processing factories. The main contribution of this work is that the method can be directly adapted to other use cases, since the selection of the optimal features and the configuration of the neural network are performed automatically using metaheuristic algorithms.
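
    As a baseline illustration of the classification stage, here is a minimal leave-one-out kNN classifier of the kind the paper compares against, run on synthetic stand-ins for the six selected features; the feature distributions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_classify(train_X, train_y, query, k=5):
    # Euclidean k-nearest-neighbours majority vote.
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Synthetic stand-in for the 6 selected colour/texture/shape features of the
# three orange varieties (100 samples each, mirroring the dataset layout).
centers = rng.normal(0.0, 3.0, size=(3, 6))
X = np.vstack([c + rng.normal(0.0, 1.0, size=(100, 6)) for c in centers])
y = np.repeat(np.arange(3), 100)

# Leave-one-out evaluation of the kNN baseline.
correct = sum(knn_classify(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
              for i in range(len(y)))
accuracy = correct / len(y)
print(accuracy)
```

    With real orange features the classes overlap far more than in this synthetic case, which is why the metaheuristic hybrids in the paper gain so much over the kNN baseline.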

  20. Hierarchical Threshold-Adaptive Point Cloud Filter Algorithm Based on Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting was proposed. Firstly, noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up through the lowest points among the neighborhood grids. The real height and the fitted height are calculated, and their difference is compared against the threshold. Finally, to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. The test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34% respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well-adapted and yields highly accurate filtering results.
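
    A single-level pass of the filtering idea — grid the points, fit a surface through the lowest points of the neighborhood grids, and threshold the elevation residual — can be sketched as follows. This is a simplified sketch (one hierarchy level, a plane in place of the moving surface), with synthetic data; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def ground_filter(points, cell=5.0, threshold=0.5):
    """One pass of a moving-surface-style ground filter: index points into a
    grid, fit z = a + b*x + c*y through the lowest points of each 3x3 cell
    neighbourhood, keep points whose elevation residual is below threshold."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(idx)
    # lowest point per cell
    lows = {k: points[v][np.argmin(points[v][:, 2])] for k, v in cells.items()}
    ground = np.zeros(len(points), dtype=bool)
    for k, v in cells.items():
        nbr = np.array([lows[(k[0] + di, k[1] + dj)]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (k[0] + di, k[1] + dj) in lows])
        A = np.c_[np.ones(len(nbr)), nbr[:, 0], nbr[:, 1]]
        coef, *_ = np.linalg.lstsq(A, nbr[:, 2], rcond=None)  # local plane fit
        for idx in v:
            px, py, pz = points[idx]
            fit = coef[0] + coef[1] * px + coef[2] * py
            ground[idx] = (pz - fit) < threshold               # residual test
    return ground

# Synthetic tile: gently sloping terrain plus a few elevated "building" points.
xy = rng.uniform(0, 50, size=(500, 2))
z = 0.02 * xy[:, 0] + rng.normal(0, 0.05, 500)
z[:40] += 5.0                       # off-terrain points
pts = np.c_[xy, z]
mask = ground_filter(pts)
print(mask[:40].sum(), mask[40:].sum())
```

    The hierarchical variant of the paper would now shrink the cell size and tighten the threshold, repeating the pass on the points kept so far.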

  1. Inversion of Land Surface Temperature (LST) Using Terra ASTER Data: A Comparison of Three Algorithms

    Directory of Open Access Journals (Sweden)

    Milton Isaya Ndossi

    2016-12-01

    Full Text Available Land Surface Temperature (LST) is an important measurement in studies related to the Earth surface's processes. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study compares LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA) and the Planck function. The study used National Oceanic and Atmospheric Administration (NOAA) data to model and compare the results from the three algorithms. The data from the sensor have been processed in the Python programming language within a free and open-source software package (QGIS) to enable users to make use of the algorithms. The study revealed that all three algorithms are suitable for LST inversion: the Planck function showed the highest accuracy, the SWA moderate accuracy, and the SCA the least accuracy. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K and 2.88 K for the Planck function, the SCA and the SWA, respectively.

  2. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Roytershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
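
    The LIC core of the method — convolving white noise along streamlines of the vector field — can be sketched in a few lines. This is a CPU sketch of plain 2-D LIC with a box kernel, not the screen-space GPU version the report describes; the field, step length and kernel length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def lic(vx, vy, noise, length=10):
    """Minimal LIC: average white noise along streamlines traced forward and
    backward through (vx, vy) with unit steps and nearest-neighbour sampling."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    mag = np.hypot(vx, vy) + 1e-12
    ux, uy = vx / mag, vy / mag
    for direction in (+1.0, -1.0):
        px, py = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
        for _ in range(length):
            i = np.clip(py.round().astype(int), 0, h - 1)
            j = np.clip(px.round().astype(int), 0, w - 1)
            out += noise[i, j]                 # accumulate noise along the streamline
            px += direction * ux[i, j]         # advance every pixel's tracer
            py += direction * uy[i, j]
    return out / (2 * length)

n = 64
yy, xx = np.mgrid[0:n, 0:n]
vx, vy = -(yy - n / 2.0), (xx - n / 2.0)       # solid-body rotation field
img = lic(vx, vy, rng.random((n, n)))
# Along-streamline averaging drives the variance well below that of the input noise.
print(img.var())
```

    The screen-space formulation of the report performs exactly this convolution, but on vectors projected into the image plane after the parallel sort-last composite.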

  3. Artificial immune algorithm implementation for optimized multi-axis sculptured surface CNC machining

    Science.gov (United States)

    Fountas, N. A.; Kechagias, J. D.; Vaxevanidis, N. M.

    2016-11-01

    This paper presents the results obtained by implementing an artificial immune algorithm to optimize standard multi-axis tool-paths applied to machining free-form surfaces. The investigation of its applicability was based on a full factorial experimental design addressing the two additional axes for tool inclination as independent variables, whilst a multi-objective response was formulated by taking into consideration surface deviation and tool-path time, objectives assessed directly from the computer-aided manufacturing environment. A standard sculptured part was developed from scratch considering its benchmark specifications, and a cutting-edge surface machining tool-path was applied to study the effects of the pattern formed when dynamically inclining a toroidal end-mill and guiding it along the feed direction under fixed lead and tilt inclination angles. The results obtained from the series of experiments were used to create the fitness function that the algorithm sequentially evaluates. It was found that the artificial immune algorithm employed can attain optimal values for the inclination angles, thus easing the complexity of this manufacturing process and ensuring full potential in multi-axis machining modelling for producing enhanced CNC manufacturing programs. Results suggest that the proposed algorithm implementation may reduce the mean experimental objective value to 51.5%

  4. The MeSsI (merging systems identification) algorithm and catalogue.

    Science.gov (United States)

    de Los Rios, Martín; Domínguez R., Mariano J.; Paz, Dante; Merchán, Manuel

    2016-05-01

    Merging galaxy systems provide observational evidence of the existence of dark matter and constraints on its properties. Therefore, statistically uniform samples of merging systems would be a powerful tool for several studies. In this paper, we present a new methodology for the identification of merging systems and the results of its application to galaxy redshift surveys. We use as a starting point a mock catalogue of galaxy systems, identified using friends-of-friends algorithms, that have experienced a major merger, as indicated by their merger trees. By applying machine learning techniques to this training sample, and using several features computed from the observable properties of galaxy members, it is possible to select galaxy groups that have a high probability of having experienced a major merger. Next, we apply Gaussian mixture techniques to galaxy members in order to reconstruct the properties of the haloes involved in such mergers. This methodology provides a highly reliable sample of merging systems with low contamination and precisely recovered properties. We apply our techniques to samples of galaxy systems obtained from the Sloan Digital Sky Survey Data Release 7, the Wide-Field Nearby Galaxy-Cluster Survey (WINGS) and the Hectospec Cluster Survey (HeCS). Our results recover previously known merging systems and provide several new candidates. We present their measured properties and discuss future analysis on current and forthcoming samples.
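
    The halo-reconstruction step uses a Gaussian mixture; a minimal 1-D expectation-maximisation fit to mock line-of-sight velocities illustrates the idea. The velocity values, component parameters and initialisation below are invented for the example and are not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def em_two_gaussians(v, iters=200):
    """1-D EM for a two-component Gaussian mixture, standing in for
    decomposing member velocities into the two merging haloes."""
    mu = np.array([v.min(), v.max()], dtype=float)   # crude initialisation
    sd = np.array([v.std(), v.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: membership responsibilities of each galaxy for each component
        pdf = w * np.exp(-0.5 * ((v[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means and dispersions
        nk = r.sum(axis=0)
        w = nk / len(v)
        mu = (r * v[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (v[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

# Mock line-of-sight velocities [km/s]: two haloes at -500 and +600.
v = np.concatenate([rng.normal(-500, 300, 150), rng.normal(600, 250, 100)])
w, mu, sd = em_two_gaussians(v)
print(mu.round(0))
```

    The recovered means and dispersions are the per-halo quantities the method feeds into its merger characterisation.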

  5. Damage Identification of Trusses with Elastic Supports Using FEM and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Nam-Il Kim

    2013-01-01

    Full Text Available A computationally efficient damage identification technique for truss structures with elastic supports is proposed based on the force method. To transform the truss with supports into an equivalent free-standing model without supports, novel zero-length dummy members are employed. General equilibrium equations and kinematic relations, in which the reaction forces and the displacements at the elastic supports are taken into account, are clearly formulated. The compatibility equations in terms of forces, in which the flexibilities of the elastic supports are considered, are explicitly presented using the singular value decomposition (SVD) technique. Both member and reaction forces are simultaneously and directly obtained. Then, all nodal displacements, including those of constrained nodes, are back-calculated from the member and reaction forces. Next, the microgenetic algorithm (MGA) is used to identify the site and the extent of multiple damages in truss structures. To demonstrate the advantages of the proposed approach, numerical solutions are presented for planar and space truss models with and without elastic supports. The numerical results indicate that the computational effort required by this study is significantly lower than that of the displacement method.

  6. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals only. This is accomplished by finding statistical independence of signal mixtures, and has been successfully applied to myriad fields such as medical science and image processing. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative reordering process applied to the extracted mixing matrix, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources, until finally converged source signals are reconstructed. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems of complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
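
    The reordering-and-sign-fixing step that the iterative scheme adds to conventional ICA can be sketched directly: match each separated signal to a reference signal measured on or near a source by maximum |correlation|, and fix its sign accordingly. The toy signals below are invented and stand in for ICA outputs with scrambled order and sign.

```python
import numpy as np

def reorder_by_reference(separated, references):
    """Reorder blind-separation outputs (and fix their signs) by maximum
    absolute correlation with reference signals measured near the sources."""
    n = separated.shape[0]
    c = np.corrcoef(np.vstack([separated, references]))[:n, n:]
    order, used = [], set()
    for j in range(references.shape[0]):
        # best still-unassigned separated signal for reference j
        i = max((i for i in range(n) if i not in used), key=lambda i: abs(c[i, j]))
        used.add(i)
        order.append(np.sign(c[i, j]) * separated[i])   # sign correction
    return np.array(order)

t = np.linspace(0, 1, 1000)
s1 = np.sin(2 * np.pi * 5 * t)                 # source 1
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # source 2
mixed_up = np.vstack([-s2, s1])                # ICA ambiguity: wrong order, flipped sign
fixed = reorder_by_reference(mixed_up, np.vstack([s1, s2]))
print(np.allclose(fixed[0], s1), np.allclose(fixed[1], s2))
```

    In the paper's loop this matching is applied after every ICA pass, so that convergence is judged on consistently ordered signals.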

  7. A Pre-Detection Based Anti-Collision Algorithm with Adjustable Slot Size Scheme for Tag Identification

    Directory of Open Access Journals (Sweden)

    Chiu-Kuo LIANG

    2015-06-01

    Full Text Available One of the research areas in RFID systems is tag anti-collision protocols: how to reduce identification time for a given number of tags in the field of an RFID reader. There are two types of tag anti-collision protocols for RFID systems: tree-based algorithms and slotted ALOHA-based algorithms. Many anti-collision algorithms have been proposed in recent years, especially tree-based protocols. However, challenges remain in enhancing system throughput and stability, because the underlying technologies face performance limitations when network density is high. In particular, tree-based protocols suffer from long identification delays. Recently, the Hybrid Hyper Query Tree (H2QT) protocol, a tree-based approach, was proposed, aiming to speed up tag identification in large-scale RFID systems. The main idea of H2QT is to track tag responses and try to predict the distribution of tag IDs in order to reduce collisions. In this paper, we propose a pre-detection tree-based algorithm, called the Adaptive Pre-Detection Broadcasting Query Tree algorithm (APDBQT), to avoid unnecessary queries. The proposed APDBQT protocol reduces not only collisions but idle cycles as well, by using a pre-detection scheme and an adjustable slot size mechanism. The simulation results show that the proposed technique provides superior performance in high-density environments: APDBQT is effective in increasing system throughput and minimizing identification delay.
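
    The query-tree family of protocols that APDBQT builds on can be illustrated with a minimal binary query-tree simulation. This sketches the classic scheme only — not APDBQT's pre-detection or adjustable-slot mechanisms — and the 4-bit tag IDs are made up.

```python
def query_tree_identify(tags):
    """Binary query-tree simulation: the reader broadcasts a prefix and all
    tags whose ID starts with it reply. A collision (>= 2 replies) splits the
    prefix into two longer ones; a single reply identifies that tag.
    Returns the identified IDs and the number of reader queries."""
    identified, queries, stack = [], 0, [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        replies = [t for t in tags if t.startswith(prefix)]
        if len(replies) == 1:
            identified.append(replies[0])        # readable: exactly one responder
        elif len(replies) > 1:
            stack.extend([prefix + "0", prefix + "1"])  # collision: split
    return identified, queries

tags = ["0010", "0111", "1000", "1101"]
ids, n_queries = query_tree_identify(tags)
print(sorted(ids), n_queries)  # all four tags identified in 7 queries
```

    Counting the collision and idle queries in this simulation is exactly what APDBQT's pre-detection step tries to reduce: it probes for responses before descending into empty branches.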

  8. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  9. Fast centroid algorithm for determining the surface plasmon resonance angle using the fixed-boundary method

    International Nuclear Information System (INIS)

    Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling

    2011-01-01

    To simplify the determination of the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method called the fixed-boundary centroid algorithm has been proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method is narrower, its other performance indices are all better than those of the other two centroid methods. The method offers high speed, good conformity, low error, and high SNR and resolution. It thus has the potential to be widely adopted.
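
    The fixed-boundary centroid idea — weight each angle inside a fixed window by its depth below a baseline and take the centroid — can be sketched on a synthetic SPR reflectivity dip. The window limits, baseline and curve parameters below are invented for the example.

```python
import numpy as np

def fixed_boundary_centroid(theta, intensity, left, right, baseline):
    """Centroid of the SPR dip computed only inside the fixed window
    [left, right], weighting each angle by its depth below the baseline."""
    m = (theta >= left) & (theta <= right)
    depth = np.clip(baseline - intensity[m], 0.0, None)   # negative depths ignored
    return (theta[m] * depth).sum() / depth.sum()

theta = np.linspace(60.0, 75.0, 1500)                     # incidence angles [deg]
dip = 1.0 - 0.8 * np.exp(-((theta - 67.3) / 0.8) ** 2)    # synthetic SPR curve
noisy = dip + np.random.default_rng(5).normal(0, 0.005, theta.size)
angle = fixed_boundary_centroid(theta, noisy, 65.0, 69.5, baseline=0.98)
print(round(angle, 2))
```

    Because the boundaries are fixed rather than recomputed from a threshold crossing on every frame, the per-frame cost reduces to one masked weighted average, which is where the speed advantage comes from.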

  10. A general rough-surface inversion algorithm: Theory and application to SAR data

    Science.gov (United States)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of the data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, which model to choose and which estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can easily be modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to the inversion of rough surfaces and can be applied to any parameterized scattering process.
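
    The estimation step — a Newton-type nonlinear least-squares fit of a parameterised scattering model to observed backscatter — can be sketched with a generic Gauss-Newton loop. The two-parameter angular model below is a hypothetical stand-in chosen for illustration, not the SPM itself, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

def gauss_newton(model, p0, x, y, iters=20):
    """Generic Gauss-Newton least squares with a forward-difference Jacobian,
    the kind of Newton-type minimisation the paper compares."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - model(x, p)                       # residuals
        J = np.empty((x.size, p.size))
        for k in range(p.size):                   # numerical Jacobian, column k
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * max(1.0, abs(p[k]))
            J[:, k] = (model(x, p + dp) - model(x, p)) / dp[k]
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step
    return p

# Hypothetical two-parameter rough-surface model: sigma0 = A * cos(theta)^b,
# standing in for the SPM's dependence on the surface parameters.
model = lambda th, p: p[0] * np.cos(th) ** p[1]
theta = np.deg2rad(np.linspace(20, 60, 30))       # incidence angles
truth = np.array([0.2, 4.0])
obs = model(theta, truth) * (1 + rng.normal(0, 0.02, theta.size))  # noisy data
est = gauss_newton(model, [0.1, 2.0], theta, obs)
print(est.round(2))
```

    Swapping in a different `model` function is the only change needed to invert another scattering process, which mirrors the modularity claimed for the approach.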

  11. Process optimization of rolling for zincked sheet technology using response surface methodology and genetic algorithm

    Science.gov (United States)

    Ji, Liang-Bo; Chen, Fang

    2017-07-01

    Numerical simulation and intelligent optimization techniques were adopted for the rolling and extrusion of zincked sheet. Using response surface methodology (RSM), a genetic algorithm (GA) and data processing technology, an efficient optimization of the process parameters for rolling of zincked sheet was investigated. The influence of roller gap, rolling speed and friction factor on the reduction rate and plate shortening rate was analyzed first. Then a predictive response surface model for a comprehensive quality index of the part was created using RSM, and simulated and predicted values were compared. Through the genetic algorithm, the optimal process parameters for rolling were solved for and then verified. The approach proved feasible and effective.
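
    The RSM step can be sketched by fitting a full second-order response surface to simulated experiments and then searching it for the optimum. A dense grid search stands in for the paper's GA here, and the response function and normalised factors (roller gap, speed) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def quad_design(x1, x2):
    # Full second-order response-surface model terms: 1, x1, x2, x1*x2, x1^2, x2^2.
    return np.c_[np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

# Simulated "experiments": quality index vs normalised roller gap g and speed s,
# a hypothetical stand-in for the paper's rolling simulations.
true_response = lambda g, s: -(g - 0.3) ** 2 - 2 * (s + 0.1) ** 2 + 1.0
g = rng.uniform(-1, 1, 25)
s = rng.uniform(-1, 1, 25)
y = true_response(g, s) + rng.normal(0, 0.01, 25)

beta, *_ = np.linalg.lstsq(quad_design(g, s), y, rcond=None)   # fit the RSM

# Optimise the fitted surface over the design region by dense grid search
# (the paper performs this search with a genetic algorithm instead).
gg, ss = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
pred = quad_design(gg.ravel(), ss.ravel()) @ beta
best = pred.argmax()
g_best, s_best = gg.ravel()[best], ss.ravel()[best]
print(g_best, s_best)   # should sit near the simulated optimum (0.3, -0.1)
```

    The fitted quadratic is cheap to evaluate, which is what makes a metaheuristic (or, here, an exhaustive grid) affordable where each original rolling simulation is not.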

  12. CPU, GPU and FPGA Implementations of MALD: Ceramic Tile Surface Defects Detection Algorithm

    OpenAIRE

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko

    2014-01-01

    This paper addresses adjustments, implementation and performance comparison of the Moving Average with Local Difference (MALD) method for ceramic tile surface defect detection. The ceramic tile production process is completely autonomous, except for the final stage, where the human eye is required for defect detection. Recent computational platform developments and advances in machine vision provide us with several options for MALD algorithm implementation. In order to exploit the shortest execution tim...

  13. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    Science.gov (United States)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress-free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress-free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability makes it desirable to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal ...

  14. An Efficient Surface Algorithm for Random-Particle Simulation of Vorticity and Heat Transport

    Science.gov (United States)

    Smith, P. A.; Stansby, P. K.

    1989-04-01

    A new surface algorithm has been incorporated into the random-vortex method for the simulation of 2-dimensional laminar flow, in which vortex particles are deleted rather than reflected as they cross a solid surface. This involves a modification to the strength and random walk of newly created vortex particles. Computations of the early stages of symmetric, impulsively started flow around a circular cylinder for a wide range of Reynolds numbers demonstrate that the number of vortices required for convergence is substantially reduced. The method has been further extended to accommodate forced convective heat transfer where temperature particles are created at a surface to satisfy the condition of constant surface temperature. Vortex and temperature particles are handled together throughout each time step. For long runs, in which a steady state is reached, comparison is made with some time-averaged experimental heat transfer data for Reynolds numbers up to a few hundred. A Karman vortex street occurs at the higher Reynolds numbers.

  15. A Robust Inversion Algorithm for Surface Leaf and Soil Temperatures Using the Vegetation Clumping Index

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2017-07-01

    Full Text Available The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduced a simple inversion procedure that uses gap frequency with a clumping index (GCI) to retrieve leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicated that the GCI algorithm performed well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. Given all evaluations, we recommend the GCI algorithm as a foundation for future satellite-based applications due to its straightforward form and robust performance for both crop and forest canopies.

  16. Identifying the Right Surface for the Right Patient at the Right Time: Generation and Content Validation of an Algorithm for Support Surface Selection

    Science.gov (United States)

    McNichol, Laurie; Watts, Carolyn; Mackey, Dianne; Beitz, Janice M.

    2015-01-01

    Support surfaces are an integral component of pressure ulcer prevention and treatment, but there is insufficient evidence to guide clinical decision making in this area. In an effort to provide clinical guidance for selecting support surfaces based on individual patient needs, the Wound, Ostomy and Continence Nurses Society (WOCN®) set out to develop an evidence- and consensus-based algorithm. A Task Force of clinical experts was identified who: 1) reviewed the literature and identified evidence for support surface use in the prevention and treatment of pressure ulcers; 2) developed supporting statements for essential components of the algorithm; 3) developed a draft algorithm for support surface selection; and 4) determined its face validity. A consensus panel of 20 key opinion leaders was then convened that: 1) reviewed the draft algorithm and supporting statements; 2) reached consensus on statements lacking robust supporting evidence; and 3) modified the draft algorithm and evaluated its content validity. The Content Validity Index (CVI) for the algorithm was strong (0.95 out of 1.0), with an overall mean score of 3.72 (on a scale of 1 to 4), suggesting that the steps were appropriate to the purpose of the algorithm. To our knowledge, this is the first evidence- and consensus-based algorithm for support surface selection that has undergone content validation. PMID:25549306

  17. Identifying the right surface for the right patient at the right time: generation and content validation of an algorithm for support surface selection.

    Science.gov (United States)

    McNichol, Laurie; Watts, Carolyn; Mackey, Dianne; Beitz, Janice M; Gray, Mikel

    2015-01-01

    Support surfaces are an integral component of pressure ulcer prevention and treatment, but there is insufficient evidence to guide clinical decision making in this area. In an effort to provide clinical guidance for selecting support surfaces based on individual patient needs, the Wound, Ostomy and Continence Nurses Society (WOCN®) set out to develop an evidence- and consensus-based algorithm. A Task Force of clinical experts was identified who: 1) reviewed the literature and identified evidence for support surface use in the prevention and treatment of pressure ulcers; 2) developed supporting statements for essential components of the algorithm; 3) developed a draft algorithm for support surface selection; and 4) determined its face validity. A consensus panel of 20 key opinion leaders was then convened that: 1) reviewed the draft algorithm and supporting statements; 2) reached consensus on statements lacking robust supporting evidence; and 3) modified the draft algorithm and evaluated its content validity. The Content Validity Index (CVI) for the algorithm was strong (0.95 out of 1.0), with an overall mean score of 3.72 (on a scale of 1 to 4), suggesting that the steps were appropriate to the purpose of the algorithm. To our knowledge, this is the first evidence- and consensus-based algorithm for support surface selection that has undergone content validation.

  18. Using subdivision surfaces and adaptive surface simplification algorithms for modeling chemical heterogeneities in geophysical flows

    Science.gov (United States)

    Schmalzl, Jörg; Loddoch, Alexander

    2003-09-01

    We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the lengths of the triangle edges and the sum of the angles the triangle forms with neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady state flow, in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
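    A refinement weight of the kind described, combining edge length with the angles a triangle forms with its neighbours, could look like the following sketch. The formula here is hypothetical, not the authors' exact function:

```python
def refinement_weight(edge_lengths, neighbor_angles, l_ref=1.0):
    """Hypothetical refinement weight: triangles with long edges that bend
    strongly relative to their neighbours score higher and are subdivided
    first (illustrative; not the paper's exact formula)."""
    length_term = max(edge_lengths) / l_ref
    # Sum of the angles the triangle forms with its (up to three)
    # neighbours; a locally flat mesh gives angles near zero.
    curvature_term = sum(abs(a) for a in neighbor_angles)
    return length_term * (1.0 + curvature_term)

flat = refinement_weight([0.1, 0.1, 0.1], [0.0, 0.0, 0.0])
bent = refinement_weight([0.1, 0.1, 0.1], [0.6, 0.2, 0.1])
```

A bent triangle is weighted above a flat one of the same size, so subdivision concentrates where the front curves.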

  19. MAPPING OF PLANETARY SURFACE AGE BASED ON CRATER STATISTICS OBTAINED BY AN AUTOMATIC DETECTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. L. Salih

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly global) image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. 
Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to
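    The calibration step pins the detector's threshold to a region with manual counts. A simplified sketch (the paper calibrates against the inferred age rather than the raw count, and the scores below are invented):

```python
def calibrate_threshold(scores, manual_count):
    """Choose the template-matching detection threshold so that the number
    of automatic detections in the calibration region matches the manual
    crater count (simplified stand-in for the paper's age-based tuning)."""
    for t in sorted(scores, reverse=True):
        if sum(1 for s in scores if s >= t) >= manual_count:
            return t
    return min(scores)

# Hypothetical matching scores of candidate detections in the calibration
# area, with a manual count of 4 craters.
scores = [0.91, 0.88, 0.74, 0.69, 0.52, 0.31]
t = calibrate_threshold(scores, manual_count=4)
```

The resulting threshold is then applied unchanged over the full mosaic.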

  20. A Simple and Universal Aerosol Retrieval Algorithm for Landsat Series Images Over Complex Surfaces

    Science.gov (United States)

    Wei, Jing; Huang, Bo; Sun, Lin; Zhang, Zhaoyang; Wang, Lunche; Bilal, Muhammad

    2017-12-01

    Operational aerosol optical depth (AOD) products are available at coarse spatial resolutions from several to tens of kilometers. These resolutions limit the application of these products for monitoring atmospheric pollutants at the city level. Therefore, a simple, universal, and high-resolution (30 m) Landsat aerosol retrieval algorithm over complex urban surfaces is developed. The surface reflectance is estimated from a combination of top-of-atmosphere reflectance at short-wave infrared (2.22 μm) and Landsat 4-7 surface reflectance climate data records over densely vegetated areas and bright areas. The aerosol type is determined using the historical aerosol optical properties derived from the local urban Aerosol Robotic Network (AERONET) site (Beijing). AERONET ground-based sun photometer AOD measurements from five sites located in urban and rural areas are obtained to validate the AOD retrievals. Terra Moderate Resolution Imaging Spectroradiometer Collection 6 AOD products (MOD04), including the dark target (DT), the deep blue (DB), and the combined DT and DB (DT&DB) retrievals at 10 km spatial resolution, are obtained for comparison purposes. Validation results show that the Landsat AOD retrievals at a 30 m resolution are well correlated with the AERONET AOD measurements (R2 = 0.932) and that approximately 77.46% of the retrievals fall within the expected error, with a low mean absolute error of 0.090 and a root-mean-square error of 0.126. Comparison results show that Landsat AOD retrievals are overall better and less biased than MOD04 AOD products, indicating that the new algorithm is robust and performs well in AOD retrieval over complex surfaces. The new algorithm can provide continuous and detailed spatial distributions of AOD during both low and high aerosol loadings.
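    The validation statistics quoted above (correlation, fraction within the expected error, MAE, RMSE) can be sketched as follows. The expected-error envelope ±(0.05 + 0.15·AOD) is the one commonly used for MODIS land retrievals; the paper may define it differently, and the sample values are invented:

```python
import numpy as np

def aod_validation(retrieved, truth):
    """Standard AOD validation statistics against sun-photometer truth."""
    retrieved, truth = np.asarray(retrieved), np.asarray(truth)
    ee = 0.05 + 0.15 * truth                      # expected-error envelope
    within = np.mean(np.abs(retrieved - truth) <= ee)
    mae = np.mean(np.abs(retrieved - truth))
    rmse = np.sqrt(np.mean((retrieved - truth) ** 2))
    r2 = np.corrcoef(retrieved, truth)[0, 1] ** 2
    return within, mae, rmse, r2

truth = [0.10, 0.30, 0.50, 0.80]        # illustrative AERONET AODs
retrieved = [0.12, 0.28, 0.60, 0.78]    # illustrative satellite retrievals
within, mae, rmse, r2 = aod_validation(retrieved, truth)
```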

  1. An algorithm for detecting Trichodesmium surface blooms in the South Western Tropical Pacific

    Directory of Open Access Journals (Sweden)

    Y. Dandonneau

    2011-12-01

    Trichodesmium, a major colonial cyanobacterial nitrogen fixer, forms large blooms in NO3-depleted tropical oceans and enhances CO2 sequestration by the ocean due to its ability to fix dissolved dinitrogen. Thus, its importance in C and N cycles requires better estimates of its distribution at basin to global scales. However, existing algorithms to detect it from satellite have not yet been successful in the South Western Tropical Pacific (SP). Here, a novel algorithm (TRICHOdesmium SATellite, TRICHOSAT), based on radiance anomaly spectra (RAS) observed in SeaWiFS imagery, is used to detect Trichodesmium during the austral summertime in the SP (5° S–25° S, 160° E–170° W). Selected pixels are characterized by a restricted range of parameters quantifying RAS spectra (e.g. slope, intercept, curvature). The fraction of valid (non-cloudy) pixels identified as Trichodesmium surface blooms in the region is low (between 0.01 and 0.2 %), but is about 100 times higher than deduced from previous algorithms. At daily scales in the SP, this fraction represents a total ocean surface area varying from 16 to 48 km2 in Winter and from 200 to 1000 km2 in Summer (and at monthly scale, from 500 to 1000 km2 in Winter and from 3100 to 10 890 km2 in Summer, with a maximum of 26 432 km2 in January 1999). The daily distribution of Trichodesmium surface accumulations in the SP detected by TRICHOSAT is presented for the period 1998–2010, which demonstrates that the number of selected pixels peaks in November–February each year, consistent with field observations. This approach was validated with in situ observations of Trichodesmium surface accumulations in the Melanesian archipelago around New Caledonia, Vanuatu and the Fiji Islands for the same period.
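    Characterizing each pixel's radiance anomaly spectrum by its slope, intercept and curvature amounts to a low-order polynomial fit across the sensor bands. A sketch with invented anomaly values and invented selection thresholds (the real TRICHOSAT parameter ranges are not reproduced here):

```python
import numpy as np

# SeaWiFS visible-band centre wavelengths (nm).
wavelengths = np.array([412.0, 443.0, 490.0, 510.0, 555.0, 670.0])

def ras_shape(anomaly):
    """Quadratic fit to a pixel's radiance anomaly spectrum; returns the
    intercept, slope and curvature shape parameters."""
    x = (wavelengths - wavelengths.mean()) / 100.0   # centred, scaled
    curvature, slope, intercept = np.polyfit(x, anomaly, 2)
    return intercept, slope, curvature

# Invented anomaly spectrum for one pixel.
intercept, slope, curvature = ras_shape(
    np.array([0.8, 0.5, 0.1, -0.1, -0.3, 0.2]))
# Invented screening rule standing in for the restricted parameter ranges.
is_candidate = (slope < 0) and (curvature > 0)
```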

  2. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using the

  3. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using
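    The parameterization step of the PSRA fits each simulated point scatter function with a Gaussian. A minimal moment-based sketch of fitting a Gaussian profile (the PSRA itself fits PScFs produced by Monte Carlo transport runs; the profile below is synthetic):

```python
import numpy as np

def fit_gaussian(x, y):
    """Moment-based Gaussian fit: recover amplitude, mean and sigma of a
    sampled bell-shaped profile (a simple stand-in for the PScF
    parameterization step)."""
    dx = x[1] - x[0]
    area = y.sum() * dx
    mean = (x * y).sum() * dx / area
    sigma = np.sqrt(((x - mean) ** 2 * y).sum() * dx / area)
    amp = area / (sigma * np.sqrt(2 * np.pi))
    return amp, mean, sigma

x = np.linspace(-5.0, 5.0, 501)
# Synthetic point scatter function: amplitude 2.0, centre 0.3, width 1.2.
pscf = 2.0 * np.exp(-0.5 * ((x - 0.3) / 1.2) ** 2)
amp, mean, sigma = fit_gaussian(x, pscf)
```

Once parameterized, the scatter contribution can be regenerated from (amp, mean, sigma) for any measurement without rerunning the simulations.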

  4. Investigation of ALEGRA shock hydrocode algorithms using an exact free surface jet flow solution.

    Energy Technology Data Exchange (ETDEWEB)

    Hanks, Bradley Wright; Robinson, Allen C.

    2014-01-01

    Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.

  5. Accurate identification of microseismic P- and S-phase arrivals using the multi-step AIC algorithm

    Science.gov (United States)

    Zhu, Mengbo; Wang, Liguan; Liu, Xiaoming; Zhao, Jiaxuan; Peng, Ping'an

    2018-03-01

    Identification of P- and S-phase arrivals is the primary task in microseismic monitoring. In this study, a new multi-step AIC algorithm is proposed. This algorithm consists of P- and S-phase arrival pickers (P-picker and S-picker). The P-picker contains three steps: in step 1, a preliminary P-phase arrival window is determined by the waveform peak. Then a preliminary P-pick is identified using the AIC algorithm. Finally, the P-phase arrival window is narrowed based on the above P-pick, so that the P-phase arrival can be identified accurately by applying the AIC algorithm again. The S-picker contains five steps: in step 1, a narrow S-phase arrival window is determined based on the P-pick and the AIC curve of the amplitude biquadratic time series. In step 2, the S-picker automatically judges whether the S-phase arrival is clear enough to identify. In steps 3 and 4, the AIC extreme points are extracted, and the relationship between the local minima and the S-phase arrival is investigated. In step 5, the S-phase arrival is picked based on the maximum probability criterion. To evaluate the proposed algorithm, a P- and S-pick classification criterion is also established based on a source location numerical simulation. The field data tests show a considerable improvement of the multi-step AIC algorithm in comparison with manual picks and the original AIC algorithm. Furthermore, the technique performs well regardless of SNR. Even in the poor-quality signal group, in which the SNRs are below 5, the effective picking rates (the corresponding location error is <15 m) of P- and S-phase arrivals are still up to 80.9% and 76.4%, respectively.
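    The single-window AIC picker at the core of both the P- and S-pickers splits the window at every sample and scores a "noise before, signal after" model; the AIC minimum marks the arrival. A minimal sketch on a synthetic trace (not the paper's multi-step windowing logic):

```python
import numpy as np

def aic_pick(x):
    """Akaike Information Criterion picker: for each split point k, score
    the two-segment variance model; the AIC minimum marks the onset."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k]) + 1e-12   # guard log(0) on quiet segments
        v2 = np.var(x[k:]) + 1e-12
        aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

rng = np.random.default_rng(0)
onset = 300
trace = rng.normal(0.0, 0.1, 1000)                  # background noise
trace[onset:] += 3.0 * np.sin(0.2 * np.arange(700)) # arriving phase
pick = aic_pick(trace)
```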

  6. Method for Walking Gait Identification in a Lower Extremity Exoskeleton Based on C4.5 Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Qing Guo

    2015-04-01

    A gait identification method for a lower extremity exoskeleton is presented in order to identify the gait sub-phases in human-machine coordinated motion. First, a sensor layout for the exoskeleton is introduced. Taking the difference between human lower limb motion and human-machine coordinated motion into account, the walking gait is divided into five sub-phases: ‘double standing’, ‘right leg swing and left leg stance’, ‘double stance with right leg front and left leg back’, ‘right leg stance and left leg swing’, and ‘double stance with left leg front and right leg back’. The sensors include shoe pressure sensors, knee encoders, and thigh and calf gyroscopes, and are used to measure the contact force of the foot, and the knee joint angle and its angular velocity. Then, the five sub-phases of walking gait are identified by a C4.5 decision tree algorithm according to the data fusion of the sensors' information. Simulation results for the gait division show that identification accuracy can be guaranteed by the proposed algorithm. Through the exoskeleton control experiment, a division of five sub-phases for the human-machine coordinated walk is proposed. The experimental results verify this gait division and identification method, which makes the hydraulic cylinders retract ahead of time and improves the maximal walking velocity when the exoskeleton follows the person's motion.
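    C4.5 selects its splits by gain ratio: information gain normalized by the split's own entropy. A toy sketch with made-up discretized sensor readings (heel pressure, knee angle) and sub-phase labels; this illustrates the split criterion only, not the paper's trained tree:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5-style split criterion: information gain of splitting on
    `attr`, normalised by the split's own entropy."""
    total = entropy(labels)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    n = len(labels)
    gain = total - sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = entropy([row[attr] for row in rows])
    return gain / split_info if split_info else 0.0

# Made-up discretised sensor readings (purely illustrative).
rows = [{"heel": "high", "knee": "small"}, {"heel": "high", "knee": "large"},
        {"heel": "low", "knee": "small"}, {"heel": "low", "knee": "large"}]
labels = ["stance", "stance", "swing", "swing"]
```

Here heel pressure perfectly separates stance from swing, so the tree would split on it first.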

  7. Identification and characterization of the surface proteins of Clostridium difficile

    Energy Technology Data Exchange (ETDEWEB)

    Dailey, D.C.

    1988-01-01

    Several clostridial proteins were detected on the clostridial cell surface by sensitive radioiodination techniques. Two major proteins and six minor proteins comprised the radioiodinated proteins on the clostridial cell surface. Cellular fractionation of surface-radiolabeled C. difficile determined that the radioiodinated proteins were found in the cell wall fraction of C. difficile and, surprisingly, were also present in the clostridial membrane. Furthermore, an interesting phenomenon of disulfide crosslinking of the cell surface proteins of C. difficile was observed. Disulfide-linked protein complexes were found in both the membrane and cell wall fractions. In addition, the cell surface proteins of C. difficile were found to be released into the culture medium. In attempts to further characterize the clostridial proteins, recombinant DNA techniques were employed. In addition, the role of the clostridial cell surface proteins in the interactions of C. difficile with human PMNs was also investigated.

  8. Novel meta-surface design synthesis via nature-inspired optimization algorithms

    Science.gov (United States)

    Bayraktar, Zikri

    Heuristic numerical optimization algorithms have been gaining interest over the years as the computational power of digital computers increases at an unprecedented rate. While mature techniques such as the Genetic Algorithm increase their application areas, researchers also try to come up with new algorithms by observing the highly tuned processes provided by nature. In this dissertation, the well-known Genetic Algorithm (GA) will be utilized to tackle various novel electromagnetic optimization problems, along with a parallel implementation of the Clonal Selection Algorithm (CLONALG) and the newly introduced Wind Driven Optimization (WDO) technique. The utility of the CLONALG parallelization and the efficiency of the WDO will be illustrated by applying them to multi-dimensional and multi-modal electromagnetics problems such as antenna design and metamaterial surface synthesis. One of the metamaterial application areas is the design synthesis of 90-degree rotationally symmetric ultra-small unit cell artificial magnetic conducting (AMC) surfaces. AMCs are composite metallo-dielectric structures designed to behave as perfect magnetic conductors (PMC) over a certain frequency range, which exhibit a reflection coefficient magnitude of unity with a phase angle of zero degrees at the center of the band. The proposed designs consist of ultra-small frequency selective surface (FSS) unit cells that are tightly packed and highly intertwined, yet achieve remarkable AMC band performance and field of view when compared to current state-of-the-art AMCs. In addition, planar double-sided AMC (DSAMC) structures are introduced and optimized as AMC ground planes for low-profile antennas in composite platforms and as separator slabs for vertical antenna applications. The proposed designs do not possess complete metallic ground planes, which makes them ideal for composite and multi-antenna systems. The versatility of the DSAMC slabs is also illustrated.

  9. Radar target identification by natural resonances: Evaluation of signal processing algorithms

    Science.gov (United States)

    Lazarakos, Gregory A.

    1991-09-01

    When a radar pulse impinges upon a target, the resultant scattering process can be solved as a linear time-invariant (LTI) system problem. The system has a transfer function with poles and zeros. Previous work has shown that the poles depend only on the target's structure and geometry. This thesis evaluates the resonance estimation performance of two signal processing techniques: the Kumaresan-Tufts algorithm and the Cadzow-Solomon algorithm. Improvements are made to the Cadzow-Solomon algorithm. Both algorithms are programmed using MATLAB. Test data used to evaluate these algorithms include synthetic and integral-equation-generated signals, with and without additive noise, in addition to new experimental scattering data from a thin wire, aluminum spheres, and scale model aircraft.
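    The flavor of resonance (pole) estimation can be seen in a bare-bones Prony linear-prediction sketch; the Kumaresan-Tufts method adds backward prediction and SVD rank truncation for noise robustness, which this sketch omits:

```python
import numpy as np

def prony_poles(x, order):
    """Bare-bones Prony pole estimation: fit the linear-prediction model
    x[i] = a1*x[i-1] + ... + ap*x[i-p]; the roots of the characteristic
    polynomial are the discrete-time poles."""
    N = len(x)
    # Column j holds x[i-1-j] for rows i = order .. N-1.
    A = np.column_stack([x[order - 1 - j : N - 1 - j] for j in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.roots(np.concatenate(([1.0], -a)))

# Two synthetic damped resonances (poles z1, z2), noiseless.
z1, z2 = 0.95 * np.exp(0.4j), 0.85 * np.exp(1.1j)
samples = np.arange(60)
sig = z1 ** samples + z2 ** samples
poles = prony_poles(sig, order=2)
```

With noiseless data and the correct model order, both poles are recovered essentially exactly.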

  10. An algorithm for three-dimensional Monte-Carlo simulation of charge distribution at biofunctionalized surfaces

    KAUST Repository

    Bulyha, Alena

    2011-01-01

    In this work, a Monte-Carlo algorithm in the constant-voltage ensemble for the calculation of 3D charge concentrations at charged surfaces functionalized with biomolecules is presented. The motivation for this work is the theoretical understanding of biofunctionalized surfaces in nanowire field-effect biosensors (BioFETs). This work provides the simulation capability for the boundary layer that is crucial in the detection mechanism of these sensors; slight changes in the charge concentration in the boundary layer upon binding of analyte molecules modulate the conductance of nanowire transducers. The simulation of biofunctionalized surfaces poses special requirements on the Monte-Carlo simulations and these are addressed by the algorithm. The constant-voltage ensemble enables us to include the right boundary conditions; the DNA strands can be rotated with respect to the surface; and several molecules can be placed in a single simulation box to achieve good statistics in the case of low ionic concentrations relevant in experiments. Simulation results are presented for the leading example of surfaces functionalized with PNA and with single- and double-stranded DNA in a sodium-chloride electrolyte. These quantitative results make it possible to quantify the screening of the biomolecule charge due to the counter-ions around the biomolecules and the electrical double layer. The resulting concentration profiles show a three-layer structure and non-trivial interactions between the electric double layer and the counter-ions. The numerical results are also important as a reference for the development of simpler screening models. © 2011 The Royal Society of Chemistry.

  11. An algorithm to use higher order invariants for modelling potential energy surface of nanoclusters

    Science.gov (United States)

    Jindal, Shweta; Bulusu, Satya S.

    2018-02-01

    In order to fit the potential energy surface (PES) of gold nanoclusters, we have integrated bispectrum features with an artificial neural network (ANN) learning technique in this work. We have also devised an algorithm for selecting the frequencies that need to be coupled for extracting the phase information between different frequency bands. We have found that a higher-order invariant like the bispectrum is highly efficient in exploring the PES as compared to other invariants. The sensitivity of the bispectrum can also be exploited as an order parameter for calculating many thermodynamic properties of nanoclusters.
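    The bispectrum is a third-order invariant, B(f1, f2) = X(f1) X(f2) X*(f1+f2), which, unlike the power spectrum, retains the relative phase between frequency bands. For PES fitting the bispectrum is built from the atomic neighbour density rather than a 1-D series; the 1-D Fourier version below just illustrates why the invariant responds to coupled frequency triples:

```python
import numpy as np

t = np.arange(256)
# Three components where 35 = 10 + 25 forms a coupled triple.
x = (np.cos(2 * np.pi * 10 * t / 256) +
     np.cos(2 * np.pi * 25 * t / 256) +
     np.cos(2 * np.pi * 35 * t / 256))

X = np.fft.fft(x)

def bispec(k1, k2):
    """Bispectrum sample B(k1, k2) = X[k1] X[k2] X*[k1 + k2]."""
    return X[k1] * X[k2] * np.conj(X[k1 + k2])

coupled = abs(bispec(10, 25))    # all three frequencies present: large
uncoupled = abs(bispec(10, 30))  # 40-bin absent from the signal: ~zero
```

The bispectrum is large only where all three frequencies of the triple are present, which is the phase-coupling information the selection algorithm exploits.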

  12. Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative

    Science.gov (United States)

    Willet, Katherine; Venema, Victor; Williams, Claude; Aguilar, Enric; Jolliffe, Ian; Alexander, Lisa; Vincent, Lucie; Lund, Robert; Menne, Matt; Thorne, Peter; Auchmann, Renate; Warren, Rachel; Bronniman, Stefan; Thorarinsdotir, Thordis; Easterbrook, Steve; Gallagher, Colin; Lopardo, Giuseppina; Hausfather, Zeke; Berry, David

    2015-04-01

    Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect - we cannot be precise in the amount of warming for the globe and especially for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While progress in detection of, and adjustment for, inhomogeneities is continually advancing, monitoring effectiveness on large networks and gauging respective improvements in climate data quality is non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data - and thus, the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves the creation of synthetic worlds of surface temperature data, deliberate contamination of these with known errors, and then assessment of the ability of homogenisation algorithms to detect and remove these errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. There are five components to this work: 1. Create 30000 synthetic benchmark stations that look and feel like the real global temperature network, but do not contain any inhomogeneities: analog clean-worlds. 2. Design a set of error models which mimic the main types of inhomogeneities found in practice, and combine them with the analog clean-worlds to give analog error-worlds. 3. Engage with dataset creators to run their homogenisation algorithms blind on the analog error-world stations as they have done with the real data. 4. 
Design an assessment framework to gauge the degree to which analog error-worlds are returned to
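    The archetypal known error inserted into the analog error-worlds is a station break (a step change). A stripped-down relative of SNHT-style break detection, on a synthetic monthly series; this is illustrative only, not any operational homogenisation algorithm:

```python
import numpy as np

def detect_break(series):
    """Locate the most likely single step change by maximising the
    variance-weighted squared mean difference between the two segments."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    best_k, best_stat = 0, -np.inf
    for k in range(5, n - 5):
        stat = k * (n - k) / n * (x[:k].mean() - x[k:].mean()) ** 2
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.3, 240)   # 20 years of monthly anomalies
contaminated = clean.copy()
contaminated[150:] += 1.0           # inserted station break of +1.0
k = detect_break(contaminated)
```

Because the break size and position are known by construction, a benchmark can score how closely detected breaks match the inserted ones.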

  13. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user, and therefore cannot be used for a multi-user system. Even though they can be used to track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches the users from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490, compared with benchmark algorithms.

  14. Testing an alternative search algorithm for compound identification with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID'.

    Science.gov (United States)

    Oberacher, Herbert; Whitley, Graeme; Berger, Bernd; Weinmann, Wolfgang

    2013-04-01

    A tandem mass spectral database system consists of a library of reference spectra and a search program. State-of-the-art search programs show a high tolerance for variability in compound-specific fragmentation patterns produced by collision-induced decomposition and enable sensitive and specific 'identity search'. In this communication, performance characteristics of two search algorithms combined with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID' (Wiley Registry MSMS, John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were acquired on different instruments and, thus, covered a broad range of possible experimental conditions or were generated in silico. For each algorithm, more than 30,000 matches were performed. Statistical evaluation of the library search results revealed that principally both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity as it helps to avoid false positive matches (type I errors), but reduces sensitivity. Thus, particularly with sample spectra acquired on instruments differing in their setup from tandem-in-space type fragmentation, a comparably higher number of false negative matches (type II errors) were observed by searching the Wiley Registry MSMS. Copyright © 2013 John Wiley & Sons, Ltd.
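    The core of an 'identity search' is a spectral similarity score between the query spectrum and each library entry. A sketch using a plain normalized dot product over fragment m/z values, which is the simplest of the similarity measures used in MS/MS library searching (neither the NIST nor the MSforID scoring function exactly; spectra invented):

```python
import math

def match_score(query, reference):
    """Cosine similarity of two fragment spectra given as {m/z: intensity}."""
    mz = set(query) | set(reference)
    q = [query.get(m, 0.0) for m in mz]
    r = [reference.get(m, 0.0) for m in mz]
    dot = sum(a * b for a, b in zip(q, r))
    return dot / (math.sqrt(sum(a * a for a in q)) *
                  math.sqrt(sum(b * b for b in r)))

# Hypothetical query spectrum and two hypothetical library entries.
query = {91.0: 100.0, 119.0: 45.0, 147.0: 20.0}
library = {
    "compound A": {91.0: 95.0, 119.0: 50.0, 147.0: 15.0},
    "compound B": {77.0: 100.0, 105.0: 60.0},
}
best = max(library, key=lambda name: match_score(query, library[name]))
```

How strict the score threshold is set controls the sensitivity/specificity trade-off discussed above.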

  15. Hardware/firmware implementation of a soft sensor using an improved version of a fuzzy identification algorithm.

    Science.gov (United States)

    Garcia, Claudio; de Carvalho Berni, Cássio; de Oliveira, Carlos Eduardo Neri

    2008-04-01

    This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module, which can be applied to many complex plants, wherein a certain variable cannot be directly measured. It is implemented based on a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure and the premise parameters are defined based on the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and it runs online, estimating the target variable from other available variables. Tests have been performed using a simulated pH neutralization plant. The results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller.
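    The premise parameters are defined from an FCM partition of the data. A minimal numpy implementation of the standard FCM alternating updates (only the clustering step; the 'Limited Rules' identification then builds Takagi-Sugeno-Kang premises from the resulting partition, which is not shown):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Standard 1-D Fuzzy C-Means: alternate centre and membership updates
    with fuzzifier m until the partition settles."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))         # inverse-distance weights
        u = w / w.sum(axis=0)
    return centers, u

# Two well-separated operating regimes (illustrative 1-D data).
x = np.array([2.0, 2.2, 2.1, 7.9, 8.1, 8.0])
centers, u = fuzzy_c_means(x)
```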

  16. Testing the algorithms for automatic identification of errors on the measured quantities of the nuclear power plant. Verification tests

    International Nuclear Information System (INIS)

    Svatek, J.

    1999-12-01

    During the development and implementation of supporting software for the control room and emergency control centre at the Dukovany nuclear power plant it appeared necessary to validate the input quantities in order to assure operating reliability of the software tools. Therefore, the development of software for validation of the measured quantities of the plant data sources was initiated, and the software had to be debugged and verified. The report proposes and describes verification tests for the algorithms that automatically identify errors in the measured quantities of the NPP using homemade validation software. In particular, the algorithms under test validate the hot leg temperature at primary circuit loop no. 2 or 4 of the Dukovany-2 reactor unit, using data from the URAN and VK3 information systems recorded during 3 different days. (author)

  17. Locating Critical Circular and Unconstrained Failure Surface in Slope Stability Analysis with Tailored Genetic Algorithm

    Science.gov (United States)

    Pasik, Tomasz; van der Meij, Raymond

    2017-12-01

    This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are still uncommon in engineering practice, and few publications exist on truly free slip planes. The proposed method is an effective procedure resulting from the right combination of initial population type, selection, crossover and mutation methods. The procedure needs little computational effort to find the optimal, unconstrained slip plane. The methodology described in this paper is implemented in Mathematica. The implementation, along with further explanations, is presented in full so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed interpretation of the results. Two cases are compared with analyses described in earlier publications. The remaining two are practical slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing the possibilities and limitations of using the genetic algorithm in the context of the slope stability problem.
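The GA machinery the authors tailor (initial population, selection, crossover, mutation) can be sketched with a generic real-coded GA. The objective below is a made-up stand-in for a factor-of-safety evaluation over circle parameters (xc, yc, r), not an actual slope stability computation:

```python
import random

def ga_minimize(fitness, bounds, pop_size=40, gens=60, pm=0.2, seed=1):
    """Generic real-coded GA sketch: tournament selection, arithmetic
    crossover, Gaussian mutation; `bounds` lists (lo, hi) per gene."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new_pop = [best[:]]                              # elitism
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)    # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            a = rng.random()                             # arithmetic crossover
            child = [a * x + (1.0 - a) * y for x, y in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):        # Gaussian mutation
                if rng.random() < pm:
                    child[i] = min(max(child[i] + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)
    return best

# Made-up stand-in for a factor-of-safety evaluation over (xc, yc, r);
# its minimum (1.2) sits at circle centre (5, 12) with radius 8.
fos = lambda g: (g[0] - 5.0) ** 2 + (g[1] - 12.0) ** 2 + (g[2] - 8.0) ** 2 + 1.2
best = ga_minimize(fos, [(0.0, 20.0), (5.0, 30.0), (1.0, 15.0)])
```

In the real problem, the fitness call would run a rigid-equilibrium factor-of-safety analysis for the candidate slip surface, which is where virtually all of the computational effort goes.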

  18. Moderate Resolution Imaging Spectroradiometer (MODIS) MOD21 Land Surface Temperature and Emissivity Algorithm Theoretical Basis Document

    Science.gov (United States)

    Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.

    2016-01-01

    This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands 29, 31, and 32, and will be generated for data from the NASA-EOS AM and PM platforms. This is version 1.0 of the ATBD, and the goal is to maintain a 'living' version of this document, with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32, and from a physics-based day/night algorithm (Wan and Li 1997), which produces a 5-km (C4) and 6-km (C5) LST and emissivity product for seven MODIS bands: 20, 22, 23, 29, and 31-33.

  19. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Full Text Available Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of some stereo matching algorithms (local and global) which are very popular in both photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
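The core of SGM is its 1-D cost aggregation with the small and large smoothness penalties P1 and P2. A minimal single-direction sketch (left-to-right along one scanline, not a full multi-path implementation) looks like:

```python
import numpy as np

def aggregate_scanline(cost, P1=1.0, P2=4.0):
    """SGM-style aggregation along one scanline:
    L(x,d) = C(x,d) + min(L(x-1,d), L(x-1,d+-1)+P1, min_d' L(x-1,d')+P2)
             - min_d' L(x-1,d')."""
    w, nd = cost.shape
    L = np.zeros_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        m = prev.min()
        up = np.r_[prev[1:], np.inf] + P1        # neighbour d+1
        dn = np.r_[np.inf, prev[:-1]] + P1       # neighbour d-1
        L[x] = cost[x] + np.minimum.reduce(
            [prev, up, dn, np.full(nd, m + P2)]) - m
    return L

# Toy cost volume: disparity 1 is cheapest at every pixel along the line.
cost = np.ones((5, 3))
cost[:, 1] = 0.0
L = aggregate_scanline(cost)
```

OpenCV's StereoSGBM exposes variants of exactly these P1/P2 penalties; a full SGM sums such aggregated costs over several scanline directions before the winner-takes-all disparity selection.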

  20. The development of small-scale mechanization means positioning algorithm using radio frequency identification technology in industrial plants

    Science.gov (United States)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for the construction of positioning and control systems for small mechanization in industrial plants based on radio frequency identification methods, which will be the basis for creating highly efficient intelligent systems for controlling the product movement in industrial enterprises. The main standards that are applied in the field of product movement control automation and radio frequency identification are considered. The article reviews modern publications and automation systems for the control of product movement developed by domestic and foreign manufacturers. It describes the developed algorithm for positioning of small-scale mechanization means in an industrial enterprise. Experimental studies in laboratory and production conditions have been conducted and described in the article.

  1. Applicability of data mining algorithms in the identification of beach features/patterns on high-resolution satellite data

    Science.gov (United States)

    Teodoro, Ana C.

    2015-01-01

    The available beach classification algorithms and sediment budget models are mainly based on in situ parameters, usually unavailable for several coastal areas. A morphological analysis using remotely sensed data is a valid alternative. This study focuses on the application of data mining techniques, particularly decision trees (DTs) and artificial neural networks (ANNs), to an IKONOS-2 image in order to identify beach features/patterns in a stretch of the northwest coast of Portugal. Based on knowledge of the coastal features, five classes were defined. In the identification of beach features/patterns, the ANN algorithm presented an overall accuracy of 98.6% and a kappa coefficient of 0.97. The best DT algorithm (with pruning) presented an overall accuracy of 98.2% and a kappa coefficient of 0.97. The results obtained through the ANNs and DTs were in agreement. However, the ANN presented a classification more sensitive to rip currents. The use of ANNs and DTs for beach classification from remotely sensed data resulted in increased classification accuracy when compared with traditional classification methods. The association of remotely sensed high-spatial-resolution data and data mining algorithms is an effective methodology with which to identify beach features/patterns.
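The two accuracy figures reported above (overall accuracy and the kappa coefficient) follow directly from a confusion matrix. A small sketch, using a made-up two-class matrix rather than the paper's five-class results:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Made-up 2-class confusion matrix (not the paper's data).
acc, kappa = accuracy_and_kappa([[48, 2], [3, 47]])
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy in classification studies like this one.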

  2. A parallel implementation of the network identification by multiple regression (NIR) algorithm to reverse-engineer regulatory gene networks.

    Science.gov (United States)

    Gregoretti, Francesco; Belcastro, Vincenzo; di Bernardo, Diego; Oliva, Gennaro

    2010-04-21

    The reverse engineering of gene regulatory networks using gene expression profile data has become crucial to gain novel biological knowledge. Large amounts of data that need to be analyzed are currently being produced due to advances in microarray technologies. Using current reverse engineering algorithms to analyze large data sets can be very computationally intensive. These emerging computational requirements can be met using parallel computing techniques. It has been shown that the Network Identification by multiple Regression (NIR) algorithm performs better than other ready-to-use reverse engineering software. However, it cannot be used with large networks with thousands of nodes--as is the case in biological networks--due to its high time and space complexity. In this work we overcome this limitation by designing and developing a parallel version of the NIR algorithm. The new implementation of the algorithm achieves very good accuracy even for large gene networks, improving our understanding of gene regulatory networks, which is crucial for a wide range of biomedical applications.

  3. Bioinformatics and mass spectrometry for microorganism identification: proteome-wide post-translational modifications and database search algorithms for characterization of intact H. pylori.

    Science.gov (United States)

    Demirev, P A; Lin, J S; Pineda, F J; Fenselau, C

    2001-10-01

    MALDI-TOF mass spectrometry has been coupled with Internet-based proteome database search algorithms in an approach for direct microorganism identification. This approach is applied here to characterize intact H. pylori (strain 26695), a Gram-negative bacterium and among the most ubiquitous human pathogens. A procedure for including a specific and common post-translational modification, N-terminal Met cleavage, in the search algorithm is described. Accounting for post-translational modifications in putative protein biomarkers improves the identification reliability by at least an order of magnitude. The influence of other factors, such as the number of detected biomarker peaks, proteome size, spectral calibration, and mass accuracy, on the microorganism identification success rate is illustrated as well.
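The Met-cleavage bookkeeping can be sketched as a mass-matching step: each database protein is tested both intact and minus the N-terminal methionine residue (monoisotopic residue mass about 131.04 Da). The protein names and masses below are made-up placeholders, not H. pylori entries:

```python
# Sketch of biomarker mass matching with an N-terminal Met-loss channel.
# Protein names and masses are made-up placeholders, not real database entries.
MET = 131.04049                       # monoisotopic Met residue mass, Da

proteins = {"protA": 9531.20, "protB": 11240.77}

def match(observed, tol=0.5):
    """Return (protein, form) pairs whose mass lies within `tol` Da of the
    observed biomarker mass, allowing for N-terminal Met loss."""
    hits = []
    for name, mass in proteins.items():
        if abs(observed - mass) <= tol:
            hits.append((name, "intact"))
        if abs(observed - (mass - MET)) <= tol:
            hits.append((name, "N-terminal Met loss"))
    return hits
```

A real pipeline would, of course, score many observed peaks against a whole proteome and rank candidate organisms by the number of matched biomarkers.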

  4. ParMap, an algorithm for the identification of small genomic insertions and deletions in nextgen sequencing data

    Directory of Open Access Journals (Sweden)

    Palomero Teresa

    2010-05-01

    Full Text Available Abstract Background Next-generation sequencing produces high-throughput data, albeit with greater error and shorter reads than traditional Sanger sequencing methods. This complicates the detection of genomic variations, especially, small insertions and deletions. Findings Here we describe ParMap, a statistical algorithm for the identification of complex genetic variants, such as small insertion and deletions, using partially mapped reads in nextgen sequencing data. Conclusions We report ParMap's successful application to the mutation analysis of chromosome X exome-captured leukemia DNA samples.

  5. Derivation of Land Surface Temperature for Landsat-8 TIRS Using a Split Window Algorithm

    Directory of Open Access Journals (Sweden)

    Offer Rozenstein

    2014-03-01

    Full Text Available Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimating the input parameters is performed. The accuracy of the current development was assessed using simulated MODTRAN data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development is leading to progress in the determination of LST by Landsat-8 TIRS.
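A generic split-window functional form, with emissivity and water-vapour terms, can be sketched as follows. The coefficient set below is purely illustrative; operational coefficients must be regressed from radiative-transfer simulations such as MODTRAN:

```python
def split_window_lst(t_i, t_j, eps_mean, eps_diff, w, c):
    """Generic split-window form:
    LST = T_i + c1*dT + c2*dT**2 + c0
          + (c3 + c4*w)*(1 - eps_mean) + (c5 + c6*w)*eps_diff,
    with dT = T_i - T_j (brightness temperatures, K) and
    w = column water vapour (g/cm^2)."""
    c0, c1, c2, c3, c4, c5, c6 = c
    dt = t_i - t_j
    return (t_i + c1 * dt + c2 * dt ** 2 + c0
            + (c3 + c4 * w) * (1.0 - eps_mean) + (c5 + c6 * w) * eps_diff)

# Illustrative coefficients only, not the paper's calibrated values.
C = (-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40)
lst = split_window_lst(300.2, 299.1, 0.98, 0.005, 2.0, C)
```

The brightness-temperature difference term corrects for atmospheric absorption, while the last two terms correct for mean emissivity and the band-to-band emissivity difference.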

  6. Continuity Evaluation of Surface Retrieval Algorithms for ICESat-2/ATLAS Photon-Counting Laser Altimetry Data Products

    Science.gov (United States)

    Leigh, H. W.; Magruder, L. A.

    2016-12-01

    The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission team is developing algorithms to produce along-track and gridded science data products for a variety of surface types, including land ice, sea ice, land, and ocean. The ATL03 data product will contain geolocated photons for each of the six beams, and will incorporate geophysical corrections as well as preliminary photon classifications (signal vs. background). Higher level along-track and gridded data products for various surface types are processed using the ATL03 geolocated photons. The data processing schemes developed for each of the surface types rely on independent surface finding algorithms optimized to identify ground, canopy, water, or ice surfaces, as appropriate for a particular geographical location. In such cases where multiple surface types are present in close proximity to one another (e.g. land-ocean interface), multiple surface finding algorithms may be employed to extract surfaces along the same segment of a lidar profile. This study examines the effects on continuity of the various surface finding algorithms, specifically for littoral/coastal areas. These areas are important to the cryospheric, hydrologic, and biospheric communities in that continuity between the respective surface elevation products is required to fully utilize the information provided by ICESat-2 and its Advanced Topographic Laser Altimeter System (ATLAS) instrument.

  7. Identification and characterization of Vibrio cholerae surface proteins by radioiodination

    International Nuclear Information System (INIS)

    Richardson, K.; Parker, C.D.

    1985-01-01

    Whole cells and isolated outer membrane from Vibrio cholerae (Classical, Inaba) were radiolabeled with Iodogen or Iodo-beads as catalyst. Radiolabeling of whole cells was shown to be surface specific by sodium dodecyl sulfate-urea polyacrylamide gel electrophoresis of whole cells and cell fractions. Surface-labeled whole cells regularly showed 16 distinguishable protein species, of which nine were found in radiolabeled outer membrane preparations obtained by a lithium chloride- lithium acetate procedure. Eight of these proteins were found in outer membranes prepared by sucrose density gradient centrifugation and Triton X-100 extraction of radiolabeled whole cells. The mobility of several proteins was shown to be affected by temperature, and the major protein species exposed on the cell surface was shown to consist of at least two different peptides

  8. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Liang Zhang

    2015-08-01

    Full Text Available Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvements as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. Meanwhile, high-precision Generalized Iterative Closest Points (GICP) is utilized to register a point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University’s datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy.
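The RANSAC stage for the motion transformation can be sketched as repeated 3-point rigid fits (Kabsch/SVD) with inlier counting. This is a generic sketch on synthetic 3-D correspondences, not the paper's improved variant:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, n_iter=100, thresh=0.05, seed=0):
    """Keep the 3-point rigid transform with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inl = int((err < thresh).sum())
        if inl > best_inl:
            best_inl, best = inl, (R, t)
    return best, best_inl

# Synthetic correspondences: known rotation/translation plus one bad match.
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
th = 0.5
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
Q[0] += 5.0                                      # simulated outlier match
(R_est, t_est), n_inliers = ransac_rigid(P, Q)
```

In a full pipeline, the RANSAC estimate would then seed a GICP refinement over the dense point clouds, as the abstract describes.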

  9. An Assessment of Surface Water Detection Algorithms for the Tahoua Region, Niger

    Science.gov (United States)

    Herndon, K. E.; Muench, R.; Cherrington, E. A.; Griffin, R.

    2017-12-01

    The recent release of several global surface water datasets derived from remotely sensed data has allowed for unprecedented analysis of the earth's hydrologic processes at a global scale. However, some of these datasets fail to identify important sources of surface water, especially small ponds, in the Sahel, an arid region of Africa that forms a border zone between the Sahara Desert to the north, and the savannah to the south. These ponds may seem insignificant in the context of wider, global-scale hydrologic processes, but smaller sources of water are important for local and regional assessments. Particularly, these smaller water bodies are significant sources of hydration and irrigation for nomadic pastoralists and smallholder farmers throughout the Sahel. For this study, several methods of identifying surface water from Landsat 8 OLI and Sentinel 1 SAR data were compared to determine the most effective means of delineating these features in the Tahoua Region of Niger. The Modified Normalized Difference Water Index (MNDWI) had the best performance when validated against very high resolution World View 3 imagery, with an overall accuracy of 99.48%. This study reiterates the importance of region-specific algorithms and suggests that the MNDWI method may be the best for delineating surface water in the Sahelian ecozone, likely due to the nature of the exposed geology and lack of dense green vegetation.
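The winning index is a one-line band ratio. A minimal sketch, where the band values are made-up reflectances and the zero threshold is only a common starting point:

```python
import numpy as np

def mndwi(green, swir):
    """Modified NDWI (Xu, 2006): (Green - SWIR) / (Green + SWIR).
    Water tends toward positive values; 0 is a common starting threshold."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir + 1e-12)

# Made-up reflectances: water is relatively bright in green, dark in SWIR.
idx = mndwi([0.08, 0.20], [0.30, 0.05])
water_mask = idx > 0.0
```

For Landsat 8 OLI, green and SWIR would typically be bands 3 and 6; the optimal threshold is usually tuned per scene against reference imagery, as done in this study.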

  10. Effects of spectrometer band pass, sampling, and signal-to-noise ratio on spectral identification using the Tetracorder algorithm

    Science.gov (United States)

    Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.

    2003-01-01

    Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition in which absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. The sampling interval should not be broadened to
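The simulation methodology (convolving library spectra to a sensor's band pass, then adding noise per trial) can be sketched generically. Gaussian band responses and the wavelengths below are illustrative assumptions, not the study's actual sensor models:

```python
import numpy as np

def convolve_to_sensor(wl, refl, centers, fwhm):
    """Resample a finely sampled reflectance spectrum to Gaussian band passes
    centred at `centers` with a shared FWHM (same wavelength units as `wl`)."""
    sigma = fwhm / 2.3548                        # FWHM -> Gaussian sigma
    bands = []
    for c in centers:
        w = np.exp(-0.5 * ((wl - c) / sigma) ** 2)
        bands.append(float((refl * w).sum() / w.sum()))
    return np.array(bands)

# Sanity check: a flat spectrum must stay flat under band-pass convolution;
# random noise is then added per identification trial at a chosen SNR.
wl = np.linspace(2.0, 2.5, 501)                  # micrometres (illustrative)
flat = np.full_like(wl, 0.3)
bands = convolve_to_sensor(wl, flat, centers=[2.1, 2.2, 2.3], fwhm=0.01)
rng = np.random.default_rng(0)
noisy = bands + rng.normal(0.0, 0.3 / 50.0, size=bands.size)   # SNR ~ 50
```

Repeating the noise step many times and tabulating how often the identification algorithm picks the correct library entry yields the success-rate curves described above.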

  11. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    Science.gov (United States)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic; instead, it is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, is distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. 
This minimizes the need to move large amounts
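The two-stage idea (an unsupervised clustering pass that compresses normal traffic, followed by an anomaly score) can be sketched with k-means plus a distance-to-centre score standing in for the SVM-based second stage. This is a simplification of the paper's design, on made-up feature vectors:

```python
import numpy as np

def kmeans(X, k=2, n_iter=20, seed=0):
    """Stage 1 sketch: cluster normal-traffic feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

def anomaly_scores(X, centers):
    """Stage 2 sketch: distance to the nearest 'normal' cluster centre stands
    in for the one-class-SVM decision function used in the paper."""
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1).min(axis=1)

# Made-up 2-D traffic features: two normal modes, then one attack-like probe.
X_norm = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
centers, labels = kmeans(X_norm, k=2)
scores = anomaly_scores(np.array([[0.0, 0.05], [20.0, 20.0]]), centers)
```

On a WSN node, stage 1 would run over the cross-layer feature vectors to yield a compact summary, and only that summary would feed the heavier stage-2 detector.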

  12. A Novel Algorithm for Feature Level Fusion Using SVM Classifier for Multibiometrics-Based Person Identification

    Directory of Open Access Journals (Sweden)

    Ujwalla Gawande

    2013-01-01

    Full Text Available Recent times have witnessed many advancements in the fields of biometrics and multimodal biometrics. This is typically observed in the areas of security, privacy, and forensics. Even for the best unimodal biometric systems, it is often not possible to achieve a high recognition rate. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as nonuniversality, and offer lower false acceptance and higher genuine acceptance rates. More reliable recognition performance is achievable as multiple pieces of evidence of the same identity are available. The work presented in this paper is focused on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using the Haar wavelet-based technique. A novel feature-level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system using the features extracted. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. From the simulation results, it is evident that our algorithm has a higher recognition rate and a much lower false rejection rate compared to existing approaches.
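A minimal sketch of feature-level fusion with a Mahalanobis-distance matcher: unimodal iris and fingerprint vectors are concatenated and compared under an inverse covariance learned from a made-up training set. The paper's actual features and fusion rule differ:

```python
import numpy as np

def fuse(iris_feat, finger_feat):
    """Feature-level fusion sketch: plain concatenation of unimodal vectors
    (the paper's actual fusion rule differs)."""
    return np.concatenate([iris_feat, finger_feat])

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between two fused feature vectors."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

# Made-up fused training templates; the covariance is learned from them.
train = np.array([[0.1, 0.9, 0.3, 0.7],
                  [0.2, 0.8, 0.4, 0.6],
                  [0.0, 1.0, 0.2, 0.8]])
cov_inv = np.linalg.pinv(np.cov(train.T))        # pinv: covariance is singular
probe = fuse(np.array([0.1, 0.9]), np.array([0.3, 0.7]))
gallery = fuse(np.array([0.2, 0.8]), np.array([0.4, 0.6]))
score = mahalanobis(probe, gallery, cov_inv)
```

The match score (or the fused vector itself) would then be fed to the SVM classifier for the final accept/reject decision.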

  13. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    Energy Technology Data Exchange (ETDEWEB)

    Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart
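The volumetric overlap metrics reported above can be computed directly from binary masks. A sketch under the usual interpretation of VOR as intersection over union (the paper's exact definitions may differ):

```python
import numpy as np

def overlap_metrics(ref, seg):
    """Overlap ratios for binary masks, assuming VOR = |A∩B|/|A∪B|,
    FPRV = |B−A|/|A∪B| and FNRV = |A−B|/|A∪B| (the three sum to 1)."""
    ref = np.asarray(ref, dtype=bool)
    seg = np.asarray(seg, dtype=bool)
    union = np.logical_or(ref, seg).sum()
    vor = np.logical_and(ref, seg).sum() / union
    fprv = np.logical_and(~ref, seg).sum() / union
    fnrv = np.logical_and(ref, ~seg).sum() / union
    return vor, fprv, fnrv

# Toy 1-D masks: the reference covers 3 voxels, the segmentation gets 2 of them.
metrics = overlap_metrics([1, 1, 1, 0], [1, 1, 0, 1])
```

The surface-distance metrics (ASASD, ASSSD, MSSD) additionally require extracting the mask boundaries and computing point-to-surface distances, which is substantially more involved.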

  14. Identification of Surface Exposed Elementary Body Antigens of ...

    African Journals Online (AJOL)

    This study sought to identify the surface exposed antigenic components of Cowdria ruminantium elementary body (EB) by biotin labeling, determine effect of reducing and non-reducing conditions and heat on the mobility of these antigens and their reactivity to antibodies from immunized animals by Western blotting.

  15. Identification of modal strains using sub-microstrain FBG data and a novel wavelength-shift detection algorithm

    Science.gov (United States)

    Anastasopoulos, Dimitrios; Moretti, Patrizia; Geernaert, Thomas; De Pauw, Ben; Nawrot, Urszula; De Roeck, Guido; Berghmans, Francis; Reynders, Edwin

    2017-03-01

    The presence of damage in a civil structure alters its stiffness and consequently its modal characteristics. The identification of these changes can provide engineers with useful information about the condition of a structure and constitutes the basic principle of vibration-based structural health monitoring. While eigenfrequencies and mode shapes are the most commonly monitored modal characteristics, their sensitivity to structural damage may be low relative to their sensitivity to environmental influences. Modal strains or curvatures could offer an attractive alternative, but current measurement techniques encounter difficulties in capturing, with sufficient accuracy, the very small strain (sub-microstrain) levels occurring during ambient or operational excitation. This paper investigates the ability to obtain sub-microstrain accuracy with standard fiber-optic Bragg gratings using a novel optical signal processing algorithm that identifies the wavelength shift with high accuracy and precision. The novel technique is validated in an extensive experimental modal analysis test on a steel I-beam which is instrumented with FBG sensors at its top and bottom flanges. The raw wavelength FBG data are processed into strain values using both a novel correlation-based processing technique and a conventional peak-tracking technique. Subsequently, the strain time series are used to identify the beam's modal characteristics. Finally, the accuracy of both algorithms in the identification of modal characteristics is extensively investigated.
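A correlation-based wavelength-shift estimator of the kind described can be sketched as cross-correlation plus a three-point parabolic peak refinement. This is a generic sub-sample estimator on a synthetic spectrum, not the authors' exact algorithm, and it assumes baseline-free data:

```python
import numpy as np

def subsample_shift(ref, meas):
    """Shift of `meas` relative to `ref`, in samples, from the cross-
    correlation peak refined with a 3-point parabolic fit."""
    n = len(ref)
    xc = np.correlate(meas, ref, mode="full")     # assumes baseline-free data
    k = int(np.argmax(xc))
    if 0 < k < len(xc) - 1:                       # sub-sample refinement
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return k - (n - 1)                            # zero lag sits at index n-1

# Synthetic FBG-like reflection peak shifted by 0.3 samples.
x = np.arange(200, dtype=float)
ref = np.exp(-0.5 * ((x - 100.0) / 8.0) ** 2)
meas = np.exp(-0.5 * ((x - 100.3) / 8.0) ** 2)
shift = subsample_shift(ref, meas)
```

Multiplying the recovered shift by the interrogator's wavelength step, and then applying the FBG strain-sensitivity coefficient, converts it to a strain estimate.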

  16. Application of the MOVE algorithm for the identification of reduced order models of a core of a BWR type reactor

    International Nuclear Information System (INIS)

    Victoria R, M.A.; Morales S, J.B.

    2005-01-01

    In the present work, the modified optimal-volume ellipsoid (MOVE) algorithm is applied to a reduced-order model, consisting of 5 differential equations, of the core of a boiling water reactor (BWR), with the purpose of estimating the parameters that model the dynamics. The viability of an analysis that calculates the global dynamic parameters determining the stability of the system, together with the uncertainty of the estimate, is examined. The MOVE algorithm is a method for the parametric identification of systems, in particular for parameter-set estimation (PSE). It seeks the ellipsoid of smallest volume that is guaranteed to contain the true values of the model parameters. PSE MOVE is a recursive identification method that can handle bounded noise and weight it; the ellipsoid representation is advantageous due to its easy mathematical handling in the computer, and the results it yields are very useful for robust control design since, in general, the smaller the volume of the ellipsoid, the better the performance of the controlled system. A comparison with other methods presented in the literature for estimating the decay ratio (DR) of a BWR is presented. (Author)

  17. An Algorithm for Retrieving Land Surface Temperatures Using VIIRS Data in Combination with Multi-Sensors

    Science.gov (United States)

    Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao

    2014-01-01

    A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imaging Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, atmospheric transmittance, is generally computed from water vapor content, but a water vapor channel is absent in VIIRS data. In order to overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. The analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows that the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. Comparison with ground station LST data indicates a mean retrieval accuracy of −0.395 K and a standard deviation of 1.490 K in regions with vegetation and water cover. In addition, the retrieval results for the test data have been compared with the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products; 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% are within the range of ±1 to ±2 K. In conclusion, with the advantages of multiple sensors fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919

  18. Therapeutic eyelids hygiene in the algorithms of prevention and treatment of ocular surface diseases

    Directory of Open Access Journals (Sweden)

    V. N. Trubilin

    2016-01-01

    Full Text Available When acute inflammation of the anterior eye segment has resolved, ophthalmologists often face a situation in which signs of acute inflammation are absent while patients still complain of persistent discomfort, which causes dissatisfaction with the treatment. These complaints are typically caused by disturbed tear production. It is no accident that a new group of diseases has been recognized: diseases of the ocular surface. The ocular surface is a complex biological system, including the epithelium of the conjunctiva, cornea and limbus, as well as the marginal edge of the eyelid and the meibomian gland ducts. Pathological processes in the conjunctiva, cornea and eyelids are linked to tear production. Ophthalmologists prescribe tear substitutes, which provide short-term relief to patients. However, given that the lipid component of the tear film plays the key role in preserving its stability, eyelid hygiene is the basis of treatment for dry eye associated with ocular surface diseases. Eyelid hygiene maintains normal gland function, restores metabolic processes in the skin and ensures the formation of a complete tear film. Protection of the eyelids, especially the marginal edge, from aggressive environmental agents, infections and parasites is the basis for the prevention and treatment of blepharitis and dry eye syndrome. The most common clinical situations and the algorithms for their treatment and prevention are discussed in the article: dysfunction of the meibomian glands; demodectic, seborrheic, staphylococcal and allergic blepharitis; hordeolum (stye) and chalazion. The prevention of keratoconjunctival xerosis (in the pre- and postoperative period, caused by contact lenses, computer vision syndrome, or in remission after acute inflammation of the conjunctiva and cornea) is also presented. The first part of the article presents the treatment and prevention algorithms for dysfunction of the meibomian glands, as well as

  19. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    Science.gov (United States)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross-correlation analysis of digitised grey-scale patterns is based on at least two images which are compared with each other. Comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surrounding of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications that could be used for advancing homeland security. Because the cross-correlation algorithm in some sense imitates some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis, which could be of special interest for homeland security.
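The core operation — comparing local intensity submatrices around predefined points of interest via two-dimensional cross correlation — can be sketched as follows. This is a minimal integer-pixel illustration; real microDAC/nanoDAC implementations add subpixel interpolation, and all sizes here are invented.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_point(ref, cur, y, x, half=4, search=3):
    """Find the integer-pixel displacement of the patch around (y, x) in `ref`
    by maximizing NCC over a small search window in `cur`."""
    patch = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_dyx = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y + dy - half:y + dy + half + 1,
                       x + dx - half:x + dx + half + 1]
            score = ncc(patch, cand)
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx

# Synthetic test: shift a random texture by (1, 2) pixels and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
shifted = np.roll(np.roll(img, 1, axis=0), 2, axis=1)
print(track_point(img, shifted, 20, 20))  # → (1, 2)
```

Repeating `track_point` over a grid of points of interest yields the displacement field used for deformation analysis.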

  20. Noninvasive Fetal Electrocardiography Part I: Pan-Tompkins' Algorithm Adaptation to Fetal R-peak Identification.

    Science.gov (United States)

    Agostinelli, Angela; Marcantoni, Ilaria; Moretti, Elisa; Sbrollini, Agnese; Fioretti, Sandro; Di Nardo, Francesco; Burattini, Laura

    2017-01-01

    Indirect fetal electrocardiography is preferable to direct fetal electrocardiography because it is noninvasive and applicable not only during labor but also in the final period of pregnancy. Still, the former is strongly affected by noise, so that even R-peak detection (which is essential for fetal heart-rate evaluations and subsequent processing procedures) is challenging. Some fetal studies have applied the Pan-Tompkins' algorithm, which, however, was originally designed for adult applications. Thus, this work evaluated the suitability of the Pan-Tompkins' algorithm for fetal applications, and proposed fetal adjustments and optimizations to improve it. Both the Pan-Tompkins' algorithm and its improved version were applied to the "Abdominal and Direct Fetal Electrocardiogram Database" and to the "Noninvasive Fetal Electrocardiography Database" of Physionet. R-peak detection accuracy was quantified by computation of positive-predictive value, sensitivity and F1 score. When applied to the "Abdominal and Direct Fetal Electrocardiogram Database", the accuracy of the improved fetal Pan-Tompkins' algorithm was significantly higher than the standard (positive-predictive value: 0.94 vs. 0.79; sensitivity: 0.95 vs. 0.80; F1 score: 0.94 vs. 0.79; P<0.05 in all cases) on indirect fetal electrocardiograms, whereas both methods performed similarly on direct fetal electrocardiograms (positive-predictive value, sensitivity and F1 score all close to 1). The improved fetal Pan-Tompkins' algorithm was found to be superior to the standard also when applied to the "Noninvasive Fetal Electrocardiography Database" (positive-predictive value: 0.68 vs. 0.55, P<0.05; sensitivity: 0.56 vs. 0.46, P=0.23; F1 score: 0.60 vs. 0.47, P=0.11). In indirect fetal electrocardiographic applications, the improved fetal Pan-Tompkins' algorithm is to be preferred over the standard, since it provides higher R-peak detection accuracy for heart-rate evaluations and subsequent processing.
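The classic Pan-Tompkins stages that the paper adapts can be sketched as follows. This is a deliberately simplified illustration on synthetic data: the band-pass filtering stage and the adaptive two-threshold logic of the original algorithm are reduced to a fixed threshold, and all rates and window lengths are assumed values.

```python
import numpy as np

def pan_tompkins_peaks(ecg, fs, win_ms=150, refractory_ms=200):
    """Simplified Pan-Tompkins chain: derivative -> squaring ->
    moving-window integration -> thresholded peak picking with a
    refractory period. (The original also band-pass filters first.)"""
    deriv = np.diff(ecg, prepend=ecg[0])
    squared = deriv ** 2
    win = max(1, int(fs * win_ms / 1000))
    mwi = np.convolve(squared, np.ones(win) / win, mode="same")
    thr = 0.5 * mwi.max()                       # fixed threshold stand-in
    refractory = int(fs * refractory_ms / 1000)
    peaks, last = [], -refractory
    for i in range(1, len(mwi) - 1):
        if (mwi[i] > thr and mwi[i] >= mwi[i - 1]
                and mwi[i] >= mwi[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

# Synthetic "ECG": narrow spikes at known positions on a low-noise baseline.
fs = 250
sig = 0.01 * np.random.default_rng(1).standard_normal(5 * fs)
true_peaks = [125, 375, 625, 875, 1125]
for p in true_peaks:
    sig[p] += 1.0
found = pan_tompkins_peaks(sig, fs)
print(len(found))  # → 5
```

The fetal adjustments discussed in the paper would amount to retuning exactly these knobs (pass band, integration window, refractory period) for the faster fetal heart rate.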

  1. Application of Gauss algorithm and Monte Carlo simulation to the identification of aquifer parameters

    Science.gov (United States)

    Durbin, Timothy J.

    1983-01-01

    The Gauss optimization technique can be used to identify the parameters of a model of a groundwater system for which the parameter identification problem is formulated as a least squares comparison between the response of the prototype and the response of the model. Unavoidable uncertainty in the true stress on the prototype and in the true response of the prototype to that stress will introduce errors into the parameter identification problem. A method for evaluating errors in the predictions of future water levels due to errors in recharge estimates was demonstrated. The method involves a Monte Carlo simulation of the parameter identification problem and of the prediction problem. The steps in the method are: (1) to prescribe the distribution of the recharge estimates; (2) to use this distribution to generate random sets of recharge estimates; (3) to use the Gauss optimization technique to identify the corresponding set of parameter estimates for each set of recharge estimates; (4) to make the corresponding set of hydraulic head predictions for each set of parameter estimates; and (5) to examine the distribution of hydraulic head predictions and to draw appropriate conclusions. Similarly, the method can be used independently or simultaneously to estimate the effect on hydraulic head predictions of errors in the measured water levels that are used in the parameter identification problem. The fit of the model to the data that are used to identify parameters is not a good indicator of these errors. A Monte Carlo simulation of the parameter identification problem can be used, however, to evaluate the effects on water level predictions of errors in the recharge (and pumpage) data used in the parameter identification problem. (Lantz-PTT)
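The five steps can be illustrated with a deliberately simple toy model: a hypothetical one-parameter aquifer in which head is proportional to recharge over transmissivity, so the Gauss (least-squares) identification step collapses to a closed form. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
x_obs = np.linspace(0.1, 1.0, 10)     # observation locations
R_true, T_true = 100.0, 50.0          # true recharge and transmissivity

def heads(recharge, T, x):
    """Toy steady-state model: head proportional to recharge / transmissivity."""
    return recharge * x / T

h_obs = heads(R_true, T_true, x_obs)  # "measured" water levels

def identify_T(R_est):
    """Least-squares identification of T given a recharge estimate.
    Linear here, so the Gauss iteration reduces to a closed form."""
    return R_est * (x_obs**2).sum() / (x_obs * h_obs).sum()

# Steps 1-2: prescribe the recharge-error distribution and draw random estimates.
R_samples = rng.normal(R_true, 10.0, size=2000)
# Step 3: identify the parameter corresponding to each recharge estimate.
T_samples = np.array([identify_T(R) for R in R_samples])
# Step 4: predict future heads under a known future recharge.
h_pred = heads(120.0, T_samples, 2.0)
# Step 5: the spread of the predictions quantifies the induced error.
print(round(T_samples.mean(), 1), round(h_pred.std(), 2))
```

Even in this toy, the fit to the calibration data is perfect for every sample while the predictions still spread, which is the abstract's point that goodness of fit is not a good indicator of prediction error.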

  2. Blind source identification from the multichannel surface electromyogram

    International Nuclear Information System (INIS)

    Holobar, A; Farina, D

    2014-01-01

    The spinal circuitries combine the information flow from the supraspinal centers with the afferent input to generate the neural codes that drive the human skeletal muscles. The muscles transform the neural drive they receive from alpha motor neurons into motor unit action potentials (electrical activity) and force. Thus, the output of the spinal cord circuitries can be examined noninvasively by measuring the electrical activity of skeletal muscles at the surface of the skin i.e. the surface electromyogram (EMG). The recorded multi-muscle EMG activity pattern is generated by mixing processes of neural sources that need to be identified from the recorded signals themselves, with minimal or no a priori information available. Recently, multichannel source separation techniques that rely minimally on a priori knowledge of the mixing process have been developed and successfully applied to surface EMG. They act at different scales of information extraction to identify: (a) the activation signals shared by synergistic skeletal muscles, (b) the specific neural activation of individual muscles, separating it from that of nearby muscles i.e. from crosstalk, and (c) the spike trains of the active motor neurons. This review discusses the assumptions made by these methods, the challenges and limitations, as well as examples of their current applications. (topical review)

  3. Identification of rheological properties of human body surface tissue.

    Science.gov (United States)

    Benevicius, Vincas; Gaidys, Rimvydas; Ostasevicius, Vytautas; Marozas, Vaidotas

    2014-04-11

    According to the World Health Organization, obesity is one of the greatest public health challenges of the 21st century. It has tripled since the 1980s and the number of those affected continues to rise at an alarming rate, especially among children. There are a number of devices that act as a prevention measure to boost a person's motivation for physical activity and its levels. The placement of these devices is not restricted, thus the measurement errors that appear because of body rheology, clothes, etc. cannot be eliminated. The main objective of this work is to introduce a tool that can be applied directly to process measured accelerations, so that errors induced by human body surface tissue can be reduced. Both modeling and experimental techniques are proposed to identify body tissue rheological properties and relate them to body mass index. A multi-level computational model composed of a measurement device model and a human body surface tissue rheological model is developed. Human body surface tissue induced inaccuracies can increase the magnitude of measured accelerations by up to 34% when accelerations of magnitudes up to 27 m/s(2) are measured. Although the timeframe of those disruptions is short - up to 0.2 s - they still result in increased overall measurement error. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. A possibilistic approach for transient identification with 'don't know' response capability optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos S. de; Schirru, Roberto; Pereira, Claudio M.N.A.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    This work describes a possibilistic approach for transient identification based on the minimum centroids set method, proposed in previous work, optimized by genetic algorithm. The idea behind this method is to split the complex classification problem into small and simple ones, so that classification performance can be increased. In order to accomplish that, a genetic algorithm is used to learn, from realistic simulated data, the optimized time partitions for which robustness and correctness of the classification are maximized. The use of a possibilistic classification approach yields natural and consistent classification rules, leading naturally to a good heuristic for handling the 'don't know' response in case of an unrecognized transient, which is highly desirable in transient classification systems where safety is critical. Application of the proposed approach to a nuclear transient identification problem reveals good capability of the genetic algorithm in learning optimized possibilistic classification rules for efficient diagnosis, including the 'don't know' response. Obtained results are shown and commented. (author)

  5. Complete Inverse Method Using Ant Colony Optimization Algorithm for Structural Parameters and Excitation Identification from Output Only Measurements

    Directory of Open Access Journals (Sweden)

    Jun Chen

    2014-01-01

    Full Text Available In vibration-based structural health monitoring of existing large civil structures, it is difficult, sometimes even impossible, to measure the actual excitation applied to structures. Therefore, an identification method using output-only measurements is crucial for the practical application of structural health monitoring. This paper integrates the ant colony optimization (ACO algorithm into the framework of the complete inverse method to simultaneously identify unknown structural parameters and input time history using output-only measurements. The complete inverse method, which was previously suggested by the authors, converts physical or spatial information of the unknown input into the objective function of an optimization problem that can be solved by the ACO algorithm. ACO is a newly developed swarm computation method that has a very good performance in solving complex global continuous optimization problems. The principles and implementation procedure of the ACO algorithm are first introduced followed by an introduction of the framework of the complete inverse method. Construction of the objective function is then described in detail with an emphasis on the common situation wherein a limited number of actuators are installed on some key locations of the structure. Applicability and feasibility of the proposed method were validated by numerical examples and experimental results from a three-story building model.

  6. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.

  7. Self-Identification Algorithm for the Autonomous Control of Lateral Vibration in Flexible Rotors

    Directory of Open Access Journals (Sweden)

    Thiago Malta Buttini

    2012-01-01

    the shaft. For that, frequency response functions of the system are automatically identified experimentally by the algorithm. It is demonstrated that regions of stable gains can be easily plotted, and the most suitable gains can be found to minimize the resonant peak of the system in an autonomous way, without human intervention.

  8. Identification of Water Diffusivity of Inorganic Porous Materials Using Evolutionary Algorithms

    Czech Academy of Sciences Publication Activity Database

    Kočí, J.; Maděra, J.; Jerman, M.; Keppert, M.; Svora, Petr; Černý, R.

    2016-01-01

    Roč. 113, č. 1 (2016), s. 51-66 ISSN 0169-3913 Institutional support: RVO:61388980 Keywords : Evolutionary algorithms * Water transport * Inorganic porous materials * Inverse analysis Subject RIV: CA - Inorganic Chemistry Impact factor: 2.205, year: 2016

  9. Identification of Protein Complexes Using Weighted PageRank-Nibble Algorithm and Core-Attachment Structure.

    Science.gov (United States)

    Peng, Wei; Wang, Jianxin; Zhao, Bihai; Wang, Lusheng

    2015-01-01

    Protein complexes play a significant role in understanding the underlying mechanisms of most cellular functions. Recently, many researchers have explored computational methods to identify protein complexes from protein-protein interaction (PPI) networks. One group of researchers focuses on detecting local dense subgraphs, which correspond to protein complexes, by considering local neighbors. The drawback of this kind of approach is that the global information of the networks is ignored. Some methods, such as the Markov Clustering algorithm (MCL) and PageRank-Nibble, find protein complexes based on the random walk technique, which can exploit the global structure of networks. However, these methods ignore the inherent core-attachment structure of protein complexes and treat adjacent nodes equally. In this paper, we design a weighted PageRank-Nibble algorithm which assigns each adjacent node a different probability, and propose a novel method named WPNCA to detect protein complexes from PPI networks using the weighted PageRank-Nibble algorithm and core-attachment structure. Firstly, WPNCA partitions the PPI networks into multiple dense clusters using the weighted PageRank-Nibble algorithm. Then the cores of these clusters are detected, and the remaining proteins in the clusters are selected as attachments to form the final predicted protein complexes. The experiments on yeast data show that WPNCA outperforms the existing methods in terms of both accuracy and p-value. The software for WPNCA is available at "http://netlab.csu.edu.cn/bioinfomatics/weipeng/WPNCA/download.html".
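The random-walk building block — a personalized (seeded) PageRank on a weighted network — can be sketched as follows. This is a toy illustration of the idea, not the WPNCA implementation; the graph and its edge weights are invented.

```python
import numpy as np

def weighted_personalized_pagerank(W, seed, alpha=0.85, iters=100):
    """Power iteration for personalized PageRank on a weighted adjacency
    matrix W; the restart vector concentrates all mass on the seed node."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    r = np.zeros(n)
    r[seed] = 1.0                          # restart distribution
    p = r.copy()
    for _ in range(iters):
        p = alpha * p @ P + (1 - alpha) * r
    return p

# Toy PPI network: nodes 0-2 form a strongly weighted triangle (a "core");
# node 3 hangs off node 2 via a weak edge (an "attachment").
W = np.array([[0.0, 3.0, 3.0, 0.0],
              [3.0, 0.0, 3.0, 0.0],
              [3.0, 3.0, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.0]])
scores = weighted_personalized_pagerank(W, seed=0)
print(scores.argmax(), scores.argmin())  # → 0 3
```

Core members accumulate far more probability mass than the weakly attached node, which is the signal a Nibble-style sweep uses to cut out a dense cluster; WPNCA then splits that cluster into core and attachment.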

  10. Identification of time-varying nonlinear systems using differential evolution algorithm

    DEFF Research Database (Denmark)

    Perisic, Nevena; Green, Peter L; Worden, Keith

    2013-01-01

    inclusion of new data within an error metric. This paper presents results of identification of a time-varying SDOF system with Coulomb friction using simulated noise-free and noisy data for the case of time-varying friction coefficient, stiffness and damping. The obtained results are promising and the focus...

  11. Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm

    Directory of Open Access Journals (Sweden)

    Daeho Jang

    2015-09-01

    Full Text Available The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches, such as asymmetric and polynomial equations, are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including the regions of the critical angle and the resonance angle. Regardless of the bulk fluid type (i.e., water or air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with the full SPR curve, whereas the asymmetric and polynomial curve-fitting methods did not. Because the present sigmoid-asymmetric curve-fitting equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.
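The fitting idea can be sketched as follows with a hypothetical sigmoid-asymmetric functional form invented for illustration (the paper's actual equation is not reproduced here): a sigmoid step at the critical angle plus an asymmetric dip at the resonance angle, fitted by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_asymmetric(theta, y0, a, k, tc, d, w, s, tr):
    """Hypothetical sigmoid-asymmetric SPR model: a sigmoid step at the
    critical angle tc plus an asymmetric Lorentzian dip at the resonance
    angle tr (asymmetry via an angle-dependent width)."""
    step = a / (1.0 + np.exp(-k * (theta - tc)))
    width = w * (1.0 + s * np.tanh(theta - tr))
    dip = d * width**2 / ((theta - tr)**2 + width**2)
    return y0 + step - dip

# Generate a noise-free synthetic SPR curve, then fit it back.
theta = np.linspace(60, 75, 300)
true = (0.1, 0.8, 4.0, 62.0, 0.6, 0.8, 0.3, 68.0)
y = sigmoid_asymmetric(theta, *true)

popt, _ = curve_fit(sigmoid_asymmetric, theta, y,
                    p0=(0, 1, 2, 61, 0.5, 1, 0, 67), maxfev=20000)
tc_fit, tr_fit = popt[3], popt[7]
# Bulk-fluid compensation as in the abstract: resonance minus critical angle.
print(round(tr_fit - tc_fit, 3))
```

Because both characteristic angles come out of one fit, the difference `tr_fit - tc_fit` can be tracked in real time, which is the bulk-refractive-index compensation the abstract describes.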

  12. Characterization and noninvasive diagnosis of bladder cancer with serum surface enhanced Raman spectroscopy and genetic algorithms

    Science.gov (United States)

    Li, Shaoxin; Li, Linfang; Zeng, Qiuyao; Zhang, Yanjiao; Guo, Zhouyi; Liu, Zhiming; Jin, Mei; Su, Chengkang; Lin, Lin; Xu, Junfa; Liu, Songhao

    2015-05-01

    This study aims to characterize and classify serum surface-enhanced Raman spectroscopy (SERS) spectra of bladder cancer patients and normal volunteers by genetic algorithms (GAs) combined with linear discriminant analysis (LDA). Two groups of serum SERS spectra excited with nanoparticles were collected from healthy volunteers (n = 36) and bladder cancer patients (n = 55). Six diagnostic Raman bands in the regions of 481-486, 682-687, 1018-1034, 1313-1323, 1450-1459 and 1582-1587 cm-1, related to proteins, nucleic acids and lipids, were picked out with the GAs and LDA. Using the diagnostic models built with the identified six Raman bands, an improved diagnostic sensitivity of 90.9% and specificity of 100% were acquired for distinguishing bladder cancer patients from normal serum SERS spectra. These results are superior to the sensitivity of 74.6% and specificity of 97.2% obtained with principal component analysis on the same serum SERS spectra dataset. Receiver operating characteristic (ROC) curves further confirmed the efficiency of the diagnostic algorithm based on the GA-LDA technique. This exploratory work demonstrates that serum SERS combined with the GA-LDA technique has enormous potential to characterize and noninvasively detect bladder cancer through peripheral blood.
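The band-selection idea — a genetic algorithm searching for the subset of spectral bands that best separates two classes — can be sketched as follows. This is a toy on synthetic "spectra", with a Fisher criterion standing in for the LDA classification accuracy used in the paper; all sizes, rates and the two informative bands are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "spectra": 20 bands; only bands 3 and 11 differ between classes.
n = 60
normal = rng.normal(0.0, 1.0, (n, 20))
cancer = rng.normal(0.0, 1.0, (n, 20))
cancer[:, [3, 11]] += 2.0
X, y = np.vstack([normal, cancer]), np.array([0] * n + [1] * n)

def fitness(mask):
    """Summed Fisher criterion of the selected bands, lightly penalized
    by the number of bands kept (a stand-in for LDA accuracy)."""
    if not mask.any():
        return -1.0
    a, b = X[y == 0][:, mask], X[y == 1][:, mask]
    f = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-9)
    return f.sum() - 0.1 * mask.sum()

# Minimal GA: tournament selection, uniform crossover, bit-flip mutation, elitism.
pop = rng.random((40, 20)) < 0.3
for _ in range(60):
    fit = np.array([fitness(m) for m in pop])
    idx = [max(rng.choice(40, 3, replace=False), key=lambda i: fit[i])
           for _ in range(40)]
    parents = pop[idx]
    cross = rng.random((40, 20)) < 0.5
    children = np.where(cross, parents, parents[::-1])
    children ^= rng.random((40, 20)) < 0.02    # mutation
    children[0] = pop[fit.argmax()]            # elitism
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
selected = sorted(np.flatnonzero(best).tolist())
print(selected)
```

With the penalty term, the optimum is the smallest subset of genuinely discriminative bands, mirroring how the paper narrows whole spectra down to six diagnostic Raman bands.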

  13. Identification of aggregates for Tennessee bituminous surface courses

    Science.gov (United States)

    Sauter, Heather Jean

    Tennessee road construction is a major venue for federal and state spending; tax dollars each year go to the maintenance and construction of roads. One aspect of highway construction that affects the public is the safety of the state's roads. There are many factors that affect the safety of a given road; the factor focused on in this research was the polish resistance of aggregates. Several pre-evaluation methods have been used in the laboratory to predict what will happen in a field situation. A new pre-evaluation method was devised that utilized the AASHTO T 304 procedure, upscaled to accommodate surface bituminous aggregates. This new method, called the Tennessee Terminal Textural Condition Method (T3CM), was approved by the Tennessee Department of Transportation as a pre-evaluation method for bituminous surface courses. It was proven to be operator insensitive, repeatable, and an accurate indication of particle shape and texture. Further research was needed to correlate pre-evaluation methods with the current field method, the ASTM E 274-85 Locked Wheel Skid Trailer. In this research, twenty-five in-place bituminous projects and eight source evaluations were investigated. The information gathered further validated the T3CM and identified the pre-evaluation method that best predicted the field method. In addition, new sources of aggregates for bituminous surface courses were revealed. The results of this research have shown T3CM to be highly repeatable, with an overall coefficient of variation of 0.26% for an eight-sample repeatability test. It was the pre-evaluation method best correlated with the locked wheel skid trailer method, giving an R2 value of 0.3946 and a Pearson coefficient of 0.710. Being able to predict field performance of aggregates prior to construction is a powerful tool capable of saving time, money, labor, and possibly lives.

  14. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    Science.gov (United States)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  15. Application of response surface methodology (RSM) and genetic algorithm in minimizing warpage on side arm

    Science.gov (United States)

    Raimee, N. A.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    The plastic injection moulding process produces large numbers of high-quality parts with great accuracy and speed. It is widely used for the production of plastic parts with various shapes and geometries. The side arm is one such product manufactured by injection moulding. However, there are difficulties in adjusting the parameter variables, which are mould temperature, melt temperature, packing pressure, packing time and cooling time, as warpage occurs at the tip of the side arm. Therefore, the work reported herein is about minimizing warpage on the side arm product by optimizing the process parameters using Response Surface Methodology (RSM) together with an artificial intelligence (AI) method, namely the Genetic Algorithm (GA).

  16. Simulation of Gravity Wave Propagation in Free Surface Flows by an Incompressible SPH Algorithm

    International Nuclear Information System (INIS)

    Amanifard, N.; Mahnama, S. M.; Neshaei, S. A. L.; Mehrdad, M. A.; Farahani, M. H.

    2012-01-01

    This paper presents an incompressible smoothed particle hydrodynamics model to simulate wave propagation in a free surface flow. The Navier-Stokes equations are solved in a Lagrangian framework using a three-step fractional method. In the first step, a temporary velocity field is provided according to the relevant body forces. This velocity field is renewed in the second step to include the viscosity effects. A Poisson equation is employed in the third step as an alternative to the equation of state in order to evaluate pressure. This Poisson equation considers a trade-off between density and pressure, which is utilized in the third step to impose the incompressibility effect. The computations are compared with experimental as well as numerical data and good agreement is observed. In order to validate the proposed algorithm, a dam-break problem is solved as a benchmark and the computational results are compared with previous numerical ones.

  17. A STUDY OF DEAD-RECKONING ALGORITHM FOR MECANUM WHEEL BASED MOBILE ROBOT UNDER VARIOUS TYPES OF ROAD SURFACE

    OpenAIRE

    上町, 亮介; KAMMACHI, Ryosuke

    2015-01-01

    In this paper, we describe a study of a dead-reckoning algorithm for a mecanum-wheel-based mobile robot on various types of road surface. A mecanum-wheel-based mobile robot can move omnidirectionally by utilizing tire-road surface friction; therefore, depending on the road surface condition, it is difficult to estimate an accurate self-position by applying the conventional dead-reckoning method. In order to overcome the inaccuracy of the conventional dead-reckoning method for mecanum wheel based mobil...
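The conventional dead-reckoning step that such a study starts from — integrating the standard mecanum forward kinematics, which silently assumes no wheel slip — can be sketched as follows. The wheel radius and body geometry are invented values.

```python
def mecanum_odometry(wheel_omegas, r, lx, ly, dt):
    """One dead-reckoning update from four mecanum wheel speeds
    (front-left, front-right, rear-left, rear-right, in rad/s), using the
    standard mecanum forward-kinematics relations. Wheel slip is ignored,
    which is exactly what breaks down on low-friction road surfaces.
    r: wheel radius; lx, ly: half wheelbase and half track width (m)."""
    w_fl, w_fr, w_rl, w_rr = wheel_omegas
    vx = r / 4.0 * (w_fl + w_fr + w_rl + w_rr)                  # forward
    vy = r / 4.0 * (-w_fl + w_fr + w_rl - w_rr)                 # sideways
    wz = r / (4.0 * (lx + ly)) * (-w_fl + w_fr - w_rl + w_rr)   # rotation
    return vx * dt, vy * dt, wz * dt

# Pure sideways ("crab") motion: wheels spin in the mecanum strafe pattern.
dx, dy, dth = mecanum_odometry((-2.0, 2.0, 2.0, -2.0),
                               r=0.05, lx=0.2, ly=0.15, dt=1.0)
print(dx, dy, dth)  # → 0.0 0.1 0.0
```

On a slippery surface the actual displacement is smaller than this prediction, and accumulating such updates is the error source the paper sets out to characterize per surface type.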

  18. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potential harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to identify automatically the MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos, which have captured these events, exploring the whole JET database of images, as a preliminary step to the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
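Hu moments, used above as shape features for the MARFE identifier, are built from normalized central moments of a (binarized) blob. A minimal sketch of just the first invariant follows (only this one feature, not the morphological pipeline or the classifier):

```python
import numpy as np

def hu_first(img):
    """First Hu moment, eta20 + eta02: invariant to translation and scale.
    eta_pq = mu_pq / m00^(1 + (p+q)/2); for p+q = 2 that is mu / m00^2."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cy, cx = (ys * img).sum() / m00, (xs * img).sum() / m00
    mu20 = ((ys - cy) ** 2 * img).sum()
    mu02 = ((xs - cx) ** 2 * img).sum()
    return mu20 / m00**2 + mu02 / m00**2

# The invariant is unchanged when the same blob appears elsewhere in the frame.
img = np.zeros((64, 64))
img[10:20, 10:30] = 1.0
shifted = np.roll(img, (25, 20), axis=(0, 1))
print(round(hu_first(img), 6) == round(hu_first(shifted), 6))  # → True
```

Translation invariance is what lets a single threshold on such features flag a MARFE blob wherever it sits in the camera frame.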

  19. A sparse structure learning algorithm for Gaussian Bayesian Network identification from high-dimensional data.

    Science.gov (United States)

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2013-06-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.

  20. DEMON-type algorithms for determination of hydro-acoustic signatures of surface ships and of divers

    Science.gov (United States)

    Slamnoiu, G.; Radu, O.; Rosca, V.; Pascu, C.; Damian, R.; Surdu, G.; Curca, E.; Radulescu, A.

    2016-08-01

    With the project “System for detection, localization, tracking and identification of risk factors for strategic importance in littoral areas”, developed in the National Programme II, the members of the research consortium intend to develop a functional model of a hydroacoustic passive subsystem for the determination of acoustic signatures of targets such as fast boats and autonomous divers. This paper presents some of the results obtained in the area of hydroacoustic signal processing using DEMON-type (Detection of Envelope Modulation On Noise) algorithms. To evaluate the performance of various algorithm variations, we have used both audio recordings of the underwater noise generated by ships and divers in real situations and simulated noises. We have analysed the results of processing these signals using four DEMON algorithm structures presented in the reference literature and a fifth DEMON algorithm structure proposed by the authors of this paper. The algorithm proposed by the authors generates results similar to those obtained with the traditional algorithms but requires fewer computing resources and, at the same time, has proven to be more resilient to random noise.
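The common core shared by such DEMON structures — squaring as envelope detection, low-pass filtering, then a spectrum of the modulation — can be sketched as follows on simulated propeller-like noise. Sampling rate, modulation rate and filter sizes are invented values, not the paper's configuration.

```python
import numpy as np

def demon_spectrum(x, fs, cutoff=100.0):
    """Square-law DEMON chain: squaring (envelope detection) ->
    crude moving-average low-pass -> spectrum of the modulation."""
    env = x ** 2
    win = max(1, int(fs / cutoff))
    env = np.convolve(env - env.mean(), np.ones(win) / win, mode="same")
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

# Simulated propeller noise: broadband carrier amplitude-modulated at 7 Hz
# (e.g., shaft rate times blade count).
fs, f_mod = 4000, 7.0
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(3)
carrier = rng.standard_normal(t.size)
x = (1.0 + 0.6 * np.sin(2 * np.pi * f_mod * t)) * carrier

freqs, spec = demon_spectrum(x, fs)
band = (freqs > 1) & (freqs < 50)
print(round(freqs[band][spec[band].argmax()], 2))  # → 7.0
```

The modulation line recovered this way (and its harmonics) constitutes the acoustic signature; the structures compared in the paper differ mainly in how the band filtering and demodulation stages are arranged.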

  1. Surface-enhanced resonance Raman spectroscopy as an identification tool in column liquid chromatography

    NARCIS (Netherlands)

    Seifar, R.M.; Altelaar, M.A.F.; Dijkstra, R.J.; Ariese, F.; Brinkman, U.A.T.; Gooijer, C.

    2000-01-01

    The compatibility of ion-pair reversed-phase column liquid chromatography and surface-enhanced resonance Raman spectroscopy (SERRS) for separation and identification of anionic dyes has been investigated, with emphasis on the at-line coupling via a thin-layer chromatography (TLC) plate. SERR spectra

  2. Acquisition and visualization of cross section surface characteristics for identification of archaeological ceramics

    NARCIS (Netherlands)

    Boon, Paul; Pont, Sylvia C.; van Oortmerssen, Gert J.M.

    2007-01-01

    This paper describes a new system for digitizing ceramic fabric reference collections and a preliminary evaluation of its applicability to archaeological ceramics identification. An important feature in the analysis of ceramic fabrics is the surface texture of the fresh cross section. Visibility of

  3. 6 CFR 37.17 - Requirements for the surface of the driver's license or identification card.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Requirements for the surface of the driver's license or identification card. 37.17 Section 37.17 Domestic Security DEPARTMENT OF HOMELAND SECURITY... specifically ISO/IEC 19794-5:2005(E) Information technology—Biometric Data Interchange Formats—Part 5: Face...

  4. A physics-based algorithm for retrieving land-surface emissivity and temperature from EOS/MODIS data

    International Nuclear Information System (INIS)

    Wan, Z.; Li, Z.L.

    1997-01-01

The authors have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4--0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10--12.5 μm IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2--3 K.

  5. Hybrid of Natural Element Method (NEM) with Genetic Algorithm (GA) to find critical slip surface

    Directory of Open Access Journals (Sweden)

    Shahriar Shahrokhabadi

    2014-06-01

    Full Text Available One of the most important issues in geotechnical engineering is slope stability analysis for determination of the factor of safety and the probable slip surface. The Finite Element Method (FEM) is well suited for numerical study of advanced geotechnical problems. However, the mesh requirements of FEM create some difficulties for solution processing in certain problems. Recently, motivated by these limitations, several new meshfree methods such as the Natural Element Method (NEM) have been used to analyze engineering problems. This paper presents the advantages of using NEM in 2D slope stability analysis and Genetic Algorithm (GA) optimization to determine the probable slip surface and the related factor of safety. The stress field is produced under plane strain conditions using the natural element formulation to simulate material behavior, utilized in conjunction with a conventional limit equilibrium method. In order to verify the precision and convergence of the proposed method, two kinds of examples, homogeneous and non-homogeneous, are conducted and the results are compared with FEM and conventional limit equilibrium methods. The results show the robustness of the NEM in slope stability analysis.

  6. A full waveform tomography algorithm for teleseismic body and surface waves in 2.5 dimensions

    Science.gov (United States)

    Baker, B.; Roecker, S.

    2014-09-01

    We describe a 2.5-D, frequency domain, viscoelastic waveform tomography algorithm for imaging with seismograms of teleseismic body and surface waves recorded by quasi-linear arrays. The equations of motion are discretized with p-adaptive finite elements that allow for geometric flexibility and accurate solutions as a function of wavelength. Artificial forces are introduced into the media by specifying a known wavefield along the model edges and solving for the corresponding scattered field. Because of the relatively low frequency content of teleseismic data, regional scale tectonic settings can be parametrized with a modest number of variables and perturbations can be determined directly from a regularized Gauss-Newton system of equations. Waveforms generated by the forward problem compare well with analytic solutions for simple 1-D and 2-D media. Tests of different approaches to the inverse problem show that the use of an approximate Hessian serves to properly focus the scattered field. We also find that while full waveform inversion can provide significantly better resolution than standard techniques for both body and surface wave tomography modelled individually, joint inversion both enhances resolution and mitigates potential artefacts.

  7. Soft tissue freezing process. Identification of the dual-phase lag model parameters using the evolutionary algorithm

    Science.gov (United States)

    Mochnacki, Bohdan; Majchrzak, Ewa; Paruch, Marek

    2018-01-01

    In the paper the soft tissue freezing process is considered. The tissue sub-domain is subjected to the action of a cylindrical cryoprobe. Thermal processes proceeding in the domain considered are described using the dual-phase lag equation (DPLE) supplemented by the appropriate boundary and initial conditions. The DPLE results from a generalization of the Fourier law in which two lag times are introduced (the relaxation and thermalization times). The aim of the research is the identification of these parameters on the basis of cooling curves measured at a set of points selected from the tissue domain. To solve the problem, evolutionary algorithms are used. The paper contains the mathematical model of the tissue freezing process, brief information concerning the numerical solution of the basic problem, a description of the inverse problem solution, and the results of computations.

  8. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    International Nuclear Information System (INIS)

    Chen Zhou; Qiu-Nan Tong; Zhang Cong-Cong; Hu Zhan

    2015-01-01

    Identification of acetone and its two isomers, and the control of their ionization and dissociation processes are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time of flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers that are obtained with the transform limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is quite effective and useful in studies of two molecules having common mass peaks, which makes a traditional single mass spectrometer unfeasible. (paper)

  9. Identification of the Rayleigh surface waves for estimation of viscoelasticity using the surface wave elastography technique.

    Science.gov (United States)

    Zhang, Xiaoming

    2016-11-01

    The purpose of this Letter to the Editor is to demonstrate an effective method for estimating viscoelasticity based on measurements of the Rayleigh surface wave speed. It is important to identify the surface wave mode when measuring surface wave speed, and a concept of the start frequency of surface waves is proposed: the surface wave speeds above the start frequency should be used to estimate the viscoelasticity of tissue. The motivation was to develop a noninvasive surface wave elastography (SWE) technique for assessing skin disease by measuring skin viscoelastic properties. Using an optical SWE system, the author generated a local harmonic vibration on the surface of a phantom using an electromechanical shaker and measured the resulting surface waves with an optical vibrometer system. The surface wave speed was measured using a phase gradient method. It was shown that different standing wave modes were generated below the start frequency because of wave reflection, whereas pure symmetric surface waves were generated by excitation above the start frequency. Using the wave speed dispersion above the start frequency, the viscoelasticity of the phantom can be correctly estimated.
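The phase gradient method mentioned above estimates wave speed from how the phase of the harmonic vibration lags with distance: for a propagating surface wave phi(r) = phi0 − k·r, the speed is c = 2πf/k = −2πf/(dphi/dr). A small sketch with assumed illustrative numbers:

```python
import numpy as np

# For a propagating surface wave the phase decreases linearly with
# distance, phi(r) = phi0 - k*r, so the speed is c = -2*pi*f / (dphi/dr).
# Frequency, speed, and positions below are assumed illustrative values.
f = 100.0                                   # excitation frequency, Hz
c_true = 3.0                                # wave speed, m/s (soft-tissue range)
k = 2.0 * np.pi * f / c_true                # wavenumber, rad/m
r = np.array([0.004, 0.006, 0.008, 0.010])  # detection points, m
phi = 1.2 - k * r                           # unwrapped phase at each point, rad

slope = np.polyfit(r, phi, 1)[0]            # dphi/dr, estimated by a line fit
c_est = -2.0 * np.pi * f / slope            # recovered wave speed, m/s
```

In practice the phase at each detection point is extracted from the measured vibration at the excitation frequency, and measurements below the start frequency are discarded because standing-wave interference corrupts the phase gradient.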

  10. THE IDENTIFICATION ALGORITHM OF METROLOGICAL CHARACTERISTICS OF WIDE-RANGE PHOTOVOLTAIC SEMICONDUCTOR CONVERTERS WITH MULTIPLY IMPURITIES

    Directory of Open Access Journals (Sweden)

    O. K. Gusev

    2011-01-01

    Full Text Available The metrological features of photovoltaic semiconductor converters (PSC) based on semiconductors with multiple-charge impurities are investigated over a wide range of optical radiation power densities. An algorithm for the measurement procedure of the metrological characteristics of PSC is introduced that is valid not only at low optical power densities but also at high ones, taking into account the boundary of nonlinear recombination. The accuracy of determining the metrological characteristics of PSC based on semiconductors with multiple-charge impurities is estimated, taking the region of nonlinear recombination into consideration.

  11. Particle identification at LHCb: new calibration techniques and machine learning classification algorithms

    CERN Document Server

    CERN. Geneva

    2018-01-01

    Particle identification (PID) plays a crucial role in LHCb analyses. Combining information from LHCb subdetectors allows one to distinguish between various species of long-lived charged and neutral particles. PID performance directly affects the sensitivity of most LHCb measurements. Advanced multivariate approaches are used at LHCb to obtain the best PID performance and control systematic uncertainties. This talk highlights recent developments in PID that use innovative machine learning techniques, as well as novel data-driven approaches which ensure that PID performance is well reproduced in simulation.

  12. THERAPEUTIC EYELIDS HYGIENE IN THE ALGORITHMS OF PREVENTION AND TREATMENT OF OCULAR SURFACE DISEASES. PART II

    Directory of Open Access Journals (Sweden)

    V. N. Trubilin

    2016-01-01

    problem of modern ophthalmology. Part 1 — Trubilin V.N., Polunina E.G., Kurenkov V.V., Kapkova S.G., Markova E.Y. Therapeutic eyelids hygiene in the algorithms of prevention and treatment of ocular surface diseases. Ophthalmology in Russia. 2016;13(2):122–127. doi: 10.18008/1816-5095-2016-2-122-127

  13. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    Science.gov (United States)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the pollutant concentrations. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and experimental data, such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
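The retrieval loop described above can be sketched as a real-coded genetic algorithm minimising the misfit between measured and modelled concentrations. For brevity, the Gaussian plume model is replaced here by a hypothetical distance-decay forward model, and all names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model standing in for the Gaussian plume: a point
# source of strength q at (x, y) produces a distance-decaying concentration.
def forward(src, sensors):
    x, y, q = src
    d2 = (sensors[:, 0] - x) ** 2 + (sensors[:, 1] - y) ** 2
    return q / (1.0 + d2)

sensors = rng.uniform(0.0, 100.0, size=(25, 2))   # 25 receptor positions
true_src = np.array([40.0, 60.0, 5.0])
measured = forward(true_src, sensors)

def fitness(src):
    """Misfit between measured and modelled concentrations (to minimise)."""
    return float(np.sum((forward(src, sensors) - measured) ** 2))

lo = np.array([0.0, 0.0, 0.1])                    # search bounds: x, y, q
hi = np.array([100.0, 100.0, 10.0])
pop = rng.uniform(lo, hi, size=(60, 3))
f0 = min(fitness(p) for p in pop)                 # best fitness before evolution

for _ in range(80):
    fit = np.array([fitness(p) for p in pop])
    new = [pop[np.argmin(fit)].copy()]            # elitism: keep the best
    while len(new) < len(pop):
        def pick():                               # binary tournament selection
            i, j = rng.integers(len(pop), size=2)
            return pop[i] if fit[i] < fit[j] else pop[j]
        w = rng.uniform(size=3)
        child = w * pick() + (1.0 - w) * pick()   # blend crossover
        child += rng.normal(0.0, 0.5, size=3)     # Gaussian mutation
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = min(pop, key=fitness)
```

Retrieving several simultaneous sources, as in the paper, amounts to extending each chromosome to hold one (x, y, q) triple per source.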

  14. Fuel spill identification by gas chromatography -- genetic algorithms/pattern recognition techniques

    International Nuclear Information System (INIS)

    Lavine, B.K.; Moores, A.J.; Faruque, A.

    1998-01-01

    Gas chromatography and pattern recognition methods were used to develop a potential method for typing jet fuels so that a spill sample in the environment can be traced to its source. The test data consisted of 256 gas chromatograms of neat jet fuels. Thirty-one fuels that had undergone weathering in a subsurface environment were correctly identified by type using discriminants developed from the gas chromatograms of the neat jet fuels. Coalescing poorly resolved peaks, which occurred during preprocessing, diminished the resolution and hence the information content of the GC profiles. Nevertheless, a genetic algorithm was able to extract enough information from these profiles to correctly classify the chromatograms of weathered fuels. This suggests that cheaper and simpler GC instruments can be used to type jet fuels

  15. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed in their behaviour when subjected to fast-front impulse currents. The difficulty with these models resides essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)

  16. Online Identification of Multivariable Discrete Time Delay Systems Using a Recursive Least Square Algorithm

    Directory of Open Access Journals (Sweden)

    Saïda Bedoui

    2013-01-01

    Full Text Available This paper addresses the problem of simultaneous identification of linear discrete-time multivariable systems with time delays. This problem involves the estimation of both the time delays and the dynamic parameter matrices. In fact, we suggest a new formulation of this problem that defines the time delays and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. We then use this formulation to propose a new method to identify the time delays and the parameters of these systems using the least squares approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method. An application of the developed approach to a compact disc player arm is also presented in order to validate the simulation results.
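The idea of placing the unknown delay and the dynamic parameters in one estimated vector can be sketched for a single-input single-output case: include several delayed inputs in the regressor and let recursive least squares (RLS) drive the coefficients of the wrong delays to zero. The system, delay bound, and variable names below are assumptions for illustration:

```python
import numpy as np

# Assumed SISO system with unknown input delay: y(k) = a*y(k-1) + b*u(k-d).
a_true, b_true, d_true, d_max = 0.7, 2.0, 3, 5
rng = np.random.default_rng(2)
u = rng.standard_normal(400)
y = np.zeros(400)
for k in range(1, 400):
    y[k] = a_true * y[k - 1] + (b_true * u[k - d_true] if k >= d_true else 0.0)

# Augmented parameter vector theta = [a, b@delay1, ..., b@delay d_max]:
# the delay is identified jointly with the dynamics, echoing the paper's idea.
n = 1 + d_max
theta = np.zeros(n)
P = 1e4 * np.eye(n)                  # large initial covariance
for k in range(d_max, 400):
    phi = np.concatenate(([y[k - 1]], u[k - d_max:k][::-1]))  # u[k-1..k-d_max]
    K = P @ phi / (1.0 + phi @ P @ phi)                       # RLS gain
    theta = theta + K * (y[k] - phi @ theta)                  # parameter update
    P = P - np.outer(K, phi @ P)                              # covariance update

d_est = 1 + int(np.argmax(np.abs(theta[1:])))   # delay with dominant coefficient
```

The estimated delay is read off as the lag whose input coefficient dominates; the multivariable case in the paper stacks such regressors into parameter matrices.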

  17. Numerical thermal analysis and optimization of multi-chip LED module using response surface methodology and genetic algorithm

    NARCIS (Netherlands)

    Tang, Hong Yu; Ye, Huai Yu; Chen, Xian Ping; Qian, Cheng; Fan, Xue Jun; Zhang, G.Q.

    2017-01-01

    In this paper, the heat transfer performance of the multi-chip (MC) LED module is investigated numerically by using a general analytical solution. The configuration of the module is optimized with genetic algorithm (GA) combined with a response surface methodology. The space between chips, the

  18. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    Science.gov (United States)

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

    This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded on the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In the threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
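A two-regime threshold model of the kind described can be illustrated compactly. Here the GA search over the threshold value is replaced by a simple grid search, and the data, threshold variable, and regimes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-regime data: the slope of y on x changes when the
# threshold variable T (think temperature) crosses 20.
T = rng.uniform(0.0, 40.0, 300)
x = rng.standard_normal(300)
y = np.where(T < 20.0, 1.0 + 0.5 * x, 4.0 + 2.0 * x) + 0.05 * rng.standard_normal(300)

def sse_for(thr):
    """Total squared error of separate linear fits below/above the threshold."""
    total = 0.0
    for mask in (T < thr, T >= thr):
        A = np.column_stack([np.ones(mask.sum()), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        total += float(np.sum((A @ coef - y[mask]) ** 2))
    return total

candidates = np.linspace(5.0, 35.0, 121)   # grid search stands in for the GA
best_thr = min(candidates, key=sse_for)
```

The fitted threshold lands near the true regime change at T = 20; in the paper the GA additionally selects the threshold variable itself and the network architecture.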

  19. An integrated study of surface roughness in EDM process using regression analysis and GSO algorithm

    Science.gov (United States)

    Zainal, Nurezayana; Zain, Azlan Mohd; Sharif, Safian; Nuzly Abdull Hamed, Haza; Mohamad Yusuf, Suhaila

    2017-09-01

    The aim of this study is to develop an integrated study of surface roughness (Ra) in the die-sinking electrical discharge machining (EDM) process of Ti-6Al-4V titanium alloy with positive polarity of a copper-tungsten (Cu-W) electrode. Regression analysis and the glowworm swarm optimization (GSO) algorithm were considered for the modelling and optimization process. Pulse on time (A), pulse off time (B), peak current (C) and servo voltage (D) were selected as the machining parameters with various levels. The experiments were conducted based on a two-level full factorial design with an added center point design of experiments (DOE). Moreover, mathematical models with linear and two-factor interaction (2FI) effects of the chosen parameters were developed. The validity and adequacy of the developed mathematical models were checked using analysis of variance (ANOVA) and the F-test. The statistical analysis showed that the 2FI model performed best, yielding the lowest Ra value compared with the linear model and the experimental result.

  20. A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver

    KAUST Repository

    Liu, Yang

    2015-10-26

    © 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT)-based surface integral equation (SIE) solvers, it reduces the computational and memory costs of transient analysis from equation and equation to equation and equation, respectively, where Nt and Ns denote the number of temporal and spatial unknowns (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). In the past, PWTD-accelerated MOT-SIE solvers have been applied to transient problems involving half a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). Recently, a scalable parallel PWTD-accelerated MOT-SIE solver that leverages a hierarchical parallelization strategy has been developed and successfully applied to transient problems involving ten million spatial unknowns (Liu et al., in URSI Digest, 2013). We further enhanced the capabilities of this solver by implementing a compression scheme based on local cosine wavelet bases (LCBs) that exploits the sparsity in the temporal dimension (Liu et al., in URSI Digest, 2014). Specifically, the LCB compression scheme was used to reduce the memory requirement of the PWTD ray data and the computational cost of operations in the PWTD translation stage.

  1. Surface protein composition of Aeromonas hydrophila strains virulent for fish: identification of a surface array protein

    Energy Technology Data Exchange (ETDEWEB)

    Dooley, J.S.G.; Trust, T.J.

    1988-02-01

    The surface protein composition of members of a serogroup of Aeromonas hydrophila was examined. Immunoblotting with antiserum raised against formalinized whole cells of A. hydrophila TF7 showed a 52K S-layer protein to be the major surface protein antigen, and impermeant Sulfo-NHS-Biotin cell surface labeling showed that the 52K S-layer protein was the only protein accessible to the Sulfo-NHS-Biotin label and effectively masked underlying outer membrane (OM) proteins. In its native surface conformation the 52K S-layer protein was only weakly reactive with a lactoperoxidase ¹²⁵I surface iodination procedure. A UV-induced rough lipopolysaccharide (LPS) mutant of TF7 was found to produce an intact S layer, but a deep rough LPS mutant was unable to maintain an array on the cell surface and excreted the S-layer protein into the growth medium, indicating that a minimum LPS oligosaccharide size is required for A. hydrophila S-layer anchoring. The native S layer was permeable to ¹²⁵I in the lactoperoxidase radiolabeling procedure, and two major OM proteins of molecular weights 30,000 and 48,000 were iodinated. The 48K species was a peptidoglycan-associated, transmembrane protein which exhibited heat-modifiable SDS solubilization behavior characteristic of a porin protein. A 50K major peptidoglycan-associated OM protein which was not radiolabeled exhibited similar SDS heat modification characteristics and possibly represents a second porin protein.

  2. Surface protein composition of Aeromonas hydrophila strains virulent for fish: identification of a surface array protein

    International Nuclear Information System (INIS)

    Dooley, J.S.G.; Trust, T.J.

    1988-01-01

    The surface protein composition of members of a serogroup of Aeromonas hydrophila was examined. Immunoblotting with antiserum raised against formalinized whole cells of A. hydrophila TF7 showed a 52K S-layer protein to be the major surface protein antigen, and impermeant Sulfo-NHS-Biotin cell surface labeling showed that the 52K S-layer protein was the only protein accessible to the Sulfo-NHS-Biotin label and effectively masked underlying outer membrane (OM) proteins. In its native surface conformation the 52K S-layer protein was only weakly reactive with a lactoperoxidase ¹²⁵I surface iodination procedure. A UV-induced rough lipopolysaccharide (LPS) mutant of TF7 was found to produce an intact S layer, but a deep rough LPS mutant was unable to maintain an array on the cell surface and excreted the S-layer protein into the growth medium, indicating that a minimum LPS oligosaccharide size is required for A. hydrophila S-layer anchoring. The native S layer was permeable to ¹²⁵I in the lactoperoxidase radiolabeling procedure, and two major OM proteins of molecular weights 30,000 and 48,000 were iodinated. The 48K species was a peptidoglycan-associated, transmembrane protein which exhibited heat-modifiable SDS solubilization behavior characteristic of a porin protein. A 50K major peptidoglycan-associated OM protein which was not radiolabeled exhibited similar SDS heat modification characteristics and possibly represents a second porin protein.

  3. Identification of genetic interaction networks via an evolutionary algorithm evolved Bayesian network.

    Science.gov (United States)

    Li, Ruowang; Dudek, Scott M; Kim, Dokyoon; Hall, Molly A; Bradford, Yuki; Peissig, Peggy L; Brilliant, Murray H; Linneman, James G; McCarty, Catherine A; Bao, Le; Ritchie, Marylyn D

    2016-01-01

    The future of medicine is moving towards the phase of precision medicine, with the goal to prevent and treat diseases by taking inter-individual variability into account. A large part of the variability lies in our genetic makeup. With the fast-paced improvement of high-throughput methods for genome sequencing, a tremendous amount of genetics data have already been generated. The next hurdle for precision medicine is to have sufficient computational tools for analyzing large sets of data. Genome-Wide Association Studies (GWAS) have been the primary method to assess the relationship between single nucleotide polymorphisms (SNPs) and disease traits. While GWAS is sufficient in finding individual SNPs with strong main effects, it does not capture potential interactions among multiple SNPs. In many traits, a large proportion of variation remains unexplained by using main effects alone, leaving the door open for exploring the role of genetic interactions. However, identifying genetic interactions in large-scale genomics data poses a challenge even for modern computing. For this study, we present a new algorithm, Grammatical Evolution Bayesian Network (GEBN), that utilizes Bayesian Networks to identify interactions in the data, and at the same time, uses an evolutionary algorithm to reduce the computational cost associated with network optimization. GEBN excelled in simulation studies where the data contained main effects and interaction effects. We also applied GEBN to a Type 2 diabetes (T2D) dataset obtained from the Marshfield Personalized Medicine Research Project (PMRP). We were able to identify genetic interactions for T2D cases and controls and use information from those interactions to classify T2D samples. We obtained an average testing area under the curve (AUC) of 86.8%. We also identified several interacting genes such as INADL and LPP that are known to be associated with T2D. Developing the computational tools to explore genetic associations beyond main

  4. The CMS Level-1 Tau identification algorithm for the LHC Run II

    CERN Document Server

    AUTHOR|(CDS)2083962

    2016-01-01

    The CMS experiment implements a sophisticated two-level online selection system that achieves a rejection factor of nearly 10^5. The first level (L1) is based on coarse information coming from the calorimeters and the muon detectors, while the High Level Trigger combines fine-grain information from all sub-detectors. During Run II, the centre-of-mass energy of the LHC collisions will be increased up to 13/14 TeV and the instantaneous luminosity will eventually reach 2×10^34 cm^-2 s^-1. To guarantee a successful and ambitious physics program under this intense environment, the CMS Trigger and Data Acquisition system must be consolidated. In particular, the L1 calorimeter trigger hardware and architecture will be upgraded, benefiting from the recent microTCA technology, allowing sophisticated algorithms to be deployed, better exploiting the calorimeter granularity and opening the possibility of making correlations between different parts of the detector. Given the enhanced granularity provided by the new system, an opt...

  5. Identification of Differentially Expressed Genes between Original Breast Cancer and Xenograft Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Deling Wang

    2018-03-01

    Full Text Available Breast cancer is one of the most common malignancies in women. The patient-derived tumor xenograft (PDX) model is a cutting-edge approach for drug research on breast cancer. However, PDX still exhibits differences from original human tumors, thereby challenging the molecular understanding of tumorigenesis. In particular, gene expression changes after tissues are transplanted from human to mouse model. In this study, we propose a novel computational method by incorporating several machine learning algorithms, including Monte Carlo feature selection (MCFS), random forest (RF), and rough set-based rule learning, to identify genes with significant expression differences between PDX and original human tumors. First, 831 breast tumors, including 657 PDX and 174 human tumors, were collected. Based on MCFS and RF, 32 genes were then identified to be informative for the prediction of PDX and human tumors and can be used to construct a prediction model. The prediction model exhibits a Matthews correlation coefficient value of 0.777. Seven interpretable interactions within the informative genes were detected based on the rough set-based rule learning. Furthermore, the seven interpretable interactions can be well supported by previous experimental studies. Our study not only presents a method for identifying informative genes with differential expression but also provides insights into the mechanism through which gene expression changes after being transplanted from human tumor into mouse model. This work would be helpful for research and drug development for breast cancer.

  6. Identification of Differentially Expressed Genes between Original Breast Cancer and Xenograft Using Machine Learning Algorithms.

    Science.gov (United States)

    Wang, Deling; Li, Jia-Rui; Zhang, Yu-Hang; Chen, Lei; Huang, Tao; Cai, Yu-Dong

    2018-03-12

    Breast cancer is one of the most common malignancies in women. The patient-derived tumor xenograft (PDX) model is a cutting-edge approach for drug research on breast cancer. However, PDX still exhibits differences from original human tumors, thereby challenging the molecular understanding of tumorigenesis. In particular, gene expression changes after tissues are transplanted from human to mouse model. In this study, we propose a novel computational method by incorporating several machine learning algorithms, including Monte Carlo feature selection (MCFS), random forest (RF), and rough set-based rule learning, to identify genes with significant expression differences between PDX and original human tumors. First, 831 breast tumors, including 657 PDX and 174 human tumors, were collected. Based on MCFS and RF, 32 genes were then identified to be informative for the prediction of PDX and human tumors and can be used to construct a prediction model. The prediction model exhibits a Matthews correlation coefficient value of 0.777. Seven interpretable interactions within the informative genes were detected based on the rough set-based rule learning. Furthermore, the seven interpretable interactions can be well supported by previous experimental studies. Our study not only presents a method for identifying informative genes with differential expression but also provides insights into the mechanism through which gene expression changes after being transplanted from human tumor into mouse model. This work would be helpful for research and drug development for breast cancer.

  7. Identification of Optimal Path in Power System Network Using Bellman Ford Algorithm

    Directory of Open Access Journals (Sweden)

    S. Hemalatha

    2012-01-01

    Full Text Available Power system networks can undergo outages, during which there may be a partial or total blackout in the system. In that condition, transmission of power through the optimal path is an important problem in the process of reconfiguring power system components. For a given generation-load pair, there may be many possible paths to transmit the power. The optimal path needs to consider the shortest path (minimum losses), the capacity of the transmission line, voltage stability, priority of loads, and the power balance between generation and demand. In this paper, the Bellman-Ford Algorithm (BFA) is applied to find the optimal path and also several alternative paths while considering all of these constraints. In order to demonstrate the capability of BFA, it has been applied to a practical 230 kV network. This restorative path search guidance tool is quite efficient in finding the optimal as well as alternate paths for transmitting power from a generating station to the demand.
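
    The Bellman-Ford relaxation at the core of this approach can be sketched as follows; the toy 4-bus network and its loss-based edge weights are hypothetical, not the paper's 230 kV test system:

```python
def bellman_ford(edges, n_nodes, source):
    """Shortest-path distances from `source`; edges = [(u, v, weight), ...]."""
    INF = float("inf")
    dist = [INF] * n_nodes
    pred = [None] * n_nodes
    dist[source] = 0.0
    for _ in range(n_nodes - 1):          # relax every edge |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    for u, v, w in edges:                 # a further improvement means a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle detected")
    return dist, pred

# Hypothetical 4-bus network; weights stand in for line losses
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
dist, pred = bellman_ford(edges, 4, 0)
print(dist[3])   # optimal path 0 -> 2 -> 1 -> 3
```

Alternative restoration paths can then be enumerated by removing edges of the optimal path and re-running the search.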

  8. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    Science.gov (United States)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen outliers from the massive training samples of the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP aims to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also needed for global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to supervised classifications. Here we developed the global training samples from fine resolution (approximately 1 m) satellite data (QuickBird and WorldView-2) and then aggregated the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree, and it is impossible to manually screen 30 m resolution training samples collected globally. In Europe alone, for example, there are 174 training sites, with site sizes ranging from 4.5 km by 4.5 km to 8.1 km by 3.6 km and over six million training samples in total. Therefore, we developed this automated statistics-based algorithm to screen the training samples at two levels: site and scene. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel, the samples falling in each 10% interval forming one group. For each group, both univariate and multivariate outliers are detected and removed. The screening then escalates to the scene level, where a similar process with a looser threshold is applied to account for the possible variance due to site differences. 
We do not perform the screening process across the scenes because the scenes might vary due to
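
    The site-level screening described above (10% imperviousness bins, per-group outlier removal) can be sketched as follows; the z-score test is a simplified stand-in for the paper's univariate/multivariate outlier detection:

```python
import statistics

def screen_group(values, z_thresh=3.0):
    """Flag which values in one bin survive a z-score outlier screen
    (simplified stand-in for the paper's univariate/multivariate tests)."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return [True] * len(values)
    return [abs(v - mu) / sd <= z_thresh for v in values]

def screen_samples(samples, z_thresh=3.0):
    """samples = [(impervious_fraction, spectral_value), ...].
    Group by 10% imperviousness bin, then screen each bin separately."""
    bins = {}
    for frac, val in samples:
        bins.setdefault(min(int(frac * 10), 9), []).append(val)
    kept = {}
    for b, vals in sorted(bins.items()):
        keep = screen_group(vals, z_thresh)
        kept[b] = [v for v, k in zip(vals, keep) if k]
    return kept
```

Scene-level screening would reuse `screen_group` with a larger `z_thresh`, matching the "looser threshold" mentioned in the abstract.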

  9. Identification of faint central stars in extended, low-surface-brightness planetary nebulae

    International Nuclear Information System (INIS)

    Kwitter, K.B.; Lydon, T.J.; Jacoby, G.H.

    1988-01-01

    As part of a larger program to study the properties of planetary nebula central stars, a search for faint central stars in extended, low-surface-brightness planetary nebulae using CCD imaging is performed. Of 25 target nebulae, central star candidates have been identified in 17, with certainties ranging from extremely probable to possible. Observed V values in the central star candidates extend to fainter than 23 mag. The identifications are presented along with the resulting photometric measurements. 24 references

  10. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, Andrea [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Sen, Ramazan Sonat [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach couples system simulator codes with stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs; the large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the presently used legacy codes, which were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  11. Identification of candidate sites for a near surface repository for radioactive waste

    International Nuclear Information System (INIS)

    Motiejunas, S.

    2004-01-01

    This report comprises the results of the area survey stage, which involves regional screening to define the regions of interest and identification of potential sites within suitable regions. The main goal was to define a few sites potentially suitable for construction of the near surface repository. It was concluded that the vicinity of the Ignalina NPP is among the most suitable regions for the near surface repository. At the present investigation level, a ridge in Galilauke village has the most favorable conditions; however, the Apvardai site is potentially suitable for the repository too.

  12. Protein social behavior makes a stronger signal for partner identification than surface geometry

    Science.gov (United States)

    Laine, Elodie

    2016-01-01

    ABSTRACT Cells are interactive living systems where protein movements, interactions and regulation are substantially free from centralized management. How protein physico‐chemical and geometrical properties determine who interacts with whom remains far from fully understood. We show that characterizing how a protein behaves with many potential interactors in a complete cross‐docking study leads to a sharp identification of its cellular/true/native partner(s). We define a sociability index, or S‐index, reflecting whether or not a protein likes to pair with other proteins. Formally, we propose a suitable normalization function that accounts for protein sociability and we combine it with a simple interface‐based (ranking) score to discriminate partners from non‐interactors. We show that sociability is an important factor and that the normalization permits reaching a much higher discriminative power than shape complementarity docking scores. The social effect is also observed with more sophisticated docking algorithms. Docking conformations are evaluated using experimental binding sites, which approximate as closely as possible binding‐site predictions, whose accuracy has become high in recent years. This makes our analysis helpful for a global understanding of partner identification and for suggesting discriminating strategies. These results contradict previous findings claiming that the partner identification problem can be solved solely with geometrical docking. Proteins 2016; 85:137–154. © 2016 Wiley Periodicals, Inc. PMID:27802579

  13. Identification and real time control of current profile in Tore-supra: algorithms and simulation; Identification et controle en temps reel du profil de courant dans Tore Supra: algorithmes et simulations

    Energy Technology Data Exchange (ETDEWEB)

    Houy, P

    1999-10-15

    The aim of this work is to propose a real-time control of the current profile in order to achieve reproducible operating modes with improved energy confinement in tokamaks. The determination of the profile is based on measurements given by the interferometry and polarimetry diagnostics, and different ways to evaluate and improve the accuracy of these measurements are presented. The position and the shape of the plasma are controlled by the poloidal field system, which forces them to track reference values. Injection of gas, neutral particles, ice pellets or additional heating power are the technical means used to control other plasma parameters; these controls are performed by feedback loops. The poloidal field system of Tore Supra is presented. The main obstacle to a reliable determination of the current profile is the fact that slightly different Faraday angles lead to very different profiles. The direct identification method exposed in this work gives the profile that minimizes the squared difference between measured and computed values. The different algorithms proposed to control current profiles on Tore Supra have been validated using a plasma simulation based on the CRONOS code, which solves the resistive current diffusion equation. (A.C.)

  14. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm

  15. A Case Study on Maximizing Aqua Feed Pellet Properties Using Response Surface Methodology and Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya

    2013-01-10

    Aims: The present case study is on maximizing aqua feed pellet properties using response surface methodology and a genetic algorithm. Study Design: The effects of extrusion process variables, namely screw speed, L/D ratio, barrel temperature, and feed moisture content, were analyzed to maximize aqua feed properties such as water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable length single screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties, namely water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design and further analyzed using response surface methodology (RSM) and a genetic algorithm (GA) to maximize the feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated that WS, ER, and TD were maximized at an L/D ratio of 12-13, screw speed of 60-80 rpm, feed moisture content of 30-40%, and barrel temperature of = 80 degrees C for ER and TD and > 90 degrees C for WS. Based on GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, L/D ratio of 13.67, barrel temperature of 96.26 degrees C, and feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%, respectively. 
Conclusion: The present data analysis indicated
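
    The GA optimization step can be sketched as a simple real-coded genetic algorithm maximizing a fitted response surface; the quadratic fitness below is a hypothetical stand-in for the paper's regression equations, with only two of the four process variables kept for brevity:

```python
import random

random.seed(42)

def fitness(x):
    """Hypothetical fitted response surface for water stability (%):
    an illustrative quadratic, NOT the paper's regression equation."""
    speed, moisture = x
    return 98 - 0.002 * (speed - 95) ** 2 - 0.05 * (moisture - 34) ** 2

BOUNDS = [(60, 120), (30, 40)]   # screw speed (rpm), feed moisture (%)

def ga_maximize(pop_size=30, gens=60, mut=0.5):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]       # crossover
            child = [min(max(g + random.gauss(0, mut), lo), hi)   # mutation
                     for g, (lo, hi) in zip(child, BOUNDS)]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga_maximize()
```

In the study, the fitted RSM regression equations would play the role of `fitness`, with all four process variables in the chromosome.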

  16. Dynamic Water Surface Detection Algorithm Applied on PROBA-V Multispectral Data

    Directory of Open Access Journals (Sweden)

    Luc Bertels

    2016-12-01

    Full Text Available Water body detection worldwide using spaceborne remote sensing is a challenging task, and a global scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global syntheses with spatial resolutions of 1 km and 333 m. The Red, Near-InfRared (NIR) and Short Wave InfRared (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision tree classification for water body detection. To minimize commission errors four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), Water Body Potential Mask (WBPM), Permanent Glacier Mask (PGM) and Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Besides the water bodies layer, a quality layer based on the water body occurrences is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained, while a mean Omission Error (OE) of 15.4% was obtained for a minimum Water Surface Ratio (WSR) of 0.5, dropping to 9.8% for minimum WSR = 0.6. Here, WSR is defined as the fraction of the PROBA-V pixel covered by water as derived from high spatial resolution images, e.g., Landsat 8. Both the CE = 1.5% and OE = 9.8% (WSR = 0.6) fall within the user requirement of 15%. The WBDA is fully operational in the Copernicus Global Land Service and products are freely available.
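
    The HSV transform plus parabolic hue/value threshold can be sketched as follows; the three bands are fed into a standard RGB-to-HSV conversion as in the paper, but the parabola coefficients here are illustrative stand-ins, not the published ones:

```python
import colorsys

def water_flag(red, nir, swir):
    """Classify one pixel as water from (Red, NIR, SWIR) reflectances in [0, 1].
    The bands stand in for the R, G, B channels of the HSV transform."""
    h, s, v = colorsys.rgb_to_hsv(red, nir, swir)
    # Parabolic threshold on (hue, value): water pixels are dark (low value)
    # with hue near the water locus. Coefficients are hypothetical.
    v_max = 0.15 - (h - 0.05) ** 2
    return v <= v_max
```

In the operational algorithm this decision is one node of a decision tree, further gated by the NDVI, WBPM, PGM and VSM layers to suppress commission errors.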

  17. Real-time intelligent pattern recognition algorithm for surface EMG signals

    Directory of Open Access Journals (Sweden)

    Jahed Mehran

    2007-12-01

    Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (surface electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. By using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean square (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of EMG pattern recognition. To design this system we propose to use two different sets of EMG features, namely time domain (TD) features and time-frequency representations (TFR). Also, in order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. 
Features chosen for EMG signal

  18. A pilot study of a heuristic algorithm for novel template identification from VA electronic medical record text.

    Science.gov (United States)

    Redd, Andrew M; Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Tran, Le-Thuy; Samore, Matthew H

    2017-07-01

    Templates in text notes pose challenges for automated information extraction algorithms. We propose a method that identifies novel templates in plain text medical notes. The identification can then be used to either include or exclude templates when processing notes for information extraction. The two-module method is based on the framework of information foraging and addresses the hypothesis that documents containing templates, and the templates within those documents, can be identified by common features. The first module takes documents from the corpus and groups those with common templates. This is accomplished through a binned word count hierarchical clustering algorithm. The second module extracts the templates: it uses the groupings and performs a longest common subsequence (LCS) algorithm to obtain the constituent parts of the templates. The method was developed and tested on a random document corpus of 750 notes derived from a large database of US Department of Veterans Affairs (VA) electronic medical notes. The grouping module, using hierarchical clustering, identified 23 groups with 3 documents or more, consisting of 120 of the 750 documents in our test corpus. Of these, 18 groups had at least one common template that was present in all documents in the group, for a positive predictive value of 78%. The LCS extraction module performed with 100% positive predictive value, 94% sensitivity, and 83% negative predictive value. Human review determined that in 4 groups the template covered the entire document, with the remaining 14 groups containing a common section template. Among documents with templates, the number of templates per document ranged from 1 to 14. The mean and median number of templates per group were 5.9 and 5, respectively. The grouping method was successful in finding like documents containing templates. Of the groups of documents containing templates, the LCS module was successful in deciphering text belonging to the template
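
    The LCS extraction step can be sketched with Python's difflib; matching blocks give a common subsequence of the token streams (a practical approximation of a full LCS), folded pairwise across a group of documents. The sample notes below are invented for illustration:

```python
from difflib import SequenceMatcher

def lcs_tokens(a, b):
    """Common subsequence of two token lists via difflib's matching
    blocks (a contiguous-block approximation of a full LCS)."""
    sm = SequenceMatcher(None, a, b, autojunk=False)
    out = []
    for block in sm.get_matching_blocks():
        out.extend(a[block.a : block.a + block.size])
    return out

def common_template(docs):
    """Fold pairwise over a group of documents to approximate the
    template text they share; variable fields drop out."""
    tokens = docs[0].split()
    for doc in docs[1:]:
        tokens = lcs_tokens(tokens, doc.split())
    return " ".join(tokens)

# Hypothetical templated notes: the boilerplate survives, the values drop out
notes = ["VITALS: BP 120/80 HR 72 note alpha",
         "VITALS: BP 130/85 HR 80 note beta"]
print(common_template(notes))
```

A full dynamic-programming LCS would behave similarly here; difflib keeps the sketch short and dependency-free.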

  19. An Improved Mono-Window Algorithm for Land Surface Temperature Retrieval from Landsat 8 Thermal Infrared Sensor Data

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2015-04-01

    Full Text Available The successful launch of the Landsat 8 satellite with two thermal infrared bands on February 11, 2013, for continuous Earth observation provided another opportunity for remote sensing of land surface temperature (LST). However, calibration notices issued by the United States Geological Survey (USGS) indicated that data from the Landsat 8 Thermal Infrared Sensor (TIRS) Band 11 have large uncertainty and suggested using TIRS Band 10 data as a single spectral band for LST estimation. In this study, we present an improved mono-window (IMW) algorithm for LST retrieval from the Landsat 8 TIRS Band 10 data. Three essential parameters (ground emissivity, atmospheric transmittance and effective mean atmospheric temperature) are required for the IMW algorithm to retrieve LST. A new method was proposed to estimate the effective mean atmospheric temperature from local meteorological data, while the other two essential parameters could both be estimated through the so-called land cover approach. Sensitivity analysis conducted for the IMW algorithm revealed that the possible error in estimating the required atmospheric water vapor content has the most significant impact on the probable LST estimation error. Under moderate errors in both water vapor content and ground emissivity, the algorithm had an accuracy of ~1.4 K for LST retrieval. Validation of the IMW algorithm using simulated datasets for various situations indicated that the LST difference between the retrieved and the simulated values was 0.67 K on average, with an RMSE of 0.43 K. Comparison of our IMW algorithm with the single-channel (SC) algorithm for three main atmosphere profiles indicated that the average error and RMSE of the IMW algorithm were −0.05 K and 0.84 K, respectively, less than the −2.86 K and 1.05 K of the SC algorithm. Application of the IMW algorithm to Nanjing and its vicinity in east China resulted in a reasonable LST estimation for the region. Spatial
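
    Mono-window retrieval of this family (following Qin et al.) has the closed form LST = [a(1 − C − D) + (b(1 − C − D) + C + D)·T10 − D·Ta] / C, with C = ετ and D = (1 − τ)(1 + (1 − ε)τ); a sketch using the classic Landsat-5 linearization coefficients as placeholders for the Band-10 coefficients fitted in the paper:

```python
def mono_window_lst(t10, emissivity, transmittance, t_atm_eff,
                    a=-67.36, b=0.459):
    """Mono-window LST (K) from Band-10 brightness temperature t10 (K).
    a, b linearize the Planck function over the expected temperature range;
    the defaults are the classic Landsat-5 values, used here only as
    placeholders for the Band-10 coefficients fitted in the paper."""
    c = emissivity * transmittance
    d = (1 - transmittance) * (1 + (1 - emissivity) * transmittance)
    return (a * (1 - c - d) + (b * (1 - c - d) + c + d) * t10
            - d * t_atm_eff) / c

# Hypothetical inputs: t10 = 300 K, emissivity 0.97, transmittance 0.8,
# effective mean atmospheric temperature 285 K
print(mono_window_lst(300.0, 0.97, 0.8, 285.0))
```

A useful sanity check on the form: with a perfect blackbody and a fully transparent atmosphere (ε = τ = 1), C = 1 and D = 0, so the retrieved LST collapses to the brightness temperature itself.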

  20. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Science.gov (United States)

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  1. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Directory of Open Access Journals (Sweden)

    Vsevolod Afanasyev

    Full Text Available A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  2. Localization of accessory pathway in patients with wolff-parkinson-white syndrome from surface ecg using arruda algorithm

    International Nuclear Information System (INIS)

    Saidullah, S.; Shah, B.

    2016-01-01

    Background: To ablate an accessory pathway successfully and conveniently, accurate localization of the pathway is needed. Electrophysiologists apply different algorithms before taking patients to the electrophysiology (EP) laboratory in order to plan the intervention accordingly. In this study, we used the Arruda algorithm to locate the accessory pathway; the objective of the study was to determine the accuracy of the Arruda algorithm for locating the pathway on surface ECG. Methods: It was a cross-sectional observational study conducted from January 2014 to January 2016 in the electrophysiology department of Hayat Abad Medical Complex, Peshawar, Pakistan. A total of fifty-nine (n=59) consecutive patients of both genders between 14-60 years of age who presented with WPW syndrome (symptomatic tachycardia with a delta wave on surface ECG) were included in the study. Each patient's electrocardiogram (ECG) was analysed with the Arruda algorithm before the patient was taken to the laboratory. A standard four-wire protocol was used for the EP study before ablation. Once the findings were confirmed, the pathway was ablated as per standard guidelines. Results: A total of fifty-nine (n=59) patients between 14-60 years of age were included in the study. The cumulative mean age was 31.5 years ± 12.5 SD. There were 56.4% (n=31) males with mean age 28.2 years ± 10.2 SD and 43.6% (n=24) females with mean age 35.9 years ± 14.0 SD. The Arruda algorithm was found to be accurate in predicting the exact accessory pathway (AP) in 83.6% (n=46) of cases. Among all inaccurate predictions (n=9), Arruda inaccurately predicted two thirds (n=6; 66.7%) of pathways towards the right side (right posteroseptal, right posterolateral and right anterolateral). Conclusion: The Arruda algorithm was found to be highly accurate in predicting the accessory pathway before ablation. (author)

  3. Rapid and Direct VHH and Target Identification by Staphylococcal Surface Display Libraries.

    Science.gov (United States)

    Cavallari, Marco

    2017-07-12

    Unbiased and simultaneous identification of a specific antibody and its target antigen has been difficult without prior knowledge of at least one interaction partner. Immunization with complex mixtures of antigens such as whole organisms and tissue extracts including tumoral ones evokes a highly diverse immune response. During such a response, antibodies are generated against a variety of epitopes in the mixture. Here, we propose a surface display design that is suited to simultaneously identify camelid single domain antibodies and their targets. Immune libraries of single-domain antigen recognition fragments from camelid heavy chain-only antibodies (VHH) were attached to the peptidoglycan of Gram-positive Staphylococcus aureus employing its endogenous housekeeping sortase enzyme. The sortase transpeptidation reaction covalently attached the VHH to the bacterial peptidoglycan. The reversible nature of the reaction allowed the recovery of the VHH from the bacterial surface and the use of the VHH in downstream applications. These staphylococcal surface display libraries were used to rapidly identify VHH as well as their targets by immunoprecipitation (IP). Our novel bacterial surface display platform was stable under harsh screening conditions, allowed fast target identification, and readily permitted the recovery of the displayed VHH for downstream analysis.

  4. Rapid and Direct VHH and Target Identification by Staphylococcal Surface Display Libraries

    Directory of Open Access Journals (Sweden)

    Marco Cavallari

    2017-07-01

    Full Text Available Unbiased and simultaneous identification of a specific antibody and its target antigen has been difficult without prior knowledge of at least one interaction partner. Immunization with complex mixtures of antigens such as whole organisms and tissue extracts, including tumoral ones, evokes a highly diverse immune response. During such a response, antibodies are generated against a variety of epitopes in the mixture. Here, we propose a surface display design that is suited to simultaneously identify camelid single domain antibodies and their targets. Immune libraries of single-domain antigen recognition fragments from camelid heavy chain-only antibodies (VHH) were attached to the peptidoglycan of Gram-positive Staphylococcus aureus employing its endogenous housekeeping sortase enzyme. The sortase transpeptidation reaction covalently attached the VHH to the bacterial peptidoglycan. The reversible nature of the reaction allowed the recovery of the VHH from the bacterial surface and the use of the VHH in downstream applications. These staphylococcal surface display libraries were used to rapidly identify VHH as well as their targets by immunoprecipitation (IP). Our novel bacterial surface display platform was stable under harsh screening conditions, allowed fast target identification, and readily permitted the recovery of the displayed VHH for downstream analysis.

  5. Genus- and species-level identification of dermatophyte fungi by surface-enhanced Raman spectroscopy

    Science.gov (United States)

    Witkowska, Evelin; Jagielski, Tomasz; Kamińska, Agnieszka

    2018-03-01

    This paper demonstrates that surface-enhanced Raman spectroscopy (SERS) coupled with principal component analysis (PCA) can serve as a fast and reliable technique for the detection and identification of dermatophyte fungi at both the genus and species level. Dermatophyte infections are the most common mycotic diseases worldwide, affecting a quarter of the human population. Currently, there is no optimal method for the detection and identification of fungal diseases, as each has certain limitations. Here, for the first time, we have achieved, with high accuracy, differentiation of dermatophytes representing three major genera, i.e. Trichophyton, Microsporum, and Epidermophyton. The first two principal components (PCs), PC-1 and PC-2, together accounted for 97% of the total variance. Additionally, species-level identification within the Trichophyton genus has been performed. PC-1 and PC-2, which are the most diagnostically significant, explain 98% of the variance in the data obtained from spectra of Trichophyton rubrum, Trichophyton mentagrophytes, Trichophyton interdigitale and Trichophyton tonsurans. This study offers a new diagnostic approach for the identification of dermatophytes. Being fast, reliable and cost-effective, it has the potential to be incorporated into clinical practice to improve the diagnostics of medically important fungi.

  6. [Quantitative analysis of thiram by surface-enhanced raman spectroscopy combined with feature extraction Algorithms].

    Science.gov (United States)

    Zhang, Bao-hua; Jiang, Yong-cheng; Sha, Wen; Zhang, Xian-yi; Cui, Zhi-feng

    2015-02-01

    Three feature extraction algorithms, principal component analysis (PCA), the discrete cosine transform (DCT) and non-negative matrix factorization (NMF), were used to extract the main information from the spectral data in order to weaken the influence of spectral fluctuation on the subsequent quantitative analysis, based on the SERS spectra of the pesticide thiram. The extracted components were then combined with a linear regression algorithm, partial least squares regression (PLSR), and with a non-linear regression algorithm, support vector machine regression (SVR), to develop quantitative analysis models. Finally, the effect of the different feature extraction algorithms on the two kinds of regression algorithm was evaluated using 5-fold cross-validation. The experiments demonstrate that SVR gives better results than PLSR, owing to the non-linear relationship between the intensity of the SERS spectrum and the concentration of the analyte. Further, the feature extraction algorithms significantly improve the analysis results regardless of the regression algorithm, mainly because they extract the main information of the source spectra and eliminate the fluctuation. Additionally, PCA performs best with the linear regression model and NMF performs best with the non-linear model; in the best case the predictive error is reduced nearly three-fold. The root mean square error of cross-validation of the best regression model (NMF+SVR) is 0.0455 micromol x L(-1) (10(-6) mol x L(-1)), which attains the national detection limit for thiram, so the method in this study provides a novel approach for the fast detection of thiram. In conclusion, the study provides experimental guidance for selecting feature extraction algorithms in the analysis of SERS spectra, and some of the general findings on feature extraction may also help in processing other kinds of spectroscopic data.
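    A minimal numpy-only sketch of such a pipeline, using PCA features with an ordinary least-squares fit as a simplified stand-in for the paper's PLSR/SVR models, and 5-fold cross-validation to obtain an RMSECV; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented stand-in for SERS spectra of thiram at known concentrations:
# one band scales with concentration, plus a fluctuating baseline.
n, bins = 40, 100
conc = rng.uniform(0.05, 1.0, n)                    # "concentrations"
band = np.exp(-0.5 * ((np.arange(bins) - 50) / 5.0) ** 2)
X = (np.outer(conc, band)
     + 0.1 * np.outer(rng.normal(1.0, 0.2, n), np.ones(bins))  # baseline
     + 0.01 * rng.standard_normal((n, bins)))

def pca_features(Xtr, Xte, k=3):
    # Feature extraction: project both sets onto the training-set PCs
    mu = Xtr.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
    return (Xtr - mu) @ Vt[:k].T, (Xte - mu) @ Vt[:k].T

# 5-fold cross-validation of a least-squares fit on the PCA scores
idx = rng.permutation(n)
sq_err = []
for te in np.array_split(idx, 5):
    tr = np.setdiff1d(idx, te)
    Ftr, Fte = pca_features(X[tr], X[te])
    A = np.column_stack([Ftr, np.ones(tr.size)])
    beta, *_ = np.linalg.lstsq(A, conc[tr], rcond=None)
    pred = np.column_stack([Fte, np.ones(te.size)]) @ beta
    sq_err.extend(((pred - conc[te]) ** 2).tolist())
rmsecv = float(np.sqrt(np.mean(sq_err)))
```

The fluctuating baseline is absorbed into one principal component, so the regression on the remaining features stays accurate, which is the effect the paper attributes to feature extraction.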

  7. Smell identification of spices using nanomechanical membrane-type surface stress sensors

    Science.gov (United States)

    Imamura, Gaku; Shiba, Kota; Yoshikawa, Genki

    2016-11-01

    Artificial olfaction, that is, a chemical sensor system that identifies samples by smell, has not been fully achieved because of the complex perceptual mechanism of olfaction. To realize an artificial olfactory system, not only an array of chemical sensors but also a valid feature extraction method is required. In this study, we achieved the identification of spices by smell using nanomechanical membrane-type surface stress sensors (MSS). Features were extracted from the sensing signals obtained from four MSS coated with different types of polymers, focusing on the chemical interactions between the polymers and odor molecules. Principal component analysis (PCA) of the dataset consisting of the extracted parameters demonstrated the separation of each spice on the scatter plot. We discuss strategies for improving odor identification based on the relationship between the PCA results and the chemical species in the odors.
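    The feature-extraction idea, reducing each channel's transient to a few interaction-sensitive parameters before PCA, can be sketched as follows; the response model, time constants and amplitudes are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(0.0, 10.0, 500)           # seconds

def mss_response(tau_sorb, amp):
    # Stand-in MSS channel signal: exponential sorption transient plus
    # readout noise (the real signal reflects polymer-odorant kinetics)
    return amp * (1.0 - np.exp(-t / tau_sorb)) + 0.01 * rng.standard_normal(t.size)

def extract_features(sig):
    # Two simple per-channel features: steady-state level and initial
    # slope, proxies for interaction strength and sorption speed
    level = float(sig[-50:].mean())
    slope = float((sig[25] - sig[0]) / (t[25] - t[0]))
    return level, slope

# Two "spices" on one polymer-coated channel; tau/amp values are invented
clove = extract_features(mss_response(tau_sorb=0.8, amp=1.0))
pepper = extract_features(mss_response(tau_sorb=2.5, amp=0.6))
```

Collecting such (level, slope) pairs across the four polymer coatings yields the parameter vectors that feed the PCA scatter plot.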

  8. Algorithm for Recovery of Integrated Water Vapor Content in the Atmosphere over Land Surfaces Based on Satellite Spectroradiometer Data

    Science.gov (United States)

    Lisenko, S. A.

    2017-05-01

    An algorithm is proposed for making charts of the distribution of water vapor in the atmosphere based on multispectral images of the earth from the Ocean and Land Colour Instrument (OLCI) on board the European research satellite Sentinel-3. The algorithm is based on multiple regression fits linking the spectral brightness coefficients at the upper boundary of the atmosphere and the geometric parameters of the satellite observation (solar and viewing angles) to the total water vapor content in the atmosphere. A regression equation is derived from experimental data on the variation in the optical characteristics of the atmosphere and underlying surface, together with Monte Carlo calculations of the radiative transfer characteristics. The equation includes the brightness coefficients in the near-IR channels of the OLCI for the absorption bands of water vapor and oxygen, as well as for the transparency windows of the atmosphere. Together these make it possible to eliminate the effect of the reflection spectrum of the underlying surface and of air pressure on the accuracy of the measurements. The algorithm is tested using data from a precursor of OLCI, the Medium Resolution Imaging Spectrometer (MERIS). A sample chart of the distribution of water vapor in the atmosphere over Eastern Europe is constructed without using subsatellite data or digital models of the surface relief. The water vapor contents in the atmosphere determined from MERIS images agree with ground-based measurements from the Aerosol Robotic Network (AERONET) with a mean square deviation of 1.24 kg/m2.
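    A toy version of the regression retrieval can illustrate the idea. The Beer's-law forward model, absorption coefficient and noise level below are assumptions for illustration only, far simpler than the paper's Monte Carlo radiative transfer:

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed toy forward model: the ratio R of a water-vapor absorption channel
# to a nearby window channel follows Beer's law, R = exp(-k * W * m), where
# W is the column water vapor (kg/m2) and m the air-mass factor from the
# solar/viewing geometry.
k = 0.02
W = rng.uniform(5.0, 50.0, 500)
m = 1.0 / np.cos(np.deg2rad(rng.uniform(0.0, 60.0, 500)))
R = np.exp(-k * W * m) * (1.0 + 0.01 * rng.standard_normal(500))

# Regression retrieval of W from the band ratio and the viewing geometry
y = -np.log(R) / m                        # approximately k * W
coef = np.polyfit(y, W, 1)
W_hat = np.polyval(coef, y)
rmse = float(np.sqrt(np.mean((W_hat - W) ** 2)))
```

Dividing out the air-mass factor is the toy analogue of using the viewing geometry in the regression; the residual error comes only from the radiometric noise.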

  9. Identification of linear features at geothermal field based on Segment Tracing Algorithm (STA) of the ALOS PALSAR data

    Science.gov (United States)

    Haeruddin; Saepuloh, A.; Heriawan, M. N.; Kubo, T.

    2016-09-01

    Indonesia has about 40% of the geothermal energy resources in the world. One area with geothermal energy potential in Indonesia is Wayang Windu, located in West Java Province. A comprehensive understanding of the geothermal system in this area is indispensable for its continued development. A geothermal system is generally associated with joints or fractures that serve as paths for geothermal fluids migrating to the surface. The fluid paths are identified by the existence of surface manifestations such as fumaroles, solfataras and the presence of alteration minerals. Therefore, the analysis of linear features related to geological structures is crucial for identifying geothermal potential. Fractures or joints in the form of geological structures are associated with linear features in satellite images. The Segment Tracing Algorithm (STA) was used as the basis for determining the linear features. In this study, we used ALOS PALSAR satellite images in ascending and descending orbit modes. The linear features obtained from the satellite images could be validated by field observations. Based on the application of STA to the ALOS PALSAR data, the general directions of the extracted linear features were WNW-ESE, NNE-SSW and NNW-SSE. These directions are consistent with the general direction of the fault system observed in the field. The linear features extracted from ALOS PALSAR data using STA were very useful for identifying the fractured zones in the geothermal field.

  10. Generation of synthetic surface electromyography signals under fatigue conditions for varying force inputs using feedback control algorithm.

    Science.gov (United States)

    Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S

    2017-11-01

    Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. There are several techniques to extract information about the force exerted by muscles during any activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current distribution and volume conductor relations, with a feedback control algorithm for rate coding and generation of the firing pattern. The results show that the synthetic surface electromyography signals are highly complex under both non-fatigue and fatigue conditions. Furthermore, the signals have higher amplitude and lower frequency under the fatigue condition. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.
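    A highly simplified sketch of such synthesis, summing motor-unit action potential trains and mimicking fatigue with slower, larger potentials and a reduced firing rate (the waveform shape and all parameters are assumed, not the paper's volume-conductor model), reproduces the reported amplitude and frequency trends:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 2000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

def muap(width):
    # Motor unit action potential modeled as an assumed biphasic wavelet
    tau = np.arange(-0.01, 0.01, 1 / fs)
    return -tau * np.exp(-(tau / width) ** 2)

def semg(width, gain, rate, n_units=20):
    # Sum of random motor-unit spike trains convolved with the MUAP shape
    sig = np.zeros_like(t)
    shape = gain * muap(width)
    for _ in range(n_units):
        spikes = np.nonzero(rng.random(t.size) < rate / fs)[0]
        for s in spikes:
            seg = shape[: t.size - s]
            sig[s:s + seg.size] += seg
    return sig

def mean_freq(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, 1 / fs)
    return float(np.sum(f * spec) / np.sum(spec))

# Fatigue is mimicked by slower, larger MUAPs and a lower firing rate
fresh = semg(width=0.002, gain=1.0, rate=15.0)
tired = semg(width=0.004, gain=1.8, rate=12.0)
rms_fresh = float(np.sqrt(np.mean(fresh ** 2)))
rms_tired = float(np.sqrt(np.mean(tired ** 2)))
```

Even this crude model shows the fatigue signature the abstract describes: higher RMS amplitude together with a downward shift of the mean power frequency.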

  11. A Specified Procedure for Distress Identification and Assessment for Urban Road Surfaces Based on PCI

    Directory of Open Access Journals (Sweden)

    Giuseppe Loprencipe

    2017-04-01

    Full Text Available In this paper, a simplified procedure for the assessment of pavement structural integrity and level of service for urban road surfaces is presented. A sample of 109 Asphalt Concrete (AC) urban pavements of an Italian road network was considered to validate the methodology. As part of this research, statistical analysis was used to determine which of the distress types listed in ASTM D6433 recur most often, which were never encountered, and which observed defects are not defined in the standard. The goal of this research is to improve the ASTM D6433 distress identification catalogue by adapting it to urban road surfaces. The presented methodology includes the implementation of a Visual Basic for Applications (VBA) program for the computerization of the Pavement Condition Index (PCI) calculation, with interpolation by parametric cubic splines of all of the density/deduct-value curves of the ASTM D6433 distress types. Also, two new distress definitions (for manholes and for tree roots) and new density/deduct-value curves were proposed to achieve a new distress identification manual for urban road pavements. To validate the presented methodology, the PCI was calculated for the 109 urban pavements using the new distress catalogue and using ASTM D6433 as implemented in PAVER™. The results of the linear regression between them and their statistical parameters are presented in this paper. The comparison of the results shows that the proposed method is suitable for the identification and assessment of observed distress in urban pavement surfaces on the PCI-based scale.
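    The core PCI computation, interpolating a density/deduct-value curve and subtracting the deduct from 100, can be sketched as below. The curve values are invented, and piecewise-linear interpolation in log-density stands in for the paper's parametric cubic splines:

```python
import numpy as np

# Hypothetical density -> deduct-value curve for one distress type at one
# severity level (ASTM D6433 curves are drawn on a log-density axis; the
# numbers below are invented for illustration, not taken from the standard)
density = np.array([0.1, 1.0, 10.0, 100.0])   # distress density, % of area
deduct = np.array([2.0, 10.0, 35.0, 70.0])    # deduct value

def deduct_value(d):
    # Piecewise-linear interpolation in log-density, a simple stand-in for
    # the parametric cubic splines used in the paper's VBA program
    return float(np.interp(np.log10(np.clip(d, 0.1, 100.0)),
                           np.log10(density), deduct))

# Simplified single-distress PCI; the full ASTM procedure combines several
# distresses through corrected deduct values before subtracting from 100.
pci = 100.0 - deduct_value(5.0)
```

A full implementation would evaluate one such curve per distress type and severity, then apply the corrected-deduct-value procedure before the final subtraction.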

  12. Identification card and codification of the chemical and morphological characteristics of 14 dental implant surfaces.

    Science.gov (United States)

    Dohan Ehrenfest, David M; Vazquez, Lydia; Park, Yeong-Joon; Sammartino, Gilberto; Bernard, Jean-Pierre

    2011-10-01

    Dental implants are commonly used in daily practice; however, most surgeons do not really know the characteristics of the biomedical devices they are placing in their patients. The objective of this work is to describe the chemical and morphological characteristics of 14 implant surfaces available on the market and to establish a simple and clear identification (ID) card for all of them, following the classification procedure developed in the Dohan Ehrenfest et al (2010) Codification (DEC) system. Fourteen implant surfaces were characterized: TiUnite (Nobel Biocare), Ospol (Ospol), Kohno HRPS (Sweden & Martina), Osseospeed (AstraTech), Ankylos (Dentsply Friadent), MTX (Zimmer), Promote (Camlog), BTI Interna (Biotechnology Institute), EVL Plus (SERF), Twinkon Ref (Tekka), Ossean (Intra-Lock), NanoTite (Biomet 3I), SLActive (ITI Straumann), Integra-CP/NanoTite (Bicon). Three samples of each implant were analyzed. The superficial chemical composition was analyzed using X-ray photoelectron spectroscopy/electron spectroscopy for chemical analysis, and the 100 nm in-depth profile was established using Auger electron spectroscopy. The microtopography was quantified using light interferometry. The general morphology and nanotopography were evaluated using a field emission scanning electron microscope. Finally, the characterization code of each surface was established using the DEC system, and the main characteristics of each surface were summarized in a reader-friendly ID card. From a chemical standpoint, of the 14 different surfaces, 10 were based on commercially pure titanium (grade 2 or 4), 3 on a titanium-aluminum alloy (grade 5 titanium), and one on a calcium phosphate core. Nine surfaces presented different forms of chemical impregnation or discontinuous coating of the titanium core, and 3 surfaces were covered with residual alumina-blasting particles. Twelve surfaces presented different degrees of inorganic pollution, and 2 presented severe organic pollution.

  13. A Situational-Awareness System For Networked Infantry Including An Accelerometer-Based Shot-Identification Algorithm For Direct-Fire Weapons

    Science.gov (United States)

    2016-09-01

    ...algorithm to compete with Google Earth for system resources. Laboratory development of the code and initial system testing occurred on a Dell XPS desktop... collected from inertial sensors attached to Armalite Rifle 15 (AR15) variant weapons. In spite of under-sampling limitations, the shot-identification... Keywords: sampling, under-sampling, Euler angles, firing azimuth, YEI, GPS, Google Earth, mapping

  14. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    Science.gov (United States)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods, which rely on expensive devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments was performed to extract the feature of interest: the correlation between amplitude changes in the relevant vibration frequency band(s) and surface quality. The experimental results demonstrate that the change of amplitude in selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and that the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed while maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.
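    The band-energy idea, computing the root sum square of wavelet detail coefficients per frequency band and using it as a surface-quality indicator, can be sketched with a Haar filter bank standing in for the B-spline wavelets; the vibration signals below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 10000.0
t = np.arange(0.0, 0.5, 1 / fs)

def band_rss(x, levels=5):
    # Root-sum-square of detail coefficients per dyadic band. A Haar filter
    # bank is used here as a dependency-free stand-in for the paper's
    # B-spline wavelet decomposition.
    energies = []
    a = x.copy()
    for _ in range(levels):
        if a.size % 2:
            a = a[:-1]
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)    # detail (high) band
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)    # approximation (low) band
        energies.append(float(np.sqrt(np.sum(d ** 2))))
    return np.array(energies)

# Synthetic vibration: "poor" cutting adds strong mid-frequency chatter
base = 0.1 * rng.standard_normal(t.size)
good = base + 0.05 * np.sin(2 * np.pi * 60.0 * t)
poor = base + 0.8 * np.sin(2 * np.pi * 1250.0 * t)

quality_good = float(np.sqrt(np.sum(band_rss(good) ** 2)))
quality_poor = float(np.sqrt(np.sum(band_rss(poor) ** 2)))
```

The chatter component lands in the detail bands and inflates the root-sum-square indicator, which is the quantity the paper proposes to feed back into the machine's speed control.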

  15. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    Science.gov (United States)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and are read out using Anger logic. The interaction position of the gamma-ray must be assigned to a crystal using a crystal position map or look-up table, making crystal identification a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO based animal PET system via Lu-176 background radiation and the mean shift algorithm. Single photon event data of the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is deducted from the SPFM by subtracting the CFM. Then, the peaks of the outer layer are also identified using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method was verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for performing crystal identification on dual-layer-offset lutetium-based PET systems instead of using external radiation sources.
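    The peak-finding step can be illustrated with a small synthetic flood map and a plain flat-kernel mean shift; the event data, grid and bandwidth below are invented, and a real flood map has far more peaks with position-dependent blur:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic "flood map" events: 2D event positions clustered around a 3x3
# grid of crystal peaks (a stand-in for the Lu-176 single-photon flood map)
centers = np.array([[x, y] for x in (10.0, 20.0, 30.0)
                          for y in (10.0, 20.0, 30.0)])
events = np.vstack([c + 0.8 * rng.standard_normal((150, 2)) for c in centers])

def mean_shift(points, bandwidth=3.0, iters=15):
    # Flat-kernel mean shift: each query point repeatedly moves to the
    # centroid of the data points inside its bandwidth window, climbing
    # toward a local density peak.
    modes = points.copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            near = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    # Merge converged modes that landed on the same peak
    peaks = []
    for m in modes:
        if all(np.linalg.norm(m - q) >= bandwidth / 2 for q in peaks):
            peaks.append(m)
    return np.array(peaks)

peaks = mean_shift(events)
```

Each recovered peak then seeds the distance criterion that assigns every event position to its nearest crystal.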

  16. Pattern recognition of concrete surface cracks and defects using integrated image processing algorithms

    Science.gov (United States)

    Balbin, Jessie R.; Hortinela, Carlos C.; Garcia, Ramon G.; Baylon, Sunnycille; Ignacio, Alexander Joshua; Rivera, Marco Antonio; Sebastian, Jaimie

    2017-06-01

    Pattern recognition of concrete surface crack defects is very important in determining the stability of structures such as buildings, roads and bridges. Surface cracks are one of the main subjects of inspection, diagnosis and maintenance, as well as of life prediction for the safety of structures. Traditionally, defects and cracks on concrete surfaces are determined manually by inspection. Moreover, any internal defects in the concrete would require destructive testing for detection. The researchers created an automated surface crack detection system for concrete using image processing techniques including the Hough transform, LoG weighting, dilation, grayscale conversion, Canny edge detection and the Haar wavelet transform. An automatic surface crack detection robot was designed to capture the concrete surface by a sectoring method. Surface crack classification was done with a Haar-trained cascade object detector that uses both positive and negative samples, which proved that it is possible to effectively identify surface crack defects.
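    A minimal sketch of the edge-based detection idea, with a gradient-magnitude threshold standing in for the full Canny stage and a simple pixel-count rule standing in for the cascade classifier; the image is synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic 64x64 "concrete surface": light gray texture with a dark
# diagonal crack (invented data; the paper works on camera images)
img = np.full((64, 64), 180.0) + 5.0 * rng.standard_normal((64, 64))
for i in range(10, 54):
    img[i, i] = 40.0                      # crack pixels are much darker

# Simplified stand-in for the Canny stage of the paper's pipeline:
# central-difference gradients and a magnitude threshold
gx = np.zeros_like(img)
gy = np.zeros_like(img)
gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
gy[1:-1, :] = img[2:, :] - img[:-2, :]
edges = np.hypot(gx, gy) > 100.0

# Classification step reduced to a pixel-count rule for this sketch
crack_found = bool(edges.sum() > 20)
```

The surface texture produces gradient magnitudes far below the threshold, so only the crack contour survives, which is the property the dilation and classification stages then exploit.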

  17. Aerodynamic Identification and Modeling of Generic UCAV Configurations with Control Surface Integration

    Science.gov (United States)

    Allen, Jacob Daniel

    As aircraft are increasingly specialized for low observability and maneuverability, the aerodynamic identification process has become increasingly important. Recently, the aerodynamics of Unmanned Combat Aerial Vehicle (UCAV) configurations have been of interest. Two UCAV designs of the same planform were the subject of this research. Techniques for aerodynamic identification were explored using data generated by computational fluid dynamics (CFD). The Kestrel CFD solver was used to execute prescribed-motion maneuvers, which simultaneously excite multiple flight parameters including inboard and outboard control surface deflection. The executed maneuvers are orthogonal Schroeder frequency sweeps covering reduced frequencies from 0.0069 to 0.075, superimposed with a linear Mach increase from 0.1 to 0.9. Quasi-steady aerodynamic models were developed for the longitudinal aerodynamic coefficients from the CFD maneuver data. These models are multivariate polynomial equations, developed by power series expansion of the terms of a traditional linear aerodynamic model. Additionally, a host of static, dynamic, and doublet CFD studies was completed to generate validation data to compare against the models. The models matched the static validation data fairly accurately, while their force and moment predictions for the doublet maneuvers varied in accuracy. The Schroeder maneuver required fewer computational resources than similar aerodynamic identification using current CFD techniques. Overall, the presented methods identified the aerodynamics of two UCAV configurations over a large flight envelope with reasonable accuracy, and with a 36% cost savings compared to current techniques for static aerodynamic prediction. Animations of the Schroeder maneuvers are available with this thesis.
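    The model structure, a linear aerodynamic model augmented with power-series terms and fitted by least squares to maneuver data, can be sketched as follows; all coefficient values are placeholders, not results from the thesis:

```python
import numpy as np

rng = np.random.default_rng(8)
# Invented maneuver data: pitching-moment coefficient Cm as a function of
# angle of attack (alpha, deg), pitch rate (q, deg/s) and an elevon
# deflection (delta, deg); the true coefficients below are placeholders,
# not values for the UCAV configurations studied in the thesis.
n = 300
alpha = rng.uniform(-5.0, 15.0, n)
q = rng.uniform(-10.0, 10.0, n)
delta = rng.uniform(-20.0, 20.0, n)
cm = (0.02 - 0.012 * alpha - 0.004 * q - 0.009 * delta
      + 3e-4 * alpha ** 2 + 0.002 * rng.standard_normal(n))

# Regressors of a linear aerodynamic model plus power-series expansion
# terms (here alpha^2 and an alpha*delta coupling term)
X = np.column_stack([np.ones(n), alpha, q, delta, alpha ** 2, alpha * delta])
coef, *_ = np.linalg.lstsq(X, cm, rcond=None)
cm_hat = X @ coef
r2 = float(1 - np.sum((cm - cm_hat) ** 2) / np.sum((cm - cm.mean()) ** 2))
```

Because the Schroeder sweeps excite all regressors simultaneously, a single maneuver can supply the data for such a fit across the envelope, which is the source of the reported cost savings.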

  18. Induced Voltages Ratio-Based Algorithm for Fault Detection, and Faulted Phase and Winding Identification of a Three-Winding Power Transformer

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-09-01

    Full Text Available This paper proposes an algorithm for fault detection and faulted phase and winding identification of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary and secondary-tertiary windings is the same as the corresponding turns ratio during normal operating conditions, magnetic inrush, and over-excitation; it differs from the turns ratio during an internal fault. For a single-phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated using the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection. An additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault, but also identify the faulted phase and winding of a three-winding power transformer. The various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions including magnetic inrush and over-excitation. This paper concludes by implementing the algorithm in a prototype relay based on a digital signal processor.
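    The central idea, trip when the induced-voltage ratio departs from the turns ratio, reduces to a few lines; the voltage levels, fault severity and tolerance below are assumed example values for one winding pair, not the paper's detector settings:

```python
import numpy as np

t = np.linspace(0.0, 0.04, 400)               # two cycles at 50 Hz
turns_ratio = 132.0 / 11.0                    # assumed primary:secondary ratio

v1 = 132e3 * np.sin(2 * np.pi * 50.0 * t)     # induced voltage, primary
v2_healthy = v1 / turns_ratio                 # healthy: ratio equals turns ratio
v2_fault = 0.7 * v1 / turns_ratio             # internal fault shorts turns

def ratio_detector(vp, vs, n, tol=0.05):
    # Trip when the RMS induced-voltage ratio departs from the turns ratio
    r = np.sqrt(np.mean(vp ** 2)) / np.sqrt(np.mean(vs ** 2))
    return bool(abs(r - n) / n > tol)

trip_healthy = ratio_detector(v1, v2_healthy, turns_ratio)
trip_fault = ratio_detector(v1, v2_fault, turns_ratio)
```

Because magnetizing inrush and over-excitation scale both induced voltages together, the ratio stays at the turns ratio in those cases, which is why this style of detector does not need harmonic restraint.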

  19. A diagnostic assessment of evolutionary algorithms for multi-objective surface water reservoir control

    Science.gov (United States)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Herman, Jonathan D.; Giuliani, Matteo; Castelletti, Andrea

    2016-06-01

    Globally, the pressures of expanding populations, climate change, and increased energy demands are motivating significant investments in re-operationalizing existing reservoirs or designing operating policies for new ones. These challenges require an understanding of the tradeoffs that emerge across the complex suite of multi-sector demands in river basin systems. This study benchmarks our current capabilities to use Evolutionary Multi-Objective Direct Policy Search (EMODPS), a decision-analytic framework in which candidate reservoir operating policies are represented using parameterized global approximators (e.g., radial basis functions), and those parameterized functions are then optimized using multi-objective evolutionary algorithms (MOEAs) to discover the Pareto-approximate operating policies. We contribute a comprehensive diagnostic assessment of modern MOEAs' abilities to support EMODPS using the Conowingo reservoir in the Lower Susquehanna River Basin, Pennsylvania, USA. Our diagnostic results highlight that EMODPS can be very challenging for some modern MOEAs and that epsilon dominance, time-continuation, and auto-adaptive search are helpful for attaining high levels of performance. The ɛ-MOEA, the auto-adaptive Borg MOEA, and ɛ-NSGAII all yielded superior results for the six-objective Lower Susquehanna benchmarking test case. The top algorithms show low sensitivity to different MOEA parameterization choices and high algorithmic reliability in attaining consistent results across different random MOEA trials. Overall, EMODPS poses a promising method for discovering key reservoir management tradeoffs; however, algorithmic choice remains a key concern for problems of increasing complexity.
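    The policy representation at the heart of EMODPS can be sketched as a small Gaussian RBF network mapping the normalized system state to a release decision; the parameter values below are arbitrary placeholders of the kind an MOEA would search over, not an optimized policy:

```python
import numpy as np

def rbf_policy(state, centers, radii, weights):
    # Operating policy as a weighted sum of Gaussian radial basis functions
    # of the normalized system state (e.g., storage and day of year)
    phi = np.exp(-np.sum(((state - centers) / radii) ** 2, axis=1))
    return float(np.clip(weights @ phi, 0.0, 1.0))   # fraction of max release

# A toy 2-input, 3-RBF policy. In EMODPS the centers, radii and weights are
# the decision variables an MOEA (e.g., Borg) optimizes against simulated
# multi-sector objectives to trace out the Pareto-approximate set.
centers = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]])
radii = np.full((3, 2), 0.3)
weights = np.array([0.1, 0.5, 0.9])

release_low = rbf_policy(np.array([0.15, 0.3]), centers, radii, weights)
release_high = rbf_policy(np.array([0.85, 0.7]), centers, radii, weights)
```

Each candidate parameter vector is evaluated by simulating the reservoir under this closed-loop rule, so the MOEA searches policy space directly rather than a release schedule.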

  20. Extending the inverse scattering series free-surface-multiple-elimination algorithm by accommodating the source property on data with interfering or proximal seismic events

    Science.gov (United States)

    Zhao, Lei; Yang, Jinlong; Weglein, Arthur B.

    2017-12-01

    The inverse scattering series free-surface-multiple-elimination (FSME) algorithm is modified and extended to accommodate a source property, the source radiation pattern. That accommodation can add to the fidelity of the free-surface multiple predictions. The new extended FSME algorithm retains all the merits of the original algorithm, i.e., it is fully data-driven and requires no subsurface information. It is tested on a one-dimensional acoustic model with proximal and interfering seismic events, such as interfering primaries and multiples. The results indicate that when the source has a radiation pattern, the new extended FSME algorithm predicts more accurate free-surface multiples than methods that do not accommodate the source property. This increased effectiveness in prediction contributes to removing free-surface multiples without damaging primaries. Increasing predictive effectiveness is important in such cases because other prediction methods, such as the surface-related multiple-elimination algorithm, have difficulties with prediction accuracy, and those issues affect efforts to remove multiples through adaptive subtraction. Therefore, accommodating the source property can not only improve the effectiveness of the FSME algorithm, but also extend the approach beyond the current algorithm (e.g., improving the internal multiple attenuation algorithm).

  1. Effect of reconstruction algorithm on image quality and identification of ground-glass opacities and partly solid nodules on low-dose thin-section CT: Experimental study using chest phantom

    International Nuclear Information System (INIS)

    Koyama, Hisanobu; Ohno, Yoshiharu; Kono, Atsushi A.; Kusaka, Akiko; Konishi, Minoru; Yoshii, Masaru; Sugimura, Kazuro

    2010-01-01

    Purpose: The purpose of this study was to assess the influence of the reconstruction algorithm on the identification and image quality of ground-glass opacities (GGOs) and partly solid nodules on low-dose thin-section CT. Materials and methods: A chest CT phantom including simulated GGOs and partly solid nodules was scanned with five different tube currents and reconstructed using standard (A) and newly developed (B) high-resolution reconstruction algorithms, followed by visual assessment of the identification and image quality of the GGOs and partly solid nodules by two chest radiologists. Inter-observer agreement, ROC analysis and ANOVA were performed to compare the identification and image quality of each data set with those of the standard reference, which used 120 mA s in conjunction with reconstruction algorithm A. Results: Kappa values (κ) for overall identification and image quality were substantial or almost perfect (0.60 < κ). Assessment of identification showed that the area under the curve for 25 mA s reconstructed with algorithm A was significantly lower than that of the standard reference (p < 0.05), while assessment of image quality indicated that 50 mA s reconstructed with algorithm A and 25 mA s reconstructed with both algorithms were significantly lower than the standard reference (p < 0.05). Conclusion: The reconstruction algorithm may be an important factor for the identification and image quality of ground-glass opacities and partly solid nodules on low-dose CT examination.
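    The inter-observer agreement statistic used above, Cohen's kappa, is easy to compute directly; the rating vectors below are hypothetical scores, not the study's data:

```python
import numpy as np

def cohens_kappa(r1, r2):
    # Chance-corrected inter-observer agreement for categorical ratings
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = float(np.mean(r1 == r2))                       # observed agreement
    pe = float(sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats))
    return (po - pe) / (1.0 - pe)                       # chance-corrected

# Hypothetical 5-point image-quality scores from two readers (invented)
rater1 = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4, 1, 2]
rater2 = [5, 4, 3, 3, 5, 2, 4, 3, 4, 4, 1, 2]
kappa = cohens_kappa(rater1, rater2)
```

Values above 0.60 are conventionally read as substantial agreement and above 0.80 as almost perfect, which is the scale the abstract's "0.60 < κ" statement refers to.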

  2. Identification of Cell Surface Targets through Meta-analysis of Microarray Data

    Directory of Open Access Journals (Sweden)

    Henry Haeberle

    2012-07-01

    Full Text Available High-resolution image guidance for resection of residual tumor cells would enable more precise and complete excision for more effective treatment of cancers, such as medulloblastoma, the most common pediatric brain cancer. Numerous studies have shown that brain tumor patient outcomes correlate with the precision of resection. To enable guided resection with molecular specificity and cellular resolution, molecular probes that effectively delineate brain tumor boundaries are essential. Therefore, we developed a bioinformatics approach to analyze microarray datasets for the identification of transcripts that encode candidate cell surface biomarkers that are highly enriched in medulloblastoma. The results identified 380 genes with a greater than two-fold increase in expression in medulloblastoma compared with normal cerebellum. To enrich for targets accessible to extracellular molecular probes, we further refined this list by filtering it with gene ontology to identify genes whose protein products localize on, or within, the plasma membrane. To validate this meta-analysis, the top 10 candidates were evaluated with immunohistochemistry. We identified two targets, fibrillin 2 and EphA3, which specifically stain medulloblastoma. These results demonstrate a novel bioinformatics approach that successfully identified cell surface and extracellular candidate markers enriched in medulloblastoma versus adjacent cerebellum. These two proteins are high-value targets for the development of tumor-specific probes in medulloblastoma. This bioinformatics method has broad utility for the identification of accessible molecular targets in a variety of cancers and will enable probe development for guided resection.
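    The first screening step, keeping genes with a greater than two-fold expression increase, can be sketched on invented log2 expression values (a real meta-analysis would pool several normalized datasets and add statistical testing):

```python
import numpy as np

rng = np.random.default_rng(7)
genes = [f"gene_{i}" for i in range(1000)]
# Invented log2 expression values: tumor vs. normal cerebellum, three
# replicates each; all gene names and numbers here are hypothetical.
normal = rng.normal(6.0, 0.2, (1000, 3))
tumor = normal + rng.normal(0.0, 0.2, (1000, 3))
up = rng.choice(1000, 50, replace=False)
tumor[up] += 2.0                          # 50 genes truly ~4-fold enriched

# Greater-than-two-fold filter: a log2 fold change above 1.0
log2fc = tumor.mean(axis=1) - normal.mean(axis=1)
hits = [g for g, fc in zip(genes, log2fc) if fc > 1.0]
```

The surviving list would then be intersected with gene-ontology annotations for plasma-membrane localization, mirroring the paper's second filter.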

  3. A Torque Error Compensation Algorithm for Surface Mounted Permanent Magnet Synchronous Machines with Respect to Magnet Temperature Variations

    Directory of Open Access Journals (Sweden)

    Chang-Seok Park

    2017-09-01

    Full Text Available This paper presents a torque error compensation algorithm for a surface mounted permanent magnet synchronous machine (SPMSM) through real-time permanent magnet (PM) flux linkage estimation at various temperature conditions from medium to rated speed. As is well known, the PM flux linkage in SPMSMs varies with thermal conditions. Since the maximum torque per ampere look-up table, a control method used for copper loss minimization, is developed on the basis of the estimated PM flux linkage, variation of the PM flux linkage results in undesired torque development in SPMSM drives. In this paper, the PM flux linkage is estimated through a stator flux linkage observer and the torque error is compensated in real time using the estimated PM flux linkage. The proposed torque error compensation algorithm is verified in simulation and experiment.
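    The compensation principle, rescaling the q-axis current command by the ratio of nominal to estimated flux linkage, can be shown with a few lines of arithmetic; all machine parameters below are assumed example values, not the paper's test machine:

```python
# SPMSM torque develops as T = 1.5 * p * lambda_pm * iq. If the current
# command comes from a look-up table built for the nominal (cold) flux
# linkage while the magnet is hot, the machine under-produces torque;
# scaling iq by the nominal-to-estimated flux ratio compensates the error.
# All numbers below are assumed example values, not the paper's machine.
p = 4                         # pole pairs
lam_nom = 0.10                # nominal PM flux linkage (Wb)
lam_hot = 0.094               # observer-estimated flux linkage when hot

def torque(lam, iq):
    return 1.5 * p * lam * iq

t_ref = 5.0                                  # commanded torque (N m)
iq_nom = t_ref / (1.5 * p * lam_nom)         # current from the cold LUT
t_uncomp = torque(lam_hot, iq_nom)           # torque actually produced
iq_comp = iq_nom * lam_nom / lam_hot         # real-time compensation
t_comp = torque(lam_hot, iq_comp)
```

The observer supplies lam_hot continuously, so the scaling can track the magnet temperature without a temperature sensor.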

  4. Identification of novel adhesins of M. tuberculosis H37Rv using integrated approach of multiple computational algorithms and experimental analysis.

    Directory of Open Access Journals (Sweden)

    Sanjiv Kumar

    Full Text Available Pathogenic bacteria interacting with a eukaryotic host express adhesins on their surface. These adhesins aid in bacterial attachment to host cell receptors during colonization. A few adhesins, such as the heparin-binding hemagglutinin adhesin (HBHA), Apa and malate synthase of M. tuberculosis, have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach using SPAAN for predicting adhesins, PSORTb, SubLoc and LocTree for extracellular localization, and BLAST for verifying non-similarity to human proteins. These steps are among the first in reverse vaccinology. Multiple claims and attacks from the different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309 and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.

  5. Identification of Spurious Signals from Permeable Ffowcs Williams and Hawkings Surfaces

    Science.gov (United States)

    Lopes, Leonard V.; Boyd, David D., Jr.; Nark, Douglas M.; Wiedemann, Karl E.

    2017-01-01

    Integral forms of the permeable-surface formulation of the Ffowcs Williams and Hawkings (FW-H) equation typically take a near-field Computational Fluid Dynamics (CFD) solution as input to predict noise in the near or far field from various types of geometries. The FW-H equation involves three source terms: two surface terms (monopole and dipole) and a volume term (quadrupole). Many solutions to the FW-H equation, such as several of Farassat's formulations, neglect the quadrupole term. Neglecting the quadrupole term in permeable-surface formulations leads to inaccuracies called spurious signals. This paper explores the concept of spurious signals, explains how they are generated by specifying the acoustic and hydrodynamic surface properties individually, and provides methods to determine their presence, regardless of whether a correction algorithm is employed. A potential approach for the removal of spurious signals, based on the equivalent sources method (ESM) and the sensitivity of Formulation 1A (Formulation S1A), is also discussed.

  6. Identification of surface species by vibrational normal mode analysis. A DFT study

    Science.gov (United States)

    Zhao, Zhi-Jian; Genest, Alexander; Rösch, Notker

    2017-10-01

    Infrared spectroscopy is an important experimental tool, applicable in situ, for identifying molecular species adsorbed on a metal surface. Often vibrational modes in such IR spectra of surface species are assigned and identified by comparison with vibrational spectra of related (molecular) compounds of known structure, e.g., an organometallic cluster analogue. To check the validity of this strategy, we carried out a computational study comparing the normal modes of three C2Hx species (x = 3, 4) in two types of systems: as adsorbates on the Pt(111) surface and as ligands in an organometallic cluster compound. The results of our DFT calculations reproduce the experimentally observed frequencies with deviations of at most 50 cm⁻¹. However, the frequencies of the C2Hx species in both types of systems have to be interpreted with due caution if the coordination mode is unknown. The comparative identification strategy works satisfactorily when the coordination mode of the molecular species (ethylidyne) is similar on the surface and in the metal cluster. However, large shifts are encountered when the molecular species (vinyl) exhibits different coordination modes on the two types of substrates.
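
    The normal-mode computation underlying such comparisons can be sketched as diagonalization of a mass-weighted Hessian; the 1-D diatomic force-constant matrix below is a textbook toy, not the paper's DFT Hessian.

    ```python
    import numpy as np

    # Hedged sketch of vibrational normal-mode analysis: harmonic angular
    # frequencies are the square roots of the positive eigenvalues of the
    # mass-weighted Hessian M^(-1/2) H M^(-1/2). The 1-D diatomic "molecule"
    # and force constant below are toy assumptions.
    def normal_mode_frequencies(hessian, masses):
        """Return angular frequencies of modes with positive eigenvalues."""
        m = np.repeat(masses, hessian.shape[0] // len(masses))
        inv_sqrt_m = 1.0 / np.sqrt(m)
        mw_hessian = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)
        eigvals = np.linalg.eigvalsh(mw_hessian)
        return np.sqrt(eigvals[eigvals > 1e-10])   # drop translational zero modes

    # 1-D diatomic with force constant k: H = [[k, -k], [-k, k]]
    k, m1, m2 = 4.0, 1.0, 2.0
    H = np.array([[k, -k], [-k, k]])
    omega = normal_mode_frequencies(H, np.array([m1, m2]))
    # analytically, the single vibration has omega = sqrt(k / mu), mu = m1*m2/(m1+m2)
    ```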

  7. Fuzzy surfaces in GIS and geographical analysis theory, analytical methods, algorithms and applications

    CERN Document Server

    Lodwick, Weldon

    2007-01-01

    Surfaces are central to geographical analysis. Their generation and manipulation are a key component of geographical information systems (GISs). However, geographical surface data are often not precise. When surfaces are used to model geographical entities, the data inherently contain uncertainty in terms of both position and attribute. Fuzzy Surfaces in GIS and Geographical Analysis sets out a process to identify the uncertainty in geographic entities. It describes how to successfully obtain, model, analyze, and display data, as well as interpret results within the context of GIS. Focusing on uncertainty that arises from transitional boundaries, the book limits its study to three types of uncertainty: intervals, fuzzy sets, and possibility distributions. The book explains that uncertainty in geographical data typically stems from these three sources, so it is only natural to incorporate them into the analysis and display of surface data. The book defines the mathematics associated with each method for analysis,...

  8. All-Weather Sounding of Moisture and Temperature From Microwave Sensors Using a Coupled Surface/Atmosphere Inversion Algorithm

    Science.gov (United States)

    Boukabara, S. A.; Garrett, K.

    2014-12-01

    A one-dimensional variational retrieval system has been developed that is capable of producing temperature and water vapor profiles in clear, cloudy and precipitating conditions. The algorithm, known as the Microwave Integrated Retrieval System (MiRS), is currently running operationally at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), and is applied to a variety of data from the AMSU-A/MHS sensors on board the NOAA-18, NOAA-19, and MetOp-A/B polar satellite platforms, the SSMI/S sensors on board DMSP F-16 and F-18, and the NPP ATMS sensor. MiRS inverts microwave brightness temperatures into atmospheric temperature and water vapor profiles, along with hydrometeors and surface parameters, simultaneously. This coupled atmosphere/surface inversion allows for more accurate retrievals in the lower tropospheric layers by accounting for the impact of surface emissivity on the measurements. It also allows soundings to be inverted in all-weather conditions, because both the hydrometeor parameters and the surface emissivity are part of the inverted state vector; the emissivity is treated dynamically to handle the highly variable surface conditions found under precipitating atmospheres. In precipitating conditions, the inversion is constrained by the inclusion of covariances for hydrometeors, to take advantage of the natural correlations of temperature and water vapor with liquid cloud, ice cloud, and rain water. In this study, we present a full assessment of temperature and water vapor retrieval performance in all-weather conditions and over all surface types (ocean, sea ice, land, and snow) using matchups with radiosondes as well as Numerical Weather Prediction analyses and other satellite retrieval algorithms as references. An emphasis is placed on retrievals in cloudy and precipitating atmospheres, including extreme weather events.
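
    The core of a single variational update can be sketched for a linear toy observation operator; MiRS's actual state vector, radiative-transfer operator, and covariances are far richer, and every number below is invented for illustration.

    ```python
    import numpy as np

    # Hedged sketch of one 1D-Var (optimal estimation) step for a linear
    # observation operator H: the analysis minimizes the usual
    # background-plus-observation cost function.
    def onedvar_update(xb, B, y, H, R):
        """Analysis xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
        return xb + K @ (y - H @ xb)

    xb = np.array([250.0, 5.0])     # background: temperature (K), water vapor (g/kg)
    B = np.diag([4.0, 1.0])         # background-error covariance
    H = np.array([[1.0, 0.2]])      # toy linear "radiative transfer" operator
    R = np.array([[0.25]])          # observation-error covariance
    y = np.array([252.0])           # observed brightness temperature (K)
    xa = onedvar_update(xb, B, y, H, R)
    ```

    The analysis moves the background toward the observation, weighted by the two covariances; retrieving hydrometeors and emissivity in MiRS amounts to enlarging the state vector and covariances in the same framework.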

  9. A genetic algorithm-Bayesian network approach for the analysis of metabolomics and spectroscopic data: application to the rapid identification of Bacillus spores and classification of Bacillus species.

    Science.gov (United States)

    Correa, Elon; Goodacre, Royston

    2011-01-26

    The rapid identification of Bacillus spores and bacteria is paramount because of their implications in food poisoning, pathogenesis and their potential use as biowarfare agents. Many automated analytical techniques, such as Curie-point pyrolysis mass spectrometry (Py-MS), have been used to identify bacterial spores, giving rise to large amounts of analytical data. This high number of features makes interpretation of the data extremely difficult. We analysed Py-MS data from 36 different strains of aerobic endospore-forming bacteria encompassing seven different species. These bacteria were grown axenically on nutrient agar, and vegetative biomass and spores were analyzed by Curie-point Py-MS. We developed a novel genetic algorithm-Bayesian network (GA-BN) approach that accurately identifies and selects a small subset of key relevant mass spectra (biomarkers) to be further analysed. Once identified, this subset of relevant biomarkers was used to identify Bacillus spores successfully and to classify Bacillus species via a Bayesian network model built specifically for this reduced set of features. This final compact Bayesian network classification model is parsimonious and computationally fast to run, and its graphical visualization allows easy interpretation of the probabilistic relationships among the selected biomarkers. In addition, we compare the features selected by the GA-BN approach with the features selected by partial least squares-discriminant analysis (PLS-DA). The classification accuracy results show that the set of features selected by the GA-BN is far superior to that selected by PLS-DA.
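
    The genetic-algorithm half of such an approach can be sketched as a bitmask search over mass-spectral bins. The fitness below is a toy between-class separation score standing in for the paper's Bayesian-network accuracy, and the data are synthetic; both are assumptions for illustration.

    ```python
    import random

    # Hedged sketch of GA-based feature (mass-spectral bin) selection; the
    # fitness is a toy separation score, not the paper's Bayesian network.
    def fitness(mask, data):
        """Mean absolute between-class difference over the selected features."""
        idx = [i for i, bit in enumerate(mask) if bit]
        if not idx:
            return 0.0
        class_a, class_b = data
        sep = 0.0
        for i in idx:
            mean_a = sum(v[i] for v in class_a) / len(class_a)
            mean_b = sum(v[i] for v in class_b) / len(class_b)
            sep += abs(mean_a - mean_b)
        return sep / len(idx)          # rewards compact, discriminative subsets

    def ga_select(data, n_features, pop_size=20, gens=30, seed=1):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda m: fitness(m, data), reverse=True)
            parents = pop[: pop_size // 2]             # truncation selection (elitist)
            children = []
            while len(children) < pop_size - len(parents):
                p1, p2 = rng.sample(parents, 2)
                cut = rng.randrange(1, n_features)     # one-point crossover
                child = p1[:cut] + p2[cut:]
                child[rng.randrange(n_features)] ^= 1  # point mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda m: fitness(m, data))

    # Synthetic data: only the first two of eight "mass bins" discriminate
    class_a = [[5.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] for _ in range(5)]
    class_b = [[0.0] * 8 for _ in range(5)]
    best = ga_select((class_a, class_b), 8)
    ```

    The elitist retention of the top half means the best subset found is never lost, so the final mask should concentrate on the two discriminative bins.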

  10. Technical assessment of forest road network using Backmund and surface distribution algorithm in a hardwood forest of Hyrcanian zone

    Energy Technology Data Exchange (ETDEWEB)

    Parsakhoo, P.

    2016-07-01

    Aim of study: Corrected Backmund and Surface Distribution Algorithms (SDA) for the analysis of forest road networks are introduced and presented in this study. Research was carried out to compare road network performance between two districts in a hardwood forest. Area of study: Shast Kalateh forests, Iran. Materials and methods: In the uncorrected Backmund algorithm, skidding distance was determined by calculating road density and spacing, and the result was mapped as the Potential Area for Skidding Operations (PASO) in ArcGIS software. To correct this procedure, the skidding constraint areas were surveyed using GPS and then removed from the PASO. In the SDA, the shortest perpendicular distance from the geometrical center of each timber compartment to the road was measured in both districts. Main results: In the corrected Backmund, forest openness in districts I and II was 70.3% and 69.5%, respectively. There was thus little difference in forest openness between the districts based on the uncorrected Backmund. In the SDA, the mean distances from the geometrical centers of timber compartments to the roads of districts I and II were 199.45 and 149.31 meters, respectively. Forest road network distribution in district II was better than that of district I according to the SDA. Research highlights: It was concluded that the uncorrected Backmund was not precise enough to assess the forest road network, while the corrected Backmund could exhibit the real PASO by removing skidding constraints. According to the presented algorithms, forest road network performance in district II was better than in district I. (Author)

  11. Technical assessment of forest road network using Backmund and surface distribution algorithm in a hardwood forest of Hyrcanian zone

    Directory of Open Access Journals (Sweden)

    Aidin Parsakhoo

    2016-07-01

    Full Text Available Aim of study: Corrected Backmund and Surface Distribution Algorithms (SDA) for the analysis of forest road networks are introduced and presented in this study. Research was carried out to compare road network performance between two districts in a hardwood forest. Area of study: Shast Kalateh forests, Iran. Materials and methods: In the uncorrected Backmund algorithm, skidding distance was determined by calculating road density and spacing, and the result was mapped as the Potential Area for Skidding Operations (PASO) in ArcGIS software. To correct this procedure, the skidding constraint areas were surveyed using GPS and then removed from the PASO. In the SDA, the shortest perpendicular distance from the geometrical center of each timber compartment to the road was measured in both districts. Main results: In the corrected Backmund, forest openness in districts I and II was 70.3% and 69.5%, respectively. There was thus little difference in forest openness between the districts based on the uncorrected Backmund. In the SDA, the mean distances from the geometrical centers of timber compartments to the roads of districts I and II were 199.45 and 149.31 meters, respectively. Forest road network distribution in district II was better than that of district I according to the SDA. Research highlights: It was concluded that the uncorrected Backmund was not precise enough to assess the forest road network, while the corrected Backmund could exhibit the real PASO by removing skidding constraints. According to the presented algorithms, forest road network performance in district II was better than in district I.

  12. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis

    Science.gov (United States)

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-01

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original, stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large force, the minimum configuration of the reactant and the saddle-point configuration of the transition state collapse at a point on the corresponding NT. This point is called the barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has an eigenvector with zero eigenvalue that coincides with the gradient. It indicates which force (in both magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs there exist optimal BBPs, which indicate the optimal pulling direction and the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm based on the minimization of a positive-definite function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.
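
    The Gauss-Newton iteration at the heart of the proposed method can be sketched generically. The residual below is a 2-D toy system, not the paper's σ-function (which involves the gradient and Hessian of the potential energy surface); it only illustrates the iteration itself.

    ```python
    import numpy as np

    # Generic Gauss-Newton iteration for driving a residual vector r(x) to
    # zero; a stand-in for the paper's sigma-function minimization, which is
    # not reproduced here.
    def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r, J = residual(x), jacobian(x)
            step = np.linalg.lstsq(J, -r, rcond=None)[0]  # solve J dx = -r
            x = x + step
            if np.linalg.norm(step) < tol:
                break
        return x

    # 2-D toy system: x0^2 = 2 and x0 + x1 = 3
    residual = lambda x: np.array([x[0] ** 2 - 2.0, x[0] + x[1] - 3.0])
    jacobian = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])
    root = gauss_newton(residual, jacobian, [1.0, 1.0])
    ```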

  13. An Improved Response Surface Methodology Algorithm with an Application to Traffic Signal Optimization for Urban Networks

    Science.gov (United States)

    1995-01-01

    Prepared ca. 1995. This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization of urban networks. It also quantifies the gains of using the common random number (CRN) va...

  14. Land Surface Temperature and Emissivity Separation from Cross-Track Infrared Sounder Data with Atmospheric Reanalysis Data and ISSTES Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-Ze Zhang

    2017-01-01

    Full Text Available The Cross-track Infrared Sounder (CrIS) is one of the most advanced hyperspectral instruments and has been used for various atmospheric applications such as atmospheric retrievals and weather forecast modeling. However, because of the specific design purpose of CrIS, little attention has been paid to retrieving land surface parameters from CrIS data. To take full advantage of the rich spectral information in CrIS data to improve land surface retrievals, particularly the acquisition of a continuous Land Surface Emissivity (LSE) spectrum, this paper attempts to simultaneously retrieve a continuous LSE spectrum and the Land Surface Temperature (LST) from CrIS data with atmospheric reanalysis data and the Iterative Spectrally Smooth Temperature and Emissivity Separation (ISSTES) algorithm. The results show that the accuracy of the retrieved LSEs and LST is comparable with that of current land products: the overall differences of the LST and LSE retrievals are approximately 1.3 K and 1.48%, respectively. However, the LSEs in our study are provided as a continuous spectrum instead of the single-channel values in traditional products. The retrieved LST and LSEs can thus be better used to further analyze surface properties or improve the retrieval of atmospheric parameters.

  15. Continuous measurements of water surface height and width along a 6.5km river reach for discharge algorithm development

    Science.gov (United States)

    Tuozzolo, S.; Durand, M. T.; Pavelsky, T.; Pentecost, J.

    2015-12-01

    The upcoming Surface Water and Ocean Topography (SWOT) satellite will provide measurements of river width and water surface elevation and slope along continuous swaths of the world's rivers. Understanding water surface slope and width dynamics in river reaches is important for both developing and validating discharge algorithms to be used on future SWOT data. We collected water surface elevation and river width data along a 6.5 km stretch of the Olentangy River in Columbus, Ohio from October to December 2014. Continuous measurements of water surface height were supplemented with periodic river width measurements at twenty sites along the study reach. The water surface slope of the entire reach ranged from 41.58 cm/km at baseflow to 45.31 cm/km after a storm event. The study reach was also broken into sub-reaches roughly 1 km in length to study smaller-scale slope dynamics. The furthest upstream sub-reaches are characterized by free-flowing riffle-pool sequences, while the furthest downstream sub-reaches were directly affected by two low-head dams. In the sub-reaches immediately upstream of each dam, the baseflow slope is as low as 2 cm/km, while the furthest upstream free-flowing sub-reach has a baseflow slope of 100 cm/km. During high-flow events the backwater effect of the dams was observed to propagate upstream: sub-reaches impounded by the dams had increased water surface slopes, while free-flowing sub-reaches had decreased water surface slopes. During the largest observed flow event, a stage change of 0.40 m changed sub-reach slopes by as much as 30 cm/km. Further analysis will examine height-width relationships within the study reach and relate cross-sectional flow area to river stage. These relationships can be used in conjunction with slope data to estimate discharge using a modified Manning's equation, and are a core component of the discharge algorithms being developed for the SWOT mission.
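
    The Manning-type discharge estimate mentioned at the end can be sketched directly from the observables SWOT will provide. The roughness coefficient n, the depth values, and the wide-channel hydraulic-radius approximation below are assumptions for illustration; only the two slopes come from the study reach.

    ```python
    # Hedged sketch of a Manning-type discharge estimate from SWOT-like
    # observables (width, stage-derived depth, slope). The roughness n, the
    # depth values, and the approximation R ~ depth are assumptions.
    def manning_discharge(width_m, depth_m, slope, n=0.035):
        """Q = (1/n) * A * R^(2/3) * S^(1/2) for a wide rectangular section."""
        area = width_m * depth_m
        hyd_radius = depth_m                      # wide-channel approximation
        return (1.0 / n) * area * hyd_radius ** (2.0 / 3.0) * slope ** 0.5

    # Reach-average slopes from the abstract: 41.58 cm/km (baseflow), 45.31 cm/km (storm)
    q_base = manning_discharge(30.0, 0.8, 41.58e-5)
    q_storm = manning_discharge(30.0, 1.2, 45.31e-5)
    ```

    Because slope enters under a square root while area enters linearly, most of the storm-flow increase in this toy case comes from the deeper cross-section rather than the slightly steeper slope.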

  16. Kettle holes formed by glacial outburst floods: identification when their surface expression has been removed?

    Science.gov (United States)

    Marren, Philip; Fay, Helen; Duller, Robert

    2014-05-01

    Kettle holes and obstacle marks formed by the transport, deposition and burial of ice blocks during glacial outburst floods (jökulhlaups) are common geomorphological features on proglacial outwash plains. Indeed, they represent one of the few features which can unequivocally identify glacially sourced flood deposits in the geomorphological and sedimentary record. Despite an abundance of work on the surface expression of jökulhlaup-generated ice-block structures, descriptions of the subsurface expression of these features in the sedimentary record are limited, and there is currently no comprehensive model of their sedimentary characteristics. This is a major gap in our knowledge, as the positive identification of ice-block features constitutes an unambiguous criterion for the identification of former jökulhlaup deposits in the Quaternary sedimentary record. We address this by describing several examples of ice-block impact in the sedimentary record from southern Iceland. Our work recognizes key criteria for the identification of ice-block impact in the sedimentary record, enabling it to be identified in sedimentary sections where the geomorphological expression has since been removed or buried. These criteria combine: (1) structures formed by the interaction of water flow with the ice-block body during transportation and immobilization; (2) distinctive sedimentological features of the surrounding deposits; and (3) the post-burial mechanical disruption of the deposits. Formulating a suite of key criteria with which to positively identify the sedimentary impact of ice blocks limits the possibility of misidentification in the sedimentary record, and provides a means of identifying previously unrecognized Quaternary catastrophic glacial floods.

  17. Assessment of Polarization Effect on Efficiency of Levenberg-Marquardt Algorithm in Case of Thin Atmosphere over Black Surface

    Science.gov (United States)

    Korkin, S.; Lyapustin, A.

    2012-12-01

    The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over the space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single-scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]; in our case it permits analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines of the two fractions, the ratio of coarse and fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal of the Society for Industrial and Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7-29. [4]. Mishchenko MI, Travis LD
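
    A minimal Levenberg-Marquardt loop, with the damping parameter adapted on accept/reject, can be sketched as follows. The toy exponential model stands in for the single-scattering forward model (whose Jacobian the authors evaluate analytically); model, data, and starting point are assumptions for illustration.

    ```python
    import numpy as np

    # Minimal Levenberg-Marquardt sketch for a generic least-squares problem.
    def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, max_iter=100):
        x = np.asarray(x0, dtype=float)
        cost = lambda p: float(residual(p) @ residual(p))
        for _ in range(max_iter):
            r, J = residual(x), jacobian(x)
            A = J.T @ J + lam * np.eye(len(x))     # damped normal equations
            step = np.linalg.solve(A, -J.T @ r)
            if cost(x + step) < cost(x):
                x, lam = x + step, lam * 0.5       # accept: toward Gauss-Newton
            else:
                lam *= 2.0                          # reject: toward gradient descent
            if np.linalg.norm(step) < 1e-12:
                break
        return x

    # Toy forward model: y(t) = p0 * exp(-p1 * t), noise-free synthetic data
    t = np.linspace(0.0, 4.0, 20)
    y = 2.0 * np.exp(-1.3 * t)
    model = lambda p: p[0] * np.exp(-p[1] * t)
    residual = lambda p: model(p) - y
    jacobian = lambda p: np.stack(
        [np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)], axis=1)  # analytic Jacobian
    fit = levenberg_marquardt(residual, jacobian, [1.0, 1.0])
    ```

    The analytic Jacobian mirrors the paper's design choice: avoiding finite differences keeps each iteration cheap and numerically clean.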

  18. Biomimetic Bacterial Identification Platform Based on Thermal Wave Transport Analysis (TWTA) through Surface-Imprinted Polymers.

    Science.gov (United States)

    Steen Redeker, Erik; Eersels, Kasper; Akkermans, Onno; Royakkers, Jeroen; Dyson, Simba; Nurekeyeva, Kunya; Ferrando, Beniamino; Cornelis, Peter; Peeters, Marloes; Wagner, Patrick; Diliën, Hanne; van Grinsven, Bart; Cleij, Thomas Jan

    2017-05-12

    This paper introduces a novel bacterial identification assay based on thermal wave analysis through surface-imprinted polymers (SIPs). Aluminum chips are coated with SIPs, serving as synthetic cell receptors that have been combined previously with the heat-transfer method (HTM) for the selective detection of bacteria. In this work, the concept of bacterial identification is extended toward the detection of nine different bacterial species. In addition, a novel sensing approach, thermal wave transport analysis (TWTA), is introduced, which analyzes the propagation of a thermal wave through a functional interface. The results presented here demonstrate that bacterial rebinding to the SIP layer resulted in a measurable phase shift in the propagated wave, which is most pronounced at a frequency of 0.03 Hz. In this way, the sensor is able to selectively distinguish between the different bacterial species used in this study. Furthermore, a dose-response curve was constructed to determine a limit of detection of 1 × 10⁴ CFU mL⁻¹, indicating that TWTA is advantageous over HTM in terms of sensitivity and response time. Additionally, the limit of selectivity of the sensor was tested in a mixed bacterial solution, containing the target species in the presence of a 99-fold excess of competitor species. Finally, a first application for the sensor in terms of infection diagnosis is presented, revealing that the platform is able to detect bacteria in clinically relevant concentrations as low as 3 × 10⁴ CFU mL⁻¹ in spiked urine samples.

  19. Floor Covering and Surface Identification for Assistive Mobile Robotic Real-Time Room Localization Application

    Directory of Open Access Journals (Sweden)

    Michael Gillham

    2013-12-01

    Full Text Available Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and whether intervention should occur; therefore, before any trajectory assistance is given, the robotic device must know where it is in real time, without unnecessary disruption or delay to the user. In this paper, we demonstrate a novel, robust method for room identification from floor features within a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid texture or pattern sampling, and a four-color photodiode light sensor for fast color determination. We show that data relating floor texture and color, obtained with these two sensors from typical dynamic human environments, compare favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than from a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time application. We achieved 95% correct classification accuracy identifying 133 rooms' flooring from 35 classes, suitable for fast coarse global room localization, boundary-crossing detection, and additionally some degree of surface type identification.
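
    As a toy illustration of the classification stage, a k-nearest-neighbour lookup over (texture, colour) feature pairs can assign a room label. The feature values, labels, and the choice of k-NN itself are invented for this sketch; the paper's own classifier and feature encoding may differ.

    ```python
    import math
    from collections import Counter

    # Toy sketch of room classification from floor features via k-NN;
    # features, labels, and the classifier choice are illustrative assumptions.
    def knn_predict(train, query, k=3):
        """train: list of (feature_vector, room_label) pairs."""
        nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    train = [((0.90, 0.10), "kitchen"), ((0.85, 0.15), "kitchen"),
             ((0.95, 0.12), "kitchen"), ((0.20, 0.80), "hallway"),
             ((0.25, 0.75), "hallway"), ((0.15, 0.85), "hallway")]
    room = knn_predict(train, (0.88, 0.12))
    ```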

  20. Identification of the Streptococcus pyogenes surface antigens recognised by pooled human immunoglobulin

    Science.gov (United States)

    Reglinski, Mark; Gierula, Magdalena; Lynskey, Nicola N.; Edwards, Robert J.; Sriskandan, Shiranee

    2015-01-01

    Immunity to common bacteria requires the generation of antibodies that promote opsonophagocytosis and neutralise toxins. Pooled human immunoglobulin is widely advocated as an adjunctive treatment for clinical Streptococcus pyogenes infection; however, the protein targets of the reagent remain ill-defined. Affinity purification of the anti-streptococcal antibodies present within pooled immunoglobulin resulted in an IgG preparation that promoted opsonophagocytic killing of S. pyogenes in vitro and provided passive immunity in vivo. Isolation of the streptococcal surface proteins recognised by pooled human immunoglobulin permitted the identification and ranking of 94 protein antigens, ten of which were reproducibly identified across four contemporary invasive S. pyogenes serotypes (M1, M3, M12 and M89). The data provide novel insight into the action of pooled human immunoglobulin during invasive S. pyogenes infection, and demonstrate a potential route to enhance the efficacy of antibody-based therapies. PMID:26508447

  1. The retrieval of aerosol over land surface from GF-1 16m camera with Deep Blue algorithm

    Science.gov (United States)

    Zhu, Wende; Zheng, Taihao; Zhang, Luo; Wang, Lei; Cai, Kun

    2017-12-01

    As data from GF-1, China's new high-resolution Earth-observation satellite, become available, their application to air pollution monitoring is a key problem to be solved. In this paper, based on the Deep Blue algorithm of Hsu et al. (2004) and taking into account the characteristics of the GF-1 16 m camera, the land surface contribution was removed using the MODIS surface reflectance product, and aerosol optical depth (AOD) was retrieved from the apparent reflectance in the blue band; in this way, the Deep Blue algorithm was successfully applied to the GF-1 16 m camera. We then collected GF-1 16 m camera data over the Beijing area between August and November 2014 and carried out AOD retrieval experiments. The retrieved images show the spatial distribution of AOD well. Finally, the AOD was validated against ground-based AOD data from the AERONET/PHOTONS Beijing site. There is good agreement between GF-1 AOD and AERONET/PHOTONS AOD (R > 0.7), but GF-1 AOD is noticeably larger than the ground-based AOD, which may be caused by the difference in filter response functions between the MODIS and GF-1 cameras.
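
    The inversion step that Deep Blue-style retrievals rely on can be sketched as a lookup-table interpolation: after the surface contribution is removed, the blue-band apparent reflectance is mapped to AOD through a precomputed radiative-transfer table. The table values below are invented, monotonic placeholders, not real simulation output.

    ```python
    import bisect

    # Hedged sketch of LUT-based AOD inversion; the table entries are
    # assumed, monotonic placeholders standing in for radiative-transfer
    # simulations at a fixed geometry.
    lut_refl = [0.05, 0.08, 0.12, 0.17, 0.23]   # simulated TOA blue-band reflectance
    lut_aod = [0.0, 0.2, 0.5, 1.0, 1.5]         # corresponding aerosol optical depth

    def retrieve_aod(rho):
        """Piecewise-linear inversion of reflectance to AOD, clamped to the table."""
        if rho <= lut_refl[0]:
            return lut_aod[0]
        if rho >= lut_refl[-1]:
            return lut_aod[-1]
        i = bisect.bisect_right(lut_refl, rho)
        frac = (rho - lut_refl[i - 1]) / (lut_refl[i] - lut_refl[i - 1])
        return lut_aod[i - 1] + frac * (lut_aod[i] - lut_aod[i - 1])

    aod = retrieve_aod(0.10)
    ```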

  2. Predicting tooth surface loss using genetic algorithms-optimized artificial neural networks.

    Science.gov (United States)

    Al Haidan, Ali; Abu-Hammad, Osama; Dar-Odeh, Najla

    2014-01-01

    Our aim was to predict tooth surface loss in individuals without the need to conduct clinical examinations. Artificial neural networks (ANNs) were used to construct a mathematical model. Input data consisted of age, smoker status, type of toothbrush, brushing, and consumption of pickled food, fizzy drinks, orange, apple, lemon, and dried seeds. Output data were the sum of tooth surface loss scores for selected teeth. The optimized ANN consisted of a 2-layer network with 15 neurons in the first layer and one neuron in the second layer. The data of 46 subjects were used to build the model, while the data of 15 subjects were used to test it. Accepting an error of ±5 scores over all chosen teeth, the accuracy of the network exceeds 80%. In conclusion, this study shows that modeling tooth surface loss using ANNs is possible and can be achieved with a high degree of accuracy.

  3. Predicting Tooth Surface Loss Using Genetic Algorithms-Optimized Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ali Al Haidan

    2014-01-01

    Full Text Available Our aim was to predict tooth surface loss in individuals without the need to conduct clinical examinations. Artificial neural networks (ANNs) were used to construct a mathematical model. Input data consisted of age, smoker status, type of toothbrush, brushing, and consumption of pickled food, fizzy drinks, orange, apple, lemon, and dried seeds. Output data were the sum of tooth surface loss scores for selected teeth. The optimized ANN consisted of a 2-layer network with 15 neurons in the first layer and one neuron in the second layer. The data of 46 subjects were used to build the model, while the data of 15 subjects were used to test it. Accepting an error of ±5 scores over all chosen teeth, the accuracy of the network exceeds 80%. In conclusion, this study shows that modeling tooth surface loss using ANNs is possible and can be achieved with a high degree of accuracy.
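
    The architecture described (two layers: 15 hidden neurons feeding one output neuron) can be sketched and trained on synthetic data. The inputs, targets, and plain gradient-descent training below are assumptions for illustration; the study's clinical data and GA-based optimization are not reproduced.

    ```python
    import numpy as np

    # Sketch of a 2-layer network (15 tanh neurons, 1 linear output neuron)
    # trained by gradient descent on synthetic data: 46 "subjects", 10 factors.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, (46, 10))          # 46 subjects x 10 input factors
    w_true = rng.uniform(0.0, 1.0, 10)
    y = (X @ w_true / 10.0)[:, None]             # synthetic "surface loss" target

    W1, b1 = rng.normal(0.0, 0.5, (10, 15)), np.zeros(15)
    W2, b2 = rng.normal(0.0, 0.5, (15, 1)), np.zeros(1)

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)                 # layer 1: 15 neurons
        pred = h @ W2 + b2                       # layer 2: 1 neuron
        err = pred - y
        dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
        W2 -= 0.1 * h.T @ err / len(X)
        b2 -= 0.1 * err.mean(0)
        W1 -= 0.1 * X.T @ dh / len(X)
        b1 -= 0.1 * dh.mean(0)

    mse = float((err ** 2).mean())
    ```

    After training, the fitted network should predict the synthetic scores better than simply guessing the mean score, which is the minimal sanity check for such a model.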

  4. An Algorithm for Surface Current Retrieval from X-band Marine Radar Images

    Directory of Open Access Journals (Sweden)

    Chengxi Shen

    2015-06-01

    Full Text Available In this paper, a novel current inversion algorithm from X-band marine radar images is proposed. The routine, for which deep water is assumed, begins with 3-D FFT of the radar image sequence, followed by the extraction of the dispersion shell from the 3-D image spectrum. Next, the dispersion shell is converted to a polar current shell (PCS) using a polar coordinate transformation. After removing outliers along each radial direction of the PCS, a robust sinusoidal curve fitting is applied to the data points along each circumferential direction of the PCS. The angle corresponding to the maximum of the estimated sinusoid function is determined to be the current direction, and the amplitude of this sinusoidal function is the current speed. For validation, the algorithm is tested against both simulated radar images and field data collected by a vertically-polarized X-band system and ground-truthed with measurements from an acoustic Doppler current profiler (ADCP). From the field data, it is observed that when the current speed is less than 0.5 m/s, the root mean square differences between the radar-derived and the ADCP-measured current speed and direction are 7.3 cm/s and 32.7°, respectively. The results indicate that the proposed procedure, unlike most existing current inversion schemes, is not susceptible to high current speeds and circumvents the need to consider aliasing. Meanwhile, the relatively low computational cost makes it an excellent choice in practical marine applications.
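
    The final fitting step of the routine can be sketched as a linear least-squares fit of u(θ) = a·cos θ + b·sin θ to the radial speeds around the polar current shell; the current speed and direction then follow from (a, b). The synthetic, noise-free shell below is an assumption for illustration (the paper's data come from the 3-D radar spectrum).

    ```python
    import math

    # Sketch of the sinusoid-fitting step: solve the 2-unknown normal
    # equations for u(theta) = a*cos(theta) + b*sin(theta), then read off
    # speed = hypot(a, b) and direction = atan2(b, a).
    def fit_current(thetas, radial_speeds):
        scc = sum(math.cos(t) ** 2 for t in thetas)
        sss = sum(math.sin(t) ** 2 for t in thetas)
        scs = sum(math.sin(t) * math.cos(t) for t in thetas)
        sc = sum(r * math.cos(t) for t, r in zip(thetas, radial_speeds))
        ss = sum(r * math.sin(t) for t, r in zip(thetas, radial_speeds))
        det = scc * sss - scs * scs               # normal-equation determinant
        a = (sc * sss - ss * scs) / det
        b = (ss * scc - sc * scs) / det
        return math.hypot(a, b), math.atan2(b, a)

    # Synthetic shell: a 0.4 m/s current toward 30 degrees
    thetas = [i * 2.0 * math.pi / 36 for i in range(36)]
    obs = [0.4 * math.cos(t - math.radians(30.0)) for t in thetas]
    speed, direction = fit_current(thetas, obs)
    ```

    On noise-free data the fit recovers the current exactly; the paper's robust variant additionally down-weights outliers before this step.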

  5. Robot manipulator identification based on adaptive multiple-input and multiple-output neural model optimized by advanced differential evolution algorithm

    Directory of Open Access Journals (Sweden)

    Nguyen Ngoc Son

    2016-12-01

    This article proposes a novel advanced differential evolution method which combines differential evolution with a modified back-propagation algorithm. The proposed approach is applied to train an adaptive enhanced neural model for approximating the inverse model of an industrial robot arm. Experimental results demonstrate that the proposed modeling procedure using the new identification approach achieves better convergence and higher precision than the traditional back-propagation method or the differential evolution approach alone. Furthermore, the inverse model of the industrial robot arm using the adaptive enhanced neural model yields outstanding results.
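    A minimal sketch of the plain differential evolution component (the common DE/rand/1/bin variant) on a toy objective is shown below; the paper's method additionally hybridizes DE with a modified back-propagation step, which is omitted here:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    # Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    # binomial crossover, greedy one-to-one selection.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fitness[i]:                  # greedy replacement
                pop[i], fitness[i] = trial, ft
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Toy objective standing in for the neural-model training loss.
best_x, best_f = differential_evolution(lambda x: np.sum(x ** 2),
                                        bounds=[(-5, 5)] * 3)
```

    In the hybrid scheme, each DE candidate would encode network weights and the objective would be the modeling error of the neural model.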

  6. Remote sensing algorithm for surface evapotranspiration considering landscape and statistical effects on mixed pixels

    Directory of Open Access Journals (Sweden)

    Z. Q. Peng

    2016-11-01

    Evapotranspiration (ET) plays an important role in surface–atmosphere interactions and can be monitored using remote sensing data. However, surface heterogeneity, including the inhomogeneity of landscapes and surface variables, significantly affects the accuracy of ET estimated from satellite data. The objective of this study is to assess and reduce the uncertainties resulting from surface heterogeneity in remotely sensed ET using Chinese HJ-1B satellite data, which have 30 m spatial resolution in the VIS/NIR bands and 300 m spatial resolution in the thermal-infrared (TIR) band. A temperature-sharpening and flux aggregation scheme (TSFA) was developed to obtain accurate heat fluxes from the HJ-1B satellite data. The IPUS (input parameter upscaling) and TRFA (temperature resampling and flux aggregation) methods were compared with the TSFA in this study. The three methods represent three typical schemes used to handle mixed pixels, from the simplest to the most complex: IPUS handles all surface variables at the coarse resolution of 300 m, TSFA handles them at 30 m resolution, and TRFA handles them at 30 and 300 m resolution, depending on the actual spatial resolution of each variable. Analyzing and comparing the three methods can help us gain a better understanding of spatial-scale errors in remote sensing of surface heat fluxes. In situ data collected during HiWATER-MUSOEXE (the Multi-Scale Observation Experiment on Evapotranspiration over heterogeneous land surfaces of the Heihe Watershed Allied Telemetry Experimental Research) were used to validate and analyze the methods. ET estimated by TSFA exhibited the best agreement with in situ observations, and the footprint validation results showed that the R², MBE, and RMSE values of the sensible heat flux (H) were 0.61, 0.90, and 50.99 W m⁻², respectively, and those for the latent heat flux (LE) were 0.82, −20.54, and 71.24 W m⁻², respectively. IPUS yielded the largest errors

  7. Land Surface Temperature Retrieval from MODIS Data by Integrating Regression Models and the Genetic Algorithm in an Arid Region

    Directory of Open Access Journals (Sweden)

    Ji Zhou

    2014-06-01

    The land surface temperature (LST) is one of the most important parameters of surface-atmosphere interactions. Methods for retrieving LSTs from satellite remote sensing data are beneficial for modeling hydrological, ecological, agricultural and meteorological processes on Earth's surface. Many split-window (SW) algorithms, which can be applied to satellite sensors with two adjacent thermal channels located in the atmospheric window between 10 μm and 12 μm, require auxiliary atmospheric parameters (e.g., water vapor content). In this research, the Heihe River basin, which is one of the most arid regions in China, is selected as the study area, and the Moderate-resolution Imaging Spectroradiometer (MODIS) is selected as a test case. The Global Data Assimilation System (GDAS) atmospheric profiles of the study area are used to generate the training dataset through radiative transfer simulation. Significant correlations between the atmospheric upwelling radiance in MODIS channel 31 and three other atmospheric parameters, namely the transmittance in channel 31 and the transmittance and upwelling radiance in channel 32, are derived from the simulation dataset and formulated with three regression models. Next, the genetic algorithm is used to estimate the LST; the combination of the regression models and the genetic algorithm is referred to as RM-GA. Validations of the RM-GA method are based on the simulation dataset generated from in situ measured radiosonde profiles and GDAS atmospheric profiles, the in situ measured LSTs, and a pair of daytime and nighttime MOD11A1 products in the study area. The results demonstrate that RM-GA has a good ability to estimate the LSTs directly from the MODIS data without any auxiliary atmospheric parameters. Although this research is for local application in the Heihe River basin, the findings and proposed method can easily be extended to other satellite sensors and regions with arid climates and high elevations.

  8. Determination of dissipative Dyakonov surface waves using a finite element method based eigenvalue algorithm.

    Science.gov (United States)

    Shih, Pi-Kuei; Hsiao, Hui-Hsin; Chang, Hung-Chun

    2017-11-27

    A full-vectorial finite element method is developed to analyze the surface waves propagating at the interface between two media, which may in particular be dissipative. The dissipative wave, possessing a complex-valued propagation constant, can be determined precisely for any given propagation direction, and thus the loss properties can be thoroughly analyzed. In addition, by exploiting a special characteristic of the implicit circular block matrix, we reduce the computational cost of the analysis. Using this method, the Dyakonov surface wave (DSW) at the interface between a dielectric and a metal-dielectric multilayered (MDM) structure, which serves as a hyperbolic medium, is discussed. Its propagation loss is smaller for larger periods of the MDM structure, but its field becomes less confined to the interface.

  9. A constrained reduced-dimensionality search algorithm to follow chemical reactions on potential energy surfaces

    Science.gov (United States)

    Lankau, Timm; Yu, Chin-Hui

    2013-06-01

    A constrained reduced-dimensionality algorithm can be used to efficiently locate transition states and products in reactions involving conformational changes. The search path (SP) is constructed stepwise from linear combinations of a small set of manually chosen internal coordinates, namely the predictors. The majority of the internal coordinates, the correctors, are optimized at every step of the SP to minimize the total energy of the system, so that the path becomes a minimum energy path connecting products and transition states with the reactants. Problems arise when the set of predictors needs to include weak coordinates, for example, dihedral angles, as well as strong ones such as bond distances. Two principal constraining methods for the weak coordinates are proposed to remedy this situation: static and dynamic constraints. Dynamic constraints are automatically activated and revoked depending on the state of the weak coordinates among the predictors, while static ones require preset control factors and act permanently. These methods enable the successful application of the reduced-dimensionality method to reactions where the reaction path covers large conformational changes in addition to the formation/breaking of chemical bonds (four such reactions are presented, involving cyclohexane, alanine dipeptide, trimethylsulfonium chloride, and azafulvene). Dynamic constraints are found to be the most efficient method, as they require neither additional information about the geometry of the transition state nor fine tuning of control parameters.

  10. The Novel Artificial Intelligence Based Sub-Surface Inclusion Detection Device and Algorithm

    Directory of Open Access Journals (Sweden)

    Jong-Ha LEE

    2017-05-01

    We design, implement, and test a novel tactile elasticity imaging sensor to detect the elastic modulus of a contacted object. Emulating a human finger, a multi-layer polydimethylsiloxane waveguide has been fabricated as the sensing probe. The light is illuminated under the critical angle to totally reflect within the flexible and transparent waveguide. When a waveguide is compressed by an object, the contact area of the waveguide deforms and causes the light to scatter. The scattered light is captured by a high resolution camera. Multiple images are taken from slightly different loading values. The distributed forces have been estimated using the integrated pixel values of diffused lights. The displacements of the contacted object deformation have been estimated by matching the series of tactile images. For this purpose, a novel pattern matching algorithm is developed. The salient feature of this sensor is that it is capable of measuring the absolute elastic modulus value of soft materials without additional measurement units. The measurements were validated by comparing the measured elasticity of the commercial rubber samples with the known elasticity. The evaluation results showed that this type of sensor can measure elasticity within ±5.38 %.

  11. Comparing experts and novices in Martian surface feature change detection and identification

    Science.gov (United States)

    Wardlaw, Jessica; Sprinks, James; Houghton, Robert; Muller, Jan-Peter; Sidiropoulos, Panagiotis; Bamford, Steven; Marsh, Stuart

    2018-02-01

    Change detection in satellite images is a key concern of the Earth Observation field for environmental and climate change monitoring. Satellite images also provide important clues to both the past and present surface conditions of other planets, which cannot be validated on the ground. With the volume of satellite imagery continuing to grow, the inadequacy of computerised solutions to manage and process imagery to the required professional standard is of critical concern. Whilst studies find the crowdsourcing approach suitable for the counting of impact craters in single images, images of higher resolution contain a much wider range of features, and the performance of novices in identifying more complex features and detecting change remains unknown. This paper presents a first step towards understanding whether novices can identify and annotate changes in different geomorphological features. A website was developed to enable visitors to flick between two images of the same location on Mars taken at different times and classify 1) if a surface feature changed and, if so, 2) what feature had changed, from a pre-defined list of six. Planetary scientists provided "expert" data against which classifications made by novices could be compared when the project subsequently went public. Whilst no significant difference was found in images identified with surface changes by experts and novices, results exhibited differences in consensus within and between experts and novices when asked to classify the type of change. Experts demonstrated higher levels of agreement in classification of changes as dust devil tracks, slope streaks and impact craters than other features, whilst the consensus of novices was consistent across feature types. These trends are secondary to the low levels of consensus found overall, regardless of feature type or classifier expertise.
These findings demand the attention of researchers who

  12. Single-source surface energy balance algorithms to estimate evapotranspiration from satellite-based remotely sensed data

    Science.gov (United States)

    Bhattarai, Nishan

    The flow of water and energy fluxes at the Earth's surface and within the climate system is difficult to quantify. Recent advances in remote sensing technologies have provided scientists with a useful means to improve characterization of these complex processes. However, many challenges remain that limit our ability to optimize remote sensing data in determining evapotranspiration (ET) and energy fluxes. For example, periodic cloud cover limits the operational use of remotely sensed data from passive sensors in monitoring seasonal fluxes. Additionally, there are many remote sensing-based single-source surface energy balance (SEB) models, but no clear guidance on which one to use in a particular application. Two widely used models, the surface energy balance algorithm for land (SEBAL) and mapping ET at high resolution with internalized calibration (METRIC), require substantial human intervention, which limits their applicability in broad-scale studies. This dissertation addressed some of these challenges by proposing novel ways to optimize available resources within the SEB-based ET modeling framework. A simple regression-based Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) fusion model was developed to integrate Landsat spatial and MODIS temporal characteristics in calculating ET. The fusion model produced reliable estimates of seasonal ET at moderate spatial resolution while mitigating the impact that cloud cover can have on image availability. The dissertation also evaluated five commonly used remote sensing-based single-source SEB models and found the surface energy balance system (SEBS) may be the best overall model for use in humid subtropical climates. The study also determined that model accuracy varies with land cover type; for example, all models worked well for wet marsh conditions, but the SEBAL and simplified surface energy balance index (S-SEBI) models worked better than the alternatives for grass cover. A new automated approach based on

  13. Solvent-assisted multistage nonequilibrium electron transfer in rigid supramolecular systems: Diabatic free energy surfaces and algorithms for numerical simulations

    Science.gov (United States)

    Feskov, Serguei V.; Ivanov, Anatoly I.

    2018-03-01

    An approach to the construction of diabatic free energy surfaces (FESs) for ultrafast electron transfer (ET) in a supramolecule with an arbitrary number of electron localization centers (redox sites) is developed, supposing that the reorganization energies for the charge transfers and shifts between all these centers are known. The dimensionality of the coordinate space required for the description of multistage ET in this supramolecular system is shown to be equal to N - 1, where N is the number of the molecular centers involved in the reaction. The proposed algorithm of FES construction employs metric properties of the coordinate space, namely, the relation between the solvent reorganization energy and the distance between the two FES minima. In this space, the ET reaction coordinate z_nn' associated with electron transfer between the nth and n'th centers is calculated through the projection to the direction connecting the FES minima. The energy-gap reaction coordinates z_nn' corresponding to different ET processes are not in general orthogonal, so that ET between two molecular centers can create a nonequilibrium distribution not only along its own reaction coordinate but along other reaction coordinates too. This results in the influence of the preceding ET steps on the kinetics of the ensuing ET, which is important when the ensuing reaction is ultrafast enough to proceed in parallel with relaxation along the ET reaction coordinates. Efficient algorithms for numerical simulation of multistage ET within the stochastic point-transition model are developed. The algorithms are based on the Brownian simulation technique with a recrossing-event detection procedure. The main advantages of the numerical method are (i) its computational complexity is linear with respect to the number of electronic states involved and (ii) calculations can be naturally parallelized up to the level of individual trajectories. The efficiency of the proposed approach is demonstrated for a model

  14. Identification of organic colorants in fibers, paints, and glazes by surface enhanced Raman spectroscopy.

    Science.gov (United States)

    Casadio, Francesca; Leona, Marco; Lombardi, John R; Van Duyne, Richard

    2010-06-15

    Organic dyes extracted from plants, insects, and shellfish have been used for millennia in dyeing textiles and manufacturing colorants for painting. The economic push for dyes with high tinting strength, directly related to high extinction coefficients in the visible range, historically led to the selection of substances that could be used at low concentrations. But a desirable property for the colorist is a major problem for the analytical chemist; the identification of dyes in cultural heritage objects is extremely difficult. Techniques routinely used in the identification of inorganic pigments are generally not applicable to dyes: X-ray fluorescence because of the lack of an elemental signature, Raman spectroscopy because of the generally intense luminescence of dyes, and Fourier transform infrared spectroscopy because of the interference of binders and extenders. Traditionally, the identification of dyes has required relatively large samples (0.5-5 mm in diameter) for analysis by high-performance liquid chromatography. In this Account, we describe our efforts to develop practical approaches to identifying dyes in works of art from samples as small as 25 μm in diameter with surface-enhanced Raman scattering (SERS). In SERS, the Raman scattering signal is greatly enhanced when organic molecules with large delocalized electron systems are adsorbed on atomically rough metallic substrates; fluorescence is concomitantly quenched. Recent nanotechnological advances in preparing and manipulating metallic particles have afforded staggering enhancement factors of up to 10¹⁴. SERS is thus an ideal technique for the analysis of dyes. Indeed, rhodamine 6G and crystal violet, two organic compounds used to demonstrate the sensitivity of SERS at the single-molecule level, were first synthesized as textile dyes in the second half of the 19th century. In this Account, we examine the practical application of SERS to cultural heritage studies, including the selection of

  15. Automatic Frequency Identification under Sample Loss in Sinusoidal Pulse Width Modulation Signals Using an Iterative Autocorrelation Algorithm

    Directory of Open Access Journals (Sweden)

    Alejandro Said

    2016-08-01

    In this work, we present a simple algorithm to automatically calculate the Fourier spectrum of a Sinusoidal Pulse Width Modulation (SPWM) signal. Modulated voltage signals of this kind are used in industry by speed drives to vary the speed of alternating current motors while maintaining a smooth torque. Nevertheless, the SPWM technique produces undesired harmonics, which yield stator heating and power losses. By monitoring these signals without human interaction, it is possible to identify the harmonic content of SPWM signals in a fast and continuous manner. The algorithm is based on the autocorrelation function, commonly used in radar and voice signal processing. Taking advantage of the symmetry properties of the autocorrelation, the algorithm is capable of estimating half of the period of the fundamental frequency, thus allowing one to estimate the necessary number of samples to produce an accurate Fourier spectrum. To deal with the loss of samples, i.e., the scan backlog, the algorithm iteratively acquires and trims the discrete sequence of samples until the required number of samples reaches a stable value. Simulations show that the algorithm is affected by neither the magnitude of the switching pulses nor the acquisition noise.
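    The core idea, estimating the fundamental period from the autocorrelation of the sampled signal, can be sketched as below. This is a single-shot simplification of the paper's iterative acquire-and-trim loop, and the 50 Hz square wave is an illustrative stand-in for an SPWM waveform:

```python
import numpy as np

def fundamental_period(signal, fs):
    # Estimate the fundamental period from the first major peak of the
    # autocorrelation after the lag-0 peak (simplified, non-iterative).
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0..N-1
    crossing = np.argmax(ac < 0)        # first zero crossing skips lag 0
    lag = crossing + np.argmax(ac[crossing:])
    return lag / fs

fs = 8000.0
t = np.arange(0.0, 0.1, 1.0 / fs)
sig = np.sign(np.sin(2.0 * np.pi * 50.0 * t))  # 50 Hz square-like stand-in
period = fundamental_period(sig, fs)            # expected near 0.02 s
```

    Knowing the period then fixes the number of samples needed for an integer number of cycles in the FFT window.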

  16. A Semiautomated Multilayer Picking Algorithm for Ice-sheet Radar Echograms Applied to Ground-Based Near-Surface Data

    Science.gov (United States)

    Onana, Vincent De Paul; Koenig, Lora Suzanne; Ruth, Julia; Studinger, Michael; Harbeck, Jeremy P.

    2014-01-01

    Snow accumulation over an ice sheet is the sole mass input, making it a primary measurement for understanding the past, present, and future mass balance. Near-surface frequency-modulated continuous-wave (FMCW) radars image isochronous firn layers, recording accumulation histories. The Semiautomated Multilayer Picking Algorithm (SAMPA) was designed and developed to trace annual accumulation layers in polar firn from both airborne and ground-based radars. The SAMPA algorithm is based on the Radon transform (RT), computed by blocks and angular orientations over a radar echogram. For each block of the echogram, the RT maps firn segmented-layer features into peaks, which are picked using amplitude and width thresholds. A backward RT is then computed for each corresponding block, mapping the peaks back into picked segmented layers. The segmented layers are then connected and smoothed to achieve a final layer pick across the echogram. Once input parameters are trained, SAMPA operates autonomously and can process hundreds of kilometers of radar data, picking more than 40 layers. SAMPA's final pick results and layer numbering still require a cursory manual adjustment to correct noncontinuous picks, which are likely not annual, and to correct for inconsistency in layer numbering. Despite the manual effort to train and check SAMPA results, it is an efficient tool for picking multiple accumulation layers in polar firn, reducing time over manual digitizing efforts. The trackability of well-detected layers is greater than 90%.
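    The way the Radon transform maps a layer into a peak can be illustrated with a crude block RT (nearest-neighbour rotation followed by line sums); this is an illustration of the principle, not SAMPA's implementation:

```python
import numpy as np

def radon_block(block, angles):
    # Crude discrete Radon transform of a square block: for each angle,
    # sample the block on a rotated grid and sum along rows (line integrals).
    n = block.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    out = np.empty((n, len(angles)))
    for k, a in enumerate(angles):
        ca, sa = np.cos(a), np.sin(a)
        xs = np.clip(np.round(ca * (xx - c) - sa * (yy - c) + c), 0, n - 1).astype(int)
        ys = np.clip(np.round(sa * (xx - c) + ca * (yy - c) + c), 0, n - 1).astype(int)
        out[:, k] = block[ys, xs].sum(axis=1)
    return out

# A horizontal "layer" maps to a sharp peak in the 0-degree projection
# and to a flat profile in the 90-degree projection.
block = np.zeros((31, 31))
block[15, :] = 1.0
sino = radon_block(block, np.deg2rad([0.0, 45.0, 90.0]))
```

    Picking the peak (here at row 15, angle 0°) and inverting recovers the layer's position and slope within the block.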

  17. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    Science.gov (United States)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching, and partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SPs) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires the information of the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar_4-Ar_30) in which argon atoms interact via the Lennard-Jones potential, and (iii) Ar_mX (m = 12) clusters, where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Ar_n clusters.
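    A floating-point GA of the kind described can be sketched on a model 2-d surface as follows; the tournament selection, blend crossover, and Gaussian mutation are generic choices (assumptions here), and the paper's coordinate stretching and core/outer partitioning are omitted:

```python
import numpy as np

def real_coded_ga(f, bounds, pop_size=40, generations=150,
                  mutation_scale=0.1, seed=1):
    # Minimal floating-point GA: elitism, binary tournament selection,
    # blend crossover, Gaussian mutation.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(generations):
        fit = np.array([f(x) for x in pop])
        new = [pop[np.argmin(fit)]]                      # keep the best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]   # tournament
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            w = rng.random()
            child = w * p1 + (1.0 - w) * p2              # blend crossover
            child += rng.normal(scale=mutation_scale, size=lo.size)
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], float(fit.min())

# Model 2-d PES with its global minimum at (1, -1).
pes = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 1.0) ** 2
x_best, e_best = real_coded_ga(pes, bounds=[(-5, 5), (-5, 5)])
```

    Locating saddle points instead of minima would change only the fitness function (e.g., gradient norm with Hessian-signature checks), as the abstract notes.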

  18. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low SNR due to noise-induced bias. The IQML cost function can be “denoised” by eliminating the noise contribution: the resulting algorithm, Denoised IQML (DIQML), gives consistent estimates and outperforms IQML. Furthermore, DIQML is asymptotically globally convergent and hence insensitive to the initialization. Its asymptotic performance does not reach the DML performance though. The second strategy, called Pseudo-Quadratic ML (PQML), is naturally denoised. The denoising in PQML is furthermore more efficient than in DIQML: PQML yields the same asymptotic performance as DML, as opposed to DIQML.

  19. Advancing of Land Surface Temperature Retrieval Using Extreme Learning Machine and Spatio-Temporal Adaptive Data Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2015-04-01

    As a critical variable characterizing biophysical processes in the ecological environment, and as a key indicator in the surface energy balance, evapotranspiration and urban heat islands, Land Surface Temperature (LST) retrieved from Thermal Infra-Red (TIR) images at both high temporal and spatial resolution is urgently needed. However, due to the limitations of existing satellite sensors, no Earth observation sensor can obtain TIR data at detailed spatial and temporal resolution simultaneously. Thus, several attempts at image fusion, blending TIR data from a high temporal resolution sensor with data from a high spatial resolution sensor, have been studied. This paper presents a novel data fusion method, integrating image fusion and spatio-temporal fusion techniques, for deriving LST datasets at 30 m spatial resolution from daily MODIS images and Landsat ETM+ images. The Landsat ETM+ TIR data were first enhanced from 60 m to 30 m resolution based on the extreme learning machine (ELM) algorithm using a neural network regression model. Then, the MODIS LST and enhanced Landsat ETM+ TIR data were fused by the Spatio-temporal Adaptive Data Fusion Algorithm for Temperature mapping (SADFAT) in order to derive high resolution synthetic data. The synthetic images were evaluated for both testing and simulated satellite images. The average difference (AD) and absolute average difference (AAD) are smaller than 1.7 K, while the correlation coefficient (CC) and root-mean-square error (RMSE) are 0.755 and 1.824 K, respectively, showing that the proposed method enhances the spatial resolution of the predicted LST images and preserves the spectral information at the same time.
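    The ELM regression step can be illustrated in isolation: random input weights, a single hidden layer, and output weights solved in one least-squares step. The synthetic 2-feature data below are a stand-in for the TIR-sharpening inputs, which the abstract does not enumerate:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    # Extreme learning machine: fix random input weights, then solve
    # the output weights in a single least-squares step.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                     # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]    # smooth synthetic target
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

    The absence of iterative training is what makes ELM attractive for per-scene regression such as resolution enhancement.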

  20. An algorithm for analytical solution of basic problems featuring elastostatic bodies with cavities and surface flaws

    Science.gov (United States)

    Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.

    2018-03-01

    Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.

  1. An Orthogonal Projection Algorithm to Suppress Interference in High-Frequency Surface Wave Radar

    Directory of Open Access Journals (Sweden)

    Zezong Chen

    2018-03-01

    High-frequency surface wave radar (HFSWR) has been widely applied in sea-state monitoring, and its performance is known to suffer from various unwanted interferences and clutters. Radio frequency interference (RFI) from other radiating sources and ionospheric clutter dominate the various types of unwanted signals, because the HF band is congested with many users and the ionosphere propagates interference from distant sources. In this paper, various orthogonal projection schemes are summarized, and three new schemes are proposed for interference cancellation. Simulations and field data recorded by an experimental multi-frequency HFSWR from Wuhan University are used to evaluate the cancellation performances of these schemes with respect to both RFI and ionospheric clutter. The processing results may provide a guideline for identifying the appropriate orthogonal projection cancellation schemes in various HFSWR applications.
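    The generic orthogonal-projection operation underlying such schemes can be sketched as follows, assuming the interference subspace S has already been estimated (how S is obtained is where the individual schemes differ):

```python
import numpy as np

def orthogonal_projection_cancel(X, S):
    # Project X onto the orthogonal complement of span(S): (I - P) X,
    # where P = S (S^H S)^+ S^H is the projector onto span(S).
    P = S @ np.linalg.pinv(S.conj().T @ S) @ S.conj().T
    return X - P @ X

rng = np.random.default_rng(0)
n = 8
signal = rng.normal(size=(n, 1))
interference = rng.normal(size=(n, 1))
X = signal + 5.0 * interference          # received data, interference-dominated
X_clean = orthogonal_projection_cancel(X, interference)
# The cleaned data have no component along the interference direction.
residual = float(np.linalg.norm(interference.conj().T @ X_clean))
```

    In an HFSWR setting X would be complex-valued snapshot data and S would span the estimated RFI or ionospheric-clutter subspace.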

  2. Self-Assembled Nanocube-Based Plasmene Nanosheets as Soft Surface-Enhanced Raman Scattering Substrates toward Direct Quantitative Drug Identification on Surfaces.

    Science.gov (United States)

    Si, Kae Jye; Guo, Pengzhen; Shi, Qianqian; Cheng, Wenlong

    2015-05-19

    We report on self-assembled nanocube-based plasmene nanosheets as new surface-enhanced Raman scattering (SERS) substrates toward direct identification of trace amounts of drugs sitting on topologically complex real-world surfaces. The uniform nanocube arrays (superlattices) led to low spatial SERS signal variances (∼2%). Unlike conventional SERS substrates, which are based on rigid nanostructured metals, our plasmene nanosheets are mechanically soft and optically semitransparent, enabling conformal attachment to real-world solid surfaces such as banknotes for direct SERS identification of drugs. Our plasmene nanosheets were able to detect benzocaine overdose down to a parts-per-billion (ppb) level with an excellent linear relationship (R² > 0.99) between characteristic peak intensity and concentration. On banknote surfaces, a detection limit of ∼0.9 × 10⁻⁶ g/cm² benzocaine could be achieved. Furthermore, a few other drugs could also be identified, even in binary mixtures, with our plasmene nanosheets. Our experimental results clearly show that plasmene sheets represent a new class of unique SERS substrates, potentially serving as a versatile platform for real-world forensic drug identification.

  3. Identification of Patients with Statin Intolerance in a Managed Care Plan: A Comparison of 2 Claims-Based Algorithms.

    Science.gov (United States)

    Bellows, Brandon K; Sainski-Nguyen, Amy M; Olsen, Cody J; Boklage, Susan H; Charland, Scott; Mitchell, Matthew P; Brixner, Diana I

    2017-09-01

    While statins are safe and efficacious, some patients may experience statin intolerance or treatment-limiting adverse events. Identifying patients with statin intolerance may allow optimal management of cardiovascular event risk through other strategies. Recently, an administrative claims data (ACD) algorithm was developed to identify patients with statin intolerance and validated against electronic medical records. However, how this algorithm compares with perceptions of statin intolerance by integrated delivery networks remains largely unknown. To determine the concurrent validity of an algorithm developed by a regional integrated delivery network multidisciplinary panel (MP) and a published ACD algorithm in identifying patients with statin intolerance. The MP consisted of 3 physicians and 2 pharmacists with expertise in cardiology, internal medicine, and formulary management. The MP algorithm used pharmacy and medical claims to identify patients with statin intolerance, classifying them as having statin intolerance if they met any of the following criteria: (a) medical claim for rhabdomyolysis, (b) medical claim for muscle weakness, (c) an outpatient medical claim for creatine kinase assay, (d) fills for ≥ 2 different statins excluding dose increases, (e) decrease in statin dose, or (f) discontinuation of a statin with a subsequent fill for a nonstatin lipid-lowering therapy. The validated ACD algorithm identified statin intolerance as absolute intolerance with rhabdomyolysis; absolute intolerance without rhabdomyolysis (i.e., other adverse events); or as dose titration intolerance. Adult patients (aged ≥ 18 years) from the integrated delivery network with at least 1 prescription fill for a statin between January 1, 2011, and December 31, 2012 (first fill defined the index date) were identified. Patients with ≥ 1 year pre- and ≥ 2 years post-index continuous enrollment and no statin prescription fills in the pre-index period were included. The MP and
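    The six MP criteria can be expressed as a simple claims-screening rule; the field names below are hypothetical placeholders, not taken from the paper:

```python
def statin_intolerant(patient):
    # Return True if any of the six MP criteria (a)-(f) is met.
    # Keys are illustrative stand-ins for claims-derived flags.
    return any([
        patient.get("rhabdomyolysis_claim", False),        # (a)
        patient.get("muscle_weakness_claim", False),       # (b)
        patient.get("outpatient_ck_assay_claim", False),   # (c)
        patient.get("distinct_statins_filled", 1) >= 2,    # (d)
        patient.get("statin_dose_decreased", False),       # (e)
        patient.get("switched_to_nonstatin", False),       # (f)
    ])

flagged = statin_intolerant({"distinct_statins_filled": 2})
not_flagged = statin_intolerant({})
```

    A real implementation would derive each flag from diagnosis/procedure codes and fill histories in the claims database.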

  4. Cuckoo Search Algorithm with Lévy Flights for Global-Support Parametric Surface Approximation in Reverse Engineering

    Directory of Open Access Journals (Sweden)

    Andrés Iglesias

    2018-03-01

    Full Text Available This paper concerns several important topics of the Symmetry journal, namely, computer-aided design, computational geometry, computer graphics, visualization, and pattern recognition. We also take advantage of the symmetric structure of tensor-product surfaces, where the parametric variables u and v play a symmetric role in shape reconstruction. In this paper we address the general problem of global-support parametric surface approximation from clouds of data points for reverse engineering applications. Given a set of measured data points, the approximation is formulated as a nonlinear continuous least-squares optimization problem. Then, a recent metaheuristic called the Cuckoo Search Algorithm (CSA) is applied to compute all relevant free variables of this minimization problem (namely, the data parameters and the surface poles). The method includes the iterative generation of new solutions by using Lévy flights to promote the diversity of solutions and prevent stagnation. A critical advantage of this method is its simplicity: the CSA requires only two parameters, many fewer than other metaheuristic approaches, so parameter tuning becomes a very easy task. The method is also simple to understand and easy to implement. Our approach has been applied to a benchmark of three illustrative sets of noisy data points corresponding to surfaces exhibiting several challenging features. Our experimental results show that the method performs very well even for the cases of noisy and unorganized data points. Therefore, the method can be directly used for real-world reverse engineering applications without further pre/post-processing. Comparative work with the most classical mathematical techniques for this problem as well as a recent modification of the CSA called Improved CSA (ICSA) is also reported. Two nonparametric statistical tests show that our method outperforms the classical mathematical techniques and provides results equivalent to those of the ICSA.
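
    The CSA loop described above (Lévy-flight generation of new solutions plus abandonment of a fraction of nests) can be sketched compactly. This is a generic illustration on a toy least-squares objective, not the paper's surface-fitting code; the population size, abandonment fraction pa, and step scale are illustrative defaults.

    ```python
    import numpy as np
    from math import gamma, sin, pi

    # Minimal Cuckoo Search with Lévy flights, minimizing a toy objective.
    # In the paper the decision variables would be the data parameters and
    # surface poles; here we just minimize a sphere function.

    rng = np.random.default_rng(0)

    def levy_step(dim, beta=1.5):
        # Mantegna's algorithm for symmetric Lévy-stable step lengths
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, dim, n=15, pa=0.25, iters=500, lb=-5.0, ub=5.0):
        nests = rng.uniform(lb, ub, (n, dim))
        fit = np.apply_along_axis(f, 1, nests)
        for _ in range(iters):
            best = nests[fit.argmin()]
            # generate new solutions via Lévy flights around the current best
            for i in range(n):
                cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best),
                               lb, ub)
                fc = f(cand)
                if fc < fit[i]:
                    nests[i], fit[i] = cand, fc
            # abandon a fraction pa of the worst nests (diversification)
            worst = fit.argsort()[-int(pa * n):]
            nests[worst] = rng.uniform(lb, ub, (len(worst), dim))
            fit[worst] = np.apply_along_axis(f, 1, nests[worst])
        return nests[fit.argmin()], float(fit.min())

    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = cuckoo_search(sphere, dim=3)
    print(f_best)  # typically very close to 0
    ```
    
    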

  5. The Parallel SBAS-DInSAR algorithm: an effective and scalable tool for Earth's surface displacement retrieval

    Science.gov (United States)

    Zinno, Ivana; De Luca, Claudio; Elefante, Stefano; Imperatore, Pasquale; Manunta, Michele; Casu, Francesco

    2014-05-01

    been carried out on real data acquired by ENVISAT and COSMO-SkyMed sensors. Moreover, the P-SBAS performances with respect to the size of the input dataset will also be investigated. This kind of analysis is essential for assessing the performance of the P-SBAS algorithm and gaining insight into its applicability to different scenarios. Such results will also be crucial for identifying and evaluating how to appropriately exploit P-SBAS to process the forthcoming large Sentinel-1 data stream. References [1] Massonnet, D., Briole, P., Arnaud, A., "Deflation of Mount Etna monitored by Spaceborne Radar Interferometry", Nature, vol. 375, pp. 567-570, 1995. [2] Berardino, P., G. Fornaro, R. Lanari, and E. Sansosti, "A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms", IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002. [3] Elefante, S., Imperatore, P., Zinno, I., M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, F. Casu, "SBAS-DINSAR Time series generation on cloud computing platforms", IEEE IGARSS 2013, July 2013, Melbourne (AU). [4] I. Zinno, P. Imperatore, S. Elefante, F. Casu, M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, "A Novel Parallel Computational Framework for Processing Large INSAR Data Sets", Living Planet Symposium 2013, Sept. 9-13, 2013.

  6. Comment on ``Identification of low order manifolds: Validating the algorithm of Maas and Pope'' [Chaos 9, 108-123 (1999)]

    Science.gov (United States)

    Flockerzi, Dietrich; Heineken, Wolfram

    2006-12-01

    It is claimed by Rhodes, Morari, and Wiggins [Chaos 9, 108-123 (1999)] that the projection algorithm of Maas and Pope [Combust. Flame 88, 239-264 (1992)] identifies the slow invariant manifold of a system of ordinary differential equations with time-scale separation. A transformation to Fenichel normal form serves as a tool to prove this statement. Furthermore, Rhodes, Morari, and Wiggins conjectured that away from a slow manifold, the criterion of Maas and Pope will never be fulfilled. We present two examples that refute these assertions. In the first example, the algorithm of Maas and Pope leads to a manifold that is not invariant but close to a slow invariant manifold. The claim of Rhodes, Morari, and Wiggins that the Maas and Pope projection algorithm is invariant under a coordinate transformation to Fenichel normal form is shown to be incorrect in this case. In the second example, the projection algorithm of Maas and Pope leads to a manifold that lies in a region where no slow manifold exists at all. This refutes the conjecture of Rhodes, Morari, and Wiggins mentioned above.

  7. Identification of Onset of Fatigue in Biceps Brachii Muscles Using Surface EMG and Multifractal DMA Algorithm.

    Science.gov (United States)

    Marri, Kiran; Swaminathan, Ramakrishnan

    2015-01-01

    Prolonged and repeated fatigue conditions can cause muscle damage and adversely impact coordination in dynamic contractions. Hence, it is important to determine the onset of muscle fatigue (OMF) in clinical rehabilitation and sports medicine. The aim of this study is to propose a method for analyzing surface electromyography (sEMG) signals and identify OMF using the multifractal detrending moving average algorithm (MFDMA). Signals are recorded from the biceps brachii muscles of twenty-two healthy volunteers while performing a standard curl exercise. The first instance of muscle discomfort during the curl exercise is considered as the experimental OMF. Signals are pre-processed and divided into 1-second epochs for MFDMA analysis. The degree of multifractality (DOM) feature is calculated from the multifractal spectrum. Further, the variance of DOM is computed and OMF is calculated from instances of high peaks. The analysis is carried out by dividing the entire duration into six equal zones for time-axis normalization. High peaks are observed in zones where subjects reported muscle discomfort. First muscle discomfort occurred in the third and fourth zones for the majority of subjects. The calculated and experimental muscle discomfort zones closely matched in 72% of subjects, indicating that the multifractal technique may be a good method for detecting the onset of fatigue. The experimental data may have an element of subjectivity in identifying muscle discomfort. This work can also be useful to analyze progressive changes in muscle dynamics in neuromuscular conditions and co-contraction activity.
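
    For readers unfamiliar with MFDMA, the core computation can be sketched as follows. This is a simplified one-dimensional version (backward moving average), with DOM taken here as the spread of the generalized Hurst exponent h(q); the window sizes, q-range, and test signal are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    # Simplified MFDMA sketch: detrend the signal profile with a backward
    # moving average, compute q-order fluctuation functions F_q(s), fit
    # h(q) as the log-log slope, and take DOM = max h(q) - min h(q).

    def mfdma_hurst(x, scales, q_values):
        y = np.cumsum(x - np.mean(x))               # profile of the signal
        h = []
        for q in q_values:
            log_F, log_s = [], []
            for s in scales:
                ma = np.convolve(y, np.ones(s) / s, mode="valid")
                resid = y[s - 1:] - ma              # detrended residuals
                seg = len(resid) // s
                F2 = np.array([np.mean(resid[i*s:(i+1)*s] ** 2)
                               for i in range(seg)])
                if q == 0:
                    Fq = np.exp(0.5 * np.mean(np.log(F2)))
                else:
                    Fq = np.mean(F2 ** (q / 2)) ** (1 / q)
                log_F.append(np.log(Fq)); log_s.append(np.log(s))
            h.append(np.polyfit(log_s, log_F, 1)[0])   # slope = h(q)
        return np.array(h)

    rng = np.random.default_rng(1)
    x = rng.normal(size=4096)            # white noise: h(q) ~ 0.5, small DOM
    h = mfdma_hurst(x, scales=[16, 32, 64, 128, 256], q_values=[-3, -1, 1, 3])
    dom = h.max() - h.min()              # degree of multifractality
    print(h.round(2), round(float(dom), 2))
    ```

    A genuinely multifractal sEMG epoch would show a noticeably wider h(q) spread than this monofractal test signal.
    
    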

  8. Online Surface Defect Identification of Cold Rolled Strips Based on Local Binary Pattern and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2018-03-01

    Full Text Available In the production of cold-rolled strip, the strip surface may suffer from various defects which need to be detected and identified using an online inspection system. The system is equipped with high-speed and high-resolution cameras to acquire images from the moving strip surface. Features are then extracted from the images and are used as inputs of a pre-trained classifier to identify the type of defect. New types of defect often appear in production. At this point the pre-trained classifier needs to be quickly retrained and deployed in seconds to meet the requirement of online identification of all defects in the environment of a continuous production line. Therefore, the methods for extracting the image features and training the classification model should be automated and fast enough, normally within seconds. This paper presents our findings in investigating the computational and classification performance of various feature extraction methods and classification models for strip surface defect identification. The methods include Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Local Binary Patterns (LBP). The classifiers we have assessed include the Back Propagation (BP) neural network, Support Vector Machine (SVM) and Extreme Learning Machine (ELM). By comparing various combinations of different feature extraction and classification methods, our experiments show that the hybrid of LBP for feature extraction and ELM for defect classification results in less training and identification time with higher classification accuracy, which satisfies the requirements of online real-time identification.
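
    The winning LBP + ELM combination is simple enough to sketch end to end: an 8-neighbour LBP histogram as the feature vector, and an ELM whose training is a single least-squares solve (which is why retraining takes seconds). The synthetic "defect" images and all sizes below are illustrative, not the paper's data or architecture.

    ```python
    import numpy as np

    # 8-neighbour LBP histogram features + a minimal Extreme Learning Machine
    # (random hidden layer, output weights fitted by one pseudo-inverse solve).

    def lbp_histogram(img):
        c = img[1:-1, 1:-1]
        shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
        code = np.zeros_like(c, dtype=int)
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
            code |= (nb >= c).astype(int) << bit     # threshold at the centre
        hist, _ = np.histogram(code, bins=256, range=(0, 256))
        return hist / hist.sum()

    class ELM:
        def __init__(self, n_hidden=64, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            T = np.eye(int(y.max()) + 1)[y]          # one-hot targets
            self.beta = np.linalg.pinv(H) @ T        # single least-squares solve
            return self
        def predict(self, X):
            return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

    rng = np.random.default_rng(2)
    smooth = [rng.normal(0, 1, (32, 32)) for _ in range(40)]            # class 0
    streaky = [rng.normal(0, 1, (32, 32)) + 5 * np.sin(np.arange(32))[None, :]
               for _ in range(40)]                                      # class 1
    X = np.array([lbp_histogram(im) for im in smooth + streaky])
    y = np.array([0] * 40 + [1] * 40)
    acc = (ELM().fit(X, y).predict(X) == y).mean()
    print(acc)
    ```
    
    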

  9. Identification and Quantification of Celery Allergens Using Fiber Optic Surface Plasmon Resonance PCR

    Science.gov (United States)

    Daems, Devin; Peeters, Bernd; Delport, Filip; Remans, Tony; Lammertyn, Jeroen; Spasic, Dragana

    2017-01-01

    Accurate identification and quantification of allergens is key in healthcare, biotechnology and food quality and safety. Celery (Apium graveolens) is one of the most important elicitors of food allergic reactions in Europe. Currently, the gold standards to identify, quantify and discriminate celery in a biological sample are immunoassays and two-step molecular detection assays in which quantitative PCR (qPCR) is followed by a high-resolution melting analysis (HRM). In order to provide a DNA-based, rapid and simple detection method suitable for one-step quantification, a fiber optic PCR melting assay (FO-PCR-MA) was developed to determine different concentrations of celery DNA (1 pM–0.1 fM). The presented method is based on the hybridization and melting of DNA-coated gold nanoparticles to the FO sensor surface in the presence of the target gene (mannitol dehydrogenase, Mtd). The concept was not only able to reveal the presence of celery DNA, but also allowed for the cycle-to-cycle quantification of the target sequence through melting analysis. Furthermore, the developed bioassay was benchmarked against qPCR followed by HRM, showing excellent agreement (R2 = 0.96). In conclusion, this innovative and sensitive diagnostic test could further improve food quality control and thus have a large impact on allergen-induced healthcare problems. PMID:28758965

  10. Identification and Quantification of Celery Allergens Using Fiber Optic Surface Plasmon Resonance PCR

    Directory of Open Access Journals (Sweden)

    Devin Daems

    2017-07-01

    Full Text Available Accurate identification and quantification of allergens is key in healthcare, biotechnology and food quality and safety. Celery (Apium graveolens) is one of the most important elicitors of food allergic reactions in Europe. Currently, the gold standards to identify, quantify and discriminate celery in a biological sample are immunoassays and two-step molecular detection assays in which quantitative PCR (qPCR) is followed by a high-resolution melting analysis (HRM). In order to provide a DNA-based, rapid and simple detection method suitable for one-step quantification, a fiber optic PCR melting assay (FO-PCR-MA) was developed to determine different concentrations of celery DNA (1 pM–0.1 fM). The presented method is based on the hybridization and melting of DNA-coated gold nanoparticles to the FO sensor surface in the presence of the target gene (mannitol dehydrogenase, Mtd). The concept was not only able to reveal the presence of celery DNA, but also allowed for the cycle-to-cycle quantification of the target sequence through melting analysis. Furthermore, the developed bioassay was benchmarked against qPCR followed by HRM, showing excellent agreement (R2 = 0.96). In conclusion, this innovative and sensitive diagnostic test could further improve food quality control and thus have a large impact on allergen-induced healthcare problems.

  11. Damage identification from uniform load surface using continuous and stationary wavelet transforms

    Directory of Open Access Journals (Sweden)

    M. Masoumi

    Full Text Available Derived from the flexibility matrix, the Uniform Load Surface (ULS) is used to identify damage in beam-type structures. This method benefits from greater participation of the lower-order modes and is less prone to noise and irregularities in the measured data than the original flexibility matrix technique. These characteristics make the approach a practical tool in the field of damage identification. This paper presents a procedure that employs stationary wavelet transform multi-resolution analysis (SWT-MRA) to refine the ULS obtained from the damaged structure and then uses the continuous wavelet transform (CWT) to localize the discontinuity of the improved ULS as a sign of the damage site. The proposed method is evaluated on a cantilever beam as a numerical case, where the ULS is formed using mode shapes of the damaged beam and two kinds of wavelets (i.e., Symlet 4 and bior 6.8) are applied for discerning the induced crack. Moreover, a laboratory test is conducted on a free-free beam to experimentally evaluate the practicability of the technique.
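
    The localization idea can be illustrated in a few lines: the difference between damaged and intact ULS-like curves contains a slope discontinuity at the damage site, and the CWT flags it as a local maximum of the coefficient magnitude. A Mexican-hat wavelet implemented in plain NumPy is used here for self-containment (the study itself applies Symlet 4 and bior 6.8 to ULS data built from measured mode shapes); the beam shape and crack position are invented for the demo.

    ```python
    import numpy as np

    # Locate a slope discontinuity in a ULS-like curve via a Mexican-hat CWT.

    def mexican_hat(t):
        return (1 - t ** 2) * np.exp(-t ** 2 / 2)

    def cwt_peak(signal, scales, edge=40):
        best = np.zeros(len(signal))
        for s in scales:
            t = np.arange(-4 * s, 4 * s + 1) / s
            kernel = mexican_hat(t) / np.sqrt(s)
            best = np.maximum(best, np.abs(np.convolve(signal, kernel, mode="same")))
        interior = best[edge:-edge]      # skip boundary-dominated coefficients
        return edge + int(np.argmax(interior))

    x = np.linspace(0, 1, 201)
    uls_intact = x ** 2 * (3 - x) / 6                          # smooth deflection-like curve
    uls_damaged = uls_intact + 0.002 * np.maximum(0, x - 0.4)  # kink at x = 0.4 ("crack")
    idx = cwt_peak(uls_damaged - uls_intact, scales=[2, 4, 8])
    print(x[idx])  # near the simulated crack at x = 0.4
    ```

    Differencing against the intact curve is one common way to suppress the smooth global deflection so that only the damage-induced discontinuity drives the wavelet coefficients.
    
    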

  12. In situ detection and identification of hair dyes using surface-enhanced Raman spectroscopy (SERS).

    Science.gov (United States)

    Kurouski, Dmitry; Van Duyne, Richard P

    2015-03-03

    Hair is one of the most common types of physical evidence found at a crime scene. Forensic examination may suggest a connection between a suspect and a crime scene or victim, or it may demonstrate an absence of such associations. Therefore, forensic analysis of hair evidence is invaluable to criminal investigations. Current hair forensic examinations are primarily based on a subjective microscopic comparison of hair found at the crime scene with a sample of the suspect's hair. Since this is often inconclusive, the development of alternative and more-accurate hair analysis techniques is critical. In this study, we utilized surface-enhanced Raman spectroscopy (SERS) to demonstrate that artificial dyes can be directly detected on hair. This spectroscopic technique is capable of a confirmatory identification of analytes with single-molecule resolution, requires minimal sample, and has the advantage of fluorescence quenching. Our study reveals that SERS can (1) identify whether hair was artificially dyed or not, (2) determine whether a permanent or semipermanent colorant was used, and (3) distinguish the commercial brands that are utilized to dye hair. Such analysis is rapid, minimally destructive, and can be performed directly at the crime scene. This study provides a novel perspective of forensic investigations of hair evidence.

  13. High-resolution random mesh algorithms for creating a probabilistic 3D surface atlas of the human brain.

    Science.gov (United States)

    Thompson, P M; Schwartz, C; Toga, A W

    1996-02-01

    Striking variations exist, across individuals, in the internal and external geometry of the brain. Such normal variations in the size, orientation, topology, and geometric complexity of cortical and subcortical structures have complicated the problem of quantifying deviations from normal anatomy and of developing standardized neuroanatomical atlases. This paper describes the design, implementation, and results of a technique for creating a three-dimensional (3D) probabilistic surface atlas of the human brain. We have developed, implemented, and tested a new 3D statistical method for assessing structural variations in a database of anatomic images. The algorithm enables the internal surface anatomy of new subjects to be analyzed at an extremely local level. The goal was to quantify subtle and distributed patterns of deviation from normal anatomy by automatically generating detailed probability maps of the anatomy of new subjects. Connected systems of parametric meshes were used to model the internal course of the following structures in both hemispheres: the parieto-occipital sulcus, the anterior and posterior rami of the calcarine sulcus, the cingulate and marginal sulci, and the supracallosal sulcus. These sulci penetrate sufficiently deeply into the brain to introduce an obvious topological decomposition of its volume architecture. A family of surface maps was constructed, encoding statistical properties of local anatomical variation within individual sulci. A probability space of random transformations, based on the theory of Gaussian random fields, was developed to reflect the observed variability in stereotaxic space of the connected system of anatomic surfaces. A complete system of probability density functions was computed, yielding confidence limits on surface variation. The ultimate goal of brain mapping is to provide a framework for integrating functional and anatomical data across many subjects and modalities. This task requires precise quantitative

  14. MAGIC: an automated N-linked glycoprotein identification tool using a Y1-ion pattern matching algorithm and in silico MS² approach.

    Science.gov (United States)

    Lynn, Ke-Shiuan; Chen, Chen-Chun; Lih, T Mamie; Cheng, Cheng-Wei; Su, Wan-Chih; Chang, Chun-Hao; Cheng, Chia-Ying; Hsu, Wen-Lian; Chen, Yu-Ju; Sung, Ting-Yi

    2015-02-17

    Glycosylation is a highly complex modification influencing the functions and activities of proteins. Interpretation of intact glycopeptide spectra is crucial but challenging. In this paper, we present a mass spectrometry-based automated glycopeptide identification platform (MAGIC) to identify peptide sequences and glycan compositions directly from intact N-linked glycopeptide collision-induced-dissociation spectra. The identification of the Y1 (peptide + GlcNAc) ion is critical for the correct analysis of unknown glycoproteins, especially without prior knowledge of the proteins and glycans present in the sample. To ensure accurate Y1-ion assignment, we propose a novel algorithm called Trident that detects a triplet pattern corresponding to [Y0, Y1, Y2] or [Y0-NH3, Y0, Y1] from the fragmentation of the common trimannosyl core of N-linked glycopeptides. To facilitate the subsequent peptide sequence identification by common database search engines, MAGIC generates in silico spectra by overwriting the original precursor with the naked peptide m/z and removing all of the glycan-related ions. Finally, MAGIC computes the glycan compositions and ranks them. For the model glycoprotein horseradish peroxidase (HRP) and a 5-glycoprotein mixture, a 2- to 31-fold increase in the relative intensities of the peptide fragments was achieved, which led to the identification of 7 tryptic glycopeptides from HRP and 16 glycopeptides from the mixture via Mascot. In the HeLa cell proteome data set, MAGIC processed over a thousand MS(2) spectra in 3 min on a PC and reported 36 glycopeptides from 26 glycoproteins. Finally, a remarkable false discovery rate of 0 was achieved on the N-glycosylation-free Escherichia coli data set. MAGIC is available at http://ms.iis.sinica.edu.tw/COmics/Software_MAGIC.html .
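
    The essence of the triplet search is looking for three peaks spaced by the GlcNAc residue mass. A hedged sketch of that step follows (singly charged ions, a toy peak list, and an arbitrary tolerance; Trident itself also scores the [Y0-NH3, Y0, Y1] variant and peak intensities, which are omitted here).

    ```python
    # Sketch of a [Y0, Y1, Y2] triplet search: Y1 = Y0 + GlcNAc, Y2 = Y0 + 2*GlcNAc.
    # Assumes singly charged m/z values; tolerance and peaks are illustrative.

    GLCNAC = 203.0794  # monoisotopic residue mass of GlcNAc, Da

    def find_y1_candidates(peaks, tol=0.02):
        """peaks: list of singly-charged m/z values. Returns candidate Y1 m/z."""
        def present(m):
            return any(abs(p - m) <= tol for p in peaks)
        out = []
        for p in peaks:
            # triplet anchored at Y0 = p
            if present(p + GLCNAC) and present(p + 2 * GLCNAC):
                out.append(p + GLCNAC)
        return out

    spectrum = [804.38, 1007.46, 1210.54, 1355.55, 1517.60]  # synthetic peaks
    print(find_y1_candidates(spectrum))  # one candidate near 1007.46
    ```
    
    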

  15. Identification and quantification of nitrate inputs into surface water in Flanders, Belgium

    Science.gov (United States)

    Xue, Dongmei; de Baets, Bernard; Botte, Jorin; Vermeulen, Jan; van Cleemput, Oswald; Boeckx, Pascal

    2010-05-01

    Nitrate (NO3-) contamination in surface water in Flanders (Belgium) is a pressing environmental problem. These NO3- loads are attributed to intensive agriculture, use of fertilizers and manure and discharge of human sewage. The Flemish Environmental Agency (VMM) has an operational network for monitoring surface water quality. An a priori NO3- source classification has been provided based on NO3- concentration variation and land use. The 5 potential NO3- source classes are as follows: greenhouses, agriculture, agriculture with groundwater dilution, households and a combination of horticulture and agriculture. However, NO3- concentration data alone cannot fully assess the extent of the input of various NO3- sources, which is a key aspect in monitoring water quality. Hence, this study will apply a dual isotope approach (δ15N- and δ18O-NO3-) and a Bayesian isotope mixing model (SIAR) (http://cran.r-project.org/web/packages/siar/siar.pdf) to identify and quantify NO3- sources in surface water. Thirty sample points (6 sample points per a priori NO3- source class), distributed over the whole of Flanders, were selected for NO3- source identification and quantification based on monthly measured δ15N- and δ18O-NO3- data. So far (from October 2007 to March 2009) we observed isotopic values ranging from -9.5 to 28.6‰ for δ15N and -9.1 to 51.1‰ for δ18O. The output of proportional NO3- source contributions via SIAR revealed that all of the water samples are a mixture of multiple nitrate sources, with manure or sewage as the dominant source. A clear seasonal trend can be found for the greenhouse class, shifting from nitrate in precipitation in summer to manure and sewage in winter. Furthermore, the outputs of source contributions analyzed by SIAR are used to redefine the source classes of the 30 isotope monitoring sample points, as some points might have been classified into the wrong class based on expert knowledge alone.
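
    To make the mixing-model idea concrete, here is the simplest possible case: a two-endmember linear mass balance on δ15N. The study itself uses the Bayesian SIAR model with more sources and both isotopes; the endmember values below are illustrative assumptions, not the paper's data.

    ```python
    # Toy two-endmember isotope mixing sketch (illustrative endmember values).

    def mixing_fraction(d_sample, d_source_a, d_source_b):
        """Fraction of source A in a two-source linear mixing model:
        d_sample = f * d_source_a + (1 - f) * d_source_b, solved for f."""
        return (d_sample - d_source_b) / (d_source_a - d_source_b)

    d15n_manure, d15n_fertilizer = 15.0, 0.0   # per mil, assumed endmembers
    f_manure = mixing_fraction(9.0, d15n_manure, d15n_fertilizer)
    print(f_manure)  # 0.6, i.e. 60% manure-derived nitrate
    ```

    With more than two sources the system is underdetermined, which is exactly why a Bayesian model such as SIAR (returning posterior distributions of source proportions) is used in the study.
    
    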

  16. The Algorithm Theoretical Basis Document for the Derivation of Range and Range Distributions from Laser Pulse Waveform Analysis for Surface Elevations, Roughness, Slope, and Vegetation Heights

    Science.gov (United States)

    Brenner, Anita C.; Zwally, H. Jay; Bentley, Charles R.; Csatho, Bea M.; Harding, David J.; Hofton, Michelle A.; Minster, Jean-Bernard; Roberts, LeeAnne; Saba, Jack L.; Thomas, Robert H.

    2012-01-01

    The primary purpose of the GLAS instrument is to detect ice elevation changes over time, which are used to derive changes in ice volume. Other objectives include measuring sea ice freeboard, ocean and land surface elevation, surface roughness, and canopy heights over land. This Algorithm Theoretical Basis Document (ATBD) describes the theory and implementation behind the algorithms used to produce the level 1B products for waveform parameters and global elevation and the level 2 products that are specific to ice sheet, sea ice, land, and ocean elevations, respectively. These output products are defined in detail, along with the associated quality information and the constraints and assumptions used to derive them.

  17. Identification and Discrimination of Brands of Fuels by Gas Chromatography and Neural Networks Algorithm in Forensic Research.

    Science.gov (United States)

    Ugena, L; Moncayo, S; Manzoor, S; Rosales, D; Cáceres, J O

    2016-01-01

    The detection of fuel adulteration and of fuel use at crime scenes, such as arson, is of high interest in forensic investigations. In this work, a method based on gas chromatography (GC) and neural networks (NN) has been developed and applied to the identification and discrimination of brands of fuels such as gasoline and diesel without the necessity of determining the composition of the samples. The study included five main brands of fuels from Spain, collected from fifteen different local petrol stations. The methodology allowed the identification of the gasoline and diesel brands with a high accuracy, close to 100%, without any false positives or false negatives. Success rates for three blind samples were 73.3%, 80%, and 100%, respectively. The results obtained demonstrate the potential of this methodology to help in resolving criminal situations.
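
    The GC + NN pipeline amounts to treating each chromatogram as a peak-area vector and training a classifier on brand labels. A hedged sketch with synthetic 10-peak profiles follows; the brand signatures, noise level, and network size are all invented for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Classify fuel "brands" from chromatographic peak-area profiles with a
    # small feed-forward neural network. Data are synthetic stand-ins for GC runs.

    rng = np.random.default_rng(0)
    brand_profiles = rng.uniform(0.1, 1.0, (5, 10))     # 5 brands, 10 GC peaks

    X, y = [], []
    for label, profile in enumerate(brand_profiles):
        for _ in range(30):                              # 30 noisy runs per brand
            X.append(profile + rng.normal(0, 0.02, 10))  # injection-to-injection noise
            y.append(label)
    X, y = np.array(X), np.array(y)

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=0).fit(X, y)
    acc = clf.score(X, y)
    print(acc)  # near 1.0 for well-separated brand profiles
    ```

    In a real study the blind samples would of course be scored against a held-out model, not the training fit shown here.
    
    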

  18. Identification and Discrimination of Brands of Fuels by Gas Chromatography and Neural Networks Algorithm in Forensic Research

    Directory of Open Access Journals (Sweden)

    L. Ugena

    2016-01-01

    Full Text Available The detection of fuel adulteration and of fuel use at crime scenes, such as arson, is of high interest in forensic investigations. In this work, a method based on gas chromatography (GC) and neural networks (NN) has been developed and applied to the identification and discrimination of brands of fuels such as gasoline and diesel without the necessity of determining the composition of the samples. The study included five main brands of fuels from Spain, collected from fifteen different local petrol stations. The methodology allowed the identification of the gasoline and diesel brands with a high accuracy, close to 100%, without any false positives or false negatives. Success rates for three blind samples were 73.3%, 80%, and 100%, respectively. The results obtained demonstrate the potential of this methodology to help in resolving criminal situations.

  19. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA to estimate actual evapotranspiration over heterogeneous terrain

    Directory of Open Access Journals (Sweden)

    Z. Q. Gao

    2011-01-01

    Full Text Available Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with some varying kinetic parameters (i.e., roughness and zero-plane displacement). Besides, the dry and wet pixels can be recognized automatically and dynamically in image processing thereby making the SEBTA more sensitive to derive the sensible heat flux for ET estimation. To prove the application potential, the SEBTA was carried out to present the robust estimates of 24 h solar radiation over time, which leads to the smooth simulation of the ET over seasons in northern China where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated by the measured data at the ground level. During validation, it shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.

  20. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA) to estimate actual evapotranspiration over heterogeneous terrain

    Science.gov (United States)

    Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N.-B.

    2011-01-01

    Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with some varying kinetic parameters (i.e., roughness and zero-plane displacement). Besides, the dry and wet pixels can be recognized automatically and dynamically in image processing thereby making the SEBTA more sensitive to derive the sensible heat flux for ET estimation. To prove the application potential, the SEBTA was carried out to present the robust estimates of 24 h solar radiation over time, which leads to the smooth simulation of the ET over seasons in northern China where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated by the measured data at the ground level. During validation, it shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.
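
    At the core of SEBAL-family models, including SEBTA, latent heat flux is obtained as the residual of the surface energy balance, LE = Rn - G - H, and then converted to an equivalent water depth. The sketch below shows only that residual step with illustrative instantaneous flux values; SEBTA's topographic corrections to each term are not shown.

    ```python
    # Surface-energy-balance residual step (illustrative values, W m^-2).

    LAMBDA = 2.45e6   # latent heat of vaporization, J kg^-1 (approximate)

    def latent_heat_flux(rn, g, h):
        """LE as the energy-balance residual: LE = Rn - G - H (W m^-2)."""
        return rn - g - h

    def et_rate_mm_per_hour(le):
        """Convert latent heat flux to an equivalent water depth per hour:
        kg m^-2 s^-1 is numerically mm s^-1, times 3600 s."""
        return le / LAMBDA * 3600.0

    le = latent_heat_flux(rn=500.0, g=50.0, h=150.0)   # 300 W m^-2
    print(round(et_rate_mm_per_hour(le), 3))           # 0.441 mm/h
    ```

    The modelling effort in the paper goes into estimating H (and the radiation terms) per pixel over complex terrain; once those are known, the ET conversion itself is this simple.
    
    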

  1. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA) to estimate actual evapotranspiration under complex terrain

    Science.gov (United States)

    Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.

    2010-07-01

    Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover in concert with some varying kinetic parameters (i.e., roughness and zero-plane displacement) over time. Besides, the dry and wet pixels can be recognized automatically and dynamically in image processing thereby making the SEBTA more sensitive to derive the sensible heat flux for ET estimation. To prove the application potential, the SEBTA was carried out to present the robust estimates of 24 h solar radiation over time, which leads to the smooth simulation of the ET over seasons in northern China where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated by the measured data at the ground level. During validation, it shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.

  2. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    Science.gov (United States)

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely, the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate the dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely, naïve Bayes, support vector machines (SVM) with polynomial and radial basis function kernels, random forest and rotation forest, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to show the nonstationary variations of sEMG signals. Most of the features exhibit statistically significant differences between the muscle fatigue and nonfatigue conditions. The greatest reduction in the number of features (66%) is achieved by GA for the EMBD and by BPSO for the BD-TFD. The combination of the EMBD and the polynomial-kernel SVM is found to be the most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. Particularly, the combination of the EMBD and the polynomial-kernel SVM could be used to
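
    The final classification stage can be illustrated with a much simpler stand-in feature: the median frequency of each epoch, which falls as fatigue sets in, fed to a polynomial-kernel SVM. The synthetic band-limited signals, single feature, and all parameters below are illustrative only; the paper's pipeline uses twelve features per TFD plus GA/BPSO selection.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Median-frequency feature + polynomial-kernel SVM on synthetic "sEMG" epochs.

    rng = np.random.default_rng(3)
    fs = 1000  # sampling rate, Hz

    def band_noise(center, n=1000):
        """A noisy tone standing in for a band-limited sEMG epoch."""
        t = np.arange(n) / fs
        return np.sin(2 * np.pi * center * t) + 0.5 * rng.normal(size=n)

    def median_frequency(x):
        psd = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        cum = np.cumsum(psd)
        return freqs[np.searchsorted(cum, cum[-1] / 2)]

    nonfatigue = [band_noise(120) for _ in range(30)]  # higher-frequency epochs
    fatigue = [band_noise(60) for _ in range(30)]      # downward spectral shift
    X = np.array([[median_frequency(x)] for x in nonfatigue + fatigue])
    y = np.array([0] * 30 + [1] * 30)

    clf = SVC(kernel="poly", degree=3).fit(X, y)
    print(clf.score(X, y))
    ```
    
    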

  3. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    Science.gov (United States)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover data in raster format, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after the data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.
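
    The decision-tree classification at the heart of this workflow can be illustrated by its basic building block: an exhaustive Gini-based split search over spectral bands. The band values and class labels below are invented toy data, not Landsat surface reflectance.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    """Search all (band, threshold) pairs; return the split that
    minimizes the weighted Gini impurity of the two children."""
    best_band, best_t, best_score = None, None, float("inf")
    for band in range(len(X[0])):
        for t in sorted(set(row[band] for row in X)):
            left = [c for row, c in zip(X, y) if row[band] <= t]
            right = [c for row, c in zip(X, y) if row[band] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best_band, best_t, best_score = band, t, score
    return best_band, best_t, best_score
```

    A full decision tree repeats this split search recursively on each child subset until the leaves are pure or a depth limit is reached.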

  4. Identification of Bacterial Surface Antigens by Screening Peptide Phage Libraries Using Whole Bacteria Cell-Purified Antisera

    Science.gov (United States)

    Hu, Yun-Fei; Zhao, Dun; Yu, Xing-Long; Hu, Yu-Li; Li, Run-Cheng; Ge, Meng; Xu, Tian-Qi; Liu, Xiao-Bo; Liao, Hua-Yuan

    2017-01-01

    Bacterial surface proteins can be good vaccine candidates. In the present study, we used polyclonal antibodies purified with intact Erysipelothrix rhusiopathiae cells to screen phage-displayed random dodecapeptide and loop-constrained heptapeptide libraries, which led to the identification of mimotopes. A homology search of the mimotope sequences against E. rhusiopathiae-encoded ORF sequences revealed 14 new antigens that may localize on the surface of E. rhusiopathiae. When these putative surface proteins were used to immunize mice, 9 of 11 antigens induced protective immunity. Thus, we have demonstrated that a combination of using whole bacterial cells to purify antibodies and using phage-displayed peptide libraries to determine the antigen specificities of those antibodies can lead to the discovery of novel bacterial surface antigens. This can be a general approach for identifying surface antigens of other bacterial species. PMID:28184219

  5. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm; Metodo de identificacao de transientes com abordagem possibilistica, otimizado por algoritmo genetico

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that determines the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. Interpreting the subclasses as fuzzy sets, the possibilistic approach provides a heuristic to establish influence zones around the centroids, allowing the method to return a 'don't know' answer for unknown transients, that is, those outside the training set. (author)
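
    The 'don't know' behaviour can be sketched with a toy possibilistic nearest-centroid rule, where a possibility score decays with distance from a centroid's influence zone. The centroids, radius and cutoff below are illustrative assumptions, not the thesis' trained values.

```python
import math

DONT_KNOW = "don't know"

def classify(x, centroids, radius, cutoff=0.5):
    """Possibilistic nearest-centroid rule: the possibility of membership
    decays with squared distance from each centroid; if no centroid's
    possibility reaches the cutoff, answer "don't know"."""
    best_label, best_poss = DONT_KNOW, 0.0
    for label, c in centroids.items():
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))
        poss = math.exp(-d2 / radius ** 2)   # influence zone of the centroid
        if poss > best_poss:
            best_label, best_poss = label, poss
    return best_label if best_poss >= cutoff else DONT_KNOW
```

    Points near a known centroid are assigned to its class; points far from every influence zone fall outside the training set's coverage and are rejected rather than misclassified.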

  6. Cardiac MRI in mice at 9.4 Tesla with a transmit-receive surface coil and a cardiac-tailored intensity-correction algorithm.

    Science.gov (United States)

    Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi

    2007-08-01

    To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.

  7. Use of two predictive algorithms of the world wide web for the identification of tumor-reactive T-cell epitopes.

    Science.gov (United States)

    Lu, J; Celis, E

    2000-09-15

    Tumor cells can be effectively recognized and eliminated by CTLs. One approach for the development of CTL-based cancer immunotherapy for solid tumors requires the use of appropriate immunogenic peptide epitopes that are derived from defined tumor-associated antigens. Because CTL peptide epitopes are restricted to specific MHC alleles, to design immune therapies for the general population it is necessary to identify epitopes for the most commonly found human MHC alleles. The identification of such epitopes has been based on MHC-peptide-binding assays that are costly and labor-intensive. We report here the use of two computer-based prediction algorithms, which are readily available in the public domain (Internet), to identify HLA-B7-restricted CTL epitopes for carcinoembryonic antigen (CEA). These algorithms identified three candidate peptides that we studied for their capacity to induce CTL responses in vitro using lymphocytes from HLA-B7+ normal blood donors. The results show that one of these peptides, CEA9(632) (IPQQHTQVL), was efficient in the induction of primary CTL responses when dendritic cells were used as antigen-presenting cells. These CTLs were efficient in killing tumor cells that express HLA-B7 and produce CEA. The identification of this HLA-B7-restricted CTL epitope will be useful for the design of ethnically unbiased, widely applicable immunotherapies for common solid epithelial tumors expressing CEA. Moreover, our strategy of identifying MHC class I-restricted CTL epitopes without the need for peptide/HLA-binding assays provides a convenient and cost-saving alternative to previous methods.

  8. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    Science.gov (United States)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  9. Concurrent RFID/UID Implementation at Naval Surface Warfare Center, Crane Division, A Naval Postgraduate School Master of Business Administration Thesis Study in Item Unique Identification and Radio Frequency Identification

    OpenAIRE

    Obellos, Ernan S.; Lookabill, Ryan D.; Colleran, Travis

    2007-01-01

    As part of their Master of Business Administration thesis at Naval Postgraduate School (NPS) in Monterey, California, United States Navy Lieutenant Commanders Travis Colleran, Ryan Lookabill, and Ernan Obellos developed an implementation plan to apply Unique Identification (UID) and Radio Frequency Identification (RFID) concurrently at Naval Surface Warfare Center, Crane Division (NSWC Crane) in Crane, Indiana.

  10. 4D-QSAR investigation and pharmacophore identification of pyrrolo[2,1-c][1,4]benzodiazepines using electron conformational-genetic algorithm method.

    Science.gov (United States)

    Özalp, A; Yavuz, S Ç; Sabancı, N; Çopur, F; Kökbudak, Z; Sarıpınar, E

    2016-04-01

    In this paper, we present the results of pharmacophore identification and bioactivity prediction for pyrrolo[2,1-c][1,4]benzodiazepine derivatives using the electron conformational-genetic algorithm (EC-GA) method as a 4D-QSAR analysis. Using the data obtained from quantum chemical calculations at the PM3/HF level, the electron conformational matrices of congruity (ECMC) were constructed with the EMRE software. The ECMC of the lowest-energy conformer of the compound with the highest activity was chosen as the template and compared with the ECMCs of the lowest-energy conformers of the other compounds within given tolerances to reveal the electron conformational submatrix of activity (ECSA, i.e. the pharmacophore) with the ECSP software. A descriptor pool was generated taking into account the obtained pharmacophore. To predict the theoretical activity and select the best subset of variables affecting bioactivities, nonlinear least-squares regression and a genetic algorithm were applied. For the four types of activity, namely the GI50, TGI, LC50 and IC50 of the pyrrolo[2,1-c][1,4]benzodiazepine series, the training r², test r² and q² values were 0.858, 0.810, 0.771; 0.853, 0.848, 0.787; 0.703, 0.787, 0.600; and 0.776, 0.722, 0.687, respectively.

  11. Combining Fragment-Ion and Neutral-Loss Matching during Mass Spectral Library Searching: A New General Purpose Algorithm Applicable to Illicit Drug Identification.

    Science.gov (United States)

    Moorthy, Arun S; Wallace, William E; Kearsley, Anthony J; Tchekhovskoi, Dmitrii V; Stein, Stephen E

    2017-12-19

    A mass spectral library search algorithm that identifies compounds that differ from library compounds by a single "inert" structural component is described. This algorithm, the Hybrid Similarity Search, generates a similarity score based on matching both fragment ions and neutral losses. It employs the parameter DeltaMass, defined as the mass difference between query and library compounds, to shift neutral loss peaks in the library spectrum to match corresponding neutral loss peaks in the query spectrum. When the spectra being compared differ by a single structural feature, these matching neutral loss peaks should contain that structural feature. This method extends the scope of the library to include spectra of "nearest-neighbor" compounds that differ from library compounds by a single chemical moiety. Additionally, determination of the structural origin of the shifted peaks can aid in the determination of the chemical structure and fragmentation mechanism of the query compound. A variety of examples are presented, including the identification of designer drugs and chemical derivatives not present in the library.
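
    The peak-shifting idea can be sketched as follows: a query peak counts as a match if it aligns with a library peak directly (a fragment-ion match) or after the library peak is shifted by DeltaMass (a neutral-loss match). The spectra, masses and scoring below are simplified illustrations, not NIST's implementation.

```python
def hybrid_score(query, library, prec_q, prec_l, tol=0.01):
    """Score a query spectrum against a library spectrum.

    Spectra are dicts mapping m/z to intensity. DeltaMass is the
    precursor mass difference between the query and library compounds;
    shifting library peaks by DeltaMass aligns neutral-loss peaks."""
    delta = prec_q - prec_l
    score = 0.0
    for mzq, iq in query.items():
        for mzl, il in library.items():
            direct = abs(mzq - mzl) <= tol              # fragment-ion match
            shifted = abs(mzq - (mzl + delta)) <= tol   # neutral-loss match
            if direct or shifted:
                score += (iq * il) ** 0.5               # geometric-mean weighting
                break
    return score
```

    For a derivative carrying one extra inert moiety, fragments that lose the moiety match directly, while fragments that retain it match only after the DeltaMass shift, so both kinds of evidence contribute to the score.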

  12. A Two-Step Strategy for System Identification of Civil Structures for Structural Health Monitoring Using Wavelet Transform and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Carlos Andres Perez-Ramirez

    2017-01-01

    Full Text Available Nowadays, the accurate identification of natural frequencies and damping ratios plays an important role in smart civil engineering, since they can be used for seismic design, vibration control, and condition assessment, among others. To achieve this in a practical way, it is necessary to instrument the structure and apply techniques that can deal with noise-corrupted and nonlinear signals, as these are common features of real-life civil structures. In this article, a two-step strategy is proposed for performing accurate modal parameter identification in an automated manner. In the first step, the measured signals are obtained and decomposed using the natural excitation technique and the synchrosqueezed wavelet transform, respectively. The second step then estimates the modal parameters by solving an optimization problem with a genetic algorithm-based approach, where the micropopulation concept is used to improve the convergence speed as well as the accuracy of the estimated values. The accuracy and effectiveness of the proposal are tested using both the simulated response of a benchmark structure and the measurements of a real eight-story building. The obtained results show that the proposed strategy can estimate the modal parameters accurately, indicating that the proposal can be considered as an alternative to perform the abovementioned task.

  13. Muscle-tendon units localization and activation level analysis based on high-density surface EMG array and NMF algorithm

    Science.gov (United States)

    Huang, Chengjun; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-12-01

    Objective. Some skeletal muscles can be subdivided into smaller segments called muscle-tendon units (MTUs). The purpose of this paper is to propose a framework to locate the active regions of the corresponding MTUs within a single skeletal muscle and to analyze how the activation levels of different MTUs vary during a dynamic motion task. Approach. The biceps brachii and gastrocnemius were selected as the targeted muscles and three dynamic motion tasks were designed and studied. Eight healthy male subjects participated in the data collection experiments, and 128-channel surface electromyographic (sEMG) signals were collected with a high-density sEMG electrode grid (8 rows by 16 columns). The matrix of sEMG envelopes was then factorized into a matrix of weighting vectors and a matrix of time-varying coefficients by a nonnegative matrix factorization algorithm. Main results. The experimental results demonstrated that the weighting vectors, which represent the invariant pattern of muscle activity across all channels, can be used to estimate the locations of MTUs, and that the time-varying coefficients can be used to depict the variation of MTU activation levels during a dynamic motion task. Significance. The proposed method provides a way to analyze in depth the functional state of MTUs during dynamic tasks and thus can be employed in multiple noteworthy sEMG-based applications such as muscle force estimation, muscle fatigue research and the control of myoelectric prostheses. This work was supported by the National Nature Science Foundation of China under Grants 61431017 and 61271138.
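
    The factorization step can be sketched with the classic multiplicative-update rules for nonnegative matrix factorization (Lee and Seung's updates). The matrix dimensions and data below are synthetic stand-ins for the channels-by-time envelope matrix, not the authors' processing chain.

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V (channels x time) into W @ H,
    with W holding k spatial weighting vectors and H the corresponding
    time-varying coefficients."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps   # weighting vectors (spatial patterns)
    H = rng.random((k, m)) + eps   # time-varying coefficients
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative updates keep
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # every entry nonnegative
    return W, H
```

    Because the updates are multiplicative, nonnegativity of W and H is preserved automatically, which is what makes the columns of W interpretable as activation maps over the electrode grid.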

  14. Automatic identification of rockfalls and volcano-tectonic earthquakes at the Piton de la Fournaise volcano using a Random Forest algorithm

    Science.gov (United States)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Maggi, Alessia; Stumpf, André; Ferrazzini, Valérie

    2017-06-01

    Monitoring the endogenous seismicity of volcanoes helps to forecast eruptions and prevent their related risks, and also provides critical information on the eruptive processes. Due to the high number of events recorded during pre-eruptive periods by the seismic monitoring networks, cataloging each event can be complex and time-consuming if done by human operators. Automatic seismic signal processing methods are thus essential to build consistent catalogs based on objective criteria. We evaluated the performance of the "Random Forests" (RF) machine-learning algorithm for classifying seismic signals recorded at the Piton de la Fournaise volcano, La Réunion Island (France). We focused on the discrimination of the dominant event types (rockfalls and volcano-tectonic earthquakes) using over 19,000 events covering two time periods: 2009-2011 and 2014-2015. We parametrized the seismic signals using 60 attributes that were then given to the RF algorithm. When the RF classifier was given enough training samples, its sensitivity (rate of good identification) exceeded 99%, and its performance remained high (above 90%) even with few training samples. The sensitivity collapsed when an RF classifier trained with data from 2009-2011 was used to classify data from the 2014-2015 catalog, because the physical characteristics of the rockfalls, and hence their seismic signals, had evolved between the two time periods. The main attribute families (waveform, spectrum, spectrogram and polarization) were all found to be useful for event discrimination. Our work validates the performance of the RF algorithm and suggests it could be implemented at other volcanic observatories to perform automatic, near real-time classification of seismic events.
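
    A Random Forest in miniature: each "tree" below is a depth-one stump trained on a bootstrap sample with one randomly chosen attribute, and events are classified by majority vote. This is a toy sketch of the RF idea on invented binary-labeled data, not the observatory pipeline with its 60 waveform attributes.

```python
import random

def train_stump(X, y, feat):
    """Best threshold on a single attribute by training accuracy (labels 0/1)."""
    best_acc, best_t, best_labels = 0.0, None, (0, 1)
    for t in sorted(set(row[feat] for row in X)):
        for labels in ((0, 1), (1, 0)):   # which side of the split gets which label
            preds = [labels[0] if row[feat] <= t else labels[1] for row in X]
            acc = sum(p == c for p, c in zip(preds, y)) / len(y)
            if acc > best_acc:
                best_acc, best_t, best_labels = acc, t, labels
    return feat, best_t, best_labels

def random_forest(X, y, n_trees=51, seed=0):
    """Ensemble of stumps, each on a bootstrap sample and a random attribute."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap sample
        feat = rng.randrange(len(X[0]))                        # random attribute
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx], feat))
    return forest

def rf_predict(forest, x):
    """Majority vote over all trees."""
    votes = [labels[0] if x[f] <= t else labels[1] for f, t, labels in forest]
    return max(set(votes), key=votes.count)
```

    Bootstrap resampling plus random attribute selection decorrelates the trees, so the majority vote is far more robust than any individual stump, which is the core of the RF algorithm's reliability.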

  15. Inverse problem studies of biochemical systems with structure identification of S-systems by embedding training functions in a genetic algorithm.

    Science.gov (United States)

    Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D

    2016-05-01

    An efficient inverse problem approach for parameter estimation, state and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. Use of multiple shooting and decomposition approach as training functions has been shown for handling of noisy datasets and computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by a S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA on comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells now assuming limited availability of noisy data. Here, flexibility of the approach to incorporate partial system information in the identification process is shown and its effect on accuracy and predictive ability of the estimated model are studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from S-system power-law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Identification of slip surface location by TLS-GPS data for landslide mitigation, case study: Ciloto-Puncak, West Java

    Energy Technology Data Exchange (ETDEWEB)

    Sadarviana, Vera, E-mail: vsadarviana@gmail.com; Hasanuddin, A. Z.; Joenil, G. K.; Irwan; Wijaya, Dudy; Ilman, H.; Agung, N.; Achmad, R. T.; Pangeran, C.; Martin, S.; Gamal, M. [Geodesy Research Group, Faculty of Earth Sciences and Technology, Bandung Institute of Technology, Jl. Ganesha 10, Bandung 40132, West Java (Indonesia); Santoso, Djoko [Geophysics Engineering Research Group, Faculty of Geoscience and Mineral Engineering, Bandung Institute of Technology, Jl. Ganesha 10, Bandung 40132, West Java (Indonesia)

    2015-04-24

    Landslides can be prevented by understanding the direction of movement, which points to safe evacuation routes, and the location of the slip surface, where avalanches can be held back. The slip surface separates the stable soil from the unstable soil in a slope, so its location gives the depth of stable material. This information can be used for technical mitigation steps, such as installing piles to keep constructions or settlements safe from avalanches. There are two kinds of landslide indicators: visual and computational. Visually, a landslide is identified from soil cracks or scarps; a scarp is a scar of exposed soil on the landslide, and it can be identified in Terrestrial Laser Scanner (TLS) images. The shape of the scarp shows the type of slip surface, translational or rotational. Computationally, kinematic and dynamic mathematical models give the vector, velocity and acceleration of the material movement. This calculation needs a velocity trend line at each GPS point, derived from five GPS data campaigns. The intersections of the trend lines produce the curves or lines that locate the slip surface. The number of slip surfaces can be determined from the directions of material movement in the landslide zone. The Ciloto landslide zone is a complicated case because it is influenced by ground water level pressure from many directions, which generates several slip surfaces. The Ciloto slip surfaces are a mix of translational and rotational types.

  17. Identification of slip surface location by TLS-GPS data for landslide mitigation, case study: Ciloto-Puncak, West Java

    International Nuclear Information System (INIS)

    Sadarviana, Vera; Hasanuddin, A. Z.; Joenil, G. K.; Irwan; Wijaya, Dudy; Ilman, H.; Agung, N.; Achmad, R. T.; Pangeran, C.; Martin, S.; Gamal, M.; Santoso, Djoko

    2015-01-01

    Landslides can be prevented by understanding the direction of movement, which points to safe evacuation routes, and the location of the slip surface, where avalanches can be held back. The slip surface separates the stable soil from the unstable soil in a slope, so its location gives the depth of stable material. This information can be used for technical mitigation steps, such as installing piles to keep constructions or settlements safe from avalanches. There are two kinds of landslide indicators: visual and computational. Visually, a landslide is identified from soil cracks or scarps; a scarp is a scar of exposed soil on the landslide, and it can be identified in Terrestrial Laser Scanner (TLS) images. The shape of the scarp shows the type of slip surface, translational or rotational. Computationally, kinematic and dynamic mathematical models give the vector, velocity and acceleration of the material movement. This calculation needs a velocity trend line at each GPS point, derived from five GPS data campaigns. The intersections of the trend lines produce the curves or lines that locate the slip surface. The number of slip surfaces can be determined from the directions of material movement in the landslide zone. The Ciloto landslide zone is a complicated case because it is influenced by ground water level pressure from many directions, which generates several slip surfaces. The Ciloto slip surfaces are a mix of translational and rotational types.

  18. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    Science.gov (United States)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the "knock the base" method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA. In summary, a model is proposed to identify the characteristics of immiscible NAPL contaminant sources; the contaminant is immiscible in water and multi-phase flow is simulated; the model is a multi-level saturation-based optimization algorithm based on the ICA; each answer string in the second level is divided into a set of provinces; and each ICA is modified by incorporating the new "knock the base" approach.
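
    The classical one-level ICA that the proposed method builds on can be sketched for a generic continuous minimization problem. The cost function, population sizes and move rules below are generic textbook choices (with sorting standing in for empire competition), not the paper's modified two-level, saturation-based variant.

```python
import random

def ica_minimize(cost, dim, bounds, n_countries=40, n_imp=4,
                 iters=200, p_rev=0.1, seed=1):
    """Simplified Imperialist Competitive Algorithm for continuous
    minimization: the strongest countries become imperialists, colonies
    are assimilated toward them, and occasional revolutions keep diversity."""
    rng = random.Random(seed)
    lo, hi = bounds
    countries = [[rng.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_countries)]
    for _ in range(iters):
        countries.sort(key=cost)
        imps = countries[:n_imp]              # strongest countries rule empires
        nxt = [list(i) for i in imps]         # imperialists survive (elitism)
        for colony in countries[n_imp:]:
            imp = rng.choice(imps)
            beta = rng.uniform(0.0, 2.0)      # assimilation step size
            moved = [c + beta * (i - c) for c, i in zip(colony, imp)]
            if rng.random() < p_rev:          # revolution: reset one coordinate
                moved[rng.randrange(dim)] = rng.uniform(lo, hi)
            nxt.append(moved)
        countries = nxt
    return min(countries, key=cost)
```

    Colonies drift toward (and occasionally overshoot) their imperialists, so the population contracts around the best solutions while revolutions preserve enough diversity to keep improving them.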

  19. Accuracy and Consistency of Grass Pollen Identification by Human Analysts Using Electron Micrographs of Surface Ornamentation

    Directory of Open Access Journals (Sweden)

    Luke Mander

    2014-08-01

    Full Text Available Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias.

  20. Accuracy and consistency of grass pollen identification by human analysts using electron micrographs of surface ornamentation.

    Science.gov (United States)

    Mander, Luke; Baker, Sarah J; Belcher, Claire M; Haselhorst, Derek S; Rodriguez, Jacklyn; Thorn, Jessica L; Tiwari, Shivangi; Urrego, Dunia H; Wesseln, Cassandra J; Punyasena, Surangi W

    2014-08-01

    Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. • We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. • Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. • The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias.
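
    The three reported measures can be made concrete with a short sketch. The predictions below are invented (the genus labels are hypothetical), and `None` marks an image an analyst declined to identify.

```python
def coverage(preds):
    """Fraction of images the analyst attempted to identify."""
    return sum(p is not None for p in preds) / len(preds)

def accuracy(preds, truth):
    """Fraction of attempted identifications that match the true taxon."""
    attempted = [(p, t) for p, t in zip(preds, truth) if p is not None]
    return sum(p == t for p, t in attempted) / len(attempted)

def consistency(preds, duplicate_pairs):
    """Fraction of duplicate image pairs given the same label."""
    return sum(preds[i] == preds[j] for i, j in duplicate_pairs) / len(duplicate_pairs)
```

    Separating accuracy (agreement with the truth) from consistency (agreement with oneself on duplicates) is what lets the study show that analysts can be internally inconsistent even when their overall accuracy looks reasonable.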

  1. Application of the nonlinear time series prediction method of genetic algorithm for forecasting surface wind of point station in the South China Sea with scatterometer observations

    International Nuclear Information System (INIS)

    Zhong Jian; Dong Gang; Sun Yimei; Zhang Zhaoyang; Wu Yuqin

    2016-01-01

    The present work reports the development of a nonlinear time series prediction method combining a genetic algorithm (GA) with singular spectrum analysis (SSA) for forecasting the surface wind at point stations in the South China Sea (SCS) with scatterometer observations. Before the nonlinear GA technique is used to forecast the time series of surface wind, SSA is applied to reduce the noise. The surface wind speed and surface wind components from scatterometer observations at three locations in the SCS have been used to develop and test the technique. The predictions have been compared with persistence forecasts in terms of root mean square error. The surface wind predicted with GA and SSA up to four days in advance (longer for some point stations) has been found to be significantly superior to persistence forecasts. This method can serve as a cost-effective alternate prediction technique for forecasting the surface wind of a point station in the SCS basin. (paper)
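
    The persistence baseline against which the forecasts are judged is simple to state: the forecast for any lead time is the last observed value, and skill is measured by root mean square error. A minimal sketch on an invented wind series (not the scatterometer data):

```python
import math

def rmse(pred, obs):
    """Root mean square error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def persistence_rmse(series, horizon):
    """Persistence forecast: the wind at t + horizon is predicted to
    equal the wind observed at t."""
    pred = series[:-horizon]
    obs = series[horizon:]
    return rmse(pred, obs)
```

    A model forecast is "significantly superior" in the abstract's sense when its RMSE at a given horizon is well below the persistence RMSE computed this way.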

  2. To the bottom of the stop: calibration of bottom-quark jets identification algorithms and search for scalar top-quarks and dark matter with the Run I ATLAS data

    NARCIS (Netherlands)

    Pani, P.

    2014-01-01

    In the first part of this thesis, the results of a calibration of bottom-quark jet identification algorithms are reported. The analysis is performed with 5 fb⁻¹ of proton-proton collisions at 7 TeV centre-of-mass energy recorded by the ATLAS detector at the LHC. A b-jet enriched sample from fully

  3. Response to "Comment on 'Identification of low order manifolds: Validating the algorithm of Maas and Pope'" [Chaos 16, 048101 (2006)]

    Science.gov (United States)

    Rhodes, Carl; Morari, Manfred; Wiggins, Stephen

    2006-12-01

    Flockerzi and Heineken [Chaos 16, 048101 (2006)] present two examples with the goal of elucidating issues related to the Maas and Pope method for identifying low dimensional "slow" manifolds in systems with a time-scale separation. The goal of their first example is to show that the result claimed by Rhodes et al. [Chaos 9, 108-123 (1999)], that the Maas and Pope algorithm identifies the slow invariant manifold in the situation in which there is finite time-scale separation, is incorrect. We show that their arguments result from an incomplete understanding of the situation and that, in fact, their example supports, and is completely consistent with, the result in Rhodes et al. Their second example claims to be a counterexample to a conjecture in Rhodes et al. that away from the slow manifold the criterion of Maas and Pope [Combust. Flame 88, 239-264 (1992)] will never be fulfilled. While this conjecture may indeed be false, we argue that it is not clear that the example presented by Flockerzi and Heineken is indeed a counterexample.

  4. Identification of a Threshold Value for the DEMATEL Method: Using the Maximum Mean De-Entropy Algorithm

    Science.gov (United States)

    Chung-Wei, Li; Gwo-Hshiung, Tzeng

    To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases to find the interrelationships between the criteria for evaluating effects in E-learning programs as an example, we compare the results obtained from the respondents and from our method, and discuss the differences between the impact-relations maps produced by these two methods.
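
    The DEMATEL stage that precedes threshold selection can be sketched as follows. This shows the standard total-influence matrix T = D (I - D)^-1 and, in place of the paper's maximum mean de-entropy procedure, a simple mean-of-T threshold; the 4-criterion direct-influence matrix is purely illustrative:

```python
import numpy as np

def dematel_total_relation(A):
    """Standard DEMATEL: normalize the direct-influence matrix A by the
    largest row/column sum, then compute T = D (I - D)^-1."""
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    D = A / s
    n = A.shape[0]
    return D @ np.linalg.inv(np.eye(n) - D)

# Illustrative direct-influence matrix for 4 criteria (0-3 influence scale).
A = np.array([[0, 3, 2, 1],
              [1, 0, 2, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], float)
T = dematel_total_relation(A)

# A simple threshold (mean of T) stands in for the MMDE procedure: only
# influences above the threshold are kept as edges in the impact-relations map.
threshold = T.mean()
impact_map = T > threshold
```

    The MMDE algorithm described in the paper replaces the `T.mean()` line with an entropy-based search over candidate thresholds; the surrounding matrix algebra is unchanged.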

  5. Another turn of the screw in shaving Gram-positive bacteria: Optimization of proteomics surface protein identification in Streptococcus pneumoniae.

    Science.gov (United States)

    Olaya-Abril, Alfonso; Gómez-Gascón, Lidia; Jiménez-Munguía, Irene; Obando, Ignacio; Rodríguez-Ortega, Manuel J

    2012-06-27

    Bacterial surface proteins are of utmost importance as they play critical roles in the interaction between cells and their environment. In addition, they can be targets of either vaccines or antibodies. Proteomic analysis through "shaving" live cells with proteases has become a successful approach for fast and reliable identification of surface proteins. However, this protocol has not been able to reach the goal of excluding cytoplasmic contamination, as cell lysis is an inherent process during culture and experimental manipulation. In this work, we carried out the optimization of the "shaving" strategy for the Gram-positive human pathogen Streptococcus pneumoniae, a bacterium highly susceptible to autolysis, and set up the conditions for maximizing the identification of surface proteins containing sorting or exporting signals, and for minimizing cytoplasmic contamination. We also demonstrate that cell lysis is an inherent process during culture and experimental manipulation, and that a low level of lysis is enough to contaminate a "surfome" preparation with peptides derived from cytoplasmic proteins. When the optimized conditions were applied to several clinical isolates, we found the majority of the proteins previously described as inducing protection against pneumococcal infection. We also found other proteins whose protective capacity has not yet been tested. In addition, we show the utility of this approach for providing antigens that can be used in serological tests for the diagnosis of pneumococcal disease. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Identification and dynamics of proteins adhering to the surface of medical silicones in vivo and in vitro.

    Science.gov (United States)

    Backovic, Aleksandar; Huang, Hong-Lei; Del Frari, Barbara; Piza, Hildegunde; Huber, Lukas A; Wick, Georg

    2007-01-01

    Silicone has been used in medical practice as a paradigmatic implant material for decades despite significant detrimental side effects. Our targeted proteomics approach was aimed at identifying the proteins adsorbed to the surface of silicone, because they have been characterized as key components in the onset and perpetuation of local immune reactions to silicone. The composition of the proteinaceous film, the dynamics of protein deposition, and protein modifications after adsorption were analyzed both in vivo and in vitro. Differential analysis of protein deposition was performed, followed by protein identification with mass spectrometry, database matching, and Western blots. Thus far, we have identified the 30 most abundant proteins deposited on the surface of silicone, the largest known inventory of such proteins so far. Structural and extracellular matrix proteins predominated, followed by mediators of host defense, metabolism, transport, and stress-related proteins. In addition, several biochemical modifications of fibronectin, vitronectin, and heat shock protein 60 were detected. Our analyses also revealed previously undetected proteins deposited on the surface of silicone. As tentative initiators and/or modulators of the response to silicone, they are therefore valuable candidates for prognosis and therapy.

  7. Surface Enhanced Raman Spectroscopy for the Rapid Detection and Identification of Microbial Pathogens in Human Serum

    Science.gov (United States)

    2014-12-11

    Abbreviations: %CV, coefficient of variation; AgNR, silver nanorod; A. baumannii, Acinetobacter baumannii; ADP ... and 1 mm depth. Bacterial culture and cell count determination: bacterial species of Acinetobacter baumannii (A. baumannii, ST-3), Escherichia coli ... identification of bacteria in pooled human sera. Methods: species of Acinetobacter baumannii, Escherichia coli, Klebsiella pneumoniae, Staphylococcus

  8. Identification of air and sea-surface targets with a laser range profiler

    NARCIS (Netherlands)

    Heuvel, J.C. van den; Schoemaker, R.M.; Schleijpen, H.M.A.

    2009-01-01

    Current coastal operations have to deal with threats at short range in complex environments with both neutral and hostile targets. There is a need for fast identification, which is possible with a laser range profiler. A number of field trials have been conducted to validate the concept of

  9. An algorithm for hyperspectral remote sensing of aerosols: 2. Information content analysis for aerosol parameters and principal components of surface spectra

    Science.gov (United States)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.

    2017-05-01

    This paper describes the second part of a series of investigations to develop algorithms for the simultaneous retrieval of aerosol parameters and surface reflectance from future hyperspectral and geostationary satellite sensors such as Tropospheric Emissions: Monitoring of POllution (TEMPO). The information content in these hyperspectral measurements is analyzed for 6 principal components (PCs) of surface spectra and a total of 14 aerosol parameters that describe the columnar aerosol volume Vtotal, the fine-mode aerosol volume fraction, and the size distribution and wavelength-dependent index of refraction of both coarse- and fine-mode aerosols. Forward simulations of atmospheric radiative transfer are conducted for 5 surface types (green vegetation, bare soil, rangeland, concrete and a mixed surface case) and a wide range of aerosol mixtures. It is shown that the PCs of surface spectra in the atmospheric window channels can be derived from the top-of-the-atmosphere reflectance under conditions of low aerosol optical depth (AOD ≤ 0.2 at 550 nm), with a relative error of 1%. With degrees-of-freedom-for-signal analysis and the sequential forward selection method, common bands for different aerosol mixture types and surface types can be selected for aerosol retrieval. The first 20% of our selected bands accounts for more than 90% of the information content for aerosols, and only 4 PCs are needed to reconstruct surface reflectance. However, the information content in these common bands from each individual TEMPO observation is insufficient for the simultaneous retrieval of the surface's PC weight coefficients and multiple aerosol parameters (other than Vtotal). In contrast, with multiple observations of the same location from TEMPO on multiple consecutive days, 1-3 additional aerosol parameters could be retrieved. 
    Consequently, a self-adjustable aerosol retrieval algorithm that accounts for surface types, AOD conditions, and multiple consecutive observations is recommended to derive

  10. A MATLAB-based graphical user interface for the identification of muscular activations from surface electromyography signals.

    Science.gov (United States)

    Mengarelli, Alessandro; Cardarelli, Stefano; Verdini, Federica; Burattini, Laura; Fioretti, Sandro; Di Nardo, Francesco

    2016-08-01

    In this paper a graphical user interface (GUI) built in the MATLAB® environment is presented. This interactive tool has been developed for the analysis of surface electromyography (sEMG) signals and in particular for the assessment of muscle activation time intervals. After signal import, the tool performs a first analysis in a totally user-independent way, providing a reliable computation of the muscular activation sequences. Furthermore, the user has the opportunity to modify each parameter of the on/off identification algorithm implemented in the presented tool. The presence of a user-friendly GUI allows immediate evaluation of the effects that the modification of every single parameter has on activation-interval recognition, through real-time updating and visualization of the muscular activation/deactivation sequences. The possibility to accept the initial signal analysis or to modify the on/off identification for each considered signal, with real-time visual feedback, makes this GUI-based tool a valuable instrument in clinical and research applications, and also from an educational perspective.
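
    The on/off identification underlying such a tool can be sketched with a common envelope-plus-threshold rule. This is a minimal single-threshold sketch under assumed parameters, not the algorithm implemented in the presented GUI:

```python
import numpy as np

def emg_intervals(emg, fs, win_ms=50, k=4.0, rest_samples=500):
    """Estimate muscle activation (on/off) intervals from a raw sEMG trace:
    rectify, smooth with a moving-average envelope, then threshold at the
    resting-baseline mean + k standard deviations."""
    w = max(1, int(fs * win_ms / 1000))
    env = np.convolve(np.abs(emg), np.ones(w) / w, mode="same")
    rest = env[:rest_samples]              # assumes the trace starts at rest
    active = env > rest.mean() + k * rest.std()
    # Collapse the boolean mask into (onset, offset) sample-index pairs.
    padded = np.r_[0, active.astype(int), 0]
    edges = np.flatnonzero(np.diff(padded))
    return list(zip(edges[::2], edges[1::2]))

# Demo: baseline noise with one activation burst between samples 1000 and 1500.
rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(2000)
sig[1000:1500] += rng.standard_normal(500)
intervals = emg_intervals(sig, fs=1000)
```

    In a GUI such as the one described, `win_ms`, `k`, and the baseline segment would be the user-adjustable parameters, with the detected intervals redrawn on each change.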

  11. An Efficient Vector-Raster Overlay Algorithm for High-Accuracy and High-Efficiency Surface Area Calculations of Irregularly Shaped Land Use Patches

    Directory of Open Access Journals (Sweden)

    Peng Xie

    2017-05-01

    Full Text Available The Earth’s surface is uneven, and conventional area calculation methods return the projected planar area, ignoring the actual undulation of the Earth’s surface and simplifying the Earth’s shape to a standard ellipsoid. However, the true surface area is important for investigating and evaluating land resources. In this study, the authors propose a new method based on an efficient vector-raster overlay algorithm (the VROA-based method) to calculate the surface areas of irregularly shaped land use patches. In this method, a surface area raster file is first generated from the raster-based digital elevation model (raster-based DEM). Then, a vector-raster overlay algorithm (VROA) is used that precisely clips raster cells with the vector polygon boundary. Xiantao City, Luotian County, and the Shennongjia Forestry District, which are representative of a plain landform, a hilly topography, and a mountain landscape, respectively, are selected to calculate the surface area. Compared with a traditional method based on triangulated irregular networks (the TIN-based method), our method significantly reduces the processing time. In addition, our method effectively improves the accuracy compared with another traditional method based on the raster-based DEM (the raster-based method). Therefore, the method satisfies the requirements of large-scale engineering applications.
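
    The core idea of replacing projected area with true surface area can be sketched on a raster DEM using the common slope-correction approximation (planimetric cell area divided by cos(slope)). This is not the paper's VROA, which additionally clips raster cells precisely with the vector polygon boundary:

```python
import numpy as np

def surface_area(dem, cell):
    """Approximate the true surface area of a raster DEM: each cell's
    planimetric area (cell * cell) is inflated by 1/cos(slope), with the
    slope taken from central-difference elevation gradients."""
    gy, gx = np.gradient(dem, cell)
    # For a surface z(x, y): 1/cos(slope) = sqrt(1 + |grad z|^2)
    return float(np.sum(cell * cell * np.sqrt(1.0 + gx**2 + gy**2)))

# Demo: a flat DEM keeps its planimetric area; a plane with slope 1
# along x inflates it by sqrt(2).
flat = np.zeros((10, 10))
tilted = np.tile(np.arange(10.0), (10, 1))
```

    A patch-level calculation would sum this per-cell quantity only over cells (or cell fractions) inside the patch polygon, which is exactly where the VROA clipping enters.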

  12. Comparison of Satellite Reflectance Algorithms for Estimating Phycocyanin Values and Cyanobacterial Total Biovolume in a Temperate Reservoir Using Coincident Hyperspectral Aircraft Imagery and Dense Coincident Surface Observations

    Directory of Open Access Journals (Sweden)

    Richard Beck

    2017-05-01

    Full Text Available We analyzed 27 established and new (simple, and therefore potentially portable) satellite phycocyanin pigment reflectance algorithms for estimating cyanobacterial values in a temperate 8.9 km² reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense coincident water-surface observations collected from 44 sites within 1 h of image acquisition. The algorithms were adapted to real Compact Airborne Spectrographic Imager (CASI) imagery and synthetic WorldView-2, Sentinel-2, Landsat-8, MODIS and Sentinel-3/MERIS/OLCI imagery, resulting in 184 variants and corresponding image products. Image products were compared to the coincident cyanobacterial surface observations to identify groups of promising algorithms for operational algal bloom monitoring. Several of the algorithms were found useful for estimating phycocyanin values with each sensor type except MODIS in this small lake. In situ phycocyanin measurements correlated strongly (r² = 0.757) with the cyanobacterial sum of total biovolume (CSTB), allowing us to estimate both phycocyanin values and CSTB for all of the satellites considered except MODIS in this situation.

  13. Raman and surface-enhanced Raman spectroscopy of amino acids and nucleotide bases for target bacterial vibrational mode identification

    Science.gov (United States)

    Guicheteau, Jason; Argue, Leanne; Hyre, Aaron; Jacobson, Michele; Christesen, Steven D.

    2006-05-01

    Raman and surface-enhanced Raman spectroscopy (SERS) studies of bacteria have reported a wide range of vibrational mode assignments associated with biological material. We present Raman and SER spectra of the amino acids phenylalanine, tyrosine, tryptophan, glutamine, cysteine, alanine, proline, methionine, asparagine, threonine, valine, glycine, serine, leucine, isoleucine, aspartic acid and glutamic acid and the nucleic acid bases adenosine, guanosine, thymidine, and uridine to better characterize biological vibrational mode assignments for bacterial target identification. We also report spectra of the bacteria Bacillus globigii, Pantoea agglomerans, and Yersinia rhodei along with band assignments determined from the reference spectra obtained.

  14. Printed high-frequency RF identification antenna on ultrathin polymer film by simple production process for soft-surface adhesive device

    Science.gov (United States)

    Hayata, Hiroki; Okamoto, Marin; Takeoka, Shinji; Iwase, Eiji; Fujie, Toshinori; Iwata, Hiroyasu

    2017-05-01

    In this paper, we present a simple method for manufacturing electronic devices using ultrathin polymer films, and develop a high-frequency RF identification (RFID) tag. To expand the market for flexible devices, it is important to enhance their adhesiveness and conformability to surfaces, to simplify their fabrication, and to reduce their cost. We developed a method to design an antenna for an operable RFID tag whose wiring can be applied by commercially available inkjet or simple screen printing, and successfully fabricated the RFID tag. By using ultrathin (less than 750 nm) films made of polystyrene-block-polybutadiene-block-polystyrene (SBS) as substrates, the films could be attached to various surfaces, including soft surfaces, by van der Waals forces and without using glue. We succeeded in the simple fabrication of an ultrathin RFID tag using a commercial or simple printing process.

  15. Identification of new candidate drugs for lung cancer using chemical-chemical interactions, chemical-protein interactions and a K-means clustering algorithm.

    Science.gov (United States)

    Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong

    2016-01-01

    Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. To date, effective treatment of this disease is limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identifying effective lung cancer drug candidates among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. Chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and the K-means clustering algorithm were employed to exclude candidate drugs with a low likelihood of treating lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities, and most of them are structurally dissimilar to approved lung cancer drugs.
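
    The clustering-based screening step can be sketched with a plain Lloyd's k-means; the feature vectors below are random stand-ins for the chemical-chemical and chemical-protein interaction profiles used in the study, and the keep-if-clustered-with-an-approved-drug rule is a simplified reading of the filtering idea:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means (Lloyd's algorithm) returning one cluster label per row."""
    local = np.random.default_rng(seed)
    centers = X[local.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Illustrative screen: 5 "approved drugs" and 10 drug-like candidates sit in
# one region of feature space; 10 dissimilar candidates sit far away.
rng = np.random.default_rng(1)
approved = rng.normal(0.0, 0.3, (5, 8))
candidates = np.vstack([rng.normal(0.0, 0.3, (10, 8)),
                        rng.normal(5.0, 0.3, (10, 8))])
X = np.vstack([approved, candidates])
labels = kmeans(X, 2)

# Keep only candidates that share a cluster with at least one approved drug.
keep = [i for i in range(20) if labels[5 + i] in set(labels[:5])]
```

    The paper's pipeline additionally applies a permutation test before this stage; here only the clustering filter is illustrated.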

  16. Identification of unknown contaminants in surface water : combination of analytical and computer-based approaches

    OpenAIRE

    Hu, Meng

    2017-01-01

    Thousands of different chemicals are used in our daily lives for household, industrial, agricultural and medical purposes, and many of them are discharged into water bodies in direct or indirect ways. Thus, monitoring and identification of organic pollutants in aquatic ecosystems is one of the most essential concerns with respect to human health and aquatic life. Although liquid chromatography coupled to high resolution mass spectrometry (LC-HRMS) has made huge advancements in recent years, allow...

  17. Effect of time sequences in scanning algorithms on the surface temperature during corneal laser surgery with high-repetition-rate excimer laser.

    Science.gov (United States)

    Mrochen, Michael; Schelling, Urs; Wuellner, Christian; Donitzky, Christof

    2009-04-01

    To investigate the influence of temporal and spatial spot sequences on the increase in ocular surface temperature during corneal laser surgery with a high-repetition-rate excimer laser. Institute for Refractive and Ophthalmic Surgery, Zurich, Switzerland, and WaveLight AG, Erlangen, Germany. An argon-fluoride excimer laser system working at a repetition rate of 1050 Hz was used to photoablate bovine corneas with various myopic, hyperopic, and phototherapeutic ablation profiles. The temporal distribution of ablation profiles was modified by 4 spot sequences: line, circumferential, random, and an optimized scan algorithm. The increase in ocular surface temperature was measured using an infrared camera. The maximum and mean ocular surface temperature increases depended primarily on the spatial and temporal distribution of the spots during photoablation and the amount of refractive correction. The highest temperature increases occurred with the line and circumferential scan sequences. Significantly lower temperature increases were found with the optimized and random scan algorithms. High-repetition-rate excimer laser systems require spot sequences with an optimized temporal and spatial spot distribution to minimize the increase in ocular surface temperature. An increase in ocular surface temperature will always occur, depending on the amount of refractive correction, the type of ablation profile, the radiant exposure, and the repetition rate of the laser system.

  18. Optimal number and location of heaters in 2-D radiant enclosures composed of specular and diffuse surfaces using micro-genetic algorithm

    International Nuclear Information System (INIS)

    Safavinejad, A.; Mansouri, S.H.; Sakurai, A.; Maruyama, S.

    2009-01-01

    In this study, a combinatorial optimization methodology is presented for determining the optimal number and locations of equally powered heaters over parts of the boundary, called the heater surface, to satisfy the desired heat flux and temperature profiles over the design surface while keeping the total heater power constant but letting the number of heaters float. In a typical enclosure, the candidate locations for placing heaters are numerous. The optimal number and locations could in principle be found by checking all possible combinations of heater power ranges and locations on the heater surface; checking only a small portion of the total search space while still finding an overall optimal solution is therefore highly desirable. The micro-genetic algorithm is a candidate method with significant potential for this task. It was used to minimize an objective function expressed as the sum of squared errors between estimated and desired heat fluxes on the design surface. The radiation element method by ray emission model (REM2) was used to calculate the radiative heat flux on the design surface; it enabled us to handle the effects of specular surfaces and blockage radiation due to the enclosure geometry. The capabilities of this methodology were demonstrated by finding the optimal number and positions of heaters in two irregular enclosures. The effects of refractory surface characteristics (i.e., diffuse and/or specular) on the optimal solution have been studied in detail. The results show that the refractory surface characteristics have profound effects on the optimal number and locations of heaters.
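
    The defining micro-genetic-algorithm loop (tiny population, elitism, restart on convergence) can be sketched on a toy version of this heater-placement problem. The 1/(1 + d²) flux kernel, slot counts, and target layout below are illustrative stand-ins for the REM2 radiative model, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
SLOTS, HEATERS = 20, 3                 # candidate boundary slots, heaters to place
DESIGN = np.linspace(0.0, 1.0, 15)     # sample points on the design surface

def flux(positions):
    """Toy radiative model: each heater adds a 1/(1 + d^2) kernel to the
    flux seen at every design-surface point."""
    x = np.asarray(positions)[:, None] / (SLOTS - 1)
    return (1.0 / (1.0 + (25.0 * (DESIGN[None, :] - x)) ** 2)).sum(axis=0)

target = flux([3, 9, 16])              # desired flux from a known layout

def fitness(ind):
    return -np.sum((flux(ind) - target) ** 2)  # minimize squared flux error

def crossover(a, b):
    pool = np.union1d(a, b)                    # recombine the parents' slots
    return np.sort(rng.choice(pool, HEATERS, replace=False))

def micro_ga(pop_size=5, gens=500):
    rand = lambda: np.sort(rng.choice(SLOTS, HEATERS, replace=False))
    pop = [rand() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        cur = max(pop, key=fitness)
        if fitness(cur) > fitness(best):
            best = cur
        if all(np.array_equal(p, pop[0]) for p in pop):
            # Hallmark micro-GA step: restart the converged tiny population
            # from scratch, keeping only the elite individual.
            pop = [best] + [rand() for _ in range(pop_size - 1)]
        else:
            pop = [best] + [crossover(pop[int(rng.integers(pop_size))],
                                      pop[int(rng.integers(pop_size))])
                            for _ in range(pop_size - 1)]
    return best

best = micro_ga()
```

    In the study, `flux` would be a REM2 evaluation of the enclosure, and the individuals would also encode the (floating) number of heaters.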

  19. Identification of Streptococcus equi ssp. zooepidemicus surface associated proteins by enzymatic shaving.

    Science.gov (United States)

    Wei, Zigong; Fu, Qiang; Liu, Xiaohong; Xiao, Pingping; Lu, Zhaohui; Chen, Yaosheng

    2012-10-12

    Streptococcus equi ssp. zooepidemicus (Streptococcus zooepidemicus, SEZ) is responsible for a wide variety of infections in many species. Attempts to control the infections caused by this agent are hampered by a lack of effective vaccines and useful diagnostic kits. Surface proteins of bacterial species are usually involved in interactions with the host and are promising candidates as biomarkers for serodiagnosis and as subunit vaccine components. In this study, the surface proteins of the SEZ C55138 strain were systematically identified by surface shaving with trypsin, and a total of 20 surface-associated proteins were found. Further analysis of five selected novel proteins (SzM, FBP, SAP, CSP and 5'-Nu) revealed that they are all expressed in vivo and that their recombinant derived proteins were reactive with convalescent sera. These identified immunogenic surface proteins have potential as SEZ vaccine candidates and diagnostic markers. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. High colored dissolved organic matter (CDOM) absorption in surface waters of the central-eastern Arctic Ocean: Implications for biogeochemistry and ocean color algorithms.

    Science.gov (United States)

    Gonçalves-Araujo, Rafael; Rabe, Benjamin; Peeken, Ilka; Bracher, Astrid

    2018-01-01

    As consequences of global warming, sea-ice shrinkage, permafrost thawing and changes in freshwater and terrestrial material export have already been reported in the Arctic environment. These processes impact light penetration and primary production. To reach a better understanding of the current status and to provide accurate forecasts, Arctic biogeochemical and physical parameters need to be extensively monitored. In this sense, bio-optical properties are useful to measure because optical instrumentation can be deployed on autonomous platforms, including satellites. This study characterizes the non-water absorbers and their coupling to hydrographic conditions in the poorly sampled surface waters of the central and eastern Arctic Ocean. Over the entire sampled area, colored dissolved organic matter (CDOM) dominates the light absorption in surface waters. The distribution of CDOM, phytoplankton and non-algal particle absorption reproduces the hydrographic variability in this region of the Arctic Ocean, which suggests a subdivision into five major bio-optical provinces: Laptev Sea Shelf, Laptev Sea, Central Arctic/Transpolar Drift, Beaufort Gyre and Eurasian/Nansen Basin. Evaluating ocean color algorithms commonly applied in the Arctic Ocean shows that global and regionally tuned empirical algorithms provide poor chlorophyll-a (Chl-a) estimates. The semi-analytical algorithms Generalized Inherent Optical Property model (GIOP) and Garver-Siegel-Maritorena (GSM), on the other hand, provide robust estimates of Chl-a and absorption of colored matter. Applying GSM with modifications proposed for the western Arctic Ocean produced reliable information on the absorption by colored matter, and specifically by CDOM. These findings highlight that only semi-analytical ocean color algorithms are able to identify with low uncertainty the distribution of the different optical water constituents in these high-CDOM-absorbing waters. 
In addition, a clustering of the Arctic Ocean

  1. Process Parameter Identification in Thin Film Flows Driven by a Stretching Surface

    Directory of Open Access Journals (Sweden)

    Satyananda Panda

    2014-01-01

    Full Text Available The flow of a thin liquid film over a heated stretching surface is considered in this study. Due to a potentially nonuniform temperature distribution on the stretching sheet, a temperature gradient occurs in the fluid which produces a surface tension gradient at the free surface of the thin film. As a result, the free surface deforms and these deformations are advected by the flow in the stretching direction. This work focuses on the inverse problem of reconstructing the sheet temperature distribution and the sheet stretch rate from observed free surface variations. It builds on the analysis of Santra and Dandapat (2009), who, based on the long-wave expansion of the Navier-Stokes equations, formulated a partial differential equation which describes the evolution of the thickness of a film over a nonisothermal stretched surface. In this work, we show that after algebraic manipulation of a discrete form of the governing equations, it is possible to reconstruct either the unknown temperature field on the sheet, and hence the resulting heat transfer, or the stretching rate of the underlying surface. We illustrate the proposed methodology and test its applicability on a range of test problems.

  2. A novel algorithm for delineating wetland depressions and mapping surface hydrologic flow pathways using LiDAR data

    Science.gov (United States)

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...

  3. Identification of malaria parasite-infected red blood cell surface aptamers by inertial microfluidic SELEX (I-SELEX)

    Science.gov (United States)

    Birch, Christina M.; Hou, Han Wei; Han, Jongyoon; Niles, Jacquin C.

    2015-07-01

    Plasmodium falciparum malaria parasites invade and remodel human red blood cells (RBCs) by trafficking parasite-synthesized proteins to the RBC surface. While these proteins mediate interactions with host cells that contribute to disease pathogenesis, the infected RBC surface proteome remains poorly characterized. Here we use a novel strategy (I-SELEX) to discover high affinity aptamers that selectively recognize distinct epitopes uniquely present on parasite-infected RBCs. Based on inertial focusing in spiral microfluidic channels, I-SELEX enables stringent partitioning of cells (efficiency ≥ 10⁶) from unbound oligonucleotides at high volume throughput (~2 × 10⁶ cells min⁻¹). Using an RBC model displaying a single, non-native antigen and live malaria parasite-infected RBCs as targets, we establish suitability of this strategy for de novo aptamer selections. We demonstrate recovery of a diverse set of aptamers that recognize distinct, surface-displayed epitopes on parasite-infected RBCs with nanomolar affinity, including an aptamer against the protein responsible for placental sequestration, var2CSA. These findings validate I-SELEX as a broadly applicable aptamer discovery platform that enables identification of new reagents for mapping the parasite-infected RBC surface proteome at higher molecular resolution to potentially contribute to malaria diagnostics, therapeutics and vaccine efforts.

  4. Calculating all local minima on liquidus surfaces using the FactSage software and databases and the Mesh Adaptive Direct Search algorithm

    International Nuclear Information System (INIS)

    Gheribi, Aimen E.; Robelin, Christian; Digabel, Sebastien Le; Audet, Charles; Pelton, Arthur D.

    2011-01-01

    Highlights: • Systematic search for low melting temperatures in multicomponent systems. • Calculation of eutectics in multicomponent systems. • The FactSage software and the direct search algorithm are used simultaneously. Abstract: It is often of interest, for a multicomponent system, to identify the low melting compositions at which local minima of the liquidus surface occur. The experimental determination of these minima can be very time-consuming. An alternative is to employ the CALPHAD approach using evaluated thermodynamic databases containing optimized model parameters giving the thermodynamic properties of all phases as functions of composition and temperature. Liquidus temperatures are then calculated by Gibbs free energy minimization algorithms which access the databases. Several such large databases for many multicomponent systems have been developed over the last 40 years, and calculated liquidus temperatures are generally quite accurate. In principle, one could then search for local liquidus minima by simply calculating liquidus temperatures over a compositional grid. In practice, such an approach is prohibitively time-consuming for all but the simplest systems since the required number of grid points is extremely large. In the present article, the FactSage database computing system is coupled with the powerful Mesh Adaptive Direct Search (MADS) algorithm in order to search for and calculate automatically all liquidus minima in a multicomponent system. Sample calculations for a 4-component oxide system, a 7-component chloride system, and a 9-component ferrous alloy system are presented. It is shown that the algorithm is robust and rapid.
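
    The coupling of a thermodynamic evaluator with a direct search can be illustrated with a minimal compass-style pattern search, a simplified relative of MADS without its dense polling directions or the multiple-minima bookkeeping; the quadratic "liquidus surface" below is a toy stand-in for a FactSage liquidus evaluation:

```python
import numpy as np

def pattern_search(f, x0, step=0.25, tol=1e-6, bounds=(0.0, 1.0)):
    """Minimal mesh-style direct search: poll the 2n coordinate directions;
    if no poll point improves on the incumbent, halve the mesh size."""
    x = np.array(x0, float)
    fx = f(x)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])
    while step > tol:
        improved = False
        for d in directions:
            y = np.clip(x + step * d, *bounds)   # stay inside composition bounds
            fy = f(y)
            if fy < fx:
                x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                          # refine the mesh
    return x, fx

def liquidus(x):
    """Toy 'liquidus surface' with a known minimum at composition (0.3, 0.7)."""
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 900.0

xmin, fmin = pattern_search(liquidus, [0.9, 0.1])
```

    In the article's setting, each call to `f` is a full Gibbs-energy-minimization liquidus calculation, which is exactly why avoiding a compositional grid matters.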

  5. Sensitivity of Global Sea-Air CO2 Flux to Gas Transfer Algorithms, Climatological Wind Speeds, and Variability of Sea Surface Temperature and Salinity

    Science.gov (United States)

    McClain, Charles R.; Signorini, Sergio

    2002-01-01

    Sensitivity analyses of sea-air CO2 flux to gas transfer algorithms, climatological wind speeds, sea surface temperatures (SST) and salinity (SSS) were conducted for the global oceans and selected regional domains. Large uncertainties in the global sea-air flux estimates are identified, arising from the choice of gas transfer algorithm, global climatological wind speeds, and seasonal SST and SSS data. The global sea-air flux ranges from -0.57 to -2.27 Gt/yr, depending on the combination of gas transfer algorithms and global climatological wind speeds used. Different combinations of SST and SSS global fields resulted in changes as large as 35% in the global sea-air flux. An error as small as plus or minus 0.2 in SSS translates into a plus or minus 43% deviation in the mean global CO2 flux. This result emphasizes the need for highly accurate satellite SSS observations for the development of remote sensing sea-air flux algorithms.

  6. Upper Crustal Shear Structure of NE Wyoming Inverted by Regional Surface Waves From Mining Explosions-Comparison of Niching Genetic Algorithms and Least-Squares Inversion

    Science.gov (United States)

    Zhou, R.; Stump, B. W.

    2001-12-01

    Surface-wave dispersion analysis of regional seismograms from mining explosions is used to extract shallow subsurface structural models. Seismograms along a number of azimuths were recorded at near-regional distances from mining explosions in Northeast Wyoming. The group velocities of the fundamental-mode Rayleigh wave were determined using Multiple Filter Analysis (MFA) and refined with the Phase Matched Filtering (PMF) technique. The surface wave dispersion curves covered the period range of 2 to 12 sec, with group velocities ranging from 1.3 to 2.9 km/sec. Besides least-squares inversion, a niching genetic algorithm (NGA) was introduced for crustal shear-wave velocity inversion. Niching methods are techniques designed specifically to maintain diversity and promote the formation and maintenance of stable sub-populations in the traditional genetic algorithm. This methodology identifies multiple candidate solutions when applied to both multimodal optimization and classification problems. Considering the nonuniqueness of the inversion problem, the capacity of the NGA to retrieve classes of S-wave velocity structural profiles from the dispersion curves is explored. Synthetic tests illustrate the range of nonuniqueness in linear surface wave inversion problems. Application of this new technique to regional surface wave observations from the Powder River Basin provides classes of models from which the one most consistent with geologic constraints can be chosen.
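
    Fitness sharing is a common way to implement niching: each individual's fitness is divided by a niche count, so crowded peaks are penalized and several sub-populations survive. The sketch below is a generic illustration (selection and Gaussian mutation only, no crossover), not the specific NGA used in the study; the bimodal objective and all parameters are hypothetical:

```python
import math
import random

def shared_fitness(pop, raw, sigma=0.1):
    """Divide each raw fitness by its niche count (triangular sharing
    kernel of radius sigma), penalizing crowded regions."""
    shared = []
    for i, xi in enumerate(pop):
        niche = sum(max(0.0, 1.0 - abs(xi - xj) / sigma) for xj in pop)
        shared.append(raw[i] / niche)
    return shared

def niching_ga(f, bounds, pop_size=60, gens=120, sigma=0.1):
    """Toy niching GA: tournament selection on shared fitness,
    then Gaussian mutation, clipped to the search bounds."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        raw = [f(x) for x in pop]
        fit = shared_fitness(pop, raw, sigma)
        new = []
        for _ in range(pop_size):
            a, b = random.sample(range(pop_size), 2)
            parent = pop[a] if fit[a] > fit[b] else pop[b]
            new.append(min(hi, max(lo, parent + random.gauss(0.0, 0.02))))
        pop = new
    return pop

# Bimodal objective: equal-height peaks at x = 0.25 and x = 0.75,
# standing in for two equally plausible velocity models.
f = lambda x: (math.exp(-((x - 0.25) / 0.05) ** 2)
               + math.exp(-((x - 0.75) / 0.05) ** 2))
final_pop = niching_ga(f, (0.0, 1.0))
```

    In the dispersion-inversion setting each "individual" would be a layered S-wave velocity profile and `f` a misfit-based fitness, with sharing keeping distinct profile classes alive.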

  7. Non-invasive identification of metal-oxalate complexes on polychrome artwork surfaces by reflection mid-infrared spectroscopy.

    Science.gov (United States)

    Monico, Letizia; Rosi, Francesca; Miliani, Costanza; Daveri, Alessia; Brunetti, Brunetto G

    2013-12-01

    In this work, a reflection mid-infrared spectroscopy study of twelve metal-oxalate complexes, of interest in art conservation science as alteration compounds, was performed. Spectra of the reference materials highlighted the presence of derivative-like and/or inverted features for the fundamental vibrational modes as a result of the main contribution from the surface component of the reflected light. In order to provide insights into the interpretation of these spectral distortions, the reflection spectra were compared with conventional transmission ones. The Kramers-Kronig (KK) algorithm, employed to correct for the surface reflection distortions, worked properly only for the derivative-like bands; caution is therefore recommended when using this algorithm to interpret reflection spectra. The outcome of this investigation was exploited to discriminate among different oxalates on thirteen polychrome artworks analyzed in situ by reflection mid-infrared spectroscopy. The visualization of the νs(CO) modes (1400-1200 cm(-1)) and low wavenumber bands (below 900 cm(-1)) in the raw reflection profiles allowed Ca, Cu and Zn oxalates to be identified. Further information about the speciation of different hydration forms of calcium oxalates was obtained by using the KK transform. The work proves reflection mid-infrared spectroscopy to be a reliable and sensitive spectro-analytical method for identifying and mapping different metal-oxalate alteration compounds on the surface of artworks, thus providing conservation scientists with a non-invasive tool to obtain information on the state of conservation and causes of alteration of artworks. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Morphological and Molecular Identification of Acanthamoeba Spp from Surface Waters in Birjand, Iran, During 2014-2015

    Directory of Open Access Journals (Sweden)

    Mahmoodreza Behravan

    2016-04-01

    Full Text Available Background & Aims of the Study: Free-living amoebae (FLA) are opportunistic and ubiquitous protozoa that are widely found in various environmental sources. They are known to cause serious human infections, including a fatal encephalitis, a blinding keratitis, and pneumonia. Given their medical importance, the identification of free-living amoebae in water resources, as a source of human infection, is necessary. The objective of this study was to isolate Acanthamoeba spp. from the surface waters of Birjand, Iran, during 2014-2015 by morphological and molecular methods. Materials and Methods: In a cross-sectional study, 50 samples were collected from different localities of Birjand city, including surface waters, pools and fountains in parks, squares and water stations, from October 2014 to January 2015. Each sample was filtered through a nitrocellulose membrane filter and cultured on non-nutrient agar (NNA) with an Escherichia coli suspension, then incubated for 1 week to 2 months at room temperature. The plates were examined by microscopy to morphologically identify Acanthamoeba species. Following DNA extraction, PCR with specific primers was used to confirm the morphological identification. Results: Out of 50 water samples, 19 (38%) were positive for Acanthamoeba trophozoites and cysts according to the morphological criteria. In addition, Acanthamoeba spp. were identified by the PCR method, using genus-specific primer pairs, in 15 (78.9%) of the positive cultures, showing a nearly 500 bp band. Conclusion: Given the prevalence of Acanthamoeba spp. in the stagnant surface waters of Birjand, regional clinicians and health practitioners should pay more attention to the potential role of such waters in the transmission of infection.

  9. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    Science.gov (United States)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.

  10. Identification and characterization of surface antigens in parasites, using radiolabelling techniques

    International Nuclear Information System (INIS)

    Ramasamy, R.

    1982-04-01

    Surface proteins of Schistosoma sp. and Leishmania sp. were studied using 125-Iodine as a tracer. The surface proteins were labelled by the lactoperoxidase method and then separated using SDS-PAGE and autoradiography. The possible immunogens were then separated using immunoprecipitation and fluorescent antibody techniques with sera from patients or from artificially immunized rabbits. Four common antigens were identified from the surfaces of male and female adult worms, cercariae and schistosomulae of S. mansoni. These antigens, which had molecular weights of 150,000, 78,000, 45,000, and 22,000, were also isolated from the surfaces of S. haematobium adults. The surface antigens on promastigotes of a Kenyan strain of Leishmania donovani were separated into three protein antigens with molecular weights of 66,000, 59,000 and 43,000, respectively. The 59,000 molecular weight antigen was a glycoprotein and was common to promastigotes of American and Indian strains of L. donovani and to L. braziliensis mexicana. None of the isolated antigens has been shown to have a protective effect when used to vaccinate mice, but the study illustrates the value of radionuclide tracers in unravelling the mosaic of antigens which parasites possess.

  11. Identification of soil erosion land surfaces by Landsat data analysis and processing

    International Nuclear Information System (INIS)

    Lo Curzio, S.

    2009-01-01

    In this paper, we outline the typical relationship between the spectral reflectance of newly-formed land surfaces and their geomorphological features. These land surfaces are the products of superficial erosional processes driven by gravity and/or water, and are therefore highly representative of the strong soil degradation occurring in a wide area located on the boundary between the Molise and Puglia regions (Southern Italy). The results of this study have been reported on thematic maps, on which the detected erosional land surfaces have been mapped on the basis of their typical spectral signatures. The study was performed using Landsat satellite imagery, which was then validated by means of field survey data. The satellite data were processed using remote sensing techniques such as false colour compositing, contrast stretching, principal component analysis and decorrelation stretching. The study made it possible to produce, in a relatively short time and at low expense, a map of the eroded land surfaces. Such a result represents a first and fundamental step in evaluating and monitoring the erosional processes in the study area.
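
    Of the enhancement techniques listed, a linear percentile contrast stretch is the simplest to sketch. This is a generic illustration, not the exact processing chain used in the study; the percentile cut-offs are common defaults assumed here:

```python
import numpy as np

def contrast_stretch(band, lo_pct=2.0, hi_pct=98.0):
    """Linear percentile contrast stretch: map the [lo_pct, hi_pct]
    percentile range of a band's values onto [0, 1], clipping the
    tails, so subtle reflectance differences become visible."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical single-band digital numbers, 0..100
band = np.arange(101, dtype=float)
stretched = contrast_stretch(band)
```

    In practice each Landsat band would be stretched independently before compositing, while PCA and decorrelation stretching operate across bands.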

  12. Implementation of Freeman-Wimley prediction algorithm in a web-based application for in silico identification of beta-barrel membrane proteins

    Directory of Open Access Journals (Sweden)

    José Antonio Agüero-Fernández

    2015-11-01

    Full Text Available Beta-barrel type proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in the virulence mechanisms of pathogens have made them an interesting target in the search for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-based application was implemented. This system, named Beta Predictor, is capable of processing anything from a single protein sequence to complete predicted proteomes of up to 10000 proteins, with a runtime of about 0.019 seconds per 500-residue protein, and it allows graphical analyses for each protein. The application was evaluated with a validation set of 535 non-redundant proteins: 102 TMBBs and 433 non-TMBBs. The sensitivity, specificity, Matthews correlation coefficient, positive predictive value and accuracy were calculated, being 85.29%, 95.15%, 78.72%, 80.56% and 93.27%, respectively. The performance of this system was compared with that of the TMBB predictors BOMP and TMBHunt, using the same validation set. In the order given above, the results were: 76.47%, 99.31%, 83.05%, 96.30% and 94.95% for BOMP, and 78.43%, 92.38%, 67.90%, 70.17% and 89.78% for TMBHunt. Beta Predictor was outperformed by BOMP but performed better than TMBHunt.
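
    The five reported metrics all follow from the confusion-matrix counts. The sketch below recomputes them; the counts are back-calculated from the reported rates for the 535-protein validation set (an assumption, since the record states only the rates):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics used to evaluate
    TMBB predictors such as Beta Predictor, BOMP and TMBHunt."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, mcc, ppv, acc

# Counts back-calculated (an assumption) from the reported rates:
# 102 TMBBs (87 found, 15 missed) and 433 non-TMBBs (412 rejected, 21 false hits)
sens, spec, mcc, ppv, acc = classification_metrics(tp=87, fp=21, tn=412, fn=15)
# sens ≈ 0.853, spec ≈ 0.952, mcc ≈ 0.787, ppv ≈ 0.806, acc ≈ 0.933
```

    These counts reproduce the published percentages, which is a useful sanity check when comparing predictors evaluated on the same validation set.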

  13. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  14. Identification and characterization of the surface-layer protein of Clostridium tetani.

    Science.gov (United States)

    Qazi, Omar; Brailsford, Alan; Wright, Anne; Faraar, Jeremy; Campbell, Jim; Fairweather, Neil

    2007-09-01

    Many bacterial species produce a paracrystalline layer, the surface layer, which completely surrounds the exterior of the cell. In some bacteria, the surface layer is implicated in pathogenesis. Two proteins present in cell wall extracts from Clostridium tetani have been investigated, and one of these has been unambiguously identified as the surface-layer protein (SLP). The gene encoding the SLP, slpA, has been located in the genome of C. tetani E88. The molecular mass of the protein as determined by sodium dodecyl sulfate-polyacrylamide gel electrophoresis is considerably larger than that predicted from the gene; however, the protein does not appear to be glycosylated. Furthermore, analysis of five C. tetani strains, including three recent clinical isolates, shows considerable variation in the sizes of the SLP.

  15. Bulk and surface event identification in p-type germanium detectors

    Science.gov (United States)

    Yang, L. T.; Li, H. B.; Wong, H. T.; Agartioglu, M.; Chen, J. H.; Jia, L. P.; Jiang, H.; Li, J.; Lin, F. K.; Lin, S. T.; Liu, S. K.; Ma, J. L.; Sevda, B.; Sharma, V.; Singh, L.; Singh, M. K.; Singh, M. K.; Soma, A. K.; Sonay, A.; Yang, S. W.; Wang, L.; Wang, Q.; Yue, Q.; Zhao, W.

    2018-04-01

    P-type point-contact germanium detectors have been adopted for light WIMP dark matter searches and studies of low energy neutrino physics. These detectors exhibit anomalous behavior for events located in the surface layer. The previous spectral shape method for distinguishing these surface events from bulk signals relies on spectral shape assumptions and the use of external calibration sources. We report an improved method of separating them by taking the ratios among different categories of in situ event samples as calibration sources. Data from the CDEX-1 and TEXONO experiments are re-examined using the ratio method. Results are shown to be consistent with the spectral shape method.

  16. Atomistic modeling of metal surfaces under electric fields: direct coupling of electric fields to a molecular dynamics algorithm

    CERN Document Server

    Djurabekova, Flyura; Pohjonen, Aarne; Nordlund, Kai

    2011-01-01

    The effect of electric fields on metal surfaces is fairly well studied, resulting in numerous analytical models developed to understand the mechanisms of ionization of surface atoms observed at very high electric fields, as well as the general behavior of a metal surface in this condition. However, the derivation of analytical models does not include explicitly the structural properties of metals, missing the link between the instantaneous effects owing to the applied field and the consequent response observed in the metal surface as a result of an extended application of an electric field. In the present work, we have developed a concurrent electrodynamic–molecular dynamic model for the dynamical simulation of an electric-field effect and subsequent modification of a metal surface in the framework of an atomistic molecular dynamics (MD) approach. The partial charge induced on the surface atoms by the electric field is assessed by applying the classical Gauss law. The electric forces acting on the partially...
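
    The coupling described, an induced surface charge obtained from Gauss's law feeding an extra force into the MD integrator, can be sketched in one dimension as follows. All parameter values are toy assumptions chosen only to illustrate the mechanism, not the actual electrodynamic-MD model:

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def induced_charge(E, area):
    """Gauss's law at a conductor surface: sigma = eps0 * E, so a surface
    atom 'owning' a patch of the given area carries q = eps0 * E * area."""
    return EPS0 * E * area

def relax_surface_atom(E, area, k=1.0, m=1.0, dt=1e-2, steps=50000,
                       damping=0.999):
    """Damped velocity-Verlet relaxation of one surface atom tethered by
    a harmonic 'bond' of stiffness k, with the field force q*E added at
    every step. Toy, non-physical unit choices (unit mass and stiffness)."""
    q = induced_charge(E, area)
    x, v = 0.0, 0.0
    f = q * E - k * x
    for _ in range(steps):
        v = damping * (v + 0.5 * dt * f / m)
        x += dt * v
        f = q * E - k * x
        v = damping * (v + 0.5 * dt * f / m)
    return x  # settles at the balance point x = q*E/k

# Outward displacement of the surface atom under a 1 GV/m applied field,
# assuming a hypothetical per-atom patch of 1e-19 m^2
x_relaxed = relax_surface_atom(E=1e9, area=1e-19)
```

    In the full model every surface atom receives its own partial charge each step, so the field reshapes the surface dynamically rather than displacing a single tethered atom.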

  17. Identification of related multilingual documents using ant clustering algorithms

    Directory of Open Access Journals (Sweden)

    Ángel Cobo

    2011-12-01

    Full Text Available This paper presents a document representation strategy and a bio-inspired algorithm to cluster multilingual collections of documents in the field of economics and business. The proposed approach allows the user to identify groups of related economics documents written in Spanish and English using techniques inspired by the clustering and sorting behaviours observed in some types of ants. In order to obtain a language-independent vector representation of each document, two multilingual resources are used: an economic glossary and a thesaurus. Each document is represented using four feature vectors: words, proper names, economic terms in the glossary and thesaurus descriptors. The proper name identification, word extraction and lemmatization are performed using specific tools. The tf-idf scheme is used to measure the importance of each feature in the document, and a convex linear combination of angular separations between feature vectors is used as the similarity measure between documents. The paper shows experimental results of the application of the proposed algorithm to a Spanish-English corpus of research papers in the economics and management areas. The results demonstrate the usefulness and effectiveness of the ant clustering algorithm and the proposed representation scheme.
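
    The similarity measure described above, per-feature cosine similarities combined with convex weights, can be sketched as follows. The feature contents and weights are hypothetical illustrations, not the study's actual vectors:

```python
import math

def cosine(u, v):
    """Cosine (angular) similarity between two sparse term-weight vectors
    represented as {term: tf-idf weight} dictionaries."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_similarity(doc_a, doc_b, weights):
    """Convex linear combination of per-feature cosine similarities,
    one term per feature vector (e.g. words, proper names, glossary
    terms, thesaurus descriptors)."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return sum(w * cosine(a, b)
               for w, (a, b) in zip(weights, zip(doc_a, doc_b)))

# Hypothetical documents with two feature vectors each
# (words and glossary terms), weighted 0.6 / 0.4
doc = [{'inflation': 0.5, 'market': 0.2}, {'gdp': 1.0}]
self_sim = combined_similarity(doc, doc, [0.6, 0.4])
```

    Because glossary terms and thesaurus descriptors are mapped to language-independent identifiers, this combined measure lets Spanish and English documents land in the same clusters.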

  18. Weeping Glass: The Identification of Ionic Species on the Surface of Vessel Glass Using Ion Chromatography

    NARCIS (Netherlands)

    Verhaar, G.; van Bommel, M.R.; Tennent, N.H.; Roemich, H.; Fair, L.

    2016-01-01

    Aqueous films on the surface of unstable vessel glass were analysed. Five cation and eight anion species from eleven glass items in the Rijksmuseum, Amsterdam, the Hamburg Museum and the Corning Museum of Glass have been quantified by ion chromatography. Sodium, potassium, magnesium and calcium

  19. Identification of phagocytosis-associated surface proteins of macrophages by two-dimensional gel electrophoresis.

    Science.gov (United States)

    Howard, F D; Petty, H R; McConnell, H M

    1982-02-01

    Two-dimensional PAGE (P. Z. O'Farrell, H. M. Goodman, and P. H. O'Farrell. 1977. Cell. 12:1133-1142) has been employed to assess the effects of antibody-dependent phagocytosis on the cell surface protein composition of RAW264 macrophages. Unilamellar phospholipid vesicles containing 1% dinitrophenyl-aminocaproyl-phosphatidylethanolamine (DNP-cap-PE) were used as the target particle. Macrophages were exposed to anti-DNP antibody alone, vesicles alone, or vesicles in the presence of antibody for 1 h at 37 degrees C. Cell surface proteins were then labeled by lactoperoxidase-catalyzed radioiodination at 4 degrees C. After detergent solubilization, membrane proteins were analyzed by two-dimensional gel electrophoresis. The resulting pattern of spots was compared to that of standard proteins. We have identified several surface proteins, not apparently associated with the phagocytic process, which are present either in a multichain structure or in several discretely charged forms. After phagocytosis, we have observed the appearance of two proteins of 45 and 50 kdaltons in nonreducing gels. In addition, we have noted the disappearance of a 140-kdalton protein in gels run under reducing conditions. These alterations would not be detected in the conventional one-dimensional gel electrophoresis. This evidence shows that phagocytosis leads to a modification of cell surface protein composition. Our results support the concept of specific enrichment and depletion of membrane components during antibody-dependent phagocytosis.

  20. Identification and Characterization of Ixodes scapularis Antigens That Elicit Tick Immunity Using Yeast Surface Display

    NARCIS (Netherlands)

    Schuijt, T.J.; Narasimhan, S.; Daffre, S.; Deponte, K.; Hovius, J.W.R.; van 't Veer, C.; van der Poll, T.; Bakhtiari, K.; Meijers, J.C.M.; Boder, E.T.; van Dam, A.P.; Fikrig, E.

    2011-01-01

    Repeated exposure of rabbits and other animals to ticks results in acquired resistance or immunity to subsequent tick bites and is partially elicited by antibodies directed against tick antigens. In this study we demonstrate the utility of a yeast surface display approach to identify tick salivary

  1. Curve identification for high friction surface treatment (HFST) installation recommendation : final report.

    Science.gov (United States)

    2016-09-01

    The objectives of this study are to develop and deploy a means for cost-effectively extracting curve information using the widely available GPS and GIS data to support high friction surface treatment (HFST) installation recommendations (i.e., start a...

  2. Nanobubble assisted nanopatterning utilized for ex situ identification of surface nanobubbles

    Czech Academy of Sciences Publication Activity Database

    Tarábková, Hana; Janda, Pavel

    2013-01-01

    Roč. 25, č. 18 (2013), s. 184001 ISSN 0953-8984 R&D Projects: GA ČR(CZ) GAP208/12/2429 Institutional support: RVO:61388955 Keywords : water * latex particles * hydrophobic surfaces Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.223, year: 2013

  3. Identification of Uranyl Surface Complexes on Ferrihydrite: Advanced EXAFS Data Analysis and CD-MUSIC Modeling

    NARCIS (Netherlands)

    Rossberg, A.; Ulrich, K.U.; Weiss, S.; Tsushima, S.; Hiemstra, T.; Scheinost, A.C.

    2009-01-01

    Previous spectroscopic research suggested that uranium(VI) adsorption to iron oxides is dominated by ternary uranyl-carbonato surface complexes across an unexpectedly wide pH range. Formation of such complexes would have a significant impact on the sorption behavior and mobility of uranium in

  4. A new technique for the identification of surface contamination in low temperature bolometric experiments

    International Nuclear Information System (INIS)

    Sangiorgio, S.; Arnaboldi, C.; Brofferio, C.; Bucci, C.; Capelli, S.; Carbone, L.; Clemenza, M.; Cremonesi, O.; Fiorini, E.; Foggetta, L.; Giuliani, A.; Gorla, P.; Nones, C.; Nucciotti, A.; Pavan, M.; Pedretti, M.; Pessina, G.; Pirro, S.; Previtali, E.; Salvioni, C.

    2011-01-01

    In the framework of the bolometric experiment CUORE, a new and promising technique has been developed in order to control the dangerous contamination coming from the surfaces close to the detector. In fact, by means of a composite bolometer, it is possible to partially overcome the loss of spatial resolution of the bolometer itself and to clearly identify events coming from outside.

  5. Can a semi-automated surface matching and principal axis-based algorithm accurately quantify femoral shaft fracture alignment in six degrees of freedom?

    Science.gov (United States)

    Crookshank, Meghan C; Beek, Maarten; Singh, Devin; Schemitsch, Emil H; Whyne, Cari M

    2013-07-01

    Accurate alignment of femoral shaft fractures treated with intramedullary nailing remains a challenge for orthopaedic surgeons. The aim of this study is to develop and validate a cone-beam CT-based, semi-automated algorithm to quantify the malalignment in six degrees of freedom (6DOF) using a surface matching and principal axes-based approach. Complex comminuted diaphyseal fractures were created in nine cadaveric femora and cone-beam CT images were acquired (27 cases total). Scans were cropped and segmented using intensity-based thresholding, producing superior, inferior and comminution volumes. Cylinders were fit to estimate the long axes of the superior and inferior fragments. The angle and distance between the two cylindrical axes were calculated to determine flexion/extension and varus/valgus angulation and medial/lateral and anterior/posterior translations, respectively. Both surfaces were unwrapped about the cylindrical axes. Three methods of matching the unwrapped surface for determination of periaxial rotation were compared based on minimizing the distance between features. The calculated corrections were compared to the input malalignment conditions. All 6DOF were calculated to within current clinical tolerances for all but two cases. This algorithm yielded accurate quantification of malalignment of femoral shaft fractures for fracture gaps up to 60 mm, based on a single CBCT image of the fractured limb. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
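
    The core geometric step, the angle between the two fitted cylinder axes (total angulation) and the perpendicular offset between them (translation), can be sketched as follows. The cylinder fitting and periaxial-rotation matching are omitted, and the example axes are hypothetical:

```python
import numpy as np

def axis_malalignment(p1, d1, p2, d2):
    """Given a point and direction for each fragment's fitted cylinder
    axis, return the angle between the axes (degrees) and the
    perpendicular offset of fragment 2's axis point from fragment 1's
    axis (same units as the points)."""
    d1 = np.asarray(d1, float)
    d2 = np.asarray(d2, float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    # abs() makes the result independent of axis direction sign
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(d1, d2)), 0.0, 1.0)))
    v = np.asarray(p2, float) - np.asarray(p1, float)
    perp = v - np.dot(v, d1) * d1  # component perpendicular to axis 1
    return angle, float(np.linalg.norm(perp))

# Hypothetical fragments: axis 2 tilted 10 degrees and offset 5 mm
angle, translation = axis_malalignment(
    p1=(0.0, 0.0, 0.0), d1=(0.0, 0.0, 1.0),
    p2=(3.0, 4.0, 10.0),
    d2=(0.0, np.sin(np.radians(10.0)), np.cos(np.radians(10.0))))
```

    Decomposing the angle and offset into anatomical planes would then give flexion/extension vs. varus/valgus angulation and medial/lateral vs. anterior/posterior translation.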

  6. Improved Determination of Surface and Atmospheric Temperatures Using Only Shortwave AIRS Channels: The AIRS Version 6 Retrieval Algorithm

    Science.gov (United States)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2010-01-01

    AIRS was launched on EOS Aqua on May 4, 2002 together with AMSU-A and HSB to form a next generation polar orbiting infrared and microwave atmosphere sounding system (Pagano et al 2003). The theoretical approach used to analyze AIRS/AMSU/HSB data in the presence of clouds in the AIRS Science Team Version 3 at-launch algorithm, and that used in the Version 4 post-launch algorithm, have been published previously. Significant theoretical and practical improvements have been made in the analysis of AIRS/AMSU data since the Version 4 algorithm. Most of these have already been incorporated in the AIRS Science Team Version 5 algorithm (Susskind et al 2010), now being used operationally at the Goddard DISC. The AIRS Version 5 retrieval algorithm contains three significant improvements over Version 4. Improved physics in Version 5 allowed for use of AIRS clear column radiances (R(sub i)) in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations were used primarily in the generation of clear column radiances (R(sub i)) for all channels. This new approach allowed for the generation of accurate Quality Controlled values of R(sub i) and T(p) under more stressing cloud conditions. Secondly, Version 5 contained a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contained for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Susskind et al 2010 shows that Version 5 AIRS Only soundings are only slightly degraded from the AIRS/AMSU soundings, even at large fractional cloud

  7. Machine learning algorithms based on signals from a single wearable inertial sensor can detect surface- and age-related differences in walking.

    Science.gov (United States)

    Hu, B; Dixon, P C; Jacobs, J V; Dennerlein, J T; Schiffman, J M

    2018-04-11

    The aim of this study was to investigate whether a machine learning algorithm utilizing triaxial accelerometer, gyroscope, and magnetometer data from an inertial measurement unit (IMU) could detect surface- and age-related differences in walking. Seventeen older (71.5 ± 4.2 years) and eighteen young (27.0 ± 4.7 years) healthy adults walked over flat and uneven brick surfaces wearing an IMU over the L5 vertebra. IMU data were binned into smaller data segments using 4-s sliding windows with 1-s step lengths. Ninety percent of the data were used as training inputs and the remaining ten percent were saved for testing. A deep learning network with long short-term memory units was used for training (fully supervised), prediction, and implementation. Four models were trained using the following inputs: all nine channels from every sensor in the IMU (fully trained model), accelerometer signals alone, gyroscope signals alone, and magnetometer signals alone. The fully trained models for surface and age outperformed all other models (area under the receiver operating characteristic curve, AUC = 0.97 and 0.96, respectively; p ≤ .045). The fully trained models for surface and age had high accuracy (96.3%, 94.7%), precision (96.4%, 95.2%), recall (96.3%, 94.7%), and f1-score (96.3%, 94.6%). These results demonstrate that processing the signals of a single IMU device with machine-learning algorithms enables the detection of surface conditions and age-group status from an individual's walking behavior, which, with further learning, may be utilized to help identify and intervene on fall risk. Copyright © 2018 Elsevier Ltd. All rights reserved.
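
    The windowing step described above (4-s windows advanced in 1-s steps) can be sketched as follows. This is a generic illustration; the sampling rate and signal are hypothetical, and in the study each window would hold all nine IMU channels:

```python
def sliding_windows(signal, fs, window_s=4.0, step_s=1.0):
    """Bin a time series into overlapping segments: windows of
    window_s seconds advanced in step_s-second steps, as used to
    build training inputs for a sequence classifier."""
    win = int(window_s * fs)
    step = int(step_s * fs)
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

# Hypothetical 10 s of single-channel data sampled at 100 Hz
sig = list(range(1000))
windows = sliding_windows(sig, fs=100)
```

    Overlapping windows multiply the number of labeled training examples from a fixed recording, at the cost of correlation between adjacent segments, which is why the train/test split should be done per recording rather than per window.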

  8. Identification of the c(10×6)-CN/Cu(001) surface structure

    KAUST Repository

    Shuttleworth, I.G.

    2014-12-01

    © 2014 Elsevier B.V. All rights reserved. A systematic survey of all possible c(10×6)-CN/Cu(001) structures has been performed using density functional theory (DFT). A group of four preferred structures is presented, with one of the structures identified as optimal. An analysis of the bonding within the optimal structure has shown that a significant localisation of the surface Cu 4s bonds occurs in the saturated system.

  9. Identification of polymer surface adsorbed proteins implicated in pluripotent human embryonic stem cell expansion.

    Science.gov (United States)

    Hammad, Moamen; Rao, Wei; Smith, James G W; Anderson, Daniel G; Langer, Robert; Young, Lorraine E; Barrett, David A; Davies, Martyn C; Denning, Chris; Alexander, Morgan R

    2016-08-16

    Improved biomaterials are required for application in regenerative medicine, biosensing, and as medical devices. The response of cells to the chemistry of polymers cultured in media is generally regarded as being dominated by proteins adsorbed to the surface. Here we use mass spectrometry to identify proteins adsorbed from a complex mouse embryonic fibroblast (MEF) conditioned medium found to support pluripotent human embryonic stem cell (hESC) expansion on a plasma etched tissue culture polystyrene surface. A total of 71 proteins were identified, of which 14 uniquely correlated with the surface on which pluripotent stem cell expansion was achieved. We have developed a microarray combinatorial protein spotting approach to test the potential of these 14 proteins to support expansion of a hESC cell line (HUES-7) and a human induced pluripotent stem cell line (ReBl-PAT) on a novel polymer (N-(4-Hydroxyphenyl) methacrylamide). These proteins were spotted to form a primary array yielding several protein mixture 'hits' that enhanced cell attachment to the polymer. A second array was generated to test the function of a refined set of protein mixtures. We found that a combination of heat shock protein 90 and heat shock protein-1 encourage elevated adherence of pluripotent stem cells at a level comparable to fibronectin pre-treatment.

  10. Identification of Hydraulic Fracture Orientation from Ground Surface Using the Seismic Moment Tensor

    Directory of Open Access Journals (Sweden)

    E.V. Birialtcev

    2017-09-01

    Full Text Available Microseismic monitoring from the ground surface is applied in the development of hard-to-recover reserves, especially in the process of hydraulic fracturing (HF). This paper compares several methods of HF microseismic monitoring from the surface, including diffraction stacking, time reverse modeling, and spectral methods. In (Aki and Richards, 1980) it is shown that signal enhancement from seismic events under correlated noise improves significantly when the maximum likelihood method is applied. The maximum likelihood method makes it possible to exclude the influence of correlated noise and also to estimate the seismic moment tensor from the ground surface. Estimation of the seismic moment tensor allows the type and orientation of the source to be determined. Usually, the following source types are identified: “Explosion Point” (EXP), “Tensile Crack” (TC), “Double-Couple” (DC) and “Compensated Linear Vector Dipole” (CLVD). The orientation of the hydraulic fracture can be estimated even when there is no obvious asymmetry in the spatial distribution of the cloud of events. The features of full-wave location technology are presented. The paper also reviews an example of microseismic monitoring of hydraulic fracturing in which there is no obvious asymmetry of the microseismic activity cloud, but estimation of the seismic moment tensor makes it possible to identify with confidence the dominant direction of the fracture.
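
The source-type labels above can be illustrated with a small sketch (not from the paper): once a moment tensor is rotated to its principal axes, standard ISO/DC/CLVD percentages follow from its eigenvalues. The function name and the percentage convention used below are illustrative assumptions.

```python
# Illustrative sketch: classifying a seismic source from the eigenvalues of its
# moment tensor. For simplicity the tensor is assumed to be already rotated to
# its principal axes, so the eigenvalues are just the diagonal entries.

def decompose(eigenvalues):
    """Return (iso_pct, dc_pct, clvd_pct) for three principal moment values."""
    m_iso = sum(eigenvalues) / 3.0
    dev = sorted((m - m_iso for m in eigenvalues), key=abs)  # deviatoric part
    m_dev_max = dev[-1]
    total = abs(m_iso) + abs(m_dev_max)
    if total == 0.0:
        return 0.0, 0.0, 0.0
    iso = 100.0 * abs(m_iso) / total
    # epsilon measures departure of the deviatoric part from a pure double couple
    eps = -dev[0] / abs(m_dev_max) if m_dev_max != 0.0 else 0.0
    dc = (100.0 - iso) * (1.0 - 2.0 * abs(eps))
    clvd = (100.0 - iso) * 2.0 * abs(eps)
    return iso, dc, clvd

print(decompose([1.0, 0.0, -1.0]))  # pure double couple -> (0.0, 100.0, 0.0)
print(decompose([1.0, 1.0, 1.0]))   # pure explosion -> (100.0, 0.0, 0.0)
```

The two printed cases are the limiting EXP and DC sources named in the abstract; a TC source mixes isotropic and CLVD components.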

  11. Identification of surface proteins of Trichinella spiralis muscle larvae using immunoproteomics.

    Science.gov (United States)

    Liu, R D; Cui, J; Wang, L; Long, S R; Zhang, X; Liu, M Y; Wang, Z Q

    2014-12-01

    Trichinella spiralis surface proteins are directly exposed to the host's immune system, making them the main target antigens which induce the immune responses, and they may play an important role in the larval invasion and development process. The analysis and characterization of T. spiralis surface proteins could provide useful information to elucidate the host-parasite interaction and to identify early diagnostic antigens and vaccine targets. The purpose of this study was to identify the surface proteins of T. spiralis muscle larvae by two-dimensional gel electrophoresis (2-DE), Western blot analysis and mass spectrometry. The 2-DE results showed that a total of approximately 33 protein spots were detected, with molecular weights varying from 10 to 66 kDa and isoelectric points (pI) from 4 to 7. Fourteen protein spots were recognized by sera of mice infected with T. spiralis at 42 dpi or at 18 dpi, and 12 spots were successfully identified by MALDI-TOF/TOF-MS, representing 8 different proteins of T. spiralis. Of the 8 T. spiralis proteins, 5 (partial P49 antigen, deoxyribonuclease II family protein, two serine proteases, and a serine proteinase) had catalytic and hydrolase activity, and might be invasion-related proteins and vaccine targets. The 4 proteins (deoxyribonuclease II family protein, serine protease, 53 kDa ES antigen and hypothetical protein Tsp_08444) recognized by infection sera at 18 dpi might be early diagnostic antigens for trichinellosis.

  12. Accuracy and Reliability of Uterine Contraction Identification Using Abdominal Surface Electrodes

    Directory of Open Access Journals (Sweden)

    Barrie Hayes-Gill

    2012-01-01

    Full Text Available Objective To compare the accuracy and reliability of uterine contraction identification from the maternal abdominal electrohysterogram and the tocodynamometer with an intrauterine pressure transducer. Methods Seventy-four term parturients had uterine contractions monitored simultaneously with electrohysterography, tocodynamometry, and intrauterine pressure measurement. Results Electrohysterography was more reliable than tocodynamometry when compared to the intrauterine method (97.1% versus 60.9% positive percent agreement; P < 0.001). The root mean square error was lower for electrohysterography than tocodynamometry in the first stage (0.88 versus 1.22 contractions/10 minutes; P < 0.001), and equivalent to tocodynamometry in the second. The positive predictive values for tocodynamometry and electrohysterography (84.1% versus 78.7%) were not significantly different, nor were the false positive rates (21.3% versus 15.9%; P = 0.052). The sensitivity of electrohysterography was superior to that of tocodynamometry (86.0% versus 73.6%; P < 0.001). Conclusion The electrohysterographic technique was more reliable than, and similar in accuracy to, tocodynamometry in detecting intrapartum uterine contractions.
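
The "positive percent agreement" reported above can be sketched as event matching within a tolerance window. The function, tolerance value and example times below are hypothetical illustrations, not the study's actual scoring procedure.

```python
# Hedged sketch: match detected contraction times against a reference monitor
# (e.g. intrauterine pressure) within a tolerance window, then report the
# fraction of reference events that were found.

def positive_percent_agreement(reference, detected, tolerance=30.0):
    """Percent of reference events with a detection within `tolerance` seconds."""
    matched = sum(
        1 for r in reference if any(abs(r - d) <= tolerance for d in detected)
    )
    return 100.0 * matched / len(reference) if reference else 0.0

ref = [60.0, 180.0, 300.0, 420.0]   # reference contraction times (s)
det = [65.0, 185.0, 410.0]          # detections; one reference event missed
print(positive_percent_agreement(ref, det))  # 75.0
```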

  13. Identification of novel surface-exposed proteins of Rickettsia rickettsii by affinity purification and proteomics.

    Directory of Open Access Journals (Sweden)

    Wenping Gong

    Full Text Available Rickettsia rickettsii, the causative agent of Rocky Mountain spotted fever, is the most pathogenic member among Rickettsia spp. Surface-exposed proteins (SEPs) of R. rickettsii may play important roles in its pathogenesis or immunity. In this study, R. rickettsii organisms were surface-labeled with sulfo-NHS-SS-biotin and the labeled proteins were affinity-purified with streptavidin. The isolated proteins were separated by two-dimensional electrophoresis, and 10 proteins were identified among 23 protein spots by electrospray ionization tandem mass spectrometry. Five of the 10 proteins (OmpA, OmpB, GroEL, GroES, and a DNA-binding protein) were previously characterized as surface proteins of R. rickettsii. Another 5 proteins (Adr1, Adr2, OmpW, Porin_4, and TolC) were first recognized as SEPs of R. rickettsii herein. The genes encoding the 5 novel SEPs were expressed in Escherichia coli cells, resulting in 5 recombinant SEPs (rSEPs), which were used to immunize mice. After challenge with viable R. rickettsii cells, the rickettsial load in the spleen, liver, or lung of mice immunized with rAdr2, and in the lungs of mice immunized with the other rSEPs excluding rTolC, was significantly lower than in mice that were mock-immunized with PBS. The in vitro neutralization test revealed that sera from mice immunized with rAdr1, rAdr2, or rOmpW reduced R. rickettsii adherence to and invasion of vascular endothelial cells. The immuno-electron microscopic assay clearly showed that the novel SEPs were located in the outer and/or inner membrane of R. rickettsii. Altogether, the 5 novel SEPs identified herein might be involved in the interaction of R. rickettsii with vascular endothelial cells, and all of them except TolC were protective antigens.

  14. Identification of cell surface targets for HIV-1 therapeutics using genetic screens

    International Nuclear Information System (INIS)

    Dunn, Stephen J.; Khan, Imran H.; Chan, Ursula A.; Scearce, Robin L.; Melara, Claudia L.; Paul, Amber M.; Sharma, Vikram; Bih, Fong-Yih; Holzmayer, Tanya A.; Luciw, Paul A.; Abo, Arie

    2004-01-01

    Human immunodeficiency virus (HIV) drugs designed to interfere with obligatory utilization of certain host cell factors by the virus are less likely to encounter development of resistant strains than drugs directed against viral components. Several cellular genes required for productive infection by HIV were identified by the use of genetic suppressor element (GSE) technology as potential targets for anti-HIV drug development. Fragmented cDNA libraries from various pools of human peripheral blood mononuclear cells (PBMC) were expressed in vitro in human immunodeficiency virus type 1 (HIV-1)-susceptible cell lines and subjected to genetic screens to identify GSEs that interfered with viral replication. After three rounds of selection, more than 15 000 GSEs were sequenced, and the cognate genes were identified. The GSEs that inhibited the virus were derived from a diverse set of genes including cell surface receptors, cytokines, signaling proteins, transcription factors, as well as genes with unknown function. Approximately 2.5% of the identified genes were previously shown to play a role in the HIV-1 life cycle; this finding supports the biological relevance of the assay. GSEs were derived from the following 12 cell surface proteins: CXCR4, CCR4, CCR7, CD11C, CD44, CD47, CD68, CD69, CD74, CSF3R, GABBR1, and TNFR2. The requirement of some of these genes for viral infection was also investigated by using RNA interference (RNAi) technology; accordingly, 10 genes were implicated in early events of the viral life cycle, before viral DNA synthesis. Thus, these cell surface proteins represent novel targets for the development of therapeutics against HIV-1 infection and AIDS.

  15. Rapid identification of bacterial resistance to Ciprofloxacin using surface-enhanced Raman spectroscopy

    Science.gov (United States)

    Kastanos, Evdokia; Hadjigeorgiou, Katerina; Pitris, Costas

    2014-02-01

    Due to its effectiveness and broad coverage, Ciprofloxacin is the fifth most prescribed antibiotic in the US. As current methods of infection diagnosis and antibiotic sensitivity testing (i.e. an antibiogram) are very time-consuming, physicians prescribe ciprofloxacin before obtaining antibiogram results. In order to avoid increasing resistance to the antibiotic, a method was developed to provide both a rapid diagnosis and the antibiotic sensitivity. Using Surface Enhanced Raman Spectroscopy, an antibiogram was obtained after exposing the bacteria to Ciprofloxacin for just two hours. Spectral analysis revealed clear separation between sensitive and resistant bacteria and could also offer some insight into the mechanisms of resistance.

  16. Sea Surface Salinity and Wind Retrieval Algorithm Using Combined Passive-Active L-Band Microwave Data

    Science.gov (United States)

    Yueh, Simon H.; Chaubell, Mario J.

    2011-01-01

    Aquarius is a combined passive/active L-band microwave instrument developed to map the salinity field at the surface of the ocean from space. The data will support studies of the coupling between ocean circulation, the global water cycle, and climate. The primary science objective of this mission is to monitor the seasonal and interannual variation of the large-scale features of the surface salinity field in the open ocean with a spatial resolution of 150 kilometers and a retrieval accuracy of 0.2 practical salinity units globally on a monthly basis. The measurement principle is based on the response of the L-band (1.413 gigahertz) sea surface brightness temperatures (T (sub B)) to sea surface salinity. To achieve the required 0.2 practical salinity units accuracy, the impact of sea surface roughness (e.g. wind-generated ripples and waves), along with several other factors, on the observed brightness temperature has to be corrected to better than a few tenths of a degree Kelvin. To this end, Aquarius includes a scatterometer to help correct for this surface roughness effect.

  17. Closed Loop Subspace Identification

    Directory of Open Access Journals (Sweden)

    Geir W. Nilsen

    2005-07-01

    Full Text Available A new three-step closed loop subspace identification algorithm, based on an existing algorithm and the properties of the Kalman filter, is presented. The Kalman filter contains noise-free states, which implies that the states and the innovation are uncorrelated. The idea is that a Kalman filter found by a good subspace identification algorithm will give an output which is sufficiently uncorrelated with the noise on the output of the actual process. Using feedback from the output of the estimated Kalman filter in the closed loop system, a subspace identification algorithm can be used to estimate an unbiased model.

  18. Isolation and Identification of Actinobacteria from Surface-Sterilized Wheat Roots

    Science.gov (United States)

    Coombs, Justin T.; Franco, Christopher M. M.

    2003-01-01

    This is the first report of filamentous actinobacteria isolated from surface-sterilized root tissues of healthy wheat plants (Triticum aestivum L.). Wheat roots from a range of sites across South Australia were used as the source material for the isolation of the endophytic actinobacteria. Roots were surface-sterilized by using ethanol and sodium hypochlorite prior to the isolation of the actinobacteria. Forty-nine of these isolates were identified by using 16S ribosomal DNA (rDNA) sequencing and found to belong to a small group of actinobacterial genera including Streptomyces, Microbispora, Micromonospora, and Nocardiodes spp. Many of the Streptomyces spp. were found to be similar, on the basis of their 16S rDNA gene sequence, to Streptomyces spp. that had been isolated from potato scabs. In particular, several isolates exhibited high 16S rDNA gene sequence homology to Streptomyces caviscabies and S. setonii. None of these isolates, nor the S. caviscabies and S. setonii type strains, were found to carry the nec1 pathogenicity-associated gene or to produce the toxin thaxtomin, indicating that they were nonpathogenic. These isolates were recovered from healthy plants over a range of geographically and temporally isolated sampling events and constitute an important plant-microbe interaction. PMID:12957950

  19. Identification and characterization of Ixodes scapularis antigens that elicit tick immunity using yeast surface display.

    Directory of Open Access Journals (Sweden)

    Tim J Schuijt

    2011-01-01

    Full Text Available Repeated exposure of rabbits and other animals to ticks results in acquired resistance or immunity to subsequent tick bites, which is partially elicited by antibodies directed against tick antigens. In this study we demonstrate the utility of a yeast surface display approach to identify tick salivary antigens that react with tick-immune serum. We constructed an Ixodes scapularis nymphal salivary gland yeast surface display library and screened the library with nymph-immune rabbit sera, identifying five salivary antigens. Four of these proteins, designated P8, P19, P23 and P32, had a predicted signal sequence. We generated recombinant (r) P8, P19 and P23 in a Drosophila expression system for functional and immunization studies. rP8 showed anti-complement activity and rP23 demonstrated anti-coagulant activity. Ixodes scapularis feeding was significantly impaired when nymphs were fed on rabbits immunized with a cocktail of rP8, rP19 and rP23, a hallmark of tick immunity. These studies also suggest that these antigens may serve as potential vaccine candidates to thwart tick feeding.

  20. Identification of tectonic deformations on the south polar surface of the moon

    Science.gov (United States)

    Mukherjee, Saumitra; Singh, Priyadarshini

    2015-07-01

    Recent extensional and contractional tectonic features present globally over the lunar surface have been studied to infer lunar crustal tectonism. The focus of this study is the investigation of indicators of recent crustal tectonics, such as fault lines, thrust fault scarps, and dislocation of debris along the identified fault planes, primarily using data from the miniature synthetic aperture radar (Mini-SAR) aboard the CHANDRAYAAN-1 mission and Narrow Angle Camera (NAC) images. The spatial orientation of these tectonic features helps to elucidate how the interior geological dynamics of any planetary body change with time. The ability of microwave sensors to penetrate the lunar regolith, together with the application of the m-χ decomposition method to Mini-SAR data, has been used to reveal unique features indicative of hidden tectonics. The m-χ decomposition derived radar images expose hidden lineaments and lobate scarps present within shadowed crater floors as well as over the illuminated regions of the lunar surface. The area around and within Cabeus B crater in the South Polar Region contains lobate scarps, hidden lineaments and debris avalanches (associated with the identified lineaments) indicative of relatively recent crustal tectonism.

  1. Mapping the surface of MNKr2 and CopZ - identification of residues critical for metallotransfer

    International Nuclear Information System (INIS)

    Jones, C.E.; Cobine, P.A.; Dameron, C.T.

    2001-01-01

    Full text: Cells utilise a network of proteins that include CPx-type ATPases and metallochaperones to balance intracellular copper concentration. The Menkes ATPase has six N-terminal domains which bind Cu(I) and are critical for ATPase function. The NMR solution structure of the second domain (MNKr2) shows that it adopts an 'open-faced β-sandwich' fold, in which two α-helices lie over a single four-stranded β-sheet. The global fold is identical to that of the bacterial copper chaperone CopZ. MNKr2 is unable to substitute for CopZ in copper transfer to the cop operon repressor, CopY. To investigate how structure affects function, we have analysed the surface features of MNKr2 and CopZ. Despite having the same global fold, MNKr2 and CopZ have contrasting electrostatic surfaces, which may partially explain the inability of MNKr2 to transfer copper to CopY.

  2. Identification of nonlinear coupling in wave turbulence at the surface of water

    Science.gov (United States)

    Campagne, Antoine; Hassaini, Roumaissa; Redor, Ivan; Aubourg, Quentin; Sommeria, Joël; Mordant, Nicolas

    2017-11-01

    The Weak Turbulence Theory derives analytically, in the limit of vanishing nonlinearity, the statistical features of wave turbulence. The stationary spectrum of the surface elevation in the case of gravity waves is predicted to be E(k) ∝ k^(-5/2). This spectral exponent -5/2 remains elusive in all experiments, in which the measured exponent is systematically lower than the prediction. Furthermore, in the experiments, the weaker the nonlinearity, the further the spectral exponent is from the prediction. In order to investigate the reason for this observation, we developed an experiment in the CORIOLIS facility in Grenoble. It is a 13 m diameter circular pool filled with water to a depth of 70 cm. We generate wave turbulence by using two wedge wavemakers. Surface elevation measurements are performed by a stereoscopic optical technique and by capacitive probes. The nonlinear couplings at work in this system are analyzed by computing 3- and 4-wave correlations of the Fourier wave amplitudes in frequency. Theory predicts that coupling should occur through 4-wave resonant interactions. In our data, strong 3-wave correlations are observed in addition to the 4-wave correlations. Most of our observations are consistent with field observations in the Black Sea (Leckler et al. 2015). This project has received funding from the European Research Council (ERC, Grant Agreement No 647018-WATU).

  3. Asteroid surface archaeology: Identification of eroded impact structures by spectral properties on (4) Vesta

    Science.gov (United States)

    Hoffmann, M.; Nathues, A.; Schäfer, M.; Schmedemann, N.; Vincent, J.; Russell, C.

    2014-07-01

    Introduction: Vesta's surface material is characterized as a deep regolith [1,2], mobilized by countless impacts. The almost catastrophic impact near Vesta's south pole, which created the Rheasilvia basin, and the partly overlapping older impact of similar size, Veneneia, have not only reshaped the areas of their interior (roughly 50% of the Vesta surface), but also each emplaced a huge ejecta blanket of similar size, thus covering the whole remaining surface. In this context, pristine and even younger morphologic features have been erased. However, the spectral signatures of the early differentiation and alteration products of impacts have partially remained in situ. While near the north pole several large old eroded impact features are visible, the equatorial zone close to the basin rims seems to be void of them. Since it is unlikely that this zone has been entirely avoided by large projectiles, in this area such impacts may have left remnants that are not morphologically detectable: individual distributions of particle sizes and altered photometric properties, excavated layers, shock metamorphism, melt generation inside particles and on macroscopic scales, and emplacement of exogenous projectile material. An analysis by color ratio images and spatial profiles of diagnostic spectral parameters reveals such features. Results: Based on local spectroscopic evidence we have detected eroded impact features of three categories: 1) Small craters with diameters of a few kilometers, 2) Large craters or, if even larger, incipient impact basins, 3) Sub-global ejecta blankets. The eastern part of Feralia Planitia, diameter 140 km, has little evidence of a round outline in the shape model, but it features spectral gradients towards its center. A feature of similar size, centered north of Lucaria Tholus, becomes visible only by a similar spectral gradient and a circular outline in specific spectral ratio mosaics. These features seem to be related to the

  4. Multi-objective parametric optimization of Inertance type pulse tube refrigerator using response surface methodology and non-dominated sorting genetic algorithm

    Science.gov (United States)

    Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.

    2014-07-01

    The modeling and optimization of a Pulse Tube Refrigerator is a complicated task, due to the complexity of its geometry and nature. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator for an Inertance-Type Pulse Tube Refrigerator (ITPTR) by using Response Surface Methodology (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix, with four factors and two levels. The diameter and length of the pulse tube and regenerator are chosen as the design variables, while the rest of the dimensions and operating conditions of the ITPTR are held constant. The required output responses are the cold head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR. The CFD results agreed well with those of a previously published paper. Also, using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method has been used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.
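
The non-dominated sorting step that gives NSGA-II its name can be sketched as follows (illustrative only; a full NSGA-II also needs crowding distance, tournament selection, crossover and mutation). Both objectives, here standing in for cold head temperature and compressor input power, are minimized; the example design points are invented.

```python
# Minimal sketch of Pareto dominance and first-front extraction, the core of
# NSGA-II's ranking stage.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the first Pareto front of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (Tcold [K], Wcomp [W]) pairs for four candidate designs
designs = [(120.0, 55.0), (110.0, 60.0), (130.0, 50.0), (125.0, 58.0)]
print(non_dominated_front(designs))
# (125.0, 58.0) is dominated by (120.0, 55.0); the other three trade off
```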

  5. The hypothesis of formation of the structure of surfaced metal at the surfacing based on the application of the prognostic algorithm of control the electrode wire speed

    Directory of Open Access Journals (Sweden)

    Lebedev V. A.

    2017-12-01

    Full Text Available The growth of a drop in the process of surfacing by a consumable electrode is characterized by a linear dependence of the current change on time. A hypothesis has been put forward according to which reducing the feed rate of the electrode wire to zero in this time interval will substantially reduce spraying losses and improve the formation of the surfaced bead. To implement this, the use of regulators with a typical control law is proposed, driven not by the current value of the arc current but by its forecast. A key feature of this research is the realization of the surfacing process with the imposition of external mechanical oscillations, with specified amplitude-frequency characteristics, on the weld pool. An analytical calculation of the transfer function for the prognostic PID regulator, with the simplest linear prediction taking into account the oscillation of the weld pool, is given.
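
The control idea above, regulating on a forecast of the process variable rather than its current value, can be sketched with a simple PI loop and a linear predictor. The plant model, gains and prediction horizon below are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: a PI controller that acts on a short-horizon linear forecast
# of the process variable ("prognostic" control with the simplest linear
# prediction), driving a first-order plant dx/dt = -x + u.

def simulate(setpoint=1.0, kp=2.0, ki=1.0, horizon=0.05, dt=0.01, steps=2000):
    x, prev_x, integral = 0.0, 0.0, 0.0
    for _ in range(steps):
        slope = (x - prev_x) / dt      # finite-difference trend of the output
        x_pred = x + slope * horizon   # simplest linear prediction
        error = setpoint - x_pred      # control the forecast, not the present
        integral += error * dt
        u = kp * error + ki * integral
        prev_x = x
        x += dt * (-x + u)             # Euler step of the first-order plant
    return x

print(simulate())  # settles near the setpoint thanks to the integral term
```

Acting on the forecast adds derivative-like damping, which is the mechanism the hypothesis relies on to react before the drop detaches.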

  6. A comparative analysis of DBSCAN, K-means, and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals.

    Science.gov (United States)

    Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin

    2015-04-01

    Cervical auscultation with high-resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise (DBSCAN) algorithm, a k-means-based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively, and the results were compared to a gold standard measure of swallowing duration. Data was collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
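
For well-separated bursts on a 1-D time axis, the DBSCAN idea reduces to grouping threshold-crossing sample times by their gaps. The toy sketch below is not the authors' implementation: it is a simplification that behaves like DBSCAN (with the usual eps and min_samples parameters) when bursts are far apart, and the event times are invented.

```python
# Toy sketch: samples whose vibration energy exceeds a threshold yield a list
# of time stamps; dense groups of stamps are treated as swallows, sparse
# stamps as noise.

def dbscan_1d(times, eps, min_samples):
    """Group sorted 1-D points whose gaps are <= eps; drop small groups as noise."""
    times = sorted(times)
    clusters, current = [], [times[0]]
    for t in times[1:]:
        if t - current[-1] <= eps:
            current.append(t)
        else:
            if len(current) >= min_samples:
                clusters.append(current)
            current = [t]
    if len(current) >= min_samples:
        clusters.append(current)
    return clusters

# high-energy sample times (s): two bursts and one isolated noise spike
events = [1.0, 1.1, 1.2, 1.3, 4.7, 9.0, 9.1, 9.2]
print(dbscan_1d(events, eps=0.5, min_samples=3))
# [[1.0, 1.1, 1.2, 1.3], [9.0, 9.1, 9.2]]
```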

  7. Spatial Variation, Pollution Assessment and Source Identification of Major Nutrients in Surface Sediments of Nansi Lake, China

    Directory of Open Access Journals (Sweden)

    Longfeng Wang

    2017-06-01

    Full Text Available Nansi Lake has been seriously affected by intensive anthropogenic activities in recent years. In this study, an extensive survey of the spatial variation, pollution assessment and possible sources of major nutrients (total phosphorus: TP, total nitrogen: TN, and total organic carbon: TOC) in the surface sediments of Nansi Lake was conducted. Results showed that the mean contents of TP, TN and TOC were 1.13-, 5.40- and 2.50-fold higher than their background values, respectively. Except in the Zhaoyang sub-lake, most of the TN and TOC contents in the surface sediments of Nansi Lake were at least four times and at least twice their background values, respectively, and the spatial distributions of TN and TOC contents were remarkably similar over a large area. Nearly all the TP contents in the surface sediments of Nansi Lake were higher than its background value, except in most of the Zhaoyang sub-lake. Based on the enrichment factor (EF) and the organic pollution evaluation index (Org-index), TP, TOC and TN showed minor enrichment (1.13), minor enrichment (2.50) and moderately severe enrichment (5.40), respectively. Most of the Dushan sub-lake and the vicinity of Weishan island were in moderate or heavy sediment organic pollution, while the other parts were clean. Moreover, according to the results of multivariate statistical analysis, we deduced that anthropogenic TN and TOC mainly came from industrial sources, including enterprises distributed in Jining, Yanzhou and Zoucheng, along with the iron and steel industries in the southern part of the Weishan sub-lake, whereas TP mainly originated from runoff and soil erosion from agricultural lands in Heze city and Weishan island, local aquacultural activities, and the domestic sewage discharge of Jining city.
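
The enrichment-factor bookkeeping above can be sketched as follows. The class thresholds are a common convention assumed here for illustration (they reproduce the labels in the abstract, but are not quoted from the paper).

```python
# Sketch: the enrichment factor (EF) is the ratio of the measured content to
# its background value; assumed class boundaries follow a common convention.

def enrichment_factor(measured, background):
    return measured / background

def ef_class(ef):
    if ef < 1.0:
        return "no enrichment"
    if ef < 3.0:
        return "minor enrichment"
    if ef < 5.0:
        return "moderate enrichment"
    if ef < 10.0:
        return "moderately severe enrichment"
    return "severe enrichment"

# mean contents relative to background reported above: TP 1.13, TOC 2.50, TN 5.40
for name, ef in [("TP", 1.13), ("TOC", 2.50), ("TN", 5.40)]:
    print(name, ef, ef_class(ef))
```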

  8. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.

  9. Generalized Split-Window Algorithm for Estimate of Land Surface Temperature from Chinese Geostationary FengYun Meteorological Satellite (FY-2C Data

    Directory of Open Access Journals (Sweden)

    Jun Xia

    2008-02-01

    Full Text Available On the basis of radiative transfer theory, this paper addresses the estimation of Land Surface Temperature (LST) from data in two thermal infrared channels (IR1, 10.3-11.3 μm and IR2, 11.5-12.5 μm) of the first Chinese operational geostationary meteorological satellite, FengYun-2C (FY-2C), using the Generalized Split-Window (GSW) algorithm proposed by Wan and Dozier (1996). The coefficients in the GSW algorithm corresponding to a series of overlapping ranges of the mean emissivity, the atmospheric Water Vapor Content (WVC), and the LST were derived using a statistical regression method from numerical values simulated with the accurate atmospheric radiative transfer model MODTRAN 4 over a wide range of atmospheric and surface conditions. The simulation analysis showed that the LST can be estimated by the GSW algorithm with a Root Mean Square Error (RMSE) of less than 1 K for sub-ranges with a Viewing Zenith Angle (VZA) of less than 30°, or for sub-ranges with VZA less than 60° and atmospheric WVC less than 3.5 g/cm², provided that the Land Surface Emissivities (LSEs) are known. In order to determine the range for the optimum coefficients of the GSW algorithm, the LSEs could be derived from the data in MODIS channels 31 and 32 provided by the MODIS/Terra LST product MOD11B1, or be estimated either according to the land surface classification or using the method proposed by Jiang et al. (2006); and the WVC could be obtained from the MODIS total precipitable water product MOD05, or be retrieved using the method of Li et al. (2003). Sensitivity and error analyses in terms of the uncertainty of the LSE and WVC, as well as the instrumental noise, were performed. In addition, in order to compare different formulations of the split-window algorithm, several recently proposed split-window algorithms were used to estimate the LST with the same simulated FY-2C data. The result of the intercomparison showed that most of the algorithms give
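
The generalized split-window form described above can be sketched as follows. The coefficient values below are placeholders for illustration only: in practice each (A1..A3, B1..B3, C) set is regressed from radiative-transfer simulations for one sub-range of emissivity, WVC and viewing angle.

```python
# Hedged sketch of the generalized split-window (GSW) form: LST from two
# thermal-channel brightness temperatures (K) and the channel emissivities.

def gsw_lst(t1, t2, eps1, eps2, coeffs):
    a1, a2, a3, b1, b2, b3, c = coeffs
    eps = 0.5 * (eps1 + eps2)        # mean emissivity of the two channels
    deps = eps1 - eps2               # emissivity difference
    term_mean = a1 + a2 * (1.0 - eps) / eps + a3 * deps / eps**2
    term_diff = b1 + b2 * (1.0 - eps) / eps + b3 * deps / eps**2
    return c + term_mean * 0.5 * (t1 + t2) + term_diff * 0.5 * (t1 - t2)

# placeholder coefficients, illustrative only (not a regressed sub-range set)
placeholder = (1.02, 0.5, -0.3, 4.0, 10.0, -25.0, -5.0)
print(gsw_lst(295.0, 293.5, 0.97, 0.96, placeholder))
```

The mean-temperature term carries the bulk of the signal, while the difference term corrects for atmospheric absorption between the two channels.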

  10. Research on identification and determination of mixed pesticides in apples using surface enhanced Raman spectroscopy

    Science.gov (United States)

    Zhai, Chen; Li, Yongyu; Peng, Yankun; Xu, Tianfeng; Dhakal, Sagar; Chao, Kuanglin; Qin, Jianwei

    2015-05-01

    Residual pesticides in fruits and vegetables have become one of the major food safety concerns around the world. At present, routine analytical methods used for the determination of pesticide residues on the surface of fruits and vegetables are destructive, complex, time-consuming, costly, and not environmentally friendly. In this study, a novel Surface Enhanced Raman Spectroscopy (SERS) method with silver colloid was developed for fast, sensitive, and nondestructive detection of residual pesticides in fruits and vegetables using a self-developed Raman system. SERS technology is a combination of Raman spectroscopy and nanotechnology. SERS can greatly enhance Raman signal intensity, can achieve single-molecule detection, requires only simple sample pre-treatment, offers high sensitivity, and is non-destructive; in recent years it has begun to be used in food safety testing research. In this study a rapid and sensitive method was developed to identify and analyze mixed pesticides of chlorpyrifos, deltamethrin, and acetamiprid in apple samples by SERS. Silver colloid prepared by hydroxylamine hydrochloride reduction was used for the SERS measurements. The advantages of this method are its fast preparation at room temperature, good reproducibility, and immediate applicability. Raman spectra are strongly interfered with by noise and fluorescence background, which makes it difficult to obtain good results. In this study the noise and fluorescence background were removed by a Savitzky-Golay filter and a min-max signal adaptive zooming method. Under optimal conditions, pesticide residues in apple samples could be detected by SERS at 0.005 μg/cm2 and 0.002 μg/cm2 for individual acetamiprid and thiram, respectively. When the two pesticides were mixed at low concentrations, their characteristic peaks could still be identified in the SERS spectrum of the mixture. Based on the synthesized material and its application in SERS, the method achieves ultrasensitive performance.
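
    The spectral preprocessing described above, Savitzky-Golay smoothing followed by min-max rescaling, can be sketched as follows; the filter settings and the synthetic spectrum are illustrative assumptions, and a plain min-max normalization stands in here for the paper's adaptive zooming method:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def preprocess_raman(intensities, window=11, polyorder=3):
        """Denoise a Raman spectrum with a Savitzky-Golay filter,
        then rescale the result to [0, 1] by min-max normalization."""
        smoothed = savgol_filter(intensities, window_length=window, polyorder=polyorder)
        lo, hi = smoothed.min(), smoothed.max()
        return (smoothed - lo) / (hi - lo)

    # Synthetic noisy spectrum: one Gaussian "peak" at 1100 cm^-1 plus noise
    rng = np.random.default_rng(0)
    shift = np.linspace(400, 1800, 500)
    spectrum = np.exp(-((shift - 1100) / 30) ** 2) + 0.05 * rng.standard_normal(500)
    clean = preprocess_raman(spectrum)
    ```

    After preprocessing, the characteristic peak position survives while the noise floor is flattened, which is the property the mixed-pesticide identification relies on.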

  11. Identification of hotspots and trends of fecal surface water pollution in developing countries

    Science.gov (United States)

    Reder, Klara; Flörke, Martina; Alcamo, Joseph

    2015-04-01

    Water is the essential resource ensuring human life on earth, which can only prosper when water is available and accessible. But what matters is not only the quantity of accessible water but also its quality, which in case of pollution may pose a risk to human health. The pollutants posing a risk to human health are manifold, covering several groups such as pathogens, nutrients, human pharmaceuticals, heavy metals, and others. With regard to human health, pathogen contamination is of major interest, as 4% of all deaths and 5.7% of disability or ill health in the world can be attributed to poor water supply, sanitation, and personal and domestic hygiene. In developing countries, 2.6 billion people lacked access to improved sanitation in 2011. The lack of sanitation poses a risk of surface water pollution, which is a threat to human health. A typical indicator for pathogen pollution is fecal coliform bacteria. The objective of our study is to assess fecal pollution in the developing regions Africa, Asia and Latin America using the large-scale water quality model WorldQual. Model runs were carried out to calculate in-stream concentrations and the respective loadings reaching rivers for the time period 1990 to 2010. We identified hotspots of fecal coliform loadings and in-stream concentrations, which were further analyzed and ranked in terms of fecal surface water pollution. The main findings are that loadings originate mainly from the domestic sector; thus loadings are high in densely populated areas. In general, domestic loadings can be attributed to two subsectors, domestic sewered and domestic non-sewered, whose spatial distribution varies across catchments. Hotspot patterns of in-stream concentrations are similar to the loadings patterns, although they differ in seasonality. As dilution varies with climate, dilution capacity is high during seasons with high precipitation, which in turn decreases in-stream concentrations. The fecal
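
    The dilution effect described above reduces to a load-over-discharge calculation; the function name and the loading/discharge values below are hypothetical, and WorldQual's actual routing and decay terms are far more detailed:

    ```python
    def in_stream_concentration(load_per_day, discharge_m3_per_day):
        """Toy in-stream concentration: loading divided by river discharge.
        Higher discharge (wet season) means more dilution, hence a lower
        concentration for the same fecal coliform loading."""
        return load_per_day / discharge_m3_per_day

    # Same loading, two seasons: low flow (dry) versus high flow (wet)
    dry = in_stream_concentration(1e12, 1e6)   # counts per m^3, low dilution
    wet = in_stream_concentration(1e12, 5e6)   # counts per m^3, high dilution
    ```

    This is why hotspot patterns of concentrations track the loadings spatially but differ in seasonality.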

  12. The evaluation of an identification algorithm for Mycobacterium species using the 16S rRNA coding gene and rpoB

    Directory of Open Access Journals (Sweden)

    Yuko Kazumi

    2012-01-01

    Conclusions: The 16S rRNA gene identification is a rapid and prevalent method but still has some limitations. Therefore, the stepwise combination of rpoB with 16S rRNA gene analysis is an effective system for the identification of Mycobacterium species.

  13. A wetting and drying algorithm with a combined pressure/free-surface formulation for non-hydrostatic models

    Science.gov (United States)

    Funke, S. W.; Pain, C. C.; Kramer, S. C.; Piggott, M. D.

    2011-11-01

    A wetting and drying method for free-surface problems for the three-dimensional, non-hydrostatic Navier-Stokes equations is proposed. The key idea is to use a horizontally fixed mesh and to apply different boundary conditions on the free-surface in wet and dry zones. In wet areas a combined pressure/free-surface kinematic boundary condition is applied, while in dry areas a positive water level and a no-normal flow boundary condition are enforced. In addition, vertical mesh movement is performed to accurately represent the free-surface motion. Non-physical flow in the remaining thin layer in dry areas is naturally prevented if a Manning-Strickler bottom drag is used. The treatment of the wetting and drying processes applied through the boundary condition yields great flexibility to the discretisation used. Specifically, a fully unstructured mesh with any finite element choice and implicit time discretisation method can be applied. The resulting method is mass conservative, stable and accurate. It is implemented within Fluidity-ICOM [1] and verified against several idealized test cases and a laboratory experiment of the Okushiri tsunami.
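
    A minimal sketch of the wet/dry classification that decides which boundary condition applies at each free-surface node; the depth threshold and the node arrays are hypothetical, and the actual Fluidity-ICOM implementation is considerably more involved:

    ```python
    import numpy as np

    def classify_wet_dry(surface_elev, bed_elev, d_min=1e-3):
        """Mark nodes wet where the water depth exceeds a small threshold.
        Wet nodes would receive the combined pressure/free-surface kinematic
        condition; dry nodes a positive water level and no-normal-flow."""
        depth = surface_elev - bed_elev
        return depth > d_min

    eta = np.array([1.0, 0.5, 0.2, 0.0])   # free-surface elevation per node
    bed = np.array([0.0, 0.3, 0.2, 0.5])   # bathymetry per node
    wet = classify_wet_dry(eta, bed)
    ```

    Because the switch lives entirely in the boundary condition, the scheme is independent of the finite-element choice, which is the flexibility the abstract emphasises.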

  14. A fast, magnetics-free flux surface estimation and q-profile reconstruction algorithm for feedback control of plasma profiles

    NARCIS (Netherlands)

    Hommen, G.; de M. Baar,; Citrin, J.; de Blank, H. J.; Voorhoeve, R. J.; de Bock, M. F. M.; Steinbuch, M.

    2013-01-01

    The flux surfaces' layout and the magnetic winding number q are important quantities for the performance and stability of tokamak plasmas. Normally, these quantities are iteratively derived by solving the plasma equilibrium for the poloidal and toroidal flux. In this work, a fast, non-iterative

  15. Identification of asymptomatic type 2 diabetes mellitus patients with a low, intermediate and high risk of ischaemic heart disease: is there an algorithm?

    DEFF Research Database (Denmark)

    Poulsen, Mikael Kjær; Henriksen, Jan Erik; Vach, W

    2010-01-01

    index >32 ml/m2, left ventricular ejection fraction algorithm identified low (n=96), intermediate (n=65) and high risk groups (n=115), in which the prevalence of myocardial ischaemia was 15%, 23% and 43%, respectively. Overall the algorithm reduced...... the number of patients referred to MPS from 305 to 144. However, the sensitivity and specificity of the algorithm were just 68% and 62%, respectively. CONCLUSIONS/INTERPRETATION: Our algorithm was able to stratify which patients had a low, intermediate or high risk of myocardial ischaemia based on MPS. However......, the algorithm had low sensitivity and specificity, combined with high cost and time requirements. Trial registration: clinicaltrials.gov NCT00298844 Funding: The study was funded by the Danish Cardiovascular Research Academy (DaCRA), The Danish Diabetes Association and The Danish Heart Foundation....
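
    The reported sensitivity (68%) and specificity (62%) follow from the standard confusion-matrix definitions; the counts below are hypothetical, chosen only so this toy example reproduces those two figures:

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN): fraction of true ischaemia cases flagged.
        Specificity = TN/(TN+FP): fraction of non-ischaemic patients cleared."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts for a risk-stratification algorithm
    sens, spec = sensitivity_specificity(tp=34, fn=16, tn=155, fp=95)
    ```

    A screening algorithm with these values cuts referrals substantially but misses roughly a third of true positives, which is the trade-off the authors flag.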

  16. Identification and characterization of the murine cell surface receptor for the urokinase-type plasminogen activator

    DEFF Research Database (Denmark)

    Solberg, H; Løber, D; Eriksen, J

    1992-01-01

    Cell-binding experiments have indicated that murine cells on their surface have specific binding sites for mouse urokinase-type plasminogen activator (u-PA). In contrast to the human system, chemical cross-linking studies with an iodinated ligand did not yield any covalent adducts in the murine...... system, but in ligand-blotting analysis, two mouse u-PA-binding proteins could be visualized. To confirm that these proteins are the murine counterpart of the human u-PA receptor (u-PAR), a peptide was derived from the murine cDNA clone assigned to represent the murine u-PAR due to cross......-blotting analysis. Binding of mouse u-PA to its receptor showed species specificity in ligand-blotting analysis, since mouse u-PA did not bind to human u-PAR and human u-PA did not bind to mouse u-PAR. The apparent M(r) of mouse u-PAR varied between different mouse cell lines and ranged over M(r) 45...

  17. Anticancer drugs in Portuguese surface waters - Estimation of concentrations and identification of potentially priority drugs.

    Science.gov (United States)

    Santos, Mónica S F; Franquet-Griell, Helena; Lacorte, Silvia; Madeira, Luis M; Alves, Arminda

    2017-10-01

    Anticancer drugs, used in chemotherapy, have emerged as new water contaminants due to their increasing consumption trends and poor elimination efficiency in conventional water treatment processes. As a result, anticancer drugs have been reported in surface and even drinking waters, putting the environment and human health at risk. However, the occurrence and distribution of anticancer drugs depend on the area studied and the hydrological dynamics, which determine the risk towards the environment. The main objective of the present study was to evaluate the risk of anticancer drugs in Portugal. This work includes an extensive analysis of the consumption trends of 171 anticancer drugs, sold or dispensed in Portugal between 2007 and 2015. The consumption data were processed to estimate predicted environmental loads of anticancer drugs, and 11 compounds were identified as potentially priority drugs based on an exposure-based approach (PEC_b > 10 ng/L and/or PEC_c > 1 ng/L). In a national perspective, mycophenolic acid and mycophenolate mofetil are suspected to pose a high risk to aquatic biota. Moderate and low risk was also associated with cyclophosphamide and bicalutamide exposure, respectively. Although no evidence of risk exists yet for the other anticancer drugs, concerns may be associated with long-term effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
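
    The exposure-based screen described above reduces to a threshold test on predicted environmental concentrations. The pairing of drug names with PEC values below is an illustrative assumption (only the threshold logic comes from the abstract):

    ```python
    def flag_priority(drugs, pec_b_limit=10.0, pec_c_limit=1.0):
        """Exposure-based screen: flag a drug as potentially priority if
        either predicted environmental concentration (ng/L) exceeds its limit."""
        return [name for name, pec_b, pec_c in drugs
                if pec_b > pec_b_limit or pec_c > pec_c_limit]

    # Hypothetical (name, PEC_b, PEC_c) values in ng/L
    drugs = [("mycophenolic acid", 120.0, 8.5),
             ("cyclophosphamide", 12.0, 0.4),
             ("drug_x", 2.0, 0.2)]
    priority = flag_priority(drugs)
    ```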

  18. History Matching and Parameter Estimation of Surface Deformation Data for a CO2 Sequestration Field Project Using Ensemble-Based Algorithms

    Science.gov (United States)

    Tavakoli, Reza; Srinivasan, Sanjay; Wheeler, Mary

    2015-04-01

    The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations, and CO2 injection rate histories provided by the operators. We implement variations of ensemble Kalman filter and ensemble smoother algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimates of horizontal permeability and material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies in this field. Moreover, the existence of high-permeability channels (fractures) within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to accurately and efficiently predict and quantify the uncertainty in the movement of the CO2 plume.

  19. History matching and parameter estimation of surface deformation data for a CO2 sequestration field project using ensemble-based algorithm

    Science.gov (United States)

    Ping, J.; Tavakoli, R.; Min, B.; Srinivasan, S.; Wheeler, M. F.

    2015-12-01

    Optimal management of subsurface processes requires the characterization of uncertainty in reservoir description and reservoir performance prediction. The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations, and CO2 injection rate histories provided by the operators. We implement ensemble-based algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimates of horizontal permeability and material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies in this field. Moreover, the existence of high-permeability channels/fractures within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to accurately and efficiently predict and monitor the movement of the CO2 plume.
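
    The ensemble-based assimilation used in records 18 and 19 can be sketched with a stochastic ensemble Kalman filter analysis step. The one-parameter observation operator and noise levels below are illustrative assumptions, standing in for the coupled flow-geomechanics forward model of the study:

    ```python
    import numpy as np

    def enkf_update(ensemble, observations, obs_operator, obs_err_var):
        """Stochastic EnKF analysis step.

        ensemble:      (n_params, n_members) prior realizations
        observations:  (n_obs,) measured data, e.g. surface uplift
        obs_operator:  maps one parameter vector to predicted observations
        obs_err_var:   observation error variance (scalar)
        """
        n_par, n_mem = ensemble.shape
        predicted = np.column_stack([obs_operator(ensemble[:, j]) for j in range(n_mem)])
        X = ensemble - ensemble.mean(axis=1, keepdims=True)    # parameter anomalies
        Y = predicted - predicted.mean(axis=1, keepdims=True)  # prediction anomalies
        c_xy = X @ Y.T / (n_mem - 1)                           # cross-covariance
        c_yy = Y @ Y.T / (n_mem - 1) + obs_err_var * np.eye(len(observations))
        gain = c_xy @ np.linalg.inv(c_yy)                      # Kalman gain
        rng = np.random.default_rng(42)
        perturbed = observations[:, None] + np.sqrt(obs_err_var) * \
            rng.standard_normal((len(observations), n_mem))    # perturbed observations
        return ensemble + gain @ (perturbed - predicted)

    # Toy usage: one unknown parameter observed through y = 2x (truth: x = 8)
    prior = 5.0 + np.random.default_rng(1).standard_normal((1, 200))
    posterior = enkf_update(prior, np.array([16.0]),
                            lambda x: np.array([2.0 * x[0]]), obs_err_var=0.1)
    ```

    In the study the same update pulls permeability and elastic-property ensembles toward agreement with the InSAR uplift and injection-rate data.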

  20. Rapid label-free identification of mixed bacterial infections by surface plasmon resonance

    Directory of Open Access Journals (Sweden)

    Fu Weiling

    2011-06-01

    Abstract Background Early detection of mixed aerobic-anaerobic infection has been a challenge in clinical practice due to the phenotypic changes in complex environments. Surface plasmon resonance (SPR biosensor is widely used to detect DNA-DNA interaction and offers a sensitive and label-free approach in DNA research. Methods In this study, we developed a single-stranded DNA (ssDNA amplification technique and modified the traditional SPR detection system for rapid and simultaneous detection of mixed infections of four pathogenic microorganisms (Pseudomonas aeruginosa, Staphylococcus aureus, Clostridium tetani and Clostridium perfringens. Results We constructed the circulation detection well to increase the sensitivity and the tandem probe arrays to reduce the non-specific hybridization. The use of 16S rDNA universal primers ensured the amplification of four target nucleic acid sequences simultaneously, and further electrophoresis and sequencing confirmed the high efficiency of this amplification method. No significant signals were detected during the single-base mismatch or non-specific probe hybridization (P 2 values of >0.99. The lowest detection limits were 0.03 nM for P. aeruginosa, 0.02 nM for S. aureus, 0.01 nM for C. tetani and 0.02 nM for C. perfringens. The SPR biosensor had the same detection rate as the traditional culture method (P Conclusions Our method can rapidly and accurately identify the mixed aerobic-anaerobic infection, providing a reliable alternative to bacterial culture for rapid bacteria detection.

  1. Thallium dispersal and contamination in surface sediments from South China and its source identification.

    Science.gov (United States)

    Liu, Juan; Wang, Jin; Chen, Yongheng; Shen, Chuan-Chou; Jiang, Xiuyang; Xie, Xiaofan; Chen, Diyun; Lippold, Holger; Wang, Chunlin

    2016-06-01

    Thallium (Tl) is a non-essential element in humans and it is considered to be highly toxic. In this study, the contents, sources, and dispersal of Tl were investigated in surface sediments from a riverine system (the western Pearl River Basin, China), whose catchment has been contaminated by mining and roasting of Tl-bearing pyrite ores. The isotopic composition of Pb and total contents of Tl and other relevant metals (Pb, Zn, Cd, Co, and Ni) were measured in the pyrite ores, mining and roasting wastes, and the river sediments. Widespread contamination of Tl was observed in the sediments across the river, with the highest concentration of Tl (17.3 mg/kg) measured 4 km downstream from the pyrite industrial site. Application of a modified Institute for Reference Materials and Measurement (IRMM) sequential extraction scheme in representative sediments unveiled that 60-90% of Tl and Pb were present in the residual fraction of the sediments. The sediments contained generally lower (206)Pb/(207)Pb and higher (208)Pb/(206)Pb ratios compared with the natural Pb isotope signature (1.2008 and 2.0766 for (206)Pb/(207)Pb and (208)Pb/(206)Pb, respectively). These results suggested that a significant fraction of non-indigenous Pb could be attributed to the mining and roasting activities of pyrite ores, with low (206)Pb/(207)Pb (1.1539) and high (208)Pb/(206)Pb (2.1263). Results also showed that approximately 6-88% of Tl contamination in the sediments originated from the pyrite mining and roasting activities. This study highlights that Pb isotopic compositions could be used for quantitatively fingerprinting the sources of Tl contamination in sediments. Copyright © 2016 Elsevier Ltd. All rights reserved.
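
    The quantitative fingerprinting described above can be sketched as a two-endmember mixing calculation with the 206Pb/207Pb signatures quoted in the abstract (1.2008 natural, 1.1539 pyrite source). The sample ratio below is hypothetical, and this simple ratio-only form ignores the concentration weighting a full isotope mixing model would apply:

    ```python
    def anthropogenic_fraction(r_sample, r_natural=1.2008, r_source=1.1539):
        """Two-endmember mixing of 206Pb/207Pb ratios: fraction of Pb in a
        sediment sample attributable to the pyrite mining/roasting source."""
        return (r_natural - r_sample) / (r_natural - r_source)

    # Hypothetical sediment sample between the two endmembers
    f = anthropogenic_fraction(1.1850)
    ```

    Endmember samples recover the limiting fractions exactly, which makes the ratio a usable source tracer when the two signatures are well separated.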

  2. Noninvasive identification of bladder cancer with sub-surface backscattered light

    Energy Technology Data Exchange (ETDEWEB)

    Bigio, I.J.; Mourant, J.R.; Boyer, J.; Johnson, T.; Shimada, T. [Los Alamos National Lab., NM (United States); Conn, R.L. [Lovelace Medical Center, Albuquerque, NM (United States). Dept. of Urology

    1994-02-01

    A non-invasive diagnostic tool that could identify malignancy in situ and in real time would have a major impact on the detection and treatment of cancer. We have developed and are testing early prototypes of an optical biopsy system (OBS) for detection of cancer and other tissue pathologies. The OBS invokes a unique approach to optical diagnosis of tissue pathologies based on the elastic scattering properties, over a wide range of wavelengths, of the microscopic structure of the tissue. Absorption bands in the tissue also add useful complexity to the spectral data collected. The use of elastic scattering as the key to optical tissue diagnostics in the OBS is based on the fact that many tissue pathologies, including a majority of cancer forms, manifest significant architectural changes at the cellular and sub-cellular level. Since the cellular components that cause elastic scattering have dimensions typically on the order of visible to near-IR wavelengths, the elastic (Mie) scattering properties will be strongly wavelength dependent. Thus, morphology and size changes can be expected to cause significant changes in an optical signature that is derived from the wavelength-dependence of elastic scattering as well as absorption. The data acquisition and storage/display time with the OBS instrument is ~1 second. Thus, in addition to the reduced invasiveness of this technique compared with current state-of-the-art methods (surgical biopsy and pathology analysis), the OBS offers the possibility of impressively faster diagnostic assessment. The OBS employs a small fiber-optic probe that is amenable to use with any endoscope, catheter or hypodermic, or to direct surface examination (e.g., as in skin cancer or cervical cancer). We report here specifically on its potential application in the detection of bladder cancer.

  3. Experiences With an Optimal Estimation Algorithm for Surface and Atmospheric Parameter Retrieval From Passive Microwave Data in the Arctic

    DEFF Research Database (Denmark)

    Scarlat, Raul Cristian; Heygster, Georg; Pedersen, Leif Toudal

    2017-01-01

    the brightness temperatures observed by a passive microwave radiometer. The retrieval method inverts the forward model and produces ensembles of the seven parameters, wind speed, integrated water vapor, liquid water path, sea and ice temperature, sea ice concentration and multiyear ice fraction. The method...... compared with the Arctic Systems Reanalysis model data as well as columnar water vapor retrieved from satellite microwave sounders and the Remote Sensing Systems AMSR-E ocean retrieval product in order to determine the feasibility of using the same setup over pure surface with 100% and 0% sea ice cover......, respectively. Sea ice concentration retrieval shows good skill for pure surface cases. Ice types retrieval is in good agreement with scatterometer backscatter data. Deficiencies have been identified in using the forward model over sea ice for retrieving atmospheric parameters, that are connected...

  4. An Algorithm for the Retrieval of 30-m Snow-Free Albedo from Landsat Surface Reflectance and MODIS BRDF

    Science.gov (United States)

    Shuai, Yanmin; Masek, Jeffrey G.; Gao, Feng; Schaaf, Crystal B.

    2011-01-01

    We present a new methodology to generate 30-m resolution land surface albedo using Landsat surface reflectance and anisotropy information from concurrent MODIS 500-m observations. Albedo information at fine spatial resolution is particularly useful for quantifying climate impacts associated with land use change and ecosystem disturbance. The derived white-sky and black-sky spectral albedos may be used to estimate actual spectral albedos by taking into account the proportion of direct and diffuse solar radiation arriving at the ground. A further spectral-to-broadband conversion based on extensive radiative transfer simulations is applied to produce the broadband albedos at visible, near infrared, and shortwave regimes. The accuracy of this approach has been evaluated using 270 Landsat scenes covering six field stations supported by the SURFace RADiation Budget Network (SURFRAD) and Atmospheric Radiation Measurement Southern Great Plains (ARM/SGP) network. Comparison with field measurements shows that Landsat 30-m snow-free shortwave albedos from all seasons generally achieve an absolute accuracy of ±0.02-0.05 for these validation sites during available clear days in 2003-2005, with a root mean square error less than 0.03 and a bias less than 0.02. This level of accuracy has been regarded as sufficient for driving global and regional climate models. The Landsat-based retrievals have also been compared to the operational 16-day MODIS albedo produced every 8 days from MODIS on Terra and Aqua (MCD43A). The Landsat albedo provides more detailed landscape texture, and achieves better agreement (correlation and dynamic range) with in-situ data at the validation stations, particularly when the stations include a heterogeneous mix of surface covers.
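
    The step of combining black-sky and white-sky albedo into an actual (blue-sky) albedo, weighted by the diffuse fraction of incoming solar radiation, can be sketched as follows; the numeric values are illustrative assumptions:

    ```python
    def blue_sky_albedo(black_sky, white_sky, diffuse_fraction):
        """Actual (blue-sky) albedo as a mix of the black-sky (direct-beam)
        and white-sky (fully diffuse) albedos, weighted by the fraction of
        diffuse radiation reaching the ground."""
        return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

    # e.g. a clear day with 20% diffuse illumination
    a = blue_sky_albedo(black_sky=0.15, white_sky=0.18, diffuse_fraction=0.2)
    ```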

  5. [Using cancer case identification algorithms in medico-administrative databases: Literature review and first results from the REDSIAM Tumors group based on breast, colon, and lung cancer].

    Science.gov (United States)

    Bousquet, P-J; Caillet, P; Coeuret-Pellicer, M; Goulard, H; Kudjawu, Y C; Le Bihan, C; Lecuyer, A I; Séguret, F

    2017-10-01

    The development and use of healthcare databases accentuates the need for dedicated tools, including validated selection algorithms of cancer diseased patients. As part of the development of the French National Health Insurance System data network REDSIAM, the tumor taskforce established an inventory of national and internal published algorithms in the field of cancer. This work aims to facilitate the choice of a best-suited algorithm. A non-systematic literature search was conducted for various cancers. Results are presented for lung, breast, colon, and rectum. Medline, Scopus, the French Database in Public Health, Google Scholar, and the summaries of the main French journals in oncology and public health were searched for publications until August 2016. An extraction grid adapted to oncology was constructed and used for the extraction process. A total of 18 publications were selected for lung cancer, 18 for breast cancer, and 12 for colorectal cancer. Validation studies of algorithms are scarce. When information is available, the performance and choice of an algorithm are dependent on the context, purpose, and location of the planned study. Accounting for cancer disease specificity, the proposed extraction chart is more detailed than the generic chart developed for other REDSIAM taskforces, but remains easily usable in practice. This study illustrates the complexity of cancer detection through sole reliance on healthcare databases and the lack of validated algorithms specifically designed for this purpose. Studies that standardize and facilitate validation of these algorithms should be developed and promoted. Copyright © 2017. Published by Elsevier Masson SAS.

  6. An algorithm for calculating unsteady flow with free surface; Ein Verfahren zur Berechnung instationaerer Stroemungen mit freier Oberflaeche

    Energy Technology Data Exchange (ETDEWEB)

    Janetzky, B.

    2001-07-01

    A numerical model for transient free-surface flow is implemented in a Finite-Element program for the unsteady calculation of incompressible flow with a free surface. The program is used to calculate the flow in different components of a hydraulic turbine, the Pelton turbine. The movement of the fluid with a free surface is described mathematically by introducing a partial differential equation for the volume fraction f. This equation is simply a transport equation for f, i.e. the volume fraction is advected with the flow in time. The equation is solved numerically. (orig.) [Translated from German] A method for modelling variable free surfaces is presented and implemented in a Finite-Element program for the numerical calculation of unsteady, incompressible flows. The variable free surface is captured with a volume-of-fluid approach. Piecewise constant or stepped profiles within each element are used to approximate the free surface. The properties of the method are examined on selected examples with free surfaces. The extended program is applied to unsteady flows with a free surface in a hydraulic machine, the Pelton turbine. (orig.)
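
    The volume-fraction transport equation described above, df/dt + u·df/dx = 0, can be sketched with a first-order upwind step in one dimension; the grid, velocity, and initial front are illustrative assumptions, far simpler than the finite-element discretisation in the record:

    ```python
    import numpy as np

    def advect_vof(f, u, dx, dt):
        """One first-order upwind step of the volume-fraction transport
        equation df/dt + u*df/dx = 0 (assumes u > 0; f[0] is the inflow value)."""
        f_new = f.copy()
        f_new[1:] = f[1:] - u * dt / dx * (f[1:] - f[:-1])
        return f_new

    # A sharp liquid/gas front (f = 1 liquid, f = 0 gas) advected to the right
    f = np.where(np.arange(50) < 10, 1.0, 0.0)
    for _ in range(20):                      # CFL = u*dt/dx = 0.5, stable
        f = advect_vof(f, u=1.0, dx=0.1, dt=0.05)
    ```

    The upwind scheme is monotone at this CFL number, so the volume fraction stays bounded in [0, 1] as the front moves, at the cost of some numerical smearing of the interface.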

  7. Development and validation of case-finding algorithms for the identification of patients with anti-neutrophil cytoplasmic antibody-associated vasculitis in large healthcare administrative databases.

    Science.gov (United States)

    Sreih, Antoine G; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A

    2016-12-01

    The aim of this study was to develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic GPA (Churg-Strauss, EGPA). Two hundred fifty patients per disease were randomly